Dec 06 05:28:08 crc systemd[1]: Starting Kubernetes Kubelet...
Dec 06 05:28:08 crc restorecon[4678]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 06 05:28:08 crc restorecon[4678]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 06 05:28:08 crc restorecon[4678]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 06 05:28:08 crc restorecon[4678]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 06 05:28:08 crc restorecon[4678]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:08 crc restorecon[4678]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:08 crc restorecon[4678]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:08 crc restorecon[4678]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:08 crc restorecon[4678]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:08 crc restorecon[4678]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 06 05:28:08 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to
system_u:object_r:container_file_t:s0:c336,c787 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 06 05:28:09 crc restorecon[4678]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to
system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 
05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 06 05:28:09 crc 
restorecon[4678]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 06 05:28:09 crc restorecon[4678]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Dec 06 05:28:09 crc restorecon[4678]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Dec 06 05:28:09 crc restorecon[4678]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 06 05:28:09 crc restorecon[4678]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 
05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 06 05:28:09 crc restorecon[4678]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 06 05:28:09 crc restorecon[4678]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 06 05:28:09 crc restorecon[4678]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Dec 06 05:28:09 crc kubenswrapper[4958]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 06 05:28:09 crc kubenswrapper[4958]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Dec 06 05:28:09 crc kubenswrapper[4958]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 06 05:28:09 crc kubenswrapper[4958]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 06 05:28:09 crc kubenswrapper[4958]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 06 05:28:09 crc kubenswrapper[4958]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.572289 4958 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.575223 4958 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.575242 4958 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.575246 4958 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.575250 4958 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.575254 4958 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.575260 4958 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.575264 4958 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.575269 4958 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.575273 4958 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.575278 4958 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.575284 4958 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.575290 4958 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.575296 4958 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.575301 4958 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.575306 4958 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.575310 4958 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.575315 4958 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.575321 4958 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.575326 4958 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.575330 4958 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.575333 4958 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.575337 4958 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.575341 4958 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.575345 4958 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.575348 4958 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.575352 4958 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.575356 4958 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.575359 4958 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.575363 4958 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.575366 4958 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.575370 4958 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.575373 4958 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.575377 4958 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.575381 4958 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.575384 4958 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.575388 4958 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.575392 4958 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.575395 4958 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.575405 4958 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.575409 4958 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.575412 4958 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.575417 4958 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. 
It will be removed in a future release. Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.575422 4958 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.575426 4958 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.575429 4958 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.575434 4958 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.575438 4958 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.575442 4958 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.575445 4958 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.575449 4958 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.575452 4958 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.575457 4958 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.575460 4958 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.575464 4958 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.575483 4958 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.575487 4958 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.575491 4958 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.575494 4958 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.575498 4958 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.575501 4958 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.575505 4958 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.575508 4958 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.575512 4958 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.575516 4958 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.575519 4958 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.575523 4958 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.575527 4958 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 
05:28:09.575530 4958 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.575534 4958 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.575538 4958 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.575541 4958 feature_gate.go:330] unrecognized feature gate: Example Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.575756 4958 flags.go:64] FLAG: --address="0.0.0.0" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.575767 4958 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.575776 4958 flags.go:64] FLAG: --anonymous-auth="true" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.575782 4958 flags.go:64] FLAG: --application-metrics-count-limit="100" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.575787 4958 flags.go:64] FLAG: --authentication-token-webhook="false" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.575792 4958 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.575798 4958 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.575804 4958 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.575809 4958 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.575813 4958 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.575818 4958 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.575822 4958 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.575827 4958 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.575831 4958 flags.go:64] FLAG: --cgroup-root="" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.575835 4958 flags.go:64] FLAG: --cgroups-per-qos="true" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.575839 4958 flags.go:64] FLAG: --client-ca-file="" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.575843 4958 flags.go:64] FLAG: --cloud-config="" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.575847 4958 flags.go:64] FLAG: --cloud-provider="" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.575852 4958 flags.go:64] FLAG: --cluster-dns="[]" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.575858 4958 flags.go:64] FLAG: --cluster-domain="" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.575862 4958 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.575866 4958 flags.go:64] FLAG: --config-dir="" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.575871 4958 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.575876 4958 flags.go:64] FLAG: --container-log-max-files="5" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.575882 4958 flags.go:64] FLAG: --container-log-max-size="10Mi" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.575887 4958 
flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.575891 4958 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.575896 4958 flags.go:64] FLAG: --containerd-namespace="k8s.io" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.575901 4958 flags.go:64] FLAG: --contention-profiling="false" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.575905 4958 flags.go:64] FLAG: --cpu-cfs-quota="true" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.575909 4958 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.575913 4958 flags.go:64] FLAG: --cpu-manager-policy="none" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.575918 4958 flags.go:64] FLAG: --cpu-manager-policy-options="" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.575923 4958 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.575927 4958 flags.go:64] FLAG: --enable-controller-attach-detach="true" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.575932 4958 flags.go:64] FLAG: --enable-debugging-handlers="true" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.575936 4958 flags.go:64] FLAG: --enable-load-reader="false" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.575940 4958 flags.go:64] FLAG: --enable-server="true" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.575945 4958 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.575951 4958 flags.go:64] FLAG: --event-burst="100" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.575956 4958 flags.go:64] FLAG: --event-qps="50" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.575960 4958 flags.go:64] FLAG: --event-storage-age-limit="default=0" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.575965 4958 flags.go:64] FLAG: --event-storage-event-limit="default=0" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.575969 4958 flags.go:64] FLAG: --eviction-hard="" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.575974 4958 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.575978 4958 flags.go:64] FLAG: --eviction-minimum-reclaim="" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.575982 4958 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.575987 4958 flags.go:64] FLAG: --eviction-soft="" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.575991 4958 flags.go:64] FLAG: --eviction-soft-grace-period="" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.575995 4958 flags.go:64] FLAG: --exit-on-lock-contention="false" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.575999 4958 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.576003 4958 flags.go:64] FLAG: --experimental-mounter-path="" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.576008 4958 flags.go:64] FLAG: --fail-cgroupv1="false" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.576012 4958 flags.go:64] FLAG: --fail-swap-on="true" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.576017 4958 flags.go:64] FLAG: --feature-gates="" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 
05:28:09.576022 4958 flags.go:64] FLAG: --file-check-frequency="20s" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.576028 4958 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.576032 4958 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.576037 4958 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.576041 4958 flags.go:64] FLAG: --healthz-port="10248" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.576046 4958 flags.go:64] FLAG: --help="false" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.576050 4958 flags.go:64] FLAG: --hostname-override="" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.576054 4958 flags.go:64] FLAG: --housekeeping-interval="10s" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.576059 4958 flags.go:64] FLAG: --http-check-frequency="20s" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.576064 4958 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.576070 4958 flags.go:64] FLAG: --image-credential-provider-config="" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.576075 4958 flags.go:64] FLAG: --image-gc-high-threshold="85" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.576081 4958 flags.go:64] FLAG: --image-gc-low-threshold="80" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.576087 4958 flags.go:64] FLAG: --image-service-endpoint="" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.576092 4958 flags.go:64] FLAG: --kernel-memcg-notification="false" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.576098 4958 flags.go:64] FLAG: --kube-api-burst="100" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.576104 4958 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.576110 4958 flags.go:64] FLAG: --kube-api-qps="50" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.576115 4958 flags.go:64] FLAG: --kube-reserved="" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.576122 4958 flags.go:64] FLAG: --kube-reserved-cgroup="" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.576127 4958 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.576133 4958 flags.go:64] FLAG: --kubelet-cgroups="" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.576137 4958 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.576141 4958 flags.go:64] FLAG: --lock-file="" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.576145 4958 flags.go:64] FLAG: --log-cadvisor-usage="false" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.576151 4958 flags.go:64] FLAG: --log-flush-frequency="5s" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.576155 4958 flags.go:64] FLAG: --log-json-info-buffer-size="0" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.576162 4958 flags.go:64] FLAG: --log-json-split-stream="false" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.576166 4958 flags.go:64] FLAG: --log-text-info-buffer-size="0" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.576171 4958 flags.go:64] FLAG: --log-text-split-stream="false" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.576175 4958 
flags.go:64] FLAG: --logging-format="text" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.576179 4958 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.576184 4958 flags.go:64] FLAG: --make-iptables-util-chains="true" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.576188 4958 flags.go:64] FLAG: --manifest-url="" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.576192 4958 flags.go:64] FLAG: --manifest-url-header="" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.576199 4958 flags.go:64] FLAG: --max-housekeeping-interval="15s" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.576203 4958 flags.go:64] FLAG: --max-open-files="1000000" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.576209 4958 flags.go:64] FLAG: --max-pods="110" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.576213 4958 flags.go:64] FLAG: --maximum-dead-containers="-1" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.576218 4958 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.576222 4958 flags.go:64] FLAG: --memory-manager-policy="None" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.576226 4958 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.576230 4958 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.576234 4958 flags.go:64] FLAG: --node-ip="192.168.126.11" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.576238 4958 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.576248 4958 flags.go:64] FLAG: --node-status-max-images="50" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.576252 4958 flags.go:64] FLAG: --node-status-update-frequency="10s" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.576258 4958 flags.go:64] FLAG: --oom-score-adj="-999" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.576263 4958 flags.go:64] FLAG: --pod-cidr="" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.576267 4958 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.576273 4958 flags.go:64] FLAG: --pod-manifest-path="" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.576278 4958 flags.go:64] FLAG: --pod-max-pids="-1" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.576282 4958 flags.go:64] FLAG: --pods-per-core="0" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.576286 4958 flags.go:64] FLAG: --port="10250" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.576290 4958 flags.go:64] FLAG: --protect-kernel-defaults="false" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.576294 4958 flags.go:64] FLAG: --provider-id="" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.576299 4958 flags.go:64] FLAG: --qos-reserved="" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.576303 4958 flags.go:64] FLAG: --read-only-port="10255" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.576307 4958 flags.go:64] FLAG: --register-node="true" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.576311 4958 flags.go:64] 
FLAG: --register-schedulable="true" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.576315 4958 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.576323 4958 flags.go:64] FLAG: --registry-burst="10" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.576327 4958 flags.go:64] FLAG: --registry-qps="5" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.576331 4958 flags.go:64] FLAG: --reserved-cpus="" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.576335 4958 flags.go:64] FLAG: --reserved-memory="" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.576341 4958 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.576345 4958 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.576350 4958 flags.go:64] FLAG: --rotate-certificates="false" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.576354 4958 flags.go:64] FLAG: --rotate-server-certificates="false" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.576358 4958 flags.go:64] FLAG: --runonce="false" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.576363 4958 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.576370 4958 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.576375 4958 flags.go:64] FLAG: --seccomp-default="false" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.576380 4958 flags.go:64] FLAG: --serialize-image-pulls="true" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.576385 4958 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.576390 4958 flags.go:64] FLAG: --storage-driver-db="cadvisor" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.576395 4958 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.576400 4958 flags.go:64] FLAG: --storage-driver-password="root" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.576404 4958 flags.go:64] FLAG: --storage-driver-secure="false" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.576409 4958 flags.go:64] FLAG: --storage-driver-table="stats" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.576415 4958 flags.go:64] FLAG: --storage-driver-user="root" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.576421 4958 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.576425 4958 flags.go:64] FLAG: --sync-frequency="1m0s" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.576431 4958 flags.go:64] FLAG: --system-cgroups="" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.576437 4958 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.576445 4958 flags.go:64] FLAG: --system-reserved-cgroup="" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.576450 4958 flags.go:64] FLAG: --tls-cert-file="" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.576456 4958 flags.go:64] FLAG: --tls-cipher-suites="[]" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.576463 4958 flags.go:64] FLAG: --tls-min-version="" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.576486 4958 
flags.go:64] FLAG: --tls-private-key-file="" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.576491 4958 flags.go:64] FLAG: --topology-manager-policy="none" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.576497 4958 flags.go:64] FLAG: --topology-manager-policy-options="" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.576502 4958 flags.go:64] FLAG: --topology-manager-scope="container" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.576508 4958 flags.go:64] FLAG: --v="2" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.576516 4958 flags.go:64] FLAG: --version="false" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.576522 4958 flags.go:64] FLAG: --vmodule="" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.576534 4958 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.576539 4958 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.576652 4958 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.576657 4958 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.576662 4958 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.576666 4958 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.576669 4958 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.576673 4958 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.576677 4958 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
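Everything tagged flags.go:64 above is the kubelet echoing its full effective flag set at startup, one FLAG: --name="value" record per flag. Because several records can share one line in a capture like this, a parser should match repeatedly per line; a small sketch, again assuming the journal text arrives on stdin:

    import re
    import sys

    # Flag-dump records look like: flags.go:64] FLAG: --node-ip="192.168.126.11"
    FLAG = re.compile(r'FLAG: (--[\w-]+)="(.*?)"')

    flags = {}
    for line in sys.stdin:
        for m in FLAG.finditer(line):
            flags[m.group(1)] = m.group(2)

    # A few values worth checking on this node, per the dump above.
    for name in ("--config", "--node-ip", "--register-with-taints"):
        print(name, "=", flags.get(name))

On this boot that would report /etc/kubernetes/kubelet.conf, 192.168.126.11, and the node-role.kubernetes.io/master=:NoSchedule taint shown in the dump.
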
Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.576682 4958 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.576687 4958 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.576691 4958 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.576695 4958 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.576698 4958 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.576702 4958 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.576705 4958 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.576709 4958 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.576713 4958 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.576716 4958 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.576720 4958 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.576723 4958 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.576727 4958 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.576730 4958 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.576733 4958 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.576737 4958 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.576740 4958 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.576744 4958 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.576747 4958 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.576751 4958 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.576754 4958 feature_gate.go:330] unrecognized feature gate: Example Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.576758 4958 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.576761 4958 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.576765 4958 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.576768 4958 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.576772 4958 feature_gate.go:330] unrecognized feature 
gate: InsightsOnDemandDataGather Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.576775 4958 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.576779 4958 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.576782 4958 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.576786 4958 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.576790 4958 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.576794 4958 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.576797 4958 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.576801 4958 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.576804 4958 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.576808 4958 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.576812 4958 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.576816 4958 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.576820 4958 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.576824 4958 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.576828 4958 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.576831 4958 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.576834 4958 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.576838 4958 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.576841 4958 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.576845 4958 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.576849 4958 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.576853 4958 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.576858 4958 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.576862 4958 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.576866 4958 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.576869 4958 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.576873 4958 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.576876 4958 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.576880 4958 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.576883 4958 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.576887 4958 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.576890 4958 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.576894 4958 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.576899 4958 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.576903 4958 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.576906 4958 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.576911 4958 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.576915 4958 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.576922 4958 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.587685 4958 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.587714 4958 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.587837 4958 feature_gate.go:330] unrecognized feature gate: Example Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.587850 4958 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.587859 4958 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.587871 4958 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.587879 4958 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.587889 4958 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.587897 4958 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.587905 4958 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.587916 4958 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
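Each pass over the configured gates re-emits the same unrecognized-gate warnings (names like GatewayAPI and NewOLM appear to be cluster-level OpenShift gates that the kubelet's own feature_gate parser does not know, so it warns and moves on). The line that matters is the feature_gate.go:386 summary just above: of the whole list, only CloudDualStackNodeIPs, DisableKubeletCloudCredentialProviders, KMSv1 and ValidatingAdmissionPolicy take effect as true. That summary is easy to turn into a dict; a sketch reading the journal from stdin:

    import re
    import sys

    # Effective-gate summary looks like:
    #   feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true ...]}
    SUMMARY = re.compile(r"feature gates: \{map\[(.*?)\]\}")
    PAIR = re.compile(r"([A-Za-z0-9]+):(true|false)")

    for line in sys.stdin:
        m = SUMMARY.search(line)
        if m:
            gates = {k: v == "true" for k, v in PAIR.findall(m.group(1))}
            print("enabled:", sorted(k for k, v in gates.items() if v))
            break  # the summary repeats verbatim on later passes
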
Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.587926 4958 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.587935 4958 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.587943 4958 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.587951 4958 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.587959 4958 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.587967 4958 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.587975 4958 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.587983 4958 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.587991 4958 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.587999 4958 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.588007 4958 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.588015 4958 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.588022 4958 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.588030 4958 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.588041 4958 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.588051 4958 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.588061 4958 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.588070 4958 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.588078 4958 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.588086 4958 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.588094 4958 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.588103 4958 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.588115 4958 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.588123 4958 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.588131 4958 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.588139 4958 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.588146 4958 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.588155 4958 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.588165 4958 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.588175 4958 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.588185 4958 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.588193 4958 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.588202 4958 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.588211 4958 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.588219 4958 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.588227 4958 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.588234 4958 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.588243 4958 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.588250 4958 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.588260 4958 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. 
It will be removed in a future release. Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.588270 4958 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.588279 4958 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.588288 4958 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.588296 4958 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.588304 4958 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.588312 4958 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.588320 4958 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.588327 4958 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.588335 4958 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.588343 4958 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.588350 4958 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.588358 4958 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.588366 4958 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.588373 4958 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.588381 4958 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.588390 4958 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.588398 4958 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.588405 4958 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.588413 4958 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.588420 4958 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.588428 4958 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.588436 4958 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.588448 4958 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false 
ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.588688 4958 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.588701 4958 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.588709 4958 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.588719 4958 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.588727 4958 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.588736 4958 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.588744 4958 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.588754 4958 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.588766 4958 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.588774 4958 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.588783 4958 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.588796 4958 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.588809 4958 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.588820 4958 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.588833 4958 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.588844 4958 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.588855 4958 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.588865 4958 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.588875 4958 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.588885 4958 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.588895 4958 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.588904 4958 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.588914 4958 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.588925 4958 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.588937 4958 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.588948 4958 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.588958 4958 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.588966 4958 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.588974 4958 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.588982 4958 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.588990 4958 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.588998 4958 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.589006 4958 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.589014 4958 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.589022 4958 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.589029 4958 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.589037 4958 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.589044 4958 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity 
Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.589052 4958 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.589059 4958 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.589067 4958 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.589077 4958 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.589085 4958 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.589094 4958 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.589103 4958 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.589111 4958 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.589119 4958 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.589126 4958 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.589134 4958 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.589142 4958 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.589150 4958 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.589157 4958 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.589165 4958 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.589173 4958 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.589180 4958 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.589188 4958 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.589196 4958 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.589204 4958 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.589211 4958 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.589219 4958 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.589226 4958 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.589234 4958 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.589242 4958 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.589251 4958 
feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.589259 4958 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.589266 4958 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.589274 4958 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.589281 4958 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.589292 4958 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.589301 4958 feature_gate.go:330] unrecognized feature gate: Example Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.589310 4958 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.589323 4958 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.589922 4958 server.go:940] "Client rotation is on, will bootstrap in background" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.596995 4958 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.597189 4958 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
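Client rotation is on and the existing kubeconfig is still valid, so no bootstrap runs and the kubelet loads the current pair from /var/lib/kubelet/pki/kubelet-client-current.pem; the rotation records that follow pick a jittered deadline late in the certificate's validity window. When checking such a node by hand, the same expiry can be read off the PEM directly; a sketch shelling out to the openssl CLI from Python's stdlib (this assumes openssl will pick the CERTIFICATE block out of the bundle even though the private key is stored alongside it, which matches its usual PEM-block handling):

    import subprocess

    PEM = "/var/lib/kubelet/pki/kubelet-client-current.pem"

    # Prints e.g. "notAfter=Feb 24 05:52:08 2026 GMT", matching the expiration
    # the certificate manager logs below.
    out = subprocess.run(
        ["openssl", "x509", "-noout", "-enddate", "-in", PEM],
        check=True, capture_output=True, text=True,
    )
    print(out.stdout.strip())
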
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.598096 4958 server.go:997] "Starting client certificate rotation"
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.598138 4958 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.598663 4958 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2026-01-05 06:06:55.469579607 +0000 UTC
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.598784 4958 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 720h38m45.870802084s for next certificate rotation
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.605355 4958 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.608099 4958 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.618792 4958 log.go:25] "Validated CRI v1 runtime API"
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.636420 4958 log.go:25] "Validated CRI v1 image API"
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.638413 4958 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.641505 4958 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-12-06-05-23-14-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.641553 4958 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:41 fsType:tmpfs blockSize:0}]
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.667692 4958 manager.go:217] Machine: {Timestamp:2025-12-06 05:28:09.665592215 +0000 UTC m=+0.199363048 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:d8c60597-c05a-4627-8199-844ddb77ec1c BootID:3ae601d9-1e3a-4939-b6d5-fbef7be2f380 Filesystems:[{Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:41 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:ac:d0:d7 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:ac:d0:d7 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:db:d8:a8 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:22:7e:72 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:2b:69:33 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:a1:6d:7e Speed:-1 Mtu:1496} {Name:eth10 MacAddress:5a:1a:64:7b:d3:86 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:fa:8e:80:3d:9c:0a Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.668392 4958 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.668822 4958 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.669508 4958 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.669825 4958 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.669891 4958 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.670230 4958 topology_manager.go:138] "Creating topology manager with none policy"
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.670255 4958 container_manager_linux.go:303] "Creating device plugin manager"
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.670710 4958 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.670768 4958 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.671036 4958 state_mem.go:36] "Initialized new in-memory state store"
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.671323 4958 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.672967 4958 kubelet.go:418] "Attempting to sync node with API server"
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.673003 4958 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.673043 4958 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.673065 4958 kubelet.go:324] "Adding apiserver pod source"
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.673084 4958 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.679727 4958 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.679681 4958 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.20:6443: connect: connection refused
Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.679690 4958 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.20:6443: connect: connection refused
Dec 06 05:28:09 crc kubenswrapper[4958]: E1206 05:28:09.679965 4958 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.20:6443: connect: connection refused" logger="UnhandledError"
Dec 06 05:28:09 crc kubenswrapper[4958]: E1206 05:28:09.680182 4958 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.20:6443: connect: connection refused" logger="UnhandledError"
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.680448 4958 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
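The certificate_manager.go lines above show how the rotation wait is derived: the manager picks a rotation deadline well before the certificate's expiry (the real implementation jitters the deadline inside the tail of the cert's lifetime, which is why it lands on 2026-01-05 rather than at the 2026-02-24 expiry) and then simply sleeps for deadline minus now. A sketch of that arithmetic using the exact timestamps from the log:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the certificate_manager.go log lines above.
	layout := "2006-01-02 15:04:05.999999999 -0700 MST"
	now, _ := time.Parse(layout, "2025-12-06 05:28:09.598784 +0000 UTC")
	deadline, _ := time.Parse(layout, "2026-01-05 06:06:55.469579607 +0000 UTC")
	expiry, _ := time.Parse(layout, "2026-02-24 05:52:08 +0000 UTC")

	wait := deadline.Sub(now)
	fmt.Println("waiting", wait, "for next certificate rotation") // ~720h38m45.87s, matching the log
	fmt.Println("headroom before expiry:", expiry.Sub(deadline))  // rotation fires ~50 days before the cert lapses
}

The 720h in the logged wait is exactly the 30 days between Dec 6 and Jan 5; the 38m45s remainder is the sub-hour offset between the two timestamps.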
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.681373 4958 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.682194 4958 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.682234 4958 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.682248 4958 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.682263 4958 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.682285 4958 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.682299 4958 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.682313 4958 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.682335 4958 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.682352 4958 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.682367 4958 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.682385 4958 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.682398 4958 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.683010 4958 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.683725 4958 server.go:1280] "Started kubelet"
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.683874 4958 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.20:6443: connect: connection refused
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.684275 4958 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.684262 4958 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.685277 4958 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 06 05:28:09 crc systemd[1]: Started Kubernetes Kubelet.
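The ratelimit.go:55 line above configures the podresources socket with qps=100 and burstTokens=10, i.e. a token bucket: up to 10 requests are admitted back-to-back, after which calls are paced at 100 per second. A self-contained sketch of that shape (the real kubelet uses client-go's flow-control limiter, not this hand-rolled one):

package main

import (
	"fmt"
	"sync"
	"time"
)

// bucket is a toy token-bucket limiter: refilled at qps tokens per second,
// capped at burst tokens, one token consumed per admitted request.
type bucket struct {
	mu     sync.Mutex
	qps    float64
	burst  float64
	tokens float64
	last   time.Time
}

func newBucket(qps float64, burst int) *bucket {
	return &bucket{qps: qps, burst: float64(burst), tokens: float64(burst), last: time.Now()}
}

func (b *bucket) allow() bool {
	b.mu.Lock()
	defer b.mu.Unlock()
	now := time.Now()
	b.tokens += b.qps * now.Sub(b.last).Seconds() // refill for elapsed time
	if b.tokens > b.burst {
		b.tokens = b.burst // never bank more than the burst
	}
	b.last = now
	if b.tokens >= 1 {
		b.tokens--
		return true
	}
	return false
}

func main() {
	lim := newBucket(100, 10) // the qps/burstTokens values from the log line
	admitted := 0
	for i := 0; i < 50; i++ { // 50 back-to-back calls: the burst passes, the rest are throttled
		if lim.allow() {
			admitted++
		}
	}
	fmt.Println("admitted immediately:", admitted) // ~10 (the burst), plus whatever refilled meanwhile
}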
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.689628 4958 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.689680 4958 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.689786 4958 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 16:08:28.161432437 +0000 UTC
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.689851 4958 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 946h40m18.471585958s for next certificate rotation
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.690005 4958 volume_manager.go:287] "The desired_state_of_world populator starts"
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.690019 4958 volume_manager.go:289] "Starting Kubelet Volume Manager"
Dec 06 05:28:09 crc kubenswrapper[4958]: E1206 05:28:09.690081 4958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.690095 4958 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.690929 4958 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.20:6443: connect: connection refused
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.691100 4958 server.go:460] "Adding debug handlers to kubelet server"
Dec 06 05:28:09 crc kubenswrapper[4958]: E1206 05:28:09.691117 4958 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.20:6443: connect: connection refused" logger="UnhandledError"
Dec 06 05:28:09 crc kubenswrapper[4958]: E1206 05:28:09.691607 4958 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.20:6443: connect: connection refused" interval="200ms"
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.691998 4958 factory.go:55] Registering systemd factory
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.692041 4958 factory.go:221] Registration of the systemd container factory successfully
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.692555 4958 factory.go:153] Registering CRI-O factory
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.692591 4958 factory.go:221] Registration of the crio container factory successfully
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.692702 4958 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.692757 4958 factory.go:103] Registering Raw factory
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.692790 4958 manager.go:1196] Started watching for new ooms in manager
Dec 06 05:28:09 crc kubenswrapper[4958]: E1206 05:28:09.692564 4958 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.20:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187e8923888db6f8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-06 05:28:09.683662584 +0000 UTC m=+0.217433377,LastTimestamp:2025-12-06 05:28:09.683662584 +0000 UTC m=+0.217433377,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.693796 4958 manager.go:319] Starting recovery of all containers
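Every api-int.crc.testing:6443 call so far fails with connection refused because on this single-node cluster the kubelet starts before the API server it must itself bootstrap; the lease controller says it "will retry" on a 200ms interval and the event writer "may retry after sleeping". A sketch of that retry-until-up pattern, using a plain TCP dial to stand in for the real client calls (the backoff schedule here is assumed, not the kubelet's exact one):

package main

import (
	"fmt"
	"net"
	"time"
)

// dialWithRetry keeps probing the endpoint, doubling the wait up to a cap,
// the same shape as the lease/event retries in the log above.
func dialWithRetry(addr string, maxAttempts int) error {
	backoff := 200 * time.Millisecond // the lease controller's logged starting interval
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		conn, err := net.DialTimeout("tcp", addr, time.Second)
		if err == nil {
			conn.Close()
			return nil // apiserver is up; informers and lease renewal can proceed
		}
		fmt.Printf("attempt %d: %v; retrying in %v\n", attempt, err, backoff)
		time.Sleep(backoff)
		if backoff < 5*time.Second {
			backoff *= 2 // capped exponential backoff
		}
	}
	return fmt.Errorf("%s still unreachable after %d attempts", addr, maxAttempts)
}

func main() {
	if err := dialWithRetry("api-int.crc.testing:6443", 5); err != nil {
		fmt.Println(err)
	}
}

Nothing here is fatal: the reflectors, the lease controller, and the event recorder all re-queue and succeed once the static-pod apiserver comes up.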
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.708748 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.708814 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.708837 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.708857 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.708876 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.708894 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.708915 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.708935 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.708959 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.708978 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.708996 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.709015 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.709035 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.709059 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.709078 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.709096 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.709116 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.709134 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.709153 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.709172 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.709190 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.709207 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.709241 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.709265 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.709288 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.709309 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.709332 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.709395 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.709414 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.709432 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.709453 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.709526 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.709548 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.709566 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.709586 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.709604 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.709622 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.709638 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.709657 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.709677 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.709702 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.709719 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.709738 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.709756 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.709774 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.709793 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.709811 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.709830 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.709850 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.709867 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.709885 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.709904 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.709928 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.709949 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.709978 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.709999 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.710016 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.710044 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.710061 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.710078 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.710174 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.710195 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.710213 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.710237 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.710264 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.710285 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.710304 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.710321 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.710338 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.710355 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.710374 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.710392 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.712819 4958 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount"
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.712863 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.712896 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.712915 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.712935 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.712952 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.712972 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.713021 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.713045 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.713064 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.713084 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.713103 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.713121 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.713139 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.713157 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.713176 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.713193 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.713211 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.713235 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.713263 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.713288 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.713312 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.713332 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.713352 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.713386 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.713406 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.713423 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.713442 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.713460 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.713525 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.713545 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.713566 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.713584 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.713612 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.713634 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.713653 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.713674 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.713724 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.713743 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.713764 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.713785 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.713803 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.713823 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.713844 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.713862 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.713882 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.713902 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.713919 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.713936 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.713954 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.713973 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.713990 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.714009 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.714029 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.714047 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.714066 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.714084 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext=""
Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.714103 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod=""
podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.714121 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.714140 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.714159 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.714177 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.714196 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.714213 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.714237 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.714260 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.714284 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.714302 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.714322 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.714338 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.714356 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.714375 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.714394 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.714412 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.714434 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.714452 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.714498 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.714517 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.714562 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.714581 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.714599 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.714620 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.714638 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.714655 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.714674 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.714692 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.714712 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.714732 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.714752 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.714770 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.714788 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" 
volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.714806 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.714824 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.714841 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.714860 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.714878 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.714895 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.714914 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.714931 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.714950 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.714970 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.714987 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" 
volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.715016 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.715033 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.715053 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.715072 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.715091 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.715110 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.715131 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.715151 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.715170 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.715190 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.715207 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" 
volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.715229 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.715253 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.715276 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.715295 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.715314 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.715332 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.715350 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.715368 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.715386 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.715404 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.715421 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" 
volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.715439 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.715456 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.715505 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.715525 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.715544 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.715587 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.715609 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.715627 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.715646 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.715664 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.715683 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" 
volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.715702 4958 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.715720 4958 reconstruct.go:97] "Volume reconstruction finished" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.715734 4958 reconciler.go:26] "Reconciler: start to sync state" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.721438 4958 manager.go:324] Recovery completed Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.733071 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.735248 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.735309 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.735329 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.736061 4958 cpu_manager.go:225] "Starting CPU manager" policy="none" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.736088 4958 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.736113 4958 state_mem.go:36] "Initialized new in-memory state store" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.747466 4958 policy_none.go:49] "None policy: Start" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.748392 4958 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.748425 4958 state_mem.go:35] "Initializing new in-memory state store" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.757206 4958 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.760414 4958 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.760702 4958 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.760741 4958 kubelet.go:2335] "Starting kubelet main sync loop" Dec 06 05:28:09 crc kubenswrapper[4958]: E1206 05:28:09.760927 4958 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 06 05:28:09 crc kubenswrapper[4958]: W1206 05:28:09.761626 4958 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.20:6443: connect: connection refused Dec 06 05:28:09 crc kubenswrapper[4958]: E1206 05:28:09.761675 4958 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.20:6443: connect: connection refused" logger="UnhandledError" Dec 06 05:28:09 crc kubenswrapper[4958]: E1206 05:28:09.790303 4958 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.810163 4958 manager.go:334] "Starting Device Plugin manager" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.810250 4958 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.810264 4958 server.go:79] "Starting device plugin registration server" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.810803 4958 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.810817 4958 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.810986 4958 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.811058 4958 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.811066 4958 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 06 05:28:09 crc kubenswrapper[4958]: E1206 05:28:09.819231 4958 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.861028 4958 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.861179 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.862570 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.862623 4958 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.862642 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.862836 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.863201 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.863252 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.863956 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.864004 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.864023 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.864224 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.864294 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.864351 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.864364 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.864494 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.864554 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.865876 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.865924 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.865938 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.865964 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.865944 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.865981 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.866134 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.866428 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.866563 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.867140 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.867175 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.867192 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.867334 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.867541 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.867576 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.867594 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.867739 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.867792 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.868338 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.868373 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.868389 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.868620 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.868666 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.868931 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.868963 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.868974 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.869784 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.869805 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.869820 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:09 crc kubenswrapper[4958]: E1206 05:28:09.893330 4958 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.20:6443: connect: connection refused" interval="400ms" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.911671 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.913208 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.913258 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.913321 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.913355 4958 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 06 05:28:09 crc kubenswrapper[4958]: E1206 05:28:09.914049 4958 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.20:6443: connect: 
connection refused" node="crc" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.920337 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.920397 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.920443 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.920508 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.920672 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.920781 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.921048 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.921081 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.921107 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.921134 4958 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.921179 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.921222 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.921250 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.921302 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 06 05:28:09 crc kubenswrapper[4958]: I1206 05:28:09.921349 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 06 05:28:10 crc kubenswrapper[4958]: I1206 05:28:10.023105 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 06 05:28:10 crc kubenswrapper[4958]: I1206 05:28:10.023201 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 06 05:28:10 crc kubenswrapper[4958]: I1206 05:28:10.023245 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 06 05:28:10 crc kubenswrapper[4958]: I1206 05:28:10.023287 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 06 05:28:10 crc kubenswrapper[4958]: I1206 05:28:10.023320 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 06 05:28:10 crc kubenswrapper[4958]: I1206 05:28:10.023356 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 06 05:28:10 crc kubenswrapper[4958]: I1206 05:28:10.023339 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 05:28:10 crc kubenswrapper[4958]: I1206 05:28:10.023428 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 06 05:28:10 crc kubenswrapper[4958]: I1206 05:28:10.023440 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 05:28:10 crc kubenswrapper[4958]: I1206 05:28:10.023464 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 06 05:28:10 crc kubenswrapper[4958]: I1206 05:28:10.023541 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 06 05:28:10 crc kubenswrapper[4958]: I1206 05:28:10.023551 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 06 05:28:10 crc kubenswrapper[4958]: I1206 05:28:10.023592 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 06 05:28:10 crc kubenswrapper[4958]: I1206 05:28:10.023614 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 06 05:28:10 crc kubenswrapper[4958]: I1206 05:28:10.023654 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 06 05:28:10 crc kubenswrapper[4958]: I1206 05:28:10.023617 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 06 05:28:10 crc kubenswrapper[4958]: I1206 05:28:10.023658 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 06 05:28:10 crc kubenswrapper[4958]: I1206 05:28:10.023727 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 05:28:10 crc kubenswrapper[4958]: I1206 05:28:10.023764 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 06 05:28:10 crc kubenswrapper[4958]: I1206 05:28:10.023793 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 06 05:28:10 crc kubenswrapper[4958]: I1206 05:28:10.023822 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 06 05:28:10 crc kubenswrapper[4958]: I1206 05:28:10.023834 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 05:28:10 crc kubenswrapper[4958]: I1206 05:28:10.023849 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 05:28:10 crc 
kubenswrapper[4958]: I1206 05:28:10.023880 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 06 05:28:10 crc kubenswrapper[4958]: I1206 05:28:10.023899 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 06 05:28:10 crc kubenswrapper[4958]: I1206 05:28:10.023937 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 05:28:10 crc kubenswrapper[4958]: I1206 05:28:10.023901 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 06 05:28:10 crc kubenswrapper[4958]: I1206 05:28:10.023693 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 06 05:28:10 crc kubenswrapper[4958]: I1206 05:28:10.023980 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 06 05:28:10 crc kubenswrapper[4958]: I1206 05:28:10.024056 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 06 05:28:10 crc kubenswrapper[4958]: I1206 05:28:10.114172 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 05:28:10 crc kubenswrapper[4958]: I1206 05:28:10.115883 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:10 crc kubenswrapper[4958]: I1206 05:28:10.115941 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:10 crc kubenswrapper[4958]: I1206 05:28:10.115959 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:10 crc kubenswrapper[4958]: I1206 05:28:10.115998 4958 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 06 05:28:10 crc kubenswrapper[4958]: E1206 05:28:10.116756 4958 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.20:6443: connect: 
connection refused" node="crc" Dec 06 05:28:10 crc kubenswrapper[4958]: I1206 05:28:10.205352 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 06 05:28:10 crc kubenswrapper[4958]: I1206 05:28:10.228086 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 06 05:28:10 crc kubenswrapper[4958]: W1206 05:28:10.239058 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-0f1fc714e24fec5df4fe9a4d671c49c57bc1a6399df10bf952a74499cb24d89e WatchSource:0}: Error finding container 0f1fc714e24fec5df4fe9a4d671c49c57bc1a6399df10bf952a74499cb24d89e: Status 404 returned error can't find the container with id 0f1fc714e24fec5df4fe9a4d671c49c57bc1a6399df10bf952a74499cb24d89e Dec 06 05:28:10 crc kubenswrapper[4958]: I1206 05:28:10.250663 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 05:28:10 crc kubenswrapper[4958]: W1206 05:28:10.254625 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-115421891feb93b23aab166c2876048c5619e0c01b3145ddeb9ada03a5175af1 WatchSource:0}: Error finding container 115421891feb93b23aab166c2876048c5619e0c01b3145ddeb9ada03a5175af1: Status 404 returned error can't find the container with id 115421891feb93b23aab166c2876048c5619e0c01b3145ddeb9ada03a5175af1 Dec 06 05:28:10 crc kubenswrapper[4958]: I1206 05:28:10.261626 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 06 05:28:10 crc kubenswrapper[4958]: I1206 05:28:10.267957 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 06 05:28:10 crc kubenswrapper[4958]: W1206 05:28:10.273449 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-bad3761b80151b65667818aad1759939144e6e7a8f109a7591c8f1e5a6332780 WatchSource:0}: Error finding container bad3761b80151b65667818aad1759939144e6e7a8f109a7591c8f1e5a6332780: Status 404 returned error can't find the container with id bad3761b80151b65667818aad1759939144e6e7a8f109a7591c8f1e5a6332780 Dec 06 05:28:10 crc kubenswrapper[4958]: W1206 05:28:10.280710 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-de5a0b0c803b8b3c4da139220fb0a1f8c0b3eb89b9abe298e1a36e0b4e716831 WatchSource:0}: Error finding container de5a0b0c803b8b3c4da139220fb0a1f8c0b3eb89b9abe298e1a36e0b4e716831: Status 404 returned error can't find the container with id de5a0b0c803b8b3c4da139220fb0a1f8c0b3eb89b9abe298e1a36e0b4e716831 Dec 06 05:28:10 crc kubenswrapper[4958]: W1206 05:28:10.294559 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-992811ddeccf415e9841c79cbc001e67af2730af546f0756bc6a27d92e8bc730 WatchSource:0}: Error finding container 992811ddeccf415e9841c79cbc001e67af2730af546f0756bc6a27d92e8bc730: Status 404 returned error can't find the container with id 992811ddeccf415e9841c79cbc001e67af2730af546f0756bc6a27d92e8bc730 Dec 06 05:28:10 crc kubenswrapper[4958]: E1206 05:28:10.294649 4958 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.20:6443: connect: connection refused" interval="800ms" Dec 06 05:28:10 crc kubenswrapper[4958]: I1206 05:28:10.517275 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 05:28:10 crc kubenswrapper[4958]: I1206 05:28:10.518860 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:10 crc kubenswrapper[4958]: I1206 05:28:10.518899 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:10 crc kubenswrapper[4958]: I1206 05:28:10.518911 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:10 crc kubenswrapper[4958]: I1206 05:28:10.518936 4958 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 06 05:28:10 crc kubenswrapper[4958]: E1206 05:28:10.519378 4958 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.20:6443: connect: connection refused" node="crc" Dec 06 05:28:10 crc kubenswrapper[4958]: W1206 05:28:10.677214 4958 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.20:6443: connect: connection refused Dec 06 05:28:10 crc kubenswrapper[4958]: E1206 05:28:10.677336 4958 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.20:6443: connect: connection refused" logger="UnhandledError" Dec 06 05:28:10 crc kubenswrapper[4958]: I1206 05:28:10.684662 4958 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.20:6443: connect: connection refused Dec 06 05:28:10 crc kubenswrapper[4958]: I1206 05:28:10.765817 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"992811ddeccf415e9841c79cbc001e67af2730af546f0756bc6a27d92e8bc730"} Dec 06 05:28:10 crc kubenswrapper[4958]: I1206 05:28:10.767583 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"de5a0b0c803b8b3c4da139220fb0a1f8c0b3eb89b9abe298e1a36e0b4e716831"} Dec 06 05:28:10 crc kubenswrapper[4958]: I1206 05:28:10.769252 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"bad3761b80151b65667818aad1759939144e6e7a8f109a7591c8f1e5a6332780"} Dec 06 05:28:10 crc kubenswrapper[4958]: I1206 05:28:10.770523 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"115421891feb93b23aab166c2876048c5619e0c01b3145ddeb9ada03a5175af1"} Dec 06 05:28:10 crc kubenswrapper[4958]: I1206 05:28:10.771587 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"0f1fc714e24fec5df4fe9a4d671c49c57bc1a6399df10bf952a74499cb24d89e"} Dec 06 05:28:10 crc kubenswrapper[4958]: W1206 05:28:10.962622 4958 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.20:6443: connect: connection refused Dec 06 05:28:10 crc kubenswrapper[4958]: E1206 05:28:10.962932 4958 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.20:6443: connect: connection refused" logger="UnhandledError" Dec 06 05:28:11 crc kubenswrapper[4958]: W1206 05:28:11.071981 4958 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.20:6443: connect: connection refused Dec 06 05:28:11 crc kubenswrapper[4958]: E1206 05:28:11.072062 4958 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.20:6443: connect: connection refused" logger="UnhandledError" Dec 06 05:28:11 crc kubenswrapper[4958]: E1206 05:28:11.095333 4958 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.20:6443: connect: connection refused" interval="1.6s" Dec 06 05:28:11 crc kubenswrapper[4958]: W1206 05:28:11.111630 4958 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.20:6443: connect: connection refused Dec 06 05:28:11 crc kubenswrapper[4958]: E1206 05:28:11.111718 4958 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.20:6443: connect: connection refused" logger="UnhandledError" Dec 06 05:28:11 crc kubenswrapper[4958]: I1206 05:28:11.319936 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 05:28:11 crc kubenswrapper[4958]: I1206 05:28:11.321866 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:11 crc kubenswrapper[4958]: I1206 05:28:11.322028 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:11 crc kubenswrapper[4958]: I1206 05:28:11.322048 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:11 crc kubenswrapper[4958]: I1206 05:28:11.322084 4958 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 06 05:28:11 crc kubenswrapper[4958]: E1206 05:28:11.322759 4958 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.20:6443: connect: connection refused" node="crc" Dec 06 05:28:11 crc kubenswrapper[4958]: I1206 05:28:11.685546 4958 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.20:6443: connect: connection refused Dec 06 05:28:11 crc kubenswrapper[4958]: I1206 05:28:11.783955 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"50e80d39fcfd69db193b9e6c98c92a60983dd0c62b061a69f045193110d159c9"} Dec 06 05:28:11 crc kubenswrapper[4958]: I1206 05:28:11.784019 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f512fa9922310920769bab096c2bb8b2305a17dded545923c16b5e417db50b77"} Dec 06 05:28:11 crc kubenswrapper[4958]: I1206 05:28:11.784041 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"dddeac64993fef3f326c165eeeb95bb4b5b9961f07f2b935e592b5aac9a51a32"} Dec 06 05:28:11 crc kubenswrapper[4958]: I1206 05:28:11.792209 4958 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="4001890a8c107a9365e884bae7b1d10f11ea1208ada74b61dcd4cf3db767f3fb" exitCode=0 Dec 06 05:28:11 crc kubenswrapper[4958]: I1206 05:28:11.792318 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"4001890a8c107a9365e884bae7b1d10f11ea1208ada74b61dcd4cf3db767f3fb"} Dec 06 05:28:11 crc kubenswrapper[4958]: I1206 05:28:11.792391 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 05:28:11 crc kubenswrapper[4958]: I1206 05:28:11.794551 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:11 crc kubenswrapper[4958]: I1206 05:28:11.794605 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:11 crc kubenswrapper[4958]: I1206 05:28:11.794625 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:11 crc kubenswrapper[4958]: I1206 05:28:11.796459 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 05:28:11 crc kubenswrapper[4958]: I1206 05:28:11.797268 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:11 crc kubenswrapper[4958]: I1206 05:28:11.797301 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:11 crc kubenswrapper[4958]: I1206 05:28:11.797313 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:11 crc kubenswrapper[4958]: I1206 05:28:11.798005 4958 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="4460d6ee5ad193a5fdd65c4ddf2e2837db3e1a0a5f8f32b84ad83cc74cc6c779" exitCode=0 Dec 06 05:28:11 crc kubenswrapper[4958]: I1206 05:28:11.798064 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"4460d6ee5ad193a5fdd65c4ddf2e2837db3e1a0a5f8f32b84ad83cc74cc6c779"} Dec 06 05:28:11 crc kubenswrapper[4958]: I1206 05:28:11.798160 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 05:28:11 crc kubenswrapper[4958]: I1206 05:28:11.800912 4958 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="7af93ebbf44bd284a93e374862eb9b1f0ca1e2426eb88472157c5dcabf8c63b3" exitCode=0 Dec 06 05:28:11 crc kubenswrapper[4958]: I1206 05:28:11.800983 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"7af93ebbf44bd284a93e374862eb9b1f0ca1e2426eb88472157c5dcabf8c63b3"} Dec 06 05:28:11 crc kubenswrapper[4958]: I1206 05:28:11.801065 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 05:28:11 crc 
kubenswrapper[4958]: I1206 05:28:11.802172 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:11 crc kubenswrapper[4958]: I1206 05:28:11.802208 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:11 crc kubenswrapper[4958]: I1206 05:28:11.802225 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:11 crc kubenswrapper[4958]: I1206 05:28:11.803295 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:11 crc kubenswrapper[4958]: I1206 05:28:11.803336 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:11 crc kubenswrapper[4958]: I1206 05:28:11.803352 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:11 crc kubenswrapper[4958]: I1206 05:28:11.804810 4958 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="aabbafda3fdfc2434de99224b35e72d2d536ed8b4af7dbdfa16058ab7f638531" exitCode=0 Dec 06 05:28:11 crc kubenswrapper[4958]: I1206 05:28:11.804847 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"aabbafda3fdfc2434de99224b35e72d2d536ed8b4af7dbdfa16058ab7f638531"} Dec 06 05:28:11 crc kubenswrapper[4958]: I1206 05:28:11.804954 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 05:28:11 crc kubenswrapper[4958]: I1206 05:28:11.805703 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:11 crc kubenswrapper[4958]: I1206 05:28:11.805761 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:11 crc kubenswrapper[4958]: I1206 05:28:11.805772 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:12 crc kubenswrapper[4958]: I1206 05:28:12.810087 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"15f70e4124ff7d2705704dab6bf80d6ee77c2181e9bb400f7af227e9c18f0d20"} Dec 06 05:28:12 crc kubenswrapper[4958]: I1206 05:28:12.810155 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"bfa163f9e3eb5b2418860c0fa7fcbf96990229ac20da3d72cf18d3c326f466b4"} Dec 06 05:28:12 crc kubenswrapper[4958]: I1206 05:28:12.810169 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"2ab8b97e84e2f0ddf99e989d15f84695ae263c722141d62885129f9eb48226a5"} Dec 06 05:28:12 crc kubenswrapper[4958]: I1206 05:28:12.810288 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 05:28:12 crc kubenswrapper[4958]: I1206 05:28:12.811289 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 06 05:28:12 crc kubenswrapper[4958]: I1206 05:28:12.811319 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:12 crc kubenswrapper[4958]: I1206 05:28:12.811328 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:12 crc kubenswrapper[4958]: I1206 05:28:12.818230 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 05:28:12 crc kubenswrapper[4958]: I1206 05:28:12.818704 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"bc69c938c136778135bd2a90d1671af107b76c4571f8e52327c98f120665d7b7"} Dec 06 05:28:12 crc kubenswrapper[4958]: I1206 05:28:12.819090 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:12 crc kubenswrapper[4958]: I1206 05:28:12.819112 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:12 crc kubenswrapper[4958]: I1206 05:28:12.819122 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:12 crc kubenswrapper[4958]: I1206 05:28:12.822189 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"9b2ffa013378c2ca81671e521b37d30af422fd6a418d17e7c2ca14b2da5557d9"} Dec 06 05:28:12 crc kubenswrapper[4958]: I1206 05:28:12.822218 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a49f9fd54866385a2dd4c0218a2b5a828817807c2688301ea29004ba0578d556"} Dec 06 05:28:12 crc kubenswrapper[4958]: I1206 05:28:12.822229 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"799383b7dc23318bd8bdb994e83afadc90864baf5e694f5a7ef2208c6771660c"} Dec 06 05:28:12 crc kubenswrapper[4958]: I1206 05:28:12.822241 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e1ac8c174d2f2cc2b18fa67b1969e4ec3130a7a3ea2e1de302b6607334c8295a"} Dec 06 05:28:12 crc kubenswrapper[4958]: I1206 05:28:12.826974 4958 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="2d0adb65980c4e04df9b38b248a2d6bf58ac214ec2fe6ee6f8fbee38dc114904" exitCode=0 Dec 06 05:28:12 crc kubenswrapper[4958]: I1206 05:28:12.827048 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"2d0adb65980c4e04df9b38b248a2d6bf58ac214ec2fe6ee6f8fbee38dc114904"} Dec 06 05:28:12 crc kubenswrapper[4958]: I1206 05:28:12.827222 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 05:28:12 crc kubenswrapper[4958]: I1206 05:28:12.828545 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:12 
crc kubenswrapper[4958]: I1206 05:28:12.828593 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:12 crc kubenswrapper[4958]: I1206 05:28:12.828606 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:12 crc kubenswrapper[4958]: I1206 05:28:12.831854 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"74f69d05943422b54bfdaf337dabe66d5a16adff7bcd0aec89231dfaedb42ade"} Dec 06 05:28:12 crc kubenswrapper[4958]: I1206 05:28:12.831927 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 05:28:12 crc kubenswrapper[4958]: I1206 05:28:12.836642 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:12 crc kubenswrapper[4958]: I1206 05:28:12.836696 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:12 crc kubenswrapper[4958]: I1206 05:28:12.836708 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:12 crc kubenswrapper[4958]: I1206 05:28:12.923170 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 05:28:12 crc kubenswrapper[4958]: I1206 05:28:12.925064 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:12 crc kubenswrapper[4958]: I1206 05:28:12.925128 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:12 crc kubenswrapper[4958]: I1206 05:28:12.925145 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:12 crc kubenswrapper[4958]: I1206 05:28:12.925183 4958 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 06 05:28:13 crc kubenswrapper[4958]: I1206 05:28:13.052955 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 06 05:28:13 crc kubenswrapper[4958]: I1206 05:28:13.836944 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"11c9829c79fbd1ef57f374e8bf829162c2ae48074f4d02cc609d5f9b8791c61a"} Dec 06 05:28:13 crc kubenswrapper[4958]: I1206 05:28:13.837014 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 05:28:13 crc kubenswrapper[4958]: I1206 05:28:13.838362 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:13 crc kubenswrapper[4958]: I1206 05:28:13.838486 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:13 crc kubenswrapper[4958]: I1206 05:28:13.838558 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:13 crc kubenswrapper[4958]: I1206 05:28:13.839092 4958 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" 
containerID="b4a1b86d7480be2b839295c4c7dc60abd79e17d12514d7a7f90453498dd586e5" exitCode=0 Dec 06 05:28:13 crc kubenswrapper[4958]: I1206 05:28:13.839166 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 05:28:13 crc kubenswrapper[4958]: I1206 05:28:13.839209 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 05:28:13 crc kubenswrapper[4958]: I1206 05:28:13.839325 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 05:28:13 crc kubenswrapper[4958]: I1206 05:28:13.839324 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"b4a1b86d7480be2b839295c4c7dc60abd79e17d12514d7a7f90453498dd586e5"} Dec 06 05:28:13 crc kubenswrapper[4958]: I1206 05:28:13.839331 4958 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 06 05:28:13 crc kubenswrapper[4958]: I1206 05:28:13.839674 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 05:28:13 crc kubenswrapper[4958]: I1206 05:28:13.839823 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:13 crc kubenswrapper[4958]: I1206 05:28:13.839854 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:13 crc kubenswrapper[4958]: I1206 05:28:13.839865 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:13 crc kubenswrapper[4958]: I1206 05:28:13.840648 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:13 crc kubenswrapper[4958]: I1206 05:28:13.840671 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:13 crc kubenswrapper[4958]: I1206 05:28:13.840682 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:13 crc kubenswrapper[4958]: I1206 05:28:13.840658 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:13 crc kubenswrapper[4958]: I1206 05:28:13.841103 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:13 crc kubenswrapper[4958]: I1206 05:28:13.841186 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:13 crc kubenswrapper[4958]: I1206 05:28:13.841584 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:13 crc kubenswrapper[4958]: I1206 05:28:13.841610 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:13 crc kubenswrapper[4958]: I1206 05:28:13.841623 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:13 crc kubenswrapper[4958]: I1206 05:28:13.957854 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 05:28:14 crc kubenswrapper[4958]: I1206 05:28:14.849444 4958 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"c0fa5b094aed0445fa892dd4d08fa83de565bda82c6ca9b9d536a3275df19b2e"} Dec 06 05:28:14 crc kubenswrapper[4958]: I1206 05:28:14.849566 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 05:28:14 crc kubenswrapper[4958]: I1206 05:28:14.849685 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 05:28:14 crc kubenswrapper[4958]: I1206 05:28:14.849569 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"f7fa2e5229064a9479f641c0c6d934e8eda769ee88be13401c43ae136ce8bc24"} Dec 06 05:28:14 crc kubenswrapper[4958]: I1206 05:28:14.850599 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 05:28:14 crc kubenswrapper[4958]: I1206 05:28:14.850626 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"0c605d49963ffec3a03f9166f2a4b945adca772c1e6f9ef0c43e3b43498bb520"} Dec 06 05:28:14 crc kubenswrapper[4958]: I1206 05:28:14.850656 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"3bb5270cbb6f2e29e1393dc0724b38332f5d2b7bd2947352c848a22efbcd979c"} Dec 06 05:28:14 crc kubenswrapper[4958]: I1206 05:28:14.851167 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:14 crc kubenswrapper[4958]: I1206 05:28:14.851244 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:14 crc kubenswrapper[4958]: I1206 05:28:14.851271 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:14 crc kubenswrapper[4958]: I1206 05:28:14.852019 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:14 crc kubenswrapper[4958]: I1206 05:28:14.852081 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:14 crc kubenswrapper[4958]: I1206 05:28:14.852099 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:15 crc kubenswrapper[4958]: I1206 05:28:15.864324 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"9356ce9f50a037c7bbb99085c8d3a1cf6648fca53bed4eb86d1be5cb82386bfe"} Dec 06 05:28:15 crc kubenswrapper[4958]: I1206 05:28:15.864445 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 05:28:15 crc kubenswrapper[4958]: I1206 05:28:15.864372 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 05:28:15 crc kubenswrapper[4958]: I1206 05:28:15.867801 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:15 crc kubenswrapper[4958]: I1206 05:28:15.868033 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 06 05:28:15 crc kubenswrapper[4958]: I1206 05:28:15.868243 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:15 crc kubenswrapper[4958]: I1206 05:28:15.870230 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:15 crc kubenswrapper[4958]: I1206 05:28:15.870446 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:15 crc kubenswrapper[4958]: I1206 05:28:15.870683 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:16 crc kubenswrapper[4958]: I1206 05:28:16.867844 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 05:28:16 crc kubenswrapper[4958]: I1206 05:28:16.869164 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:16 crc kubenswrapper[4958]: I1206 05:28:16.869372 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:16 crc kubenswrapper[4958]: I1206 05:28:16.869582 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:16 crc kubenswrapper[4958]: I1206 05:28:16.965932 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 05:28:16 crc kubenswrapper[4958]: I1206 05:28:16.966181 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 05:28:16 crc kubenswrapper[4958]: I1206 05:28:16.967745 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:16 crc kubenswrapper[4958]: I1206 05:28:16.967798 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:16 crc kubenswrapper[4958]: I1206 05:28:16.967822 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:18 crc kubenswrapper[4958]: I1206 05:28:18.336441 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 06 05:28:18 crc kubenswrapper[4958]: I1206 05:28:18.336794 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 05:28:18 crc kubenswrapper[4958]: I1206 05:28:18.338787 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:18 crc kubenswrapper[4958]: I1206 05:28:18.338829 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:18 crc kubenswrapper[4958]: I1206 05:28:18.338841 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:18 crc kubenswrapper[4958]: I1206 05:28:18.730281 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Dec 06 05:28:18 crc kubenswrapper[4958]: I1206 05:28:18.730619 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 05:28:18 crc kubenswrapper[4958]: I1206 05:28:18.732523 4958 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:18 crc kubenswrapper[4958]: I1206 05:28:18.732577 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:18 crc kubenswrapper[4958]: I1206 05:28:18.732593 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:19 crc kubenswrapper[4958]: E1206 05:28:19.819438 4958 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 06 05:28:20 crc kubenswrapper[4958]: I1206 05:28:20.621180 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 06 05:28:20 crc kubenswrapper[4958]: I1206 05:28:20.621509 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 05:28:20 crc kubenswrapper[4958]: I1206 05:28:20.623227 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:20 crc kubenswrapper[4958]: I1206 05:28:20.623279 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:20 crc kubenswrapper[4958]: I1206 05:28:20.623293 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:21 crc kubenswrapper[4958]: I1206 05:28:21.520611 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 06 05:28:21 crc kubenswrapper[4958]: I1206 05:28:21.521188 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 05:28:21 crc kubenswrapper[4958]: I1206 05:28:21.526731 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:21 crc kubenswrapper[4958]: I1206 05:28:21.526828 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:21 crc kubenswrapper[4958]: I1206 05:28:21.526849 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:21 crc kubenswrapper[4958]: I1206 05:28:21.836416 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Dec 06 05:28:21 crc kubenswrapper[4958]: I1206 05:28:21.836776 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 05:28:21 crc kubenswrapper[4958]: I1206 05:28:21.838691 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:21 crc kubenswrapper[4958]: I1206 05:28:21.838777 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:21 crc kubenswrapper[4958]: I1206 05:28:21.838809 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:21 crc kubenswrapper[4958]: I1206 05:28:21.992517 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 06 05:28:21 crc kubenswrapper[4958]: I1206 05:28:21.992703 4958 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 05:28:21 crc kubenswrapper[4958]: I1206 05:28:21.994886 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:21 crc kubenswrapper[4958]: I1206 05:28:21.994967 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:21 crc kubenswrapper[4958]: I1206 05:28:21.994986 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:21 crc kubenswrapper[4958]: I1206 05:28:21.998560 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 06 05:28:22 crc kubenswrapper[4958]: W1206 05:28:22.634081 4958 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout Dec 06 05:28:22 crc kubenswrapper[4958]: I1206 05:28:22.634219 4958 trace.go:236] Trace[1217508581]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (06-Dec-2025 05:28:12.633) (total time: 10001ms): Dec 06 05:28:22 crc kubenswrapper[4958]: Trace[1217508581]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10000ms (05:28:22.634) Dec 06 05:28:22 crc kubenswrapper[4958]: Trace[1217508581]: [10.001044675s] [10.001044675s] END Dec 06 05:28:22 crc kubenswrapper[4958]: E1206 05:28:22.634252 4958 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Dec 06 05:28:22 crc kubenswrapper[4958]: I1206 05:28:22.685872 4958 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Dec 06 05:28:22 crc kubenswrapper[4958]: E1206 05:28:22.697616 4958 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="3.2s" Dec 06 05:28:22 crc kubenswrapper[4958]: I1206 05:28:22.890565 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 05:28:22 crc kubenswrapper[4958]: I1206 05:28:22.893452 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:22 crc kubenswrapper[4958]: I1206 05:28:22.893524 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:22 crc kubenswrapper[4958]: I1206 05:28:22.893536 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:22 crc kubenswrapper[4958]: I1206 05:28:22.894629 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 06 05:28:22 crc kubenswrapper[4958]: E1206 05:28:22.926846 4958 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc" Dec 06 05:28:23 crc kubenswrapper[4958]: W1206 05:28:23.308975 4958 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Dec 06 05:28:23 crc kubenswrapper[4958]: I1206 05:28:23.309065 4958 trace.go:236] Trace[1811403693]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (06-Dec-2025 05:28:13.307) (total time: 10001ms): Dec 06 05:28:23 crc kubenswrapper[4958]: Trace[1811403693]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (05:28:23.308) Dec 06 05:28:23 crc kubenswrapper[4958]: Trace[1811403693]: [10.001759689s] [10.001759689s] END Dec 06 05:28:23 crc kubenswrapper[4958]: E1206 05:28:23.309087 4958 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Dec 06 05:28:23 crc kubenswrapper[4958]: I1206 05:28:23.621292 4958 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 06 05:28:23 crc kubenswrapper[4958]: I1206 05:28:23.621375 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 06 05:28:23 crc kubenswrapper[4958]: W1206 05:28:23.785596 4958 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout Dec 06 05:28:23 crc kubenswrapper[4958]: I1206 05:28:23.785769 4958 trace.go:236] Trace[233324857]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (06-Dec-2025 05:28:13.783) (total time: 10002ms): Dec 06 05:28:23 crc kubenswrapper[4958]: Trace[233324857]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (05:28:23.785) Dec 06 05:28:23 crc kubenswrapper[4958]: Trace[233324857]: [10.002074225s] [10.002074225s] END Dec 06 05:28:23 crc kubenswrapper[4958]: E1206 05:28:23.785835 4958 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Dec 06 05:28:23 crc kubenswrapper[4958]: I1206 05:28:23.888675 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 05:28:23 crc kubenswrapper[4958]: I1206 05:28:23.890015 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:23 crc kubenswrapper[4958]: I1206 05:28:23.890081 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:23 crc kubenswrapper[4958]: I1206 05:28:23.890100 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:23 crc kubenswrapper[4958]: I1206 05:28:23.958258 4958 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="Get \"https://192.168.126.11:6443/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 06 05:28:23 crc kubenswrapper[4958]: I1206 05:28:23.958362 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 06 05:28:23 crc kubenswrapper[4958]: W1206 05:28:23.971197 4958 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout Dec 06 05:28:23 crc kubenswrapper[4958]: I1206 05:28:23.971292 4958 trace.go:236] Trace[902925384]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (06-Dec-2025 05:28:13.969) (total time: 10001ms): Dec 06 05:28:23 crc kubenswrapper[4958]: Trace[902925384]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (05:28:23.971) Dec 06 05:28:23 crc kubenswrapper[4958]: Trace[902925384]: [10.001486831s] [10.001486831s] END Dec 06 05:28:23 crc kubenswrapper[4958]: E1206 05:28:23.971312 4958 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Dec 06 05:28:24 crc kubenswrapper[4958]: I1206 05:28:24.260144 4958 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 06 05:28:24 crc kubenswrapper[4958]: I1206 05:28:24.260237 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" 
probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 06 05:28:26 crc kubenswrapper[4958]: I1206 05:28:26.127785 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 05:28:26 crc kubenswrapper[4958]: I1206 05:28:26.129521 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:26 crc kubenswrapper[4958]: I1206 05:28:26.129603 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:26 crc kubenswrapper[4958]: I1206 05:28:26.129628 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:26 crc kubenswrapper[4958]: I1206 05:28:26.129667 4958 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 06 05:28:26 crc kubenswrapper[4958]: E1206 05:28:26.135314 4958 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Dec 06 05:28:27 crc kubenswrapper[4958]: I1206 05:28:27.318430 4958 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Dec 06 05:28:27 crc kubenswrapper[4958]: I1206 05:28:27.686279 4958 apiserver.go:52] "Watching apiserver" Dec 06 05:28:27 crc kubenswrapper[4958]: I1206 05:28:27.690674 4958 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Dec 06 05:28:27 crc kubenswrapper[4958]: I1206 05:28:27.692688 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"] Dec 06 05:28:27 crc kubenswrapper[4958]: I1206 05:28:27.694755 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 05:28:27 crc kubenswrapper[4958]: E1206 05:28:27.694916 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 05:28:27 crc kubenswrapper[4958]: I1206 05:28:27.695098 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 06 05:28:27 crc kubenswrapper[4958]: I1206 05:28:27.695197 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 05:28:27 crc kubenswrapper[4958]: I1206 05:28:27.695411 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 06 05:28:27 crc kubenswrapper[4958]: E1206 05:28:27.695505 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 05:28:27 crc kubenswrapper[4958]: I1206 05:28:27.695510 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 06 05:28:27 crc kubenswrapper[4958]: I1206 05:28:27.695718 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 05:28:27 crc kubenswrapper[4958]: E1206 05:28:27.697129 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 05:28:27 crc kubenswrapper[4958]: I1206 05:28:27.698758 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Dec 06 05:28:27 crc kubenswrapper[4958]: I1206 05:28:27.698777 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Dec 06 05:28:27 crc kubenswrapper[4958]: I1206 05:28:27.700452 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Dec 06 05:28:27 crc kubenswrapper[4958]: I1206 05:28:27.700720 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Dec 06 05:28:27 crc kubenswrapper[4958]: I1206 05:28:27.700781 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Dec 06 05:28:27 crc kubenswrapper[4958]: I1206 05:28:27.700882 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Dec 06 05:28:27 crc kubenswrapper[4958]: I1206 05:28:27.700994 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Dec 06 05:28:27 crc kubenswrapper[4958]: I1206 05:28:27.701058 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Dec 06 05:28:27 crc kubenswrapper[4958]: I1206 05:28:27.702445 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Dec 06 05:28:27 crc kubenswrapper[4958]: I1206 05:28:27.765719 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:27 crc kubenswrapper[4958]: I1206 05:28:27.786834 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:27 crc kubenswrapper[4958]: I1206 05:28:27.791313 4958 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 06 05:28:27 crc kubenswrapper[4958]: I1206 05:28:27.798105 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:27 crc kubenswrapper[4958]: I1206 05:28:27.811298 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:27 crc kubenswrapper[4958]: I1206 05:28:27.827183 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:27 crc kubenswrapper[4958]: I1206 05:28:27.841997 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:27 crc kubenswrapper[4958]: I1206 05:28:27.855358 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:28 crc kubenswrapper[4958]: I1206 05:28:28.850236 4958 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Dec 06 05:28:28 crc kubenswrapper[4958]: I1206 05:28:28.964750 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 05:28:28 crc kubenswrapper[4958]: I1206 05:28:28.971719 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 05:28:28 crc kubenswrapper[4958]: I1206 05:28:28.978184 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:28 crc kubenswrapper[4958]: I1206 05:28:28.979144 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 06 05:28:28 crc kubenswrapper[4958]: I1206 05:28:28.993600 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.003011 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.018802 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.029928 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.042377 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.052268 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bbf3a5d-dfb7-4f26-a3a5-5a1198cffc78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ac8c174d2f2cc2b18fa67b1969e4ec3130a7a3ea2e1de302b6607334c8295a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a49f9fd54866385a2dd4c0218a2b5a828817807c2688301ea29004ba0578d556\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://799383b7dc23318bd8bdb994e83afadc90864baf5e694f5a7ef2208c6771660c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c9829c79fbd1ef57f374e8bf829162c2ae48074f4d02cc609d5f9b8791c61a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resourc
e-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b2ffa013378c2ca81671e521b37d30af422fd6a418d17e7c2ca14b2da5557d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4001890a8c107a9365e884bae7b1d10f11ea1208ada74b61dcd4cf3db767f3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4001890a8c107a9365e884bae7b1d10f11ea1208ada74b61dcd4cf3db767f3fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.065938 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.078976 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.093367 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.103798 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.116531 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.131101 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.254667 4958 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.286192 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.343198 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.355754 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.355809 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.355835 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.355856 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.355879 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.355902 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.355928 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.355947 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.355968 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.355988 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.356009 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.356030 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.356052 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.356075 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.356096 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.356117 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.356138 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.356159 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.356222 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.356242 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.356284 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.356304 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.356330 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.356350 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.356370 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.356391 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.356420 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.356439 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.356463 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.356503 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.356526 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.356550 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.356575 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.356597 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.356619 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.356639 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.356664 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.356685 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.356710 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.356730 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.356748 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.356781 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.356804 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") "
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.356825 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.356845 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.356867 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.356891 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.356911 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.356960 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.356982 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.357002 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.357023 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.357042 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.357062 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.357087 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.357108 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") "
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.357131 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.357152 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.357175 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.357200 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.357221 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.357244 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.357269 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.357317 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.357340 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.357362 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.357402 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.357426 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.357455 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") "
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.357494 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.357519 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.357540 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.357562 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.357582 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.357603 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.357626 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") "
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.357647 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.357669 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.357690 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.357711 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") "
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.357731 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.357754 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.357775 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.357795 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.357817 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.357837 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.357861 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.357886 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.357907 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.357927 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.357947 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.357978 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.358000 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.358023 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.358048 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.358072 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.358093 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.358115 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.358136 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.358159 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.356383 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.356380 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.356423 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.356622 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.356642 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.356667 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.356865 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.356904 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.356979 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.357141 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.357172 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.357221 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.357368 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.357579 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.357601 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.358349 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.357646 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.357770 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.357795 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.357815 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.357957 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.358166 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.358436 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.358600 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.358795 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.358801 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.358847 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.358996 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.358996 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.359099 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.359135 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.359224 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.359489 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.359523 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.359534 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.359741 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.359757 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.360004 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.360072 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.360251 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.360354 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.360401 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.360549 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.360655 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.360702 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.360718 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.360813 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.360848 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.360948 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.361070 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.361091 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.361191 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.361191 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.361287 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.362012 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.358181 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.363324 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.363370 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.363407 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.363441 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.363507 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.363535 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.363569 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.363602 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.363628 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod 
\"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.363661 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.363702 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.363735 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.363769 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.363801 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.363833 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.363859 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.363891 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.363921 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.363951 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") 
pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.363982 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.364013 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.364039 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.364070 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.364106 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.364150 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.364183 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.364217 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.364251 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.364282 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: 
\"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.364313 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.364341 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.364367 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.364396 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.364427 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.364458 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.364498 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.364546 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.364579 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.364607 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: 
\"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.364641 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.364673 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.364778 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.364813 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.364845 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.364872 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.364903 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.364934 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.364962 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.364994 4958 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.365027 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.365058 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.365087 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.365117 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.365163 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.365197 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.365232 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.365266 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.365294 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 06 05:28:29 crc 
kubenswrapper[4958]: I1206 05:28:29.365327 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.365358 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.365390 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.365422 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.365452 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.366934 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.366979 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.367027 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.367059 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.367095 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: 
\"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.367124 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.369539 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.369584 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.369608 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.369641 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.369673 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.369699 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.369729 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.369759 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.369792 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: 
\"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.369820 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.369882 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.369919 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.369950 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.369981 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.370012 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.370042 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.370069 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.370115 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.370147 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.370182 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.370229 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.370328 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.370365 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.370395 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.370436 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.370574 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.370624 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.370649 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " 
pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.370682 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.370714 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.370742 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.370768 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.370804 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.370837 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.370869 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.370911 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.370940 4958 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.370977 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.371013 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.371151 4958 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.371171 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.371192 4958 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.371206 4958 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.371221 4958 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.371241 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.371259 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.371274 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.371289 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath 
\"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.371310 4958 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.371325 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.371338 4958 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.371352 4958 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.371371 4958 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.371384 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.371398 4958 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.371413 4958 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.371431 4958 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.371446 4958 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.371503 4958 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.371520 4958 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.371538 4958 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on 
node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.371554 4958 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.371568 4958 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.371588 4958 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.371603 4958 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.371618 4958 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.371633 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.371651 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.371665 4958 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.371680 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.371694 4958 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.371712 4958 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.371726 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.371740 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath 
\"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.371756 4958 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.371776 4958 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.371790 4958 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.371804 4958 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.371822 4958 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.371839 4958 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.371852 4958 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.371865 4958 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.371885 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.371900 4958 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.371913 4958 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.371929 4958 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.371948 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.371963 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.372876 4958 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.372897 4958 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.372926 4958 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.372942 4958 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.372954 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.372965 4958 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.362446 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.362872 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.362912 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.363188 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.363248 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.363647 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.363652 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.363795 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.363960 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.364105 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.364124 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.364499 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.364758 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.364827 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.364961 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.364988 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.365151 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.365114 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.365216 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.365311 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.365465 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.365502 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.365461 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.365747 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.365823 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.372682 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). 
InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.373241 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.372957 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.373549 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.373674 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.373749 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.374148 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.374348 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.374543 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.374648 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.374707 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.374864 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.375057 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.373487 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.375981 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.376282 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.381679 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.381910 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.382086 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.382371 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.382732 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.382964 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.382976 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.384710 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). 
InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.385434 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.385522 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.386127 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.386616 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.386627 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.387096 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.387104 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.387541 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.388199 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.390453 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.390856 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.391112 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.391405 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.391651 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.391810 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). 
InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.392019 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.392156 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.392302 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.392809 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.392993 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.393047 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.393224 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.386920 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.393551 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bbf3a5d-dfb7-4f26-a3a5-5a1198cffc78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ac8c174d2f2cc2b18fa67b1969e4ec3130a7a3ea2e1de302b6607334c8295a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a49f9fd54866385a2dd4c0218a2b5a828817807c2688301ea29004ba0578d556\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://799383b7dc23318bd8bdb994e83afadc90864baf5e694f5a7ef2208c6771660c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":
\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c9829c79fbd1ef57f374e8bf829162c2ae48074f4d02cc609d5f9b8791c61a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b2ffa013378c2ca81671e521b37d30af422fd6a418d17e7c2ca14b2da5557d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4001890a8c107a9365e884bae7b1d10f11ea1208ada74b61dcd4cf3db767f3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4001890a8c107a9365e884bae7b1d10f11ea1208ada74b61dcd4cf3db767f3fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.393777 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.393881 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.394012 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.394515 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.398792 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.398868 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.400869 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.401592 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.401945 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.401958 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.402219 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.402971 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.403839 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.404417 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.404544 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.404714 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.404825 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.404989 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.405251 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.405420 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.406728 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.407390 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.407692 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.409964 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.412438 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.412526 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.412838 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.412926 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.412993 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.413237 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.413323 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.413350 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). 
InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: E1206 05:28:29.416710 4958 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.417795 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.417936 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.417946 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.417954 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.418072 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: E1206 05:28:29.418637 4958 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.419772 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.419829 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.419890 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.419896 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.420249 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.420283 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.420749 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.420760 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.420831 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 06 05:28:29 crc kubenswrapper[4958]: E1206 05:28:29.421053 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-06 05:28:29.920512495 +0000 UTC m=+20.454283258 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.421121 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.421509 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.421589 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 06 05:28:29 crc kubenswrapper[4958]: E1206 05:28:29.421632 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-06 05:28:29.921608953 +0000 UTC m=+20.455379716 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.421663 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.421711 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.421779 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.421966 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: E1206 05:28:29.422074 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 05:28:29.922050346 +0000 UTC m=+20.455821109 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.422085 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.422554 4958 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.423298 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.424258 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.426720 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.427226 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.429673 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.430660 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.430806 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.431822 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.431910 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.432850 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.439678 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: E1206 05:28:29.440854 4958 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 06 05:28:29 crc kubenswrapper[4958]: E1206 05:28:29.440903 4958 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 06 05:28:29 crc kubenswrapper[4958]: E1206 05:28:29.440923 4958 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.440955 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: E1206 05:28:29.441005 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-06 05:28:29.940978733 +0000 UTC m=+20.474749496 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.440993 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.444132 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.446673 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.447282 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.449555 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 06 05:28:29 crc kubenswrapper[4958]: E1206 05:28:29.450022 4958 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 06 05:28:29 crc kubenswrapper[4958]: E1206 05:28:29.450666 4958 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 06 05:28:29 crc kubenswrapper[4958]: E1206 05:28:29.450756 4958 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.450823 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" 
(UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 06 05:28:29 crc kubenswrapper[4958]: E1206 05:28:29.450953 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-06 05:28:29.950926474 +0000 UTC m=+20.484697437 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.452785 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.454373 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.464143 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.466614 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.474559 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.474770 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.474881 4958 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.474951 4958 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.475008 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.475080 4958 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.475277 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.475354 4958 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.475409 4958 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.475521 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.475602 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.475691 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.475766 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.475841 4958 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.475899 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.475950 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.476002 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.476060 4958 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.476112 4958 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.476163 4958 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.476230 4958 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.476302 4958 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.476375 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.476459 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.476558 4958 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.476626 4958 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.476694 4958 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.476766 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.476850 4958 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.476931 4958 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.477013 4958 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.477093 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: 
\"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.477170 4958 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.477261 4958 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.477347 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.477426 4958 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.477526 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.477602 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.477678 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.477754 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.477842 4958 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.477916 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.478001 4958 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.478077 4958 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.478147 4958 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.478207 4958 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.478264 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.478338 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.478399 4958 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.478456 4958 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.478539 4958 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.478614 4958 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.478672 4958 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.478726 4958 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.478779 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.478829 4958 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.478881 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.478937 4958 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.478995 4958 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.479075 4958 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.479155 4958 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.479232 4958 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.479313 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.479386 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.479492 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.479579 4958 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.479653 4958 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.479725 4958 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.479802 4958 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.479875 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.479949 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: 
\"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.480028 4958 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.480122 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.480212 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.480323 4958 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.480408 4958 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.480515 4958 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.480604 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.480683 4958 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.480772 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.480846 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.480922 4958 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.480999 4958 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.481071 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: 
\"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.481142 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.481226 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.481302 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.481376 4958 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.481492 4958 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.481579 4958 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.481644 4958 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.481708 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.481780 4958 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.481861 4958 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.481940 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.482018 4958 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.482105 4958 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.482171 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.482241 4958 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.482321 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.482390 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.482463 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.482581 4958 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.482660 4958 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.482738 4958 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.482823 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.482892 4958 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.482951 4958 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.483004 4958 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.483055 4958 reconciler_common.go:293] "Volume detached for 
volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.483106 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.483157 4958 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.483212 4958 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.483269 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.483323 4958 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.483376 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.483434 4958 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.483552 4958 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.483611 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.483676 4958 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.483740 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.483795 4958 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.483847 4958 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.483915 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.483981 4958 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.484036 4958 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.484112 4958 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.484182 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.484240 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.484335 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.484406 4958 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.484500 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.484585 4958 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.484669 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.484750 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.484828 4958 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: 
\"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.484914 4958 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.479993 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.475797 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.478523 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.484988 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.485258 4958 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.485282 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.475610 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.500848 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.514551 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.520055 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.521086 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.529856 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 06 05:28:29 crc kubenswrapper[4958]: E1206 05:28:29.535061 4958 kuberuntime_manager.go:1274] "Unhandled Error" err=< Dec 06 05:28:29 crc kubenswrapper[4958]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Dec 06 05:28:29 crc kubenswrapper[4958]: set -o allexport Dec 06 05:28:29 crc kubenswrapper[4958]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Dec 06 05:28:29 crc kubenswrapper[4958]: source /etc/kubernetes/apiserver-url.env Dec 06 05:28:29 crc kubenswrapper[4958]: else Dec 06 05:28:29 crc kubenswrapper[4958]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Dec 06 05:28:29 crc kubenswrapper[4958]: exit 1 Dec 06 05:28:29 crc kubenswrapper[4958]: fi Dec 06 05:28:29 crc kubenswrapper[4958]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Dec 06 05:28:29 crc kubenswrapper[4958]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value
:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 06 05:28:29 crc kubenswrapper[4958]: > logger="UnhandledError" Dec 06 05:28:29 crc kubenswrapper[4958]: E1206 05:28:29.536873 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Dec 06 05:28:29 crc kubenswrapper[4958]: E1206 05:28:29.549971 4958 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.550156 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-5ktnh"] Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.550640 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-5mx5v"] Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.550753 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" Dec 06 05:28:29 crc kubenswrapper[4958]: E1206 05:28:29.550279 4958 kuberuntime_manager.go:1274] "Unhandled Error" err=< Dec 06 05:28:29 crc kubenswrapper[4958]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Dec 06 05:28:29 crc kubenswrapper[4958]: if [[ -f "/env/_master" ]]; then Dec 06 05:28:29 crc kubenswrapper[4958]: set -o allexport Dec 06 05:28:29 crc kubenswrapper[4958]: source "/env/_master" Dec 06 05:28:29 crc kubenswrapper[4958]: set +o allexport Dec 06 05:28:29 crc kubenswrapper[4958]: fi Dec 06 05:28:29 crc kubenswrapper[4958]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. 
Dec 06 05:28:29 crc kubenswrapper[4958]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Dec 06 05:28:29 crc kubenswrapper[4958]: ho_enable="--enable-hybrid-overlay" Dec 06 05:28:29 crc kubenswrapper[4958]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Dec 06 05:28:29 crc kubenswrapper[4958]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Dec 06 05:28:29 crc kubenswrapper[4958]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Dec 06 05:28:29 crc kubenswrapper[4958]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Dec 06 05:28:29 crc kubenswrapper[4958]: --webhook-cert-dir="/etc/webhook-cert" \ Dec 06 05:28:29 crc kubenswrapper[4958]: --webhook-host=127.0.0.1 \ Dec 06 05:28:29 crc kubenswrapper[4958]: --webhook-port=9743 \ Dec 06 05:28:29 crc kubenswrapper[4958]: ${ho_enable} \ Dec 06 05:28:29 crc kubenswrapper[4958]: --enable-interconnect \ Dec 06 05:28:29 crc kubenswrapper[4958]: --disable-approver \ Dec 06 05:28:29 crc kubenswrapper[4958]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Dec 06 05:28:29 crc kubenswrapper[4958]: --wait-for-kubernetes-api=200s \ Dec 06 05:28:29 crc kubenswrapper[4958]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Dec 06 05:28:29 crc kubenswrapper[4958]: --loglevel="${LOGLEVEL}" Dec 06 05:28:29 crc kubenswrapper[4958]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct 
envvars Dec 06 05:28:29 crc kubenswrapper[4958]: > logger="UnhandledError" Dec 06 05:28:29 crc kubenswrapper[4958]: E1206 05:28:29.551436 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.551702 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-5mx5v" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.553049 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.553165 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.553297 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.553419 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.553486 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.553591 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.553625 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.553598 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Dec 06 05:28:29 crc kubenswrapper[4958]: E1206 05:28:29.556228 4958 kuberuntime_manager.go:1274] "Unhandled Error" err=< Dec 06 05:28:29 crc kubenswrapper[4958]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Dec 06 05:28:29 crc kubenswrapper[4958]: if [[ -f "/env/_master" ]]; then Dec 06 05:28:29 crc kubenswrapper[4958]: set -o allexport Dec 06 05:28:29 crc kubenswrapper[4958]: source "/env/_master" Dec 06 05:28:29 crc kubenswrapper[4958]: set +o allexport Dec 06 05:28:29 crc kubenswrapper[4958]: fi Dec 06 05:28:29 crc kubenswrapper[4958]: Dec 06 05:28:29 crc kubenswrapper[4958]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Dec 06 05:28:29 crc kubenswrapper[4958]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Dec 06 05:28:29 crc kubenswrapper[4958]: --disable-webhook \ Dec 06 05:28:29 crc kubenswrapper[4958]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Dec 06 05:28:29 crc kubenswrapper[4958]: --loglevel="${LOGLEVEL}" Dec 06 05:28:29 crc kubenswrapper[4958]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 
-3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 06 05:28:29 crc kubenswrapper[4958]: > logger="UnhandledError" Dec 06 05:28:29 crc kubenswrapper[4958]: E1206 05:28:29.557282 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.565197 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c13528c0-da5d-4d55-9155-2c29c33edfc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5ktnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.582266 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.585749 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.592304 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.601065 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.609868 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.617141 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.630081 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bbf3a5d-dfb7-4f26-a3a5-5a1198cffc78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ac8c174d2f2cc2b18fa67b1969e4ec3130a7a3ea2e1de302b6607334c8295a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a49f9fd54866385a2dd4c0218a2b5a828817807c2688301ea29004ba0578d556\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":
{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://799383b7dc23318bd8bdb994e83afadc90864baf5e694f5a7ef2208c6771660c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c9829c79fbd1ef57f374e8bf829162c2ae48074f4d02cc609d5f9b8791c61a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b2ffa013378c2ca81671e521b37d30af422fd6a418d17e7c2ca14b2da5557d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4001890a8c107a9365e884bae7b1d10f11ea1208ada74b61dcd4cf3db767f3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4001890a8c107a9365e884bae7b1d10f11ea1208ada74b61dcd4cf3db767f3fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\
\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.643195 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.655957 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.664569 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.679507 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.686750 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/4ab56cd4-7270-4252-b6e6-cbc102b84d97-hosts-file\") pod \"node-resolver-5mx5v\" (UID: \"4ab56cd4-7270-4252-b6e6-cbc102b84d97\") " pod="openshift-dns/node-resolver-5mx5v" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.686807 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c13528c0-da5d-4d55-9155-2c29c33edfc4-mcd-auth-proxy-config\") pod \"machine-config-daemon-5ktnh\" (UID: \"c13528c0-da5d-4d55-9155-2c29c33edfc4\") " pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.686831 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v95sc\" (UniqueName: \"kubernetes.io/projected/c13528c0-da5d-4d55-9155-2c29c33edfc4-kube-api-access-v95sc\") pod \"machine-config-daemon-5ktnh\" (UID: \"c13528c0-da5d-4d55-9155-2c29c33edfc4\") " pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.686881 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/c13528c0-da5d-4d55-9155-2c29c33edfc4-rootfs\") pod \"machine-config-daemon-5ktnh\" (UID: \"c13528c0-da5d-4d55-9155-2c29c33edfc4\") " pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.686932 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c13528c0-da5d-4d55-9155-2c29c33edfc4-proxy-tls\") pod \"machine-config-daemon-5ktnh\" (UID: \"c13528c0-da5d-4d55-9155-2c29c33edfc4\") " pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.686953 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-448jg\" (UniqueName: \"kubernetes.io/projected/4ab56cd4-7270-4252-b6e6-cbc102b84d97-kube-api-access-448jg\") pod \"node-resolver-5mx5v\" (UID: \"4ab56cd4-7270-4252-b6e6-cbc102b84d97\") " pod="openshift-dns/node-resolver-5mx5v" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.691006 4958 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Dec 06 05:28:29 crc 
kubenswrapper[4958]: I1206 05:28:29.693844 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.700536 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.710719 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c13528c0-da5d-4d55-9155-2c29c33edfc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5ktnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.717028 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5mx5v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ab56cd4-7270-4252-b6e6-cbc102b84d97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-448jg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5mx5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.725576 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.739239 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bbf3a5d-dfb7-4f26-a3a5-5a1198cffc78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ac8c174d2f2cc2b18fa67b1969e4ec3130a7a3ea2e1de302b6607334c8295a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a49f9fd54866385a2dd4c0218a2b5a828817807c2688301ea29004ba0578d556\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":
\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://799383b7dc23318bd8bdb994e83afadc90864baf5e694f5a7ef2208c6771660c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c9829c79fbd1ef57f374e8bf829162c2ae48074f4d02cc609d5f9b8791c61a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b2ffa013378c2ca81671e521b37d30af422fd6a418d17e7c2ca14b2da5557d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4001890a8c107a9365e884bae7b1d10f11ea1208ada74b61dcd4cf3db767f3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4001890a8c107a9365e884bae7b1d10f11ea1208ada74b61dcd4cf3db767f3fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.760980 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.760993 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.761073 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 05:28:29 crc kubenswrapper[4958]: E1206 05:28:29.761418 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 05:28:29 crc kubenswrapper[4958]: E1206 05:28:29.761322 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 05:28:29 crc kubenswrapper[4958]: E1206 05:28:29.761239 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.765007 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.765909 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.767325 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.768150 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.769291 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.769885 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.770520 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.771447 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.772108 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.773016 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.773647 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.774675 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.775210 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.775777 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.776670 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.777192 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.778107 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.778564 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.779131 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.779843 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bbf3a5d-dfb7-4f26-a3a5-5a1198cffc78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ac8c174d2f2cc2b18fa67b1969e4ec3130a7a3ea2e1de302b6607334c8295a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a49f9fd54866385a2dd4c0218a2b5a828817807c2688301ea29004ba0578d556\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\
\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://799383b7dc23318bd8bdb994e83afadc90864baf5e694f5a7ef2208c6771660c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c9829c79fbd1ef57f374e8bf829162c2ae48074f4d02cc609d5f9b8791c61a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b2ffa013378c2ca81671e521b37d30af422fd6a418d17e7c2ca14b2da5557d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4001890a8c107a9365e884bae7b1d10f11ea1208ada74b61dcd4cf3db767f3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4001890a8c107a9365e884bae7b1d10f11ea120
8ada74b61dcd4cf3db767f3fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.780225 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.780702 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.781697 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.782156 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.783162 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.783624 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.784247 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.785397 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.785916 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.786953 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.787453 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.787778 
4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/c13528c0-da5d-4d55-9155-2c29c33edfc4-rootfs\") pod \"machine-config-daemon-5ktnh\" (UID: \"c13528c0-da5d-4d55-9155-2c29c33edfc4\") " pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.787827 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-448jg\" (UniqueName: \"kubernetes.io/projected/4ab56cd4-7270-4252-b6e6-cbc102b84d97-kube-api-access-448jg\") pod \"node-resolver-5mx5v\" (UID: \"4ab56cd4-7270-4252-b6e6-cbc102b84d97\") " pod="openshift-dns/node-resolver-5mx5v" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.787878 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c13528c0-da5d-4d55-9155-2c29c33edfc4-proxy-tls\") pod \"machine-config-daemon-5ktnh\" (UID: \"c13528c0-da5d-4d55-9155-2c29c33edfc4\") " pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.787908 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/4ab56cd4-7270-4252-b6e6-cbc102b84d97-hosts-file\") pod \"node-resolver-5mx5v\" (UID: \"4ab56cd4-7270-4252-b6e6-cbc102b84d97\") " pod="openshift-dns/node-resolver-5mx5v" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.787931 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c13528c0-da5d-4d55-9155-2c29c33edfc4-mcd-auth-proxy-config\") pod \"machine-config-daemon-5ktnh\" (UID: \"c13528c0-da5d-4d55-9155-2c29c33edfc4\") " pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.787946 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v95sc\" (UniqueName: \"kubernetes.io/projected/c13528c0-da5d-4d55-9155-2c29c33edfc4-kube-api-access-v95sc\") pod \"machine-config-daemon-5ktnh\" (UID: \"c13528c0-da5d-4d55-9155-2c29c33edfc4\") " pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.788082 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/4ab56cd4-7270-4252-b6e6-cbc102b84d97-hosts-file\") pod \"node-resolver-5mx5v\" (UID: \"4ab56cd4-7270-4252-b6e6-cbc102b84d97\") " pod="openshift-dns/node-resolver-5mx5v" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.787889 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/c13528c0-da5d-4d55-9155-2c29c33edfc4-rootfs\") pod \"machine-config-daemon-5ktnh\" (UID: \"c13528c0-da5d-4d55-9155-2c29c33edfc4\") " pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.788675 4958 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.788845 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.788722 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c13528c0-da5d-4d55-9155-2c29c33edfc4-mcd-auth-proxy-config\") pod \"machine-config-daemon-5ktnh\" (UID: \"c13528c0-da5d-4d55-9155-2c29c33edfc4\") " pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.790890 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.791119 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.791370 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c13528c0-da5d-4d55-9155-2c29c33edfc4-proxy-tls\") pod \"machine-config-daemon-5ktnh\" (UID: \"c13528c0-da5d-4d55-9155-2c29c33edfc4\") " pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.792338 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.793167 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.795626 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.797105 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.797915 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.799528 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.800572 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.801934 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.802262 4958 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.803965 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.805082 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-448jg\" (UniqueName: \"kubernetes.io/projected/4ab56cd4-7270-4252-b6e6-cbc102b84d97-kube-api-access-448jg\") pod \"node-resolver-5mx5v\" (UID: \"4ab56cd4-7270-4252-b6e6-cbc102b84d97\") " pod="openshift-dns/node-resolver-5mx5v" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.805221 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.806814 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.807318 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v95sc\" (UniqueName: 
\"kubernetes.io/projected/c13528c0-da5d-4d55-9155-2c29c33edfc4-kube-api-access-v95sc\") pod \"machine-config-daemon-5ktnh\" (UID: \"c13528c0-da5d-4d55-9155-2c29c33edfc4\") " pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.807581 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.808981 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.809811 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.811755 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.812385 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.813681 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.814115 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.814619 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.815689 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.816152 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.817137 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.825628 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.837531 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c13528c0-da5d-4d55-9155-2c29c33edfc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5ktnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.848915 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.865068 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.868835 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.873562 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5mx5v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ab56cd4-7270-4252-b6e6-cbc102b84d97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-448jg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5mx5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.879695 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-5mx5v" Dec 06 05:28:29 crc kubenswrapper[4958]: W1206 05:28:29.881285 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc13528c0_da5d_4d55_9155_2c29c33edfc4.slice/crio-cbb4e2f8a0d15d4751ad1f100bc33592073ce156fb7abb427f7932fc1bdac049 WatchSource:0}: Error finding container cbb4e2f8a0d15d4751ad1f100bc33592073ce156fb7abb427f7932fc1bdac049: Status 404 returned error can't find the container with id cbb4e2f8a0d15d4751ad1f100bc33592073ce156fb7abb427f7932fc1bdac049 Dec 06 05:28:29 crc kubenswrapper[4958]: E1206 05:28:29.884407 4958 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.18.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v95sc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 06 05:28:29 crc kubenswrapper[4958]: E1206 05:28:29.886892 4958 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml 
--tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v95sc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 06 05:28:29 crc kubenswrapper[4958]: E1206 05:28:29.888071 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 05:28:29 crc kubenswrapper[4958]: W1206 05:28:29.889875 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4ab56cd4_7270_4252_b6e6_cbc102b84d97.slice/crio-00bd0e27aeb8b83a27f8ceeb194d564d8a7aa78ac0f55df9f49360a75a15f505 WatchSource:0}: Error finding container 00bd0e27aeb8b83a27f8ceeb194d564d8a7aa78ac0f55df9f49360a75a15f505: Status 404 returned error can't find the container with id 00bd0e27aeb8b83a27f8ceeb194d564d8a7aa78ac0f55df9f49360a75a15f505 Dec 06 05:28:29 crc kubenswrapper[4958]: E1206 05:28:29.892736 4958 kuberuntime_manager.go:1274] "Unhandled Error" err=< Dec 06 05:28:29 crc kubenswrapper[4958]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/bin/bash -c #!/bin/bash Dec 06 05:28:29 crc kubenswrapper[4958]: set -uo pipefail Dec 06 05:28:29 crc kubenswrapper[4958]: Dec 06 05:28:29 crc kubenswrapper[4958]: trap 'jobs -p 
| xargs kill || true; wait; exit 0' TERM Dec 06 05:28:29 crc kubenswrapper[4958]: Dec 06 05:28:29 crc kubenswrapper[4958]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Dec 06 05:28:29 crc kubenswrapper[4958]: HOSTS_FILE="/etc/hosts" Dec 06 05:28:29 crc kubenswrapper[4958]: TEMP_FILE="/etc/hosts.tmp" Dec 06 05:28:29 crc kubenswrapper[4958]: Dec 06 05:28:29 crc kubenswrapper[4958]: IFS=', ' read -r -a services <<< "${SERVICES}" Dec 06 05:28:29 crc kubenswrapper[4958]: Dec 06 05:28:29 crc kubenswrapper[4958]: # Make a temporary file with the old hosts file's attributes. Dec 06 05:28:29 crc kubenswrapper[4958]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Dec 06 05:28:29 crc kubenswrapper[4958]: echo "Failed to preserve hosts file. Exiting." Dec 06 05:28:29 crc kubenswrapper[4958]: exit 1 Dec 06 05:28:29 crc kubenswrapper[4958]: fi Dec 06 05:28:29 crc kubenswrapper[4958]: Dec 06 05:28:29 crc kubenswrapper[4958]: while true; do Dec 06 05:28:29 crc kubenswrapper[4958]: declare -A svc_ips Dec 06 05:28:29 crc kubenswrapper[4958]: for svc in "${services[@]}"; do Dec 06 05:28:29 crc kubenswrapper[4958]: # Fetch service IP from cluster dns if present. We make several tries Dec 06 05:28:29 crc kubenswrapper[4958]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Dec 06 05:28:29 crc kubenswrapper[4958]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Dec 06 05:28:29 crc kubenswrapper[4958]: # support UDP loadbalancers and require reaching DNS through TCP. Dec 06 05:28:29 crc kubenswrapper[4958]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Dec 06 05:28:29 crc kubenswrapper[4958]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Dec 06 05:28:29 crc kubenswrapper[4958]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Dec 06 05:28:29 crc kubenswrapper[4958]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Dec 06 05:28:29 crc kubenswrapper[4958]: for i in ${!cmds[*]} Dec 06 05:28:29 crc kubenswrapper[4958]: do Dec 06 05:28:29 crc kubenswrapper[4958]: ips=($(eval "${cmds[i]}")) Dec 06 05:28:29 crc kubenswrapper[4958]: if [[ "$?" -eq 0 && "${#ips[@]}" -ne 0 ]]; then Dec 06 05:28:29 crc kubenswrapper[4958]: svc_ips["${svc}"]="${ips[@]}" Dec 06 05:28:29 crc kubenswrapper[4958]: break Dec 06 05:28:29 crc kubenswrapper[4958]: fi Dec 06 05:28:29 crc kubenswrapper[4958]: done Dec 06 05:28:29 crc kubenswrapper[4958]: done Dec 06 05:28:29 crc kubenswrapper[4958]: Dec 06 05:28:29 crc kubenswrapper[4958]: # Update /etc/hosts only if we get valid service IPs Dec 06 05:28:29 crc kubenswrapper[4958]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Dec 06 05:28:29 crc kubenswrapper[4958]: # Stale entries could exist in /etc/hosts if the service is deleted Dec 06 05:28:29 crc kubenswrapper[4958]: if [[ -n "${svc_ips[*]-}" ]]; then Dec 06 05:28:29 crc kubenswrapper[4958]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Dec 06 05:28:29 crc kubenswrapper[4958]: if ! 
sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Dec 06 05:28:29 crc kubenswrapper[4958]: # Only continue rebuilding the hosts entries if its original content is preserved Dec 06 05:28:29 crc kubenswrapper[4958]: sleep 60 & wait Dec 06 05:28:29 crc kubenswrapper[4958]: continue Dec 06 05:28:29 crc kubenswrapper[4958]: fi Dec 06 05:28:29 crc kubenswrapper[4958]: Dec 06 05:28:29 crc kubenswrapper[4958]: # Append resolver entries for services Dec 06 05:28:29 crc kubenswrapper[4958]: rc=0 Dec 06 05:28:29 crc kubenswrapper[4958]: for svc in "${!svc_ips[@]}"; do Dec 06 05:28:29 crc kubenswrapper[4958]: for ip in ${svc_ips[${svc}]}; do Dec 06 05:28:29 crc kubenswrapper[4958]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? Dec 06 05:28:29 crc kubenswrapper[4958]: done Dec 06 05:28:29 crc kubenswrapper[4958]: done Dec 06 05:28:29 crc kubenswrapper[4958]: if [[ $rc -ne 0 ]]; then Dec 06 05:28:29 crc kubenswrapper[4958]: sleep 60 & wait Dec 06 05:28:29 crc kubenswrapper[4958]: continue Dec 06 05:28:29 crc kubenswrapper[4958]: fi Dec 06 05:28:29 crc kubenswrapper[4958]: Dec 06 05:28:29 crc kubenswrapper[4958]: Dec 06 05:28:29 crc kubenswrapper[4958]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Dec 06 05:28:29 crc kubenswrapper[4958]: # Replace /etc/hosts with our modified version if needed Dec 06 05:28:29 crc kubenswrapper[4958]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Dec 06 05:28:29 crc kubenswrapper[4958]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Dec 06 05:28:29 crc kubenswrapper[4958]: fi Dec 06 05:28:29 crc kubenswrapper[4958]: sleep 60 & wait Dec 06 05:28:29 crc kubenswrapper[4958]: unset svc_ips Dec 06 05:28:29 crc kubenswrapper[4958]: done Dec 06 05:28:29 crc kubenswrapper[4958]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-448jg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-5mx5v_openshift-dns(4ab56cd4-7270-4252-b6e6-cbc102b84d97): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 06 05:28:29 crc kubenswrapper[4958]: > logger="UnhandledError" Dec 06 05:28:29 crc 
kubenswrapper[4958]: E1206 05:28:29.895713 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-5mx5v" podUID="4ab56cd4-7270-4252-b6e6-cbc102b84d97" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.903352 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-5mx5v" event={"ID":"4ab56cd4-7270-4252-b6e6-cbc102b84d97","Type":"ContainerStarted","Data":"00bd0e27aeb8b83a27f8ceeb194d564d8a7aa78ac0f55df9f49360a75a15f505"} Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.904457 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" event={"ID":"c13528c0-da5d-4d55-9155-2c29c33edfc4","Type":"ContainerStarted","Data":"cbb4e2f8a0d15d4751ad1f100bc33592073ce156fb7abb427f7932fc1bdac049"} Dec 06 05:28:29 crc kubenswrapper[4958]: E1206 05:28:29.904484 4958 kuberuntime_manager.go:1274] "Unhandled Error" err=< Dec 06 05:28:29 crc kubenswrapper[4958]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/bin/bash -c #!/bin/bash Dec 06 05:28:29 crc kubenswrapper[4958]: set -uo pipefail Dec 06 05:28:29 crc kubenswrapper[4958]: Dec 06 05:28:29 crc kubenswrapper[4958]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Dec 06 05:28:29 crc kubenswrapper[4958]: Dec 06 05:28:29 crc kubenswrapper[4958]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Dec 06 05:28:29 crc kubenswrapper[4958]: HOSTS_FILE="/etc/hosts" Dec 06 05:28:29 crc kubenswrapper[4958]: TEMP_FILE="/etc/hosts.tmp" Dec 06 05:28:29 crc kubenswrapper[4958]: Dec 06 05:28:29 crc kubenswrapper[4958]: IFS=', ' read -r -a services <<< "${SERVICES}" Dec 06 05:28:29 crc kubenswrapper[4958]: Dec 06 05:28:29 crc kubenswrapper[4958]: # Make a temporary file with the old hosts file's attributes. Dec 06 05:28:29 crc kubenswrapper[4958]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Dec 06 05:28:29 crc kubenswrapper[4958]: echo "Failed to preserve hosts file. Exiting." Dec 06 05:28:29 crc kubenswrapper[4958]: exit 1 Dec 06 05:28:29 crc kubenswrapper[4958]: fi Dec 06 05:28:29 crc kubenswrapper[4958]: Dec 06 05:28:29 crc kubenswrapper[4958]: while true; do Dec 06 05:28:29 crc kubenswrapper[4958]: declare -A svc_ips Dec 06 05:28:29 crc kubenswrapper[4958]: for svc in "${services[@]}"; do Dec 06 05:28:29 crc kubenswrapper[4958]: # Fetch service IP from cluster dns if present. We make several tries Dec 06 05:28:29 crc kubenswrapper[4958]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Dec 06 05:28:29 crc kubenswrapper[4958]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Dec 06 05:28:29 crc kubenswrapper[4958]: # support UDP loadbalancers and require reaching DNS through TCP. 
Dec 06 05:28:29 crc kubenswrapper[4958]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Dec 06 05:28:29 crc kubenswrapper[4958]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Dec 06 05:28:29 crc kubenswrapper[4958]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Dec 06 05:28:29 crc kubenswrapper[4958]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Dec 06 05:28:29 crc kubenswrapper[4958]: for i in ${!cmds[*]} Dec 06 05:28:29 crc kubenswrapper[4958]: do Dec 06 05:28:29 crc kubenswrapper[4958]: ips=($(eval "${cmds[i]}")) Dec 06 05:28:29 crc kubenswrapper[4958]: if [[ "$?" -eq 0 && "${#ips[@]}" -ne 0 ]]; then Dec 06 05:28:29 crc kubenswrapper[4958]: svc_ips["${svc}"]="${ips[@]}" Dec 06 05:28:29 crc kubenswrapper[4958]: break Dec 06 05:28:29 crc kubenswrapper[4958]: fi Dec 06 05:28:29 crc kubenswrapper[4958]: done Dec 06 05:28:29 crc kubenswrapper[4958]: done Dec 06 05:28:29 crc kubenswrapper[4958]: Dec 06 05:28:29 crc kubenswrapper[4958]: # Update /etc/hosts only if we get valid service IPs Dec 06 05:28:29 crc kubenswrapper[4958]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Dec 06 05:28:29 crc kubenswrapper[4958]: # Stale entries could exist in /etc/hosts if the service is deleted Dec 06 05:28:29 crc kubenswrapper[4958]: if [[ -n "${svc_ips[*]-}" ]]; then Dec 06 05:28:29 crc kubenswrapper[4958]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Dec 06 05:28:29 crc kubenswrapper[4958]: if ! sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Dec 06 05:28:29 crc kubenswrapper[4958]: # Only continue rebuilding the hosts entries if its original content is preserved Dec 06 05:28:29 crc kubenswrapper[4958]: sleep 60 & wait Dec 06 05:28:29 crc kubenswrapper[4958]: continue Dec 06 05:28:29 crc kubenswrapper[4958]: fi Dec 06 05:28:29 crc kubenswrapper[4958]: Dec 06 05:28:29 crc kubenswrapper[4958]: # Append resolver entries for services Dec 06 05:28:29 crc kubenswrapper[4958]: rc=0 Dec 06 05:28:29 crc kubenswrapper[4958]: for svc in "${!svc_ips[@]}"; do Dec 06 05:28:29 crc kubenswrapper[4958]: for ip in ${svc_ips[${svc}]}; do Dec 06 05:28:29 crc kubenswrapper[4958]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? 
Dec 06 05:28:29 crc kubenswrapper[4958]: done Dec 06 05:28:29 crc kubenswrapper[4958]: done Dec 06 05:28:29 crc kubenswrapper[4958]: if [[ $rc -ne 0 ]]; then Dec 06 05:28:29 crc kubenswrapper[4958]: sleep 60 & wait Dec 06 05:28:29 crc kubenswrapper[4958]: continue Dec 06 05:28:29 crc kubenswrapper[4958]: fi Dec 06 05:28:29 crc kubenswrapper[4958]: Dec 06 05:28:29 crc kubenswrapper[4958]: Dec 06 05:28:29 crc kubenswrapper[4958]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Dec 06 05:28:29 crc kubenswrapper[4958]: # Replace /etc/hosts with our modified version if needed Dec 06 05:28:29 crc kubenswrapper[4958]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Dec 06 05:28:29 crc kubenswrapper[4958]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Dec 06 05:28:29 crc kubenswrapper[4958]: fi Dec 06 05:28:29 crc kubenswrapper[4958]: sleep 60 & wait Dec 06 05:28:29 crc kubenswrapper[4958]: unset svc_ips Dec 06 05:28:29 crc kubenswrapper[4958]: done Dec 06 05:28:29 crc kubenswrapper[4958]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-448jg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-5mx5v_openshift-dns(4ab56cd4-7270-4252-b6e6-cbc102b84d97): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 06 05:28:29 crc kubenswrapper[4958]: > logger="UnhandledError" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.905310 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"0ad6173a4d802ef7ca900d56d1e5784752c9d1877bda7cee6666fc4688b1708c"} Dec 06 05:28:29 crc kubenswrapper[4958]: E1206 05:28:29.905723 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-5mx5v" podUID="4ab56cd4-7270-4252-b6e6-cbc102b84d97" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.906235 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"97fdce33e5c195f7bab4803e37e1747789227d81f6b5b0323530ac25e7755fce"} Dec 06 05:28:29 crc kubenswrapper[4958]: E1206 05:28:29.907336 4958 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.18.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v95sc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.907437 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"cef897d9556053127c5825c8f15fc1e185a1e27dbd3a187cdaea8efb0b05675c"} Dec 06 05:28:29 crc kubenswrapper[4958]: E1206 05:28:29.907580 4958 kuberuntime_manager.go:1274] "Unhandled Error" err=< Dec 06 05:28:29 crc kubenswrapper[4958]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Dec 06 05:28:29 crc kubenswrapper[4958]: if [[ -f "/env/_master" ]]; then Dec 06 05:28:29 crc kubenswrapper[4958]: set -o allexport Dec 06 05:28:29 crc kubenswrapper[4958]: source "/env/_master" Dec 06 05:28:29 crc kubenswrapper[4958]: set +o allexport Dec 06 05:28:29 crc 
kubenswrapper[4958]: fi Dec 06 05:28:29 crc kubenswrapper[4958]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled. Dec 06 05:28:29 crc kubenswrapper[4958]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791 Dec 06 05:28:29 crc kubenswrapper[4958]: ho_enable="--enable-hybrid-overlay" Dec 06 05:28:29 crc kubenswrapper[4958]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook" Dec 06 05:28:29 crc kubenswrapper[4958]: # extra-allowed-user: service account `ovn-kubernetes-control-plane` Dec 06 05:28:29 crc kubenswrapper[4958]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager) Dec 06 05:28:29 crc kubenswrapper[4958]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Dec 06 05:28:29 crc kubenswrapper[4958]: --webhook-cert-dir="/etc/webhook-cert" \ Dec 06 05:28:29 crc kubenswrapper[4958]: --webhook-host=127.0.0.1 \ Dec 06 05:28:29 crc kubenswrapper[4958]: --webhook-port=9743 \ Dec 06 05:28:29 crc kubenswrapper[4958]: ${ho_enable} \ Dec 06 05:28:29 crc kubenswrapper[4958]: --enable-interconnect \ Dec 06 05:28:29 crc kubenswrapper[4958]: --disable-approver \ Dec 06 05:28:29 crc kubenswrapper[4958]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \ Dec 06 05:28:29 crc kubenswrapper[4958]: --wait-for-kubernetes-api=200s \ Dec 06 05:28:29 crc kubenswrapper[4958]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \ Dec 06 05:28:29 crc kubenswrapper[4958]: --loglevel="${LOGLEVEL}" Dec 06 05:28:29 crc kubenswrapper[4958]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 06 05:28:29 crc kubenswrapper[4958]: > logger="UnhandledError" Dec 06 05:28:29 crc kubenswrapper[4958]: E1206 05:28:29.907875 4958 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rczfb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-4ln5h_openshift-network-operator(d75a4c96-2883-4a0b-bab2-0fab2b6c0b49): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 06 05:28:29 crc kubenswrapper[4958]: E1206 05:28:29.909117 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-4ln5h" podUID="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" Dec 06 05:28:29 crc kubenswrapper[4958]: E1206 05:28:29.909313 4958 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml 
--tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v95sc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 06 05:28:29 crc kubenswrapper[4958]: E1206 05:28:29.909378 4958 kuberuntime_manager.go:1274] "Unhandled Error" err=< Dec 06 05:28:29 crc kubenswrapper[4958]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,Command:[/bin/bash -c #!/bin/bash Dec 06 05:28:29 crc kubenswrapper[4958]: set -o allexport Dec 06 05:28:29 crc kubenswrapper[4958]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Dec 06 05:28:29 crc kubenswrapper[4958]: source /etc/kubernetes/apiserver-url.env Dec 06 05:28:29 crc kubenswrapper[4958]: else Dec 06 05:28:29 crc kubenswrapper[4958]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Dec 06 05:28:29 crc kubenswrapper[4958]: exit 1 Dec 06 05:28:29 crc kubenswrapper[4958]: fi Dec 06 05:28:29 crc kubenswrapper[4958]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Dec 06 05:28:29 crc kubenswrapper[4958]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.18.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b97554198294bf544fbc116c94a0a1fb2ec8a4de0e926bf9d9e320135f0bee6f,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:23f833d3738d68706eb2f2868bd76bd71cee016cffa6faf5f045a60cc8c6eddd,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8048f1cb0be521f09749c0a489503cd56d85b68c6ca93380e082cfd693cd97a8,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:5dbf844e49bb46b78586930149e5e5f5dc121014c8afd10fe36f3651967cc256,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rdwmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-58b4c7f79c-55gtf_openshift-network-operator(37a5e44f-9a88-4405-be8a-b645485e7312): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 06 05:28:29 crc kubenswrapper[4958]: > logger="UnhandledError" Dec 06 05:28:29 crc kubenswrapper[4958]: E1206 05:28:29.909969 4958 kuberuntime_manager.go:1274] "Unhandled Error" err=< Dec 06 05:28:29 crc kubenswrapper[4958]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2,Command:[/bin/bash -c set -xe Dec 06 05:28:29 crc kubenswrapper[4958]: if [[ -f "/env/_master" ]]; then Dec 06 05:28:29 crc kubenswrapper[4958]: set -o allexport Dec 06 05:28:29 crc kubenswrapper[4958]: source "/env/_master" Dec 06 05:28:29 crc kubenswrapper[4958]: set +o allexport Dec 06 05:28:29 crc kubenswrapper[4958]: fi Dec 06 05:28:29 crc kubenswrapper[4958]: Dec 06 05:28:29 crc kubenswrapper[4958]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Dec 06 05:28:29 crc kubenswrapper[4958]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Dec 06 05:28:29 crc kubenswrapper[4958]: --disable-webhook \ Dec 06 05:28:29 crc kubenswrapper[4958]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Dec 06 05:28:29 crc kubenswrapper[4958]: --loglevel="${LOGLEVEL}" Dec 06 05:28:29 crc kubenswrapper[4958]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2kz5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000470000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-vrzqb_openshift-network-node-identity(ef543e1b-8068-4ea3-b32a-61027b32e95d): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 06 05:28:29 crc kubenswrapper[4958]: > logger="UnhandledError" Dec 06 05:28:29 crc kubenswrapper[4958]: E1206 05:28:29.910948 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" podUID="37a5e44f-9a88-4405-be8a-b645485e7312" Dec 06 05:28:29 crc kubenswrapper[4958]: E1206 05:28:29.911040 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 05:28:29 crc kubenswrapper[4958]: E1206 05:28:29.911688 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-vrzqb" podUID="ef543e1b-8068-4ea3-b32a-61027b32e95d" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.912210 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c13528c0-da5d-4d55-9155-2c29c33edfc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5ktnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.922227 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.923678 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-cxvtt"] Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.927806 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-wr7h5"] Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.928220 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-wr7h5" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.928665 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-cxvtt" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.931458 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.931705 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.931795 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.931935 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.932222 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.932309 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Dec 06 05:28:29 crc kubenswrapper[4958]: E1206 05:28:29.932791 4958 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-crc\" already exists" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.932982 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.943655 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.952978 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.966166 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.977537 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.988993 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.989079 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.989162 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 05:28:29 crc kubenswrapper[4958]: E1206 05:28:29.989210 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 05:28:30.989179756 +0000 UTC m=+21.522950529 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 05:28:29 crc kubenswrapper[4958]: E1206 05:28:29.989289 4958 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 06 05:28:29 crc kubenswrapper[4958]: E1206 05:28:29.989309 4958 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 06 05:28:29 crc kubenswrapper[4958]: E1206 05:28:29.989324 4958 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 06 05:28:29 crc kubenswrapper[4958]: E1206 05:28:29.989375 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-06 05:28:30.98935802 +0000 UTC m=+21.523128803 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 06 05:28:29 crc kubenswrapper[4958]: E1206 05:28:29.989420 4958 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 06 05:28:29 crc kubenswrapper[4958]: E1206 05:28:29.989429 4958 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 06 05:28:29 crc kubenswrapper[4958]: E1206 05:28:29.989450 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-06 05:28:30.989442352 +0000 UTC m=+21.523213135 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 06 05:28:29 crc kubenswrapper[4958]: E1206 05:28:29.989455 4958 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 06 05:28:29 crc kubenswrapper[4958]: E1206 05:28:29.989500 4958 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 06 05:28:29 crc kubenswrapper[4958]: E1206 05:28:29.989562 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-06 05:28:30.989546455 +0000 UTC m=+21.523317248 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.989301 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.989626 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 05:28:29 crc kubenswrapper[4958]: E1206 05:28:29.989731 4958 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 06 05:28:29 crc kubenswrapper[4958]: E1206 05:28:29.989779 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-06 05:28:30.989765511 +0000 UTC m=+21.523536334 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 06 05:28:29 crc kubenswrapper[4958]: I1206 05:28:29.991398 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5mx5v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ab56cd4-7270-4252-b6e6-cbc102b84d97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-448jg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5mx5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.005035 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bbf3a5d-dfb7-4f26-a3a5-5a1198cffc78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ac8c174d2f2cc2b18fa67b1969e4ec3130a7a3ea2e1de302b6607334c8295a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a49f9fd54866385a2dd4c0218a2b5a828817807c2688301ea29004ba0578d556\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://799383b7dc23318bd8bdb994e83afadc90864baf5e694f5a7ef2208c6771660c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c9829c79fbd1ef57f374e8bf829162c2ae48074f4d02cc609d5f9b8791c61a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b2ffa013378c2ca81671e521b37d30af422fd6a418d17e7c2ca14b2da5557d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4001890a8c107a9365e884bae7b1d10f11ea1208ada74b61dcd4cf3db767f3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4001890a8c107a9365e884bae7b1d10f11ea1208ada74b61dcd4cf3db767f3fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.015811 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.027532 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.049400 4958 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.050041 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.058921 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.066352 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5mx5v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ab56cd4-7270-4252-b6e6-cbc102b84d97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-448jg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5mx5v\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.080242 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wr7h5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-778ks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\
\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wr7h5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.090976 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7-hostroot\") pod \"multus-wr7h5\" (UID: \"fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7\") " pod="openshift-multus/multus-wr7h5" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.091079 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzwkj\" (UniqueName: \"kubernetes.io/projected/48b90387-4ea9-47a3-8aa1-b8b19c5ed186-kube-api-access-pzwkj\") pod \"multus-additional-cni-plugins-cxvtt\" (UID: \"48b90387-4ea9-47a3-8aa1-b8b19c5ed186\") " pod="openshift-multus/multus-additional-cni-plugins-cxvtt" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.091127 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7-cni-binary-copy\") pod \"multus-wr7h5\" (UID: \"fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7\") " pod="openshift-multus/multus-wr7h5" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.091144 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7-host-var-lib-cni-bin\") pod \"multus-wr7h5\" (UID: \"fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7\") " pod="openshift-multus/multus-wr7h5" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.091178 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7-host-run-k8s-cni-cncf-io\") pod \"multus-wr7h5\" (UID: \"fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7\") " pod="openshift-multus/multus-wr7h5" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.091197 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/48b90387-4ea9-47a3-8aa1-b8b19c5ed186-system-cni-dir\") pod \"multus-additional-cni-plugins-cxvtt\" (UID: \"48b90387-4ea9-47a3-8aa1-b8b19c5ed186\") " pod="openshift-multus/multus-additional-cni-plugins-cxvtt" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.091212 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/48b90387-4ea9-47a3-8aa1-b8b19c5ed186-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-cxvtt\" (UID: \"48b90387-4ea9-47a3-8aa1-b8b19c5ed186\") " pod="openshift-multus/multus-additional-cni-plugins-cxvtt" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.091437 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/48b90387-4ea9-47a3-8aa1-b8b19c5ed186-tuning-conf-dir\") pod \"multus-additional-cni-plugins-cxvtt\" (UID: \"48b90387-4ea9-47a3-8aa1-b8b19c5ed186\") " pod="openshift-multus/multus-additional-cni-plugins-cxvtt" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.091530 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7-cnibin\") pod \"multus-wr7h5\" (UID: \"fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7\") " pod="openshift-multus/multus-wr7h5" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.091552 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7-os-release\") pod \"multus-wr7h5\" (UID: \"fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7\") " pod="openshift-multus/multus-wr7h5" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.091571 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7-host-var-lib-kubelet\") pod \"multus-wr7h5\" (UID: \"fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7\") " pod="openshift-multus/multus-wr7h5" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.091596 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7-host-var-lib-cni-multus\") pod \"multus-wr7h5\" (UID: \"fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7\") " pod="openshift-multus/multus-wr7h5" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.091632 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7-etc-kubernetes\") pod \"multus-wr7h5\" (UID: \"fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7\") " pod="openshift-multus/multus-wr7h5" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.091704 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/48b90387-4ea9-47a3-8aa1-b8b19c5ed186-os-release\") pod \"multus-additional-cni-plugins-cxvtt\" (UID: \"48b90387-4ea9-47a3-8aa1-b8b19c5ed186\") " pod="openshift-multus/multus-additional-cni-plugins-cxvtt" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.091728 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7-multus-conf-dir\") pod \"multus-wr7h5\" (UID: \"fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7\") " pod="openshift-multus/multus-wr7h5" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.091756 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7-host-run-netns\") pod \"multus-wr7h5\" (UID: \"fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7\") " pod="openshift-multus/multus-wr7h5" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.091771 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7-host-run-multus-certs\") pod \"multus-wr7h5\" (UID: \"fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7\") " pod="openshift-multus/multus-wr7h5" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.091787 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-778ks\" (UniqueName: \"kubernetes.io/projected/fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7-kube-api-access-778ks\") pod \"multus-wr7h5\" (UID: \"fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7\") " pod="openshift-multus/multus-wr7h5" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.091808 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/48b90387-4ea9-47a3-8aa1-b8b19c5ed186-cni-binary-copy\") pod \"multus-additional-cni-plugins-cxvtt\" (UID: \"48b90387-4ea9-47a3-8aa1-b8b19c5ed186\") " pod="openshift-multus/multus-additional-cni-plugins-cxvtt" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.091825 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7-multus-cni-dir\") pod \"multus-wr7h5\" (UID: \"fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7\") " pod="openshift-multus/multus-wr7h5" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.091843 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7-multus-daemon-config\") pod \"multus-wr7h5\" (UID: \"fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7\") " pod="openshift-multus/multus-wr7h5" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.092244 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/48b90387-4ea9-47a3-8aa1-b8b19c5ed186-cnibin\") pod \"multus-additional-cni-plugins-cxvtt\" (UID: \"48b90387-4ea9-47a3-8aa1-b8b19c5ed186\") " pod="openshift-multus/multus-additional-cni-plugins-cxvtt" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.092370 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7-system-cni-dir\") pod \"multus-wr7h5\" (UID: \"fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7\") " pod="openshift-multus/multus-wr7h5" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.092445 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7-multus-socket-dir-parent\") pod \"multus-wr7h5\" (UID: \"fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7\") " pod="openshift-multus/multus-wr7h5" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.094808 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bbf3a5d-dfb7-4f26-a3a5-5a1198cffc78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ac8c174d2f2cc2b18fa67b1969e4ec3130a7a3ea2e1de302b6607334c8295a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a49f9fd54866385a2dd4c0218a2b5a828817807c2688301ea29004ba0578d556\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://799383b7dc23318bd8bdb994e83afadc90864baf5e694f5a7ef2208c6771660c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c9829c79fbd1ef57f374e8bf829162c2ae48074f4d02cc609d5f9b8791c61a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b2ffa013378c2ca81671e521b37d30af422fd6a418d17e7c2ca14b2da5557d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4001890a8c107a9365e884bae7b1d10f11ea1208ada74b61dcd4cf3db767f3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4001890a8c107a9365e884bae7b1d10f11ea1208ada74b61dcd4cf3db767f3fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.105018 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.118299 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.132718 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.150861 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c13528c0-da5d-4d55-9155-2c29c33edfc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5ktnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.193243 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7-cni-binary-copy\") pod \"multus-wr7h5\" (UID: \"fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7\") " pod="openshift-multus/multus-wr7h5" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.193291 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7-host-var-lib-cni-bin\") pod \"multus-wr7h5\" (UID: \"fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7\") " pod="openshift-multus/multus-wr7h5" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.193308 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7-host-run-k8s-cni-cncf-io\") pod \"multus-wr7h5\" (UID: \"fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7\") " pod="openshift-multus/multus-wr7h5" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.193327 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/48b90387-4ea9-47a3-8aa1-b8b19c5ed186-system-cni-dir\") pod \"multus-additional-cni-plugins-cxvtt\" (UID: \"48b90387-4ea9-47a3-8aa1-b8b19c5ed186\") " pod="openshift-multus/multus-additional-cni-plugins-cxvtt" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.193361 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/48b90387-4ea9-47a3-8aa1-b8b19c5ed186-cni-sysctl-allowlist\") pod 
\"multus-additional-cni-plugins-cxvtt\" (UID: \"48b90387-4ea9-47a3-8aa1-b8b19c5ed186\") " pod="openshift-multus/multus-additional-cni-plugins-cxvtt" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.193385 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/48b90387-4ea9-47a3-8aa1-b8b19c5ed186-tuning-conf-dir\") pod \"multus-additional-cni-plugins-cxvtt\" (UID: \"48b90387-4ea9-47a3-8aa1-b8b19c5ed186\") " pod="openshift-multus/multus-additional-cni-plugins-cxvtt" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.193400 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7-cnibin\") pod \"multus-wr7h5\" (UID: \"fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7\") " pod="openshift-multus/multus-wr7h5" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.193428 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7-os-release\") pod \"multus-wr7h5\" (UID: \"fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7\") " pod="openshift-multus/multus-wr7h5" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.193445 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7-host-var-lib-kubelet\") pod \"multus-wr7h5\" (UID: \"fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7\") " pod="openshift-multus/multus-wr7h5" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.193461 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7-host-var-lib-cni-multus\") pod \"multus-wr7h5\" (UID: \"fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7\") " pod="openshift-multus/multus-wr7h5" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.193496 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7-etc-kubernetes\") pod \"multus-wr7h5\" (UID: \"fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7\") " pod="openshift-multus/multus-wr7h5" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.193511 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/48b90387-4ea9-47a3-8aa1-b8b19c5ed186-os-release\") pod \"multus-additional-cni-plugins-cxvtt\" (UID: \"48b90387-4ea9-47a3-8aa1-b8b19c5ed186\") " pod="openshift-multus/multus-additional-cni-plugins-cxvtt" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.193527 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7-multus-conf-dir\") pod \"multus-wr7h5\" (UID: \"fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7\") " pod="openshift-multus/multus-wr7h5" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.193542 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7-host-run-netns\") pod \"multus-wr7h5\" (UID: \"fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7\") " pod="openshift-multus/multus-wr7h5" Dec 06 05:28:30 crc 
kubenswrapper[4958]: I1206 05:28:30.193578 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7-host-run-multus-certs\") pod \"multus-wr7h5\" (UID: \"fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7\") " pod="openshift-multus/multus-wr7h5" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.193596 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-778ks\" (UniqueName: \"kubernetes.io/projected/fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7-kube-api-access-778ks\") pod \"multus-wr7h5\" (UID: \"fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7\") " pod="openshift-multus/multus-wr7h5" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.193612 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/48b90387-4ea9-47a3-8aa1-b8b19c5ed186-cni-binary-copy\") pod \"multus-additional-cni-plugins-cxvtt\" (UID: \"48b90387-4ea9-47a3-8aa1-b8b19c5ed186\") " pod="openshift-multus/multus-additional-cni-plugins-cxvtt" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.193626 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7-multus-cni-dir\") pod \"multus-wr7h5\" (UID: \"fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7\") " pod="openshift-multus/multus-wr7h5" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.193656 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7-multus-daemon-config\") pod \"multus-wr7h5\" (UID: \"fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7\") " pod="openshift-multus/multus-wr7h5" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.193660 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7-etc-kubernetes\") pod \"multus-wr7h5\" (UID: \"fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7\") " pod="openshift-multus/multus-wr7h5" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.193687 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/48b90387-4ea9-47a3-8aa1-b8b19c5ed186-cnibin\") pod \"multus-additional-cni-plugins-cxvtt\" (UID: \"48b90387-4ea9-47a3-8aa1-b8b19c5ed186\") " pod="openshift-multus/multus-additional-cni-plugins-cxvtt" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.193701 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7-system-cni-dir\") pod \"multus-wr7h5\" (UID: \"fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7\") " pod="openshift-multus/multus-wr7h5" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.193720 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7-host-run-k8s-cni-cncf-io\") pod \"multus-wr7h5\" (UID: \"fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7\") " pod="openshift-multus/multus-wr7h5" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.193670 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: 
\"kubernetes.io/host-path/fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7-cnibin\") pod \"multus-wr7h5\" (UID: \"fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7\") " pod="openshift-multus/multus-wr7h5" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.193643 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cxvtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48b90387-4ea9-47a3-8aa1-b8b19c5ed186\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8
ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshi
ft-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cxvtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.193734 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7-multus-socket-dir-parent\") pod \"multus-wr7h5\" (UID: \"fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7\") " pod="openshift-multus/multus-wr7h5" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.193802 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7-host-var-lib-kubelet\") pod \"multus-wr7h5\" (UID: \"fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7\") " pod="openshift-multus/multus-wr7h5" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.193818 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7-hostroot\") pod \"multus-wr7h5\" (UID: \"fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7\") " pod="openshift-multus/multus-wr7h5" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.193850 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzwkj\" (UniqueName: \"kubernetes.io/projected/48b90387-4ea9-47a3-8aa1-b8b19c5ed186-kube-api-access-pzwkj\") pod \"multus-additional-cni-plugins-cxvtt\" (UID: \"48b90387-4ea9-47a3-8aa1-b8b19c5ed186\") " pod="openshift-multus/multus-additional-cni-plugins-cxvtt" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.193863 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7-host-var-lib-cni-multus\") pod \"multus-wr7h5\" (UID: \"fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7\") " pod="openshift-multus/multus-wr7h5" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.193952 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/48b90387-4ea9-47a3-8aa1-b8b19c5ed186-cnibin\") pod \"multus-additional-cni-plugins-cxvtt\" (UID: \"48b90387-4ea9-47a3-8aa1-b8b19c5ed186\") " pod="openshift-multus/multus-additional-cni-plugins-cxvtt" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.193997 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"os-release\" (UniqueName: \"kubernetes.io/host-path/fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7-os-release\") pod \"multus-wr7h5\" (UID: \"fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7\") " pod="openshift-multus/multus-wr7h5" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.194001 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/48b90387-4ea9-47a3-8aa1-b8b19c5ed186-os-release\") pod \"multus-additional-cni-plugins-cxvtt\" (UID: \"48b90387-4ea9-47a3-8aa1-b8b19c5ed186\") " pod="openshift-multus/multus-additional-cni-plugins-cxvtt" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.193701 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7-host-var-lib-cni-bin\") pod \"multus-wr7h5\" (UID: \"fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7\") " pod="openshift-multus/multus-wr7h5" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.194027 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7-hostroot\") pod \"multus-wr7h5\" (UID: \"fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7\") " pod="openshift-multus/multus-wr7h5" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.194022 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7-multus-cni-dir\") pod \"multus-wr7h5\" (UID: \"fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7\") " pod="openshift-multus/multus-wr7h5" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.194049 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7-system-cni-dir\") pod \"multus-wr7h5\" (UID: \"fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7\") " pod="openshift-multus/multus-wr7h5" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.193776 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7-multus-socket-dir-parent\") pod \"multus-wr7h5\" (UID: \"fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7\") " pod="openshift-multus/multus-wr7h5" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.194080 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/48b90387-4ea9-47a3-8aa1-b8b19c5ed186-system-cni-dir\") pod \"multus-additional-cni-plugins-cxvtt\" (UID: \"48b90387-4ea9-47a3-8aa1-b8b19c5ed186\") " pod="openshift-multus/multus-additional-cni-plugins-cxvtt" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.194092 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7-host-run-multus-certs\") pod \"multus-wr7h5\" (UID: \"fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7\") " pod="openshift-multus/multus-wr7h5" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.194084 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7-multus-conf-dir\") pod \"multus-wr7h5\" (UID: \"fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7\") " pod="openshift-multus/multus-wr7h5" Dec 06 05:28:30 
crc kubenswrapper[4958]: I1206 05:28:30.194107 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7-host-run-netns\") pod \"multus-wr7h5\" (UID: \"fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7\") " pod="openshift-multus/multus-wr7h5" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.194158 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7-cni-binary-copy\") pod \"multus-wr7h5\" (UID: \"fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7\") " pod="openshift-multus/multus-wr7h5" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.194532 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/48b90387-4ea9-47a3-8aa1-b8b19c5ed186-tuning-conf-dir\") pod \"multus-additional-cni-plugins-cxvtt\" (UID: \"48b90387-4ea9-47a3-8aa1-b8b19c5ed186\") " pod="openshift-multus/multus-additional-cni-plugins-cxvtt" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.194549 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7-multus-daemon-config\") pod \"multus-wr7h5\" (UID: \"fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7\") " pod="openshift-multus/multus-wr7h5" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.194533 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/48b90387-4ea9-47a3-8aa1-b8b19c5ed186-cni-binary-copy\") pod \"multus-additional-cni-plugins-cxvtt\" (UID: \"48b90387-4ea9-47a3-8aa1-b8b19c5ed186\") " pod="openshift-multus/multus-additional-cni-plugins-cxvtt" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.194842 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/48b90387-4ea9-47a3-8aa1-b8b19c5ed186-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-cxvtt\" (UID: \"48b90387-4ea9-47a3-8aa1-b8b19c5ed186\") " pod="openshift-multus/multus-additional-cni-plugins-cxvtt" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.223080 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-778ks\" (UniqueName: \"kubernetes.io/projected/fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7-kube-api-access-778ks\") pod \"multus-wr7h5\" (UID: \"fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7\") " pod="openshift-multus/multus-wr7h5" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.240202 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzwkj\" (UniqueName: \"kubernetes.io/projected/48b90387-4ea9-47a3-8aa1-b8b19c5ed186-kube-api-access-pzwkj\") pod \"multus-additional-cni-plugins-cxvtt\" (UID: \"48b90387-4ea9-47a3-8aa1-b8b19c5ed186\") " pod="openshift-multus/multus-additional-cni-plugins-cxvtt" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.246038 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-wr7h5" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.252497 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-cxvtt" Dec 06 05:28:30 crc kubenswrapper[4958]: W1206 05:28:30.262811 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfcf2a0b3_cf73_4640_8e5e_c3b6a80beef7.slice/crio-73441dd0c715783a28d862804e57a3aa4be8c0081e56475ba541f539619feaed WatchSource:0}: Error finding container 73441dd0c715783a28d862804e57a3aa4be8c0081e56475ba541f539619feaed: Status 404 returned error can't find the container with id 73441dd0c715783a28d862804e57a3aa4be8c0081e56475ba541f539619feaed Dec 06 05:28:30 crc kubenswrapper[4958]: W1206 05:28:30.272177 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48b90387_4ea9_47a3_8aa1_b8b19c5ed186.slice/crio-6fc49784feb0c06872733e05e103f76007509641b9a003f7dce7c546060cfa97 WatchSource:0}: Error finding container 6fc49784feb0c06872733e05e103f76007509641b9a003f7dce7c546060cfa97: Status 404 returned error can't find the container with id 6fc49784feb0c06872733e05e103f76007509641b9a003f7dce7c546060cfa97 Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.298156 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-f4swt"] Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.300377 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.303184 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.303344 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.307663 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.328764 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.347244 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.367443 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.386951 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.396186 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4c75c3b8-96d9-442e-b3c4-92d10ad33929-host-cni-netd\") pod \"ovnkube-node-f4swt\" (UID: \"4c75c3b8-96d9-442e-b3c4-92d10ad33929\") " pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.396274 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4c75c3b8-96d9-442e-b3c4-92d10ad33929-ovnkube-config\") pod \"ovnkube-node-f4swt\" (UID: \"4c75c3b8-96d9-442e-b3c4-92d10ad33929\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.396325 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4c75c3b8-96d9-442e-b3c4-92d10ad33929-host-cni-bin\") pod \"ovnkube-node-f4swt\" (UID: \"4c75c3b8-96d9-442e-b3c4-92d10ad33929\") " pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.396391 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4c75c3b8-96d9-442e-b3c4-92d10ad33929-host-kubelet\") pod \"ovnkube-node-f4swt\" (UID: \"4c75c3b8-96d9-442e-b3c4-92d10ad33929\") " pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.396434 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4c75c3b8-96d9-442e-b3c4-92d10ad33929-log-socket\") pod \"ovnkube-node-f4swt\" (UID: \"4c75c3b8-96d9-442e-b3c4-92d10ad33929\") " pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.396515 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4c75c3b8-96d9-442e-b3c4-92d10ad33929-ovnkube-script-lib\") pod \"ovnkube-node-f4swt\" (UID: \"4c75c3b8-96d9-442e-b3c4-92d10ad33929\") " pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.396596 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4c75c3b8-96d9-442e-b3c4-92d10ad33929-host-run-netns\") pod \"ovnkube-node-f4swt\" (UID: \"4c75c3b8-96d9-442e-b3c4-92d10ad33929\") " pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.396643 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4c75c3b8-96d9-442e-b3c4-92d10ad33929-run-openvswitch\") pod \"ovnkube-node-f4swt\" (UID: \"4c75c3b8-96d9-442e-b3c4-92d10ad33929\") " pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.396685 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4c75c3b8-96d9-442e-b3c4-92d10ad33929-var-lib-openvswitch\") pod \"ovnkube-node-f4swt\" (UID: \"4c75c3b8-96d9-442e-b3c4-92d10ad33929\") " pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.396746 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5mzl\" (UniqueName: \"kubernetes.io/projected/4c75c3b8-96d9-442e-b3c4-92d10ad33929-kube-api-access-k5mzl\") pod \"ovnkube-node-f4swt\" (UID: \"4c75c3b8-96d9-442e-b3c4-92d10ad33929\") " pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.396818 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: 
\"kubernetes.io/host-path/4c75c3b8-96d9-442e-b3c4-92d10ad33929-run-systemd\") pod \"ovnkube-node-f4swt\" (UID: \"4c75c3b8-96d9-442e-b3c4-92d10ad33929\") " pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.396880 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4c75c3b8-96d9-442e-b3c4-92d10ad33929-host-run-ovn-kubernetes\") pod \"ovnkube-node-f4swt\" (UID: \"4c75c3b8-96d9-442e-b3c4-92d10ad33929\") " pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.396931 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4c75c3b8-96d9-442e-b3c4-92d10ad33929-ovn-node-metrics-cert\") pod \"ovnkube-node-f4swt\" (UID: \"4c75c3b8-96d9-442e-b3c4-92d10ad33929\") " pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.397007 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4c75c3b8-96d9-442e-b3c4-92d10ad33929-etc-openvswitch\") pod \"ovnkube-node-f4swt\" (UID: \"4c75c3b8-96d9-442e-b3c4-92d10ad33929\") " pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.397058 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4c75c3b8-96d9-442e-b3c4-92d10ad33929-host-slash\") pod \"ovnkube-node-f4swt\" (UID: \"4c75c3b8-96d9-442e-b3c4-92d10ad33929\") " pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.397111 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4c75c3b8-96d9-442e-b3c4-92d10ad33929-systemd-units\") pod \"ovnkube-node-f4swt\" (UID: \"4c75c3b8-96d9-442e-b3c4-92d10ad33929\") " pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.397175 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4c75c3b8-96d9-442e-b3c4-92d10ad33929-node-log\") pod \"ovnkube-node-f4swt\" (UID: \"4c75c3b8-96d9-442e-b3c4-92d10ad33929\") " pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.397225 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4c75c3b8-96d9-442e-b3c4-92d10ad33929-env-overrides\") pod \"ovnkube-node-f4swt\" (UID: \"4c75c3b8-96d9-442e-b3c4-92d10ad33929\") " pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.397310 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4c75c3b8-96d9-442e-b3c4-92d10ad33929-run-ovn\") pod \"ovnkube-node-f4swt\" (UID: \"4c75c3b8-96d9-442e-b3c4-92d10ad33929\") " pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.397379 4958 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4c75c3b8-96d9-442e-b3c4-92d10ad33929-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-f4swt\" (UID: \"4c75c3b8-96d9-442e-b3c4-92d10ad33929\") " pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.415581 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.456206 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.491738 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.498097 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4c75c3b8-96d9-442e-b3c4-92d10ad33929-run-openvswitch\") pod \"ovnkube-node-f4swt\" (UID: \"4c75c3b8-96d9-442e-b3c4-92d10ad33929\") " pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.498129 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4c75c3b8-96d9-442e-b3c4-92d10ad33929-var-lib-openvswitch\") pod \"ovnkube-node-f4swt\" (UID: \"4c75c3b8-96d9-442e-b3c4-92d10ad33929\") " pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.498148 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5mzl\" (UniqueName: \"kubernetes.io/projected/4c75c3b8-96d9-442e-b3c4-92d10ad33929-kube-api-access-k5mzl\") pod \"ovnkube-node-f4swt\" (UID: \"4c75c3b8-96d9-442e-b3c4-92d10ad33929\") " pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.498172 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4c75c3b8-96d9-442e-b3c4-92d10ad33929-run-systemd\") pod \"ovnkube-node-f4swt\" (UID: \"4c75c3b8-96d9-442e-b3c4-92d10ad33929\") " pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.498193 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4c75c3b8-96d9-442e-b3c4-92d10ad33929-host-run-ovn-kubernetes\") pod \"ovnkube-node-f4swt\" (UID: \"4c75c3b8-96d9-442e-b3c4-92d10ad33929\") " pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.498217 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4c75c3b8-96d9-442e-b3c4-92d10ad33929-etc-openvswitch\") pod \"ovnkube-node-f4swt\" (UID: \"4c75c3b8-96d9-442e-b3c4-92d10ad33929\") " pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.498232 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4c75c3b8-96d9-442e-b3c4-92d10ad33929-ovn-node-metrics-cert\") pod \"ovnkube-node-f4swt\" (UID: \"4c75c3b8-96d9-442e-b3c4-92d10ad33929\") " pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.498249 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4c75c3b8-96d9-442e-b3c4-92d10ad33929-host-slash\") pod \"ovnkube-node-f4swt\" (UID: \"4c75c3b8-96d9-442e-b3c4-92d10ad33929\") " pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.498267 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4c75c3b8-96d9-442e-b3c4-92d10ad33929-systemd-units\") pod \"ovnkube-node-f4swt\" (UID: \"4c75c3b8-96d9-442e-b3c4-92d10ad33929\") " pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.498281 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4c75c3b8-96d9-442e-b3c4-92d10ad33929-node-log\") pod \"ovnkube-node-f4swt\" (UID: \"4c75c3b8-96d9-442e-b3c4-92d10ad33929\") " pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.498282 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4c75c3b8-96d9-442e-b3c4-92d10ad33929-etc-openvswitch\") pod \"ovnkube-node-f4swt\" (UID: \"4c75c3b8-96d9-442e-b3c4-92d10ad33929\") " pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.498240 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4c75c3b8-96d9-442e-b3c4-92d10ad33929-run-openvswitch\") pod \"ovnkube-node-f4swt\" (UID: \"4c75c3b8-96d9-442e-b3c4-92d10ad33929\") " pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.498296 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/4c75c3b8-96d9-442e-b3c4-92d10ad33929-env-overrides\") pod \"ovnkube-node-f4swt\" (UID: \"4c75c3b8-96d9-442e-b3c4-92d10ad33929\") " pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.498354 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4c75c3b8-96d9-442e-b3c4-92d10ad33929-systemd-units\") pod \"ovnkube-node-f4swt\" (UID: \"4c75c3b8-96d9-442e-b3c4-92d10ad33929\") " pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.498337 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4c75c3b8-96d9-442e-b3c4-92d10ad33929-run-systemd\") pod \"ovnkube-node-f4swt\" (UID: \"4c75c3b8-96d9-442e-b3c4-92d10ad33929\") " pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.498337 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4c75c3b8-96d9-442e-b3c4-92d10ad33929-host-run-ovn-kubernetes\") pod \"ovnkube-node-f4swt\" (UID: \"4c75c3b8-96d9-442e-b3c4-92d10ad33929\") " pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.498343 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4c75c3b8-96d9-442e-b3c4-92d10ad33929-host-slash\") pod \"ovnkube-node-f4swt\" (UID: \"4c75c3b8-96d9-442e-b3c4-92d10ad33929\") " pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.498408 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4c75c3b8-96d9-442e-b3c4-92d10ad33929-node-log\") pod \"ovnkube-node-f4swt\" (UID: \"4c75c3b8-96d9-442e-b3c4-92d10ad33929\") " pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.498421 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4c75c3b8-96d9-442e-b3c4-92d10ad33929-var-lib-openvswitch\") pod \"ovnkube-node-f4swt\" (UID: \"4c75c3b8-96d9-442e-b3c4-92d10ad33929\") " pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.498530 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4c75c3b8-96d9-442e-b3c4-92d10ad33929-run-ovn\") pod \"ovnkube-node-f4swt\" (UID: \"4c75c3b8-96d9-442e-b3c4-92d10ad33929\") " pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.498563 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4c75c3b8-96d9-442e-b3c4-92d10ad33929-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-f4swt\" (UID: \"4c75c3b8-96d9-442e-b3c4-92d10ad33929\") " pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.498587 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4c75c3b8-96d9-442e-b3c4-92d10ad33929-host-cni-netd\") pod \"ovnkube-node-f4swt\" 
(UID: \"4c75c3b8-96d9-442e-b3c4-92d10ad33929\") " pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.498596 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4c75c3b8-96d9-442e-b3c4-92d10ad33929-run-ovn\") pod \"ovnkube-node-f4swt\" (UID: \"4c75c3b8-96d9-442e-b3c4-92d10ad33929\") " pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.498602 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4c75c3b8-96d9-442e-b3c4-92d10ad33929-ovnkube-config\") pod \"ovnkube-node-f4swt\" (UID: \"4c75c3b8-96d9-442e-b3c4-92d10ad33929\") " pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.498631 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4c75c3b8-96d9-442e-b3c4-92d10ad33929-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-f4swt\" (UID: \"4c75c3b8-96d9-442e-b3c4-92d10ad33929\") " pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.498639 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4c75c3b8-96d9-442e-b3c4-92d10ad33929-host-cni-netd\") pod \"ovnkube-node-f4swt\" (UID: \"4c75c3b8-96d9-442e-b3c4-92d10ad33929\") " pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.498684 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4c75c3b8-96d9-442e-b3c4-92d10ad33929-host-cni-bin\") pod \"ovnkube-node-f4swt\" (UID: \"4c75c3b8-96d9-442e-b3c4-92d10ad33929\") " pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.498744 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4c75c3b8-96d9-442e-b3c4-92d10ad33929-host-kubelet\") pod \"ovnkube-node-f4swt\" (UID: \"4c75c3b8-96d9-442e-b3c4-92d10ad33929\") " pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.498770 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4c75c3b8-96d9-442e-b3c4-92d10ad33929-host-run-netns\") pod \"ovnkube-node-f4swt\" (UID: \"4c75c3b8-96d9-442e-b3c4-92d10ad33929\") " pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.498811 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4c75c3b8-96d9-442e-b3c4-92d10ad33929-log-socket\") pod \"ovnkube-node-f4swt\" (UID: \"4c75c3b8-96d9-442e-b3c4-92d10ad33929\") " pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.498835 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4c75c3b8-96d9-442e-b3c4-92d10ad33929-ovnkube-script-lib\") pod \"ovnkube-node-f4swt\" (UID: \"4c75c3b8-96d9-442e-b3c4-92d10ad33929\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.499201 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4c75c3b8-96d9-442e-b3c4-92d10ad33929-env-overrides\") pod \"ovnkube-node-f4swt\" (UID: \"4c75c3b8-96d9-442e-b3c4-92d10ad33929\") " pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.499259 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4c75c3b8-96d9-442e-b3c4-92d10ad33929-host-kubelet\") pod \"ovnkube-node-f4swt\" (UID: \"4c75c3b8-96d9-442e-b3c4-92d10ad33929\") " pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.499291 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4c75c3b8-96d9-442e-b3c4-92d10ad33929-host-cni-bin\") pod \"ovnkube-node-f4swt\" (UID: \"4c75c3b8-96d9-442e-b3c4-92d10ad33929\") " pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.499303 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4c75c3b8-96d9-442e-b3c4-92d10ad33929-ovnkube-config\") pod \"ovnkube-node-f4swt\" (UID: \"4c75c3b8-96d9-442e-b3c4-92d10ad33929\") " pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.499322 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4c75c3b8-96d9-442e-b3c4-92d10ad33929-host-run-netns\") pod \"ovnkube-node-f4swt\" (UID: \"4c75c3b8-96d9-442e-b3c4-92d10ad33929\") " pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.499347 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4c75c3b8-96d9-442e-b3c4-92d10ad33929-log-socket\") pod \"ovnkube-node-f4swt\" (UID: \"4c75c3b8-96d9-442e-b3c4-92d10ad33929\") " pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.499927 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4c75c3b8-96d9-442e-b3c4-92d10ad33929-ovnkube-script-lib\") pod \"ovnkube-node-f4swt\" (UID: \"4c75c3b8-96d9-442e-b3c4-92d10ad33929\") " pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.502140 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4c75c3b8-96d9-442e-b3c4-92d10ad33929-ovn-node-metrics-cert\") pod \"ovnkube-node-f4swt\" (UID: \"4c75c3b8-96d9-442e-b3c4-92d10ad33929\") " pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.542199 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5mzl\" (UniqueName: \"kubernetes.io/projected/4c75c3b8-96d9-442e-b3c4-92d10ad33929-kube-api-access-k5mzl\") pod \"ovnkube-node-f4swt\" (UID: \"4c75c3b8-96d9-442e-b3c4-92d10ad33929\") " pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.553625 4958 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c13528c0-da5d-4d55-9155-2c29c33edfc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5ktnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.593931 4958 
status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cxvtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48b90387-4ea9-47a3-8aa1-b8b19c5ed186\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"
volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\
":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cxvtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.626832 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.630945 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.633189 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.643615 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c75c3b8-96d9-442e-b3c4-92d10ad33929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-f4swt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.652669 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Dec 06 05:28:30 crc kubenswrapper[4958]: W1206 05:28:30.680068 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4c75c3b8_96d9_442e_b3c4_92d10ad33929.slice/crio-d14ef3a0c7a1099f2b0390fd834cc2f2a26e2fa3879638b38a5750eb3a9f184d 
WatchSource:0}: Error finding container d14ef3a0c7a1099f2b0390fd834cc2f2a26e2fa3879638b38a5750eb3a9f184d: Status 404 returned error can't find the container with id d14ef3a0c7a1099f2b0390fd834cc2f2a26e2fa3879638b38a5750eb3a9f184d Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.691606 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.733621 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.774336 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.810348 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5mx5v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ab56cd4-7270-4252-b6e6-cbc102b84d97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-448jg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5mx5v\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.855961 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wr7h5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-778ks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\
\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wr7h5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.893213 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bbf3a5d-dfb7-4f26-a3a5-5a1198cffc78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ac8c174d2f2cc2b18fa67b1969e4ec3130a7a3ea2e1de302b6607334c8295a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a49f9fd54866385a2dd4c0218a2b5a828817807c2688301ea29004ba0578d556\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://799383b7dc23318bd8bdb994e83afadc90864baf5e694f5a7ef2208c6771660c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/open
shift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c9829c79fbd1ef57f374e8bf829162c2ae48074f4d02cc609d5f9b8791c61a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b2ffa013378c2ca81671e521b37d30af422fd6a418d17e7c2ca14b2da5557d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4001890a8c107a9365e884bae7b1d10f11ea1208ada74b61dcd4cf3db767f3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4001890a8c107a9365e884bae7b1d10f11ea1208ada74b61dcd4cf3db767f3fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.911180 
4958 generic.go:334] "Generic (PLEG): container finished" podID="48b90387-4ea9-47a3-8aa1-b8b19c5ed186" containerID="6d3ce1b34e648dbc34656fb0430468d6a998992cdb52f7858c3e9032781cabf3" exitCode=0 Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.911270 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-cxvtt" event={"ID":"48b90387-4ea9-47a3-8aa1-b8b19c5ed186","Type":"ContainerDied","Data":"6d3ce1b34e648dbc34656fb0430468d6a998992cdb52f7858c3e9032781cabf3"} Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.911457 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-cxvtt" event={"ID":"48b90387-4ea9-47a3-8aa1-b8b19c5ed186","Type":"ContainerStarted","Data":"6fc49784feb0c06872733e05e103f76007509641b9a003f7dce7c546060cfa97"} Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.913375 4958 generic.go:334] "Generic (PLEG): container finished" podID="4c75c3b8-96d9-442e-b3c4-92d10ad33929" containerID="f16219fb96bfa08d1c4e1fbacac446f07ea75a221d1a6a4eddadd9e587f75ea9" exitCode=0 Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.913437 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" event={"ID":"4c75c3b8-96d9-442e-b3c4-92d10ad33929","Type":"ContainerDied","Data":"f16219fb96bfa08d1c4e1fbacac446f07ea75a221d1a6a4eddadd9e587f75ea9"} Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.913579 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" event={"ID":"4c75c3b8-96d9-442e-b3c4-92d10ad33929","Type":"ContainerStarted","Data":"d14ef3a0c7a1099f2b0390fd834cc2f2a26e2fa3879638b38a5750eb3a9f184d"} Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.915686 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-wr7h5" event={"ID":"fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7","Type":"ContainerStarted","Data":"24bb15c5aabe30ec2b8cea2cfd92c4a99c8e2170aecfe6cf33f745ecccf32c42"} Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.915749 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-wr7h5" event={"ID":"fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7","Type":"ContainerStarted","Data":"73441dd0c715783a28d862804e57a3aa4be8c0081e56475ba541f539619feaed"} Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.935440 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:30 crc kubenswrapper[4958]: E1206 05:28:30.953067 4958 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 06 05:28:30 crc kubenswrapper[4958]: I1206 05:28:30.998413 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:31 crc kubenswrapper[4958]: I1206 05:28:31.003298 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 05:28:31 crc kubenswrapper[4958]: I1206 05:28:31.003434 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 05:28:31 crc kubenswrapper[4958]: I1206 05:28:31.003480 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 05:28:31 crc kubenswrapper[4958]: I1206 05:28:31.003547 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 05:28:31 crc kubenswrapper[4958]: I1206 05:28:31.003585 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 05:28:31 crc kubenswrapper[4958]: E1206 05:28:31.003683 4958 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 06 05:28:31 crc kubenswrapper[4958]: E1206 
05:28:31.003744 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-06 05:28:33.003719058 +0000 UTC m=+23.537489831 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 06 05:28:31 crc kubenswrapper[4958]: E1206 05:28:31.004053 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 05:28:33.004043307 +0000 UTC m=+23.537814070 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 05:28:31 crc kubenswrapper[4958]: E1206 05:28:31.004647 4958 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 06 05:28:31 crc kubenswrapper[4958]: E1206 05:28:31.004671 4958 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 06 05:28:31 crc kubenswrapper[4958]: E1206 05:28:31.004683 4958 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 06 05:28:31 crc kubenswrapper[4958]: E1206 05:28:31.004709 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-06 05:28:33.004700344 +0000 UTC m=+23.538471107 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 06 05:28:31 crc kubenswrapper[4958]: E1206 05:28:31.004752 4958 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 06 05:28:31 crc kubenswrapper[4958]: E1206 05:28:31.004762 4958 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 06 05:28:31 crc kubenswrapper[4958]: E1206 05:28:31.004770 4958 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 06 05:28:31 crc kubenswrapper[4958]: E1206 05:28:31.004790 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-06 05:28:33.004783076 +0000 UTC m=+23.538553839 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 06 05:28:31 crc kubenswrapper[4958]: E1206 05:28:31.005101 4958 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 06 05:28:31 crc kubenswrapper[4958]: E1206 05:28:31.005126 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-06 05:28:33.005118504 +0000 UTC m=+23.538889267 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 06 05:28:31 crc kubenswrapper[4958]: I1206 05:28:31.031761 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5mx5v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ab56cd4-7270-4252-b6e6-cbc102b84d97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-448jg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5mx5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:31 crc kubenswrapper[4958]: I1206 05:28:31.074988 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wr7h5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-778ks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wr7h5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:31 crc kubenswrapper[4958]: I1206 05:28:31.115430 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bbf3a5d-dfb7-4f26-a3a5-5a1198cffc78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ac8c174d2f2cc2b18fa67b1969e4ec3130a7a3ea2e1de302b6607334c8295a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a49f9fd54866385a2dd4c0218a2b5a828817807c2688301ea29004ba0578d556\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://799383b7dc23318bd8bdb994e83afadc90864baf5e694f5a7ef2208c6771660c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/
static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c9829c79fbd1ef57f374e8bf829162c2ae48074f4d02cc609d5f9b8791c61a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b2ffa013378c2ca81671e521b37d30af422fd6a418d17e7c2ca14b2da5557d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4001890a8c107a9365e884bae7b1d10f11ea1208ada74b61dcd4cf3db767f3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4001890a8c107a9365e884bae7b1d10f11ea1208ada74b61dcd4cf3db767f3fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:31 crc kubenswrapper[4958]: I1206 05:28:31.156643 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e7d0d99-10e2-4254-82b5-5490222f80fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f512fa9922310920769bab096c2bb8b2305a17dded545923c16b5e417db50b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dddeac64993fef3f326c165eeeb95bb4b5b9961f07f2b935e592b5aac9a51a32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50e80d39fcfd69db193b9e6c98c92a60983dd0c62b061a69f045193110d159c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc69c938c136778135bd2a90d1671af107b76c4571f8e52327c98f120665d7b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:31 crc kubenswrapper[4958]: I1206 05:28:31.195241 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:31 crc kubenswrapper[4958]: I1206 05:28:31.232883 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:31 crc kubenswrapper[4958]: I1206 05:28:31.276246 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:31 crc kubenswrapper[4958]: I1206 05:28:31.315596 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c13528c0-da5d-4d55-9155-2c29c33edfc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5ktnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:31 crc kubenswrapper[4958]: I1206 05:28:31.387642 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cxvtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48b90387-4ea9-47a3-8aa1-b8b19c5ed186\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cxvtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:31 crc kubenswrapper[4958]: I1206 05:28:31.412383 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c75c3b8-96d9-442e-b3c4-92d10ad33929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-f4swt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:31 crc kubenswrapper[4958]: I1206 05:28:31.440873 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:31 crc kubenswrapper[4958]: I1206 05:28:31.474296 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:31 crc kubenswrapper[4958]: I1206 05:28:31.511623 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5mx5v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ab56cd4-7270-4252-b6e6-cbc102b84d97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-448jg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5mx5v\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:31 crc kubenswrapper[4958]: I1206 05:28:31.555748 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wr7h5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24bb15c5aabe30ec2b8cea2cfd92c4a99c8e2170aecfe6cf33f745ecccf32c42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-778ks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wr7h5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:31 crc kubenswrapper[4958]: I1206 05:28:31.595437 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:31 crc kubenswrapper[4958]: I1206 05:28:31.646782 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e7d0d99-10e2-4254-82b5-5490222f80fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f512fa9922310920769bab096c2bb8b2305a17dded545923c16b5e417db50b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dddeac64993fef3f326c165eeeb95bb4b5b9961f07f2b935e592b5aac9a51a32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-po
d-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50e80d39fcfd69db193b9e6c98c92a60983dd0c62b061a69f045193110d159c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc69c938c136778135bd2a90d1671af107b76c4571f8e52327c98f120665d7b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:31 crc kubenswrapper[4958]: I1206 05:28:31.678962 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bbf3a5d-dfb7-4f26-a3a5-5a1198cffc78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ac8c174d2f2cc2b18fa67b1969e4ec3130a7a3ea2e1de302b6607334c8295a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a49f9fd54866385a2dd4c0218a2b5a828817807c2688301ea29004ba0578d556\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://799383b7dc23318bd8bdb994e83afadc90864baf5e694f5a7ef2208c6771660c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c9829c79fbd1ef57f374e8bf829162c2ae48074f4d02cc609d5f9b8791c61a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b2ffa013378c2ca81671e521b37d30af422fd6a418d17e7c2ca14b2da5557d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4001890a8c107a9365e884bae7b1d10f11ea1208ada74b61dcd4cf3db767f3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4001890a8c107a9365e884bae7b1d10f11ea1208ada74b61dcd4cf3db767f3fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:31 crc kubenswrapper[4958]: I1206 05:28:31.715307 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:31 crc kubenswrapper[4958]: I1206 05:28:31.737541 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-bxxwq"] Dec 06 05:28:31 crc kubenswrapper[4958]: I1206 05:28:31.737924 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-bxxwq" Dec 06 05:28:31 crc kubenswrapper[4958]: I1206 05:28:31.753166 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:31 crc kubenswrapper[4958]: I1206 05:28:31.761411 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 05:28:31 crc kubenswrapper[4958]: I1206 05:28:31.761438 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 05:28:31 crc kubenswrapper[4958]: E1206 05:28:31.761666 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 05:28:31 crc kubenswrapper[4958]: E1206 05:28:31.761791 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 05:28:31 crc kubenswrapper[4958]: I1206 05:28:31.761433 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 05:28:31 crc kubenswrapper[4958]: E1206 05:28:31.761891 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 05:28:31 crc kubenswrapper[4958]: I1206 05:28:31.766797 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Dec 06 05:28:31 crc kubenswrapper[4958]: I1206 05:28:31.787346 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Dec 06 05:28:31 crc kubenswrapper[4958]: I1206 05:28:31.807008 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Dec 06 05:28:31 crc kubenswrapper[4958]: I1206 05:28:31.827307 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 06 05:28:31 crc kubenswrapper[4958]: I1206 05:28:31.860515 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Dec 06 05:28:31 crc kubenswrapper[4958]: I1206 05:28:31.874319 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Dec 06 05:28:31 crc kubenswrapper[4958]: I1206 05:28:31.877082 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:31 crc kubenswrapper[4958]: I1206 05:28:31.897622 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Dec 06 05:28:31 crc kubenswrapper[4958]: I1206 05:28:31.913828 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/802ce14e-baaf-4d0d-87e3-3457209d8cda-host\") pod \"node-ca-bxxwq\" (UID: \"802ce14e-baaf-4d0d-87e3-3457209d8cda\") " pod="openshift-image-registry/node-ca-bxxwq" Dec 06 05:28:31 crc kubenswrapper[4958]: I1206 05:28:31.913862 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlz6h\" (UniqueName: \"kubernetes.io/projected/802ce14e-baaf-4d0d-87e3-3457209d8cda-kube-api-access-wlz6h\") pod \"node-ca-bxxwq\" (UID: 
\"802ce14e-baaf-4d0d-87e3-3457209d8cda\") " pod="openshift-image-registry/node-ca-bxxwq" Dec 06 05:28:31 crc kubenswrapper[4958]: I1206 05:28:31.913900 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/802ce14e-baaf-4d0d-87e3-3457209d8cda-serviceca\") pod \"node-ca-bxxwq\" (UID: \"802ce14e-baaf-4d0d-87e3-3457209d8cda\") " pod="openshift-image-registry/node-ca-bxxwq" Dec 06 05:28:31 crc kubenswrapper[4958]: I1206 05:28:31.920751 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" event={"ID":"4c75c3b8-96d9-442e-b3c4-92d10ad33929","Type":"ContainerStarted","Data":"f87617d1f4b9c7ad893a539ddf123f3e213d38e44c084441f565d6535078c7f0"} Dec 06 05:28:31 crc kubenswrapper[4958]: I1206 05:28:31.920790 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" event={"ID":"4c75c3b8-96d9-442e-b3c4-92d10ad33929","Type":"ContainerStarted","Data":"08f6d7aa9cdfe8211f420e0a25851c58e125bd09b9d90ad4724703cc1112e45b"} Dec 06 05:28:31 crc kubenswrapper[4958]: I1206 05:28:31.920801 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" event={"ID":"4c75c3b8-96d9-442e-b3c4-92d10ad33929","Type":"ContainerStarted","Data":"e6e7454c626f79f7ef231e33e2917e2a65265fc21ccf0bb078c9b7f5b3cec240"} Dec 06 05:28:31 crc kubenswrapper[4958]: I1206 05:28:31.920810 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" event={"ID":"4c75c3b8-96d9-442e-b3c4-92d10ad33929","Type":"ContainerStarted","Data":"9c7cf12b8a7e5da199c6d8c5bdeb41243c911cef6cab0bac81e407502a8ff144"} Dec 06 05:28:31 crc kubenswrapper[4958]: I1206 05:28:31.920819 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" event={"ID":"4c75c3b8-96d9-442e-b3c4-92d10ad33929","Type":"ContainerStarted","Data":"473d669d19d2507144f95718dc331ea994a36254f7f7b6e2b262a0e9cb46bfc2"} Dec 06 05:28:31 crc kubenswrapper[4958]: I1206 05:28:31.920828 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" event={"ID":"4c75c3b8-96d9-442e-b3c4-92d10ad33929","Type":"ContainerStarted","Data":"1ac1117aba254b7e048d9dd3feff2587d218f6d7a6f4ccbe455eb2b19e534aa7"} Dec 06 05:28:31 crc kubenswrapper[4958]: I1206 05:28:31.922053 4958 generic.go:334] "Generic (PLEG): container finished" podID="48b90387-4ea9-47a3-8aa1-b8b19c5ed186" containerID="45259e88c3c2b9523b20a34f02fb59aa21e487dd207c8cb6e29ac571d89cd75c" exitCode=0 Dec 06 05:28:31 crc kubenswrapper[4958]: I1206 05:28:31.922101 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-cxvtt" event={"ID":"48b90387-4ea9-47a3-8aa1-b8b19c5ed186","Type":"ContainerDied","Data":"45259e88c3c2b9523b20a34f02fb59aa21e487dd207c8cb6e29ac571d89cd75c"} Dec 06 05:28:31 crc kubenswrapper[4958]: I1206 05:28:31.933126 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c13528c0-da5d-4d55-9155-2c29c33edfc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5ktnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:31 crc kubenswrapper[4958]: E1206 05:28:31.953657 4958 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-crc\" already exists" pod="openshift-etcd/etcd-crc" 
Dec 06 05:28:31 crc kubenswrapper[4958]: I1206 05:28:31.996574 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cxvtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48b90387-4ea9-47a3-8aa1-b8b19c5ed186\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d3ce1b34e648dbc34656fb0430468d6a998992cdb52f7858c3e9032781cabf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d3ce1b34e648dbc34656fb0430468d6a998992cdb52f7858c3e9032781cabf3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\
"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\
\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cxvtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:32 crc kubenswrapper[4958]: I1206 05:28:32.014442 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/802ce14e-baaf-4d0d-87e3-3457209d8cda-serviceca\") pod \"node-ca-bxxwq\" (UID: \"802ce14e-baaf-4d0d-87e3-3457209d8cda\") " pod="openshift-image-registry/node-ca-bxxwq" Dec 06 05:28:32 crc kubenswrapper[4958]: I1206 05:28:32.014529 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/802ce14e-baaf-4d0d-87e3-3457209d8cda-host\") pod \"node-ca-bxxwq\" (UID: \"802ce14e-baaf-4d0d-87e3-3457209d8cda\") " pod="openshift-image-registry/node-ca-bxxwq" Dec 06 05:28:32 crc kubenswrapper[4958]: I1206 05:28:32.014554 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlz6h\" (UniqueName: \"kubernetes.io/projected/802ce14e-baaf-4d0d-87e3-3457209d8cda-kube-api-access-wlz6h\") pod \"node-ca-bxxwq\" (UID: \"802ce14e-baaf-4d0d-87e3-3457209d8cda\") " pod="openshift-image-registry/node-ca-bxxwq" Dec 06 05:28:32 crc kubenswrapper[4958]: I1206 05:28:32.014614 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/802ce14e-baaf-4d0d-87e3-3457209d8cda-host\") pod \"node-ca-bxxwq\" (UID: \"802ce14e-baaf-4d0d-87e3-3457209d8cda\") " pod="openshift-image-registry/node-ca-bxxwq" Dec 06 05:28:32 crc kubenswrapper[4958]: I1206 05:28:32.016642 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/802ce14e-baaf-4d0d-87e3-3457209d8cda-serviceca\") pod \"node-ca-bxxwq\" (UID: \"802ce14e-baaf-4d0d-87e3-3457209d8cda\") " pod="openshift-image-registry/node-ca-bxxwq" Dec 06 05:28:32 crc kubenswrapper[4958]: I1206 05:28:32.038338 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c75c3b8-96d9-442e-b3c4-92d10ad33929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f16219fb96bfa08d1c4e1fbacac446f07ea75a221d1a6a4eddadd9e587f75ea9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f16219fb96bfa08d1c4e1fbacac446f07ea75a221d1a6a4eddadd9e587f75ea9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-f4swt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:32 crc kubenswrapper[4958]: I1206 05:28:32.062071 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlz6h\" (UniqueName: \"kubernetes.io/projected/802ce14e-baaf-4d0d-87e3-3457209d8cda-kube-api-access-wlz6h\") pod \"node-ca-bxxwq\" (UID: \"802ce14e-baaf-4d0d-87e3-3457209d8cda\") " pod="openshift-image-registry/node-ca-bxxwq" Dec 06 05:28:32 crc kubenswrapper[4958]: I1206 05:28:32.098663 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:32 crc kubenswrapper[4958]: I1206 05:28:32.133803 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bbf3a5d-dfb7-4f26-a3a5-5a1198cffc78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ac8c174d2f2cc2b18fa67b1969e4ec3130a7a3ea2e1de302b6607334c8295a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a49f9fd54866385a2dd4c0218a2b5a828817807c2688301ea29004ba0578d556\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":
\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://799383b7dc23318bd8bdb994e83afadc90864baf5e694f5a7ef2208c6771660c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c9829c79fbd1ef57f374e8bf829162c2ae48074f4d02cc609d5f9b8791c61a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b2ffa013378c2ca81671e521b37d30af422fd6a418d17e7c2ca14b2da5557d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4001890a8c107a9365e884bae7b1d10f11ea1208ada74b61dcd4cf3db767f3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4001890a8c107a9365e884bae7b1d10f11ea1208ada74b61dcd4cf3db767f3fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod 
\"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:32 crc kubenswrapper[4958]: I1206 05:28:32.173075 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e7d0d99-10e2-4254-82b5-5490222f80fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f512fa9922310920769bab096c2bb8b2305a17dded545923c16b5e417db50b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dddeac64993fef3f326c165eeeb95bb4b5b9961f07f2b935e592b5aac9a51a32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50e80d39fcfd69db193b9e6c98c92a60983dd0c62b061a69f045193110d159c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\
\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc69c938c136778135bd2a90d1671af107b76c4571f8e52327c98f120665d7b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:32 crc kubenswrapper[4958]: I1206 05:28:32.212939 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:32 crc kubenswrapper[4958]: I1206 05:28:32.253189 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:32 crc kubenswrapper[4958]: I1206 05:28:32.296052 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:32 crc kubenswrapper[4958]: I1206 05:28:32.334952 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c13528c0-da5d-4d55-9155-2c29c33edfc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5ktnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:32 crc kubenswrapper[4958]: I1206 05:28:32.347661 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-bxxwq" Dec 06 05:28:32 crc kubenswrapper[4958]: W1206 05:28:32.362514 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod802ce14e_baaf_4d0d_87e3_3457209d8cda.slice/crio-797eb13993709881d978be284432f3c1f4ece8ac84dd2069bc3032edb1d2149c WatchSource:0}: Error finding container 797eb13993709881d978be284432f3c1f4ece8ac84dd2069bc3032edb1d2149c: Status 404 returned error can't find the container with id 797eb13993709881d978be284432f3c1f4ece8ac84dd2069bc3032edb1d2149c Dec 06 05:28:32 crc kubenswrapper[4958]: I1206 05:28:32.380292 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cxvtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48b90387-4ea9-47a3-8aa1-b8b19c5ed186\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d3ce1b34e648dbc34656fb0430468d6a998992cdb52f7858c3e9032781cabf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d3ce1b34e648dbc34656fb0430468d6a998992cdb52f7858c3e9032781cabf3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45259e88c3c2b9523b20a34f02fb59aa21e487dd207c8cb6e29ac571d89cd75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45259e88c3c2b9523b20a34f02fb59aa21e487dd207c8cb6e29ac571d89cd75c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-
06T05:28:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cxvtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:32 crc kubenswrapper[4958]: I1206 05:28:32.429738 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c75c3b8-96d9-442e-b3c4-92d10ad33929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\
\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"
},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f16219fb96bfa08d1c4e1fbacac446f07ea75a221d1a6a4eddadd9e587f75ea9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f16219fb96bfa08d1c4e1fbacac446f07ea75a221d1a6a4eddadd9e587f75ea9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-f4swt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:32 crc kubenswrapper[4958]: I1206 05:28:32.454631 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bxxwq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"802ce14e-baaf-4d0d-87e3-3457209d8cda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlz6h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bxxwq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:32 crc kubenswrapper[4958]: I1206 05:28:32.509880 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3942544-663d-452b-851f-7ae1951315d0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c605d49963ffec3a03f9166f2a4b945adca772c1e6f9ef0c43e3b43498bb520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fa2e5229064a9479f641c0c6d934e8eda769ee88be13401c43ae136ce8bc24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0fa5b094aed0445fa892dd4d08fa83de565bda82c6ca9b9d536a3275df19b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9356ce9f50a037c7bbb99085c8d3a1cf6648fca53bed4eb86d1be5cb82386bfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bb5270cbb6f2e29e1393dc0724b38332f5d2b7bd2947352c848a22efbcd979c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4460d6ee5ad193a5fdd65c4ddf2e2837db3e1a0a5f8f32b84ad83cc74cc6c779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b5
4b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4460d6ee5ad193a5fdd65c4ddf2e2837db3e1a0a5f8f32b84ad83cc74cc6c779\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d0adb65980c4e04df9b38b248a2d6bf58ac214ec2fe6ee6f8fbee38dc114904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d0adb65980c4e04df9b38b248a2d6bf58ac214ec2fe6ee6f8fbee38dc114904\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b4a1b86d7480be2b839295c4c7dc60abd79e17d12514d7a7f90453498dd586e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4a1b86d7480be2b839295c4c7dc60abd79e17d12514d7a7f90453498dd586e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:32 crc kubenswrapper[4958]: I1206 05:28:32.535835 4958 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 06 05:28:32 crc kubenswrapper[4958]: I1206 05:28:32.537643 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:32 crc kubenswrapper[4958]: I1206 05:28:32.537683 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:32 crc kubenswrapper[4958]: I1206 05:28:32.537696 4958 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:32 crc kubenswrapper[4958]: I1206 05:28:32.537807 4958 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 06 05:28:32 crc kubenswrapper[4958]: I1206 05:28:32.539689 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:32 crc kubenswrapper[4958]: I1206 05:28:32.587631 4958 kubelet_node_status.go:115] "Node was previously registered" node="crc" Dec 06 05:28:32 crc kubenswrapper[4958]: I1206 05:28:32.588000 4958 kubelet_node_status.go:79] "Successfully registered node" node="crc" Dec 06 05:28:32 crc kubenswrapper[4958]: I1206 05:28:32.589758 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:32 crc kubenswrapper[4958]: I1206 05:28:32.589806 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:32 crc kubenswrapper[4958]: I1206 05:28:32.589818 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:32 crc kubenswrapper[4958]: I1206 05:28:32.589844 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:32 crc kubenswrapper[4958]: I1206 05:28:32.589857 4958 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:32Z","lastTransitionTime":"2025-12-06T05:28:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:32 crc kubenswrapper[4958]: E1206 05:28:32.604381 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ae601d9-1e3a-4939-b6d5-fbef7be2f380\\\",\\\"systemUUID\\\":\\\"d8c60597-c05a-4627-8199-844ddb77ec1c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:32 crc kubenswrapper[4958]: I1206 05:28:32.610151 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:32 crc kubenswrapper[4958]: I1206 05:28:32.610208 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:32 crc kubenswrapper[4958]: I1206 05:28:32.610227 4958 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:32 crc kubenswrapper[4958]: I1206 05:28:32.610250 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:32 crc kubenswrapper[4958]: I1206 05:28:32.610270 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:32Z","lastTransitionTime":"2025-12-06T05:28:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:32 crc kubenswrapper[4958]: I1206 05:28:32.612305 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5mx5v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ab56cd4-7270-4252-b6e6-cbc102b84d97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-448jg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5mx5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:32 crc kubenswrapper[4958]: E1206 05:28:32.622963 4958 kubelet_node_status.go:585] "Error updating node status, will 
retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b
8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\
"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ae601d9-1e3a-4939-b6d5-fbef7be2f380\\\",\\\"systemUUID\\\":\\\"d8c60597-c05a-4627-8199-844ddb77ec1c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:32 crc kubenswrapper[4958]: I1206 05:28:32.627316 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:32 crc kubenswrapper[4958]: I1206 05:28:32.627368 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:32 crc kubenswrapper[4958]: I1206 05:28:32.627386 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:32 crc kubenswrapper[4958]: I1206 05:28:32.627410 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:32 crc kubenswrapper[4958]: I1206 05:28:32.627427 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:32Z","lastTransitionTime":"2025-12-06T05:28:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:28:32 crc kubenswrapper[4958]: E1206 05:28:32.638794 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ae601d9-1e3a-4939-b6d5-fbef7be2f380\\\",\\\"systemUUID\\\":\\\"d8c60597-c05a-4627-8199-844ddb77ec1c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:32 crc kubenswrapper[4958]: I1206 05:28:32.644255 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:32 crc kubenswrapper[4958]: I1206 05:28:32.644289 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:32 crc kubenswrapper[4958]: I1206 05:28:32.644301 4958 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:32 crc kubenswrapper[4958]: I1206 05:28:32.644316 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:32 crc kubenswrapper[4958]: I1206 05:28:32.644327 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:32Z","lastTransitionTime":"2025-12-06T05:28:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:32 crc kubenswrapper[4958]: I1206 05:28:32.653193 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wr7h5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24bb15c5aabe30ec2b8cea2cfd92c4a99c8e2170aecfe6cf33f745ecccf32c42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\
"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-778ks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wr7h5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:32 crc kubenswrapper[4958]: E1206 05:28:32.653433 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ae601d9-1e3a-4939-b6d5-fbef7be2f380\\\",\\\"systemUUID\\\":\\\"d8c60597-c05a-4627-8199-844ddb77ec1c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:32 crc kubenswrapper[4958]: I1206 05:28:32.659583 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:32 crc kubenswrapper[4958]: I1206 05:28:32.659641 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:32 crc kubenswrapper[4958]: I1206 05:28:32.659654 4958 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:32 crc kubenswrapper[4958]: I1206 05:28:32.659671 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:32 crc kubenswrapper[4958]: I1206 05:28:32.659680 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:32Z","lastTransitionTime":"2025-12-06T05:28:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:32 crc kubenswrapper[4958]: E1206 05:28:32.668959 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ae601d9-1e3a-4939-b6d5-fbef7be2f380\\\",\\\"systemUUID\\\":\\\"d8c60597-c05a-4627-8199-844ddb77ec1c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:32 crc kubenswrapper[4958]: E1206 05:28:32.669228 4958 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 06 05:28:32 crc kubenswrapper[4958]: I1206 05:28:32.670570 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:32 crc kubenswrapper[4958]: I1206 05:28:32.670660 4958 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:32 crc kubenswrapper[4958]: I1206 05:28:32.670726 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:32 crc kubenswrapper[4958]: I1206 05:28:32.670794 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:32 crc kubenswrapper[4958]: I1206 05:28:32.670857 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:32Z","lastTransitionTime":"2025-12-06T05:28:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:32 crc kubenswrapper[4958]: I1206 05:28:32.692071 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:32 crc kubenswrapper[4958]: I1206 05:28:32.735090 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:32 crc kubenswrapper[4958]: I1206 05:28:32.773914 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:32 crc kubenswrapper[4958]: I1206 05:28:32.773962 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:32 crc kubenswrapper[4958]: I1206 05:28:32.773974 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:32 crc kubenswrapper[4958]: I1206 05:28:32.773991 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:32 crc kubenswrapper[4958]: I1206 05:28:32.774003 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:32Z","lastTransitionTime":"2025-12-06T05:28:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:32 crc kubenswrapper[4958]: I1206 05:28:32.876758 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:32 crc kubenswrapper[4958]: I1206 05:28:32.877024 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:32 crc kubenswrapper[4958]: I1206 05:28:32.877137 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:32 crc kubenswrapper[4958]: I1206 05:28:32.877222 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:32 crc kubenswrapper[4958]: I1206 05:28:32.877311 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:32Z","lastTransitionTime":"2025-12-06T05:28:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:28:32 crc kubenswrapper[4958]: I1206 05:28:32.929296 4958 generic.go:334] "Generic (PLEG): container finished" podID="48b90387-4ea9-47a3-8aa1-b8b19c5ed186" containerID="5907422cf6b36b8d77741d339e507eee8a26c62b51c03458fbab243363441b49" exitCode=0 Dec 06 05:28:32 crc kubenswrapper[4958]: I1206 05:28:32.929394 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-cxvtt" event={"ID":"48b90387-4ea9-47a3-8aa1-b8b19c5ed186","Type":"ContainerDied","Data":"5907422cf6b36b8d77741d339e507eee8a26c62b51c03458fbab243363441b49"} Dec 06 05:28:32 crc kubenswrapper[4958]: I1206 05:28:32.932491 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-bxxwq" event={"ID":"802ce14e-baaf-4d0d-87e3-3457209d8cda","Type":"ContainerStarted","Data":"691435e89fcfa5b2dbd5ee54379f791a3e4f4df43c500a83bc61f7b20aca7ae1"} Dec 06 05:28:32 crc kubenswrapper[4958]: I1206 05:28:32.932550 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-bxxwq" event={"ID":"802ce14e-baaf-4d0d-87e3-3457209d8cda","Type":"ContainerStarted","Data":"797eb13993709881d978be284432f3c1f4ece8ac84dd2069bc3032edb1d2149c"} Dec 06 05:28:32 crc kubenswrapper[4958]: I1206 05:28:32.947910 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bbf3a5d-dfb7-4f26-a3a5-5a1198cffc78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ac8c174d2f2cc2b18fa67b1969e4ec3130a7a3ea2e1de302b6607334c8295a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a49f9fd54866385a2dd4c0218a2b5a828817807c2688301ea29004ba0578d556\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-oper
ator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://799383b7dc23318bd8bdb994e83afadc90864baf5e694f5a7ef2208c6771660c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c9829c79fbd1ef57f374e8bf829162c2ae48074f4d02cc609d5f9b8791c61a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b2ffa013378c2ca81671e521b37d30af422fd6a418d17e7c2ca14b2da5557d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4001890a8c107a9365e884bae7b1d10f11ea1208ada74b61dcd4cf3db767f3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4001890a8c107a9365e884bae7b1d10f11ea1208ada74b61dcd4cf3db767f3fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"202
5-12-06T05:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:32 crc kubenswrapper[4958]: I1206 05:28:32.959463 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e7d0d99-10e2-4254-82b5-5490222f80fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f512fa9922310920769bab096c2bb8b2305a17dded545923c16b5e417db50b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dddeac64993fef3f326c165eeeb95bb4b5b9961f07f2b935e592b5aac9a51a32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50e80d39fcfd69db193b9e6c98c92a60983dd0c62b061a69f045193110d159c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-c
ontroller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc69c938c136778135bd2a90d1671af107b76c4571f8e52327c98f120665d7b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:32 crc kubenswrapper[4958]: I1206 05:28:32.968226 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c13528c0-da5d-4d55-9155-2c29c33edfc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5ktnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:32 crc kubenswrapper[4958]: I1206 05:28:32.979336 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:32 crc kubenswrapper[4958]: I1206 05:28:32.979376 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:32 crc kubenswrapper[4958]: I1206 05:28:32.979387 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:32 crc kubenswrapper[4958]: I1206 05:28:32.979404 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:32 crc kubenswrapper[4958]: I1206 05:28:32.979419 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:32Z","lastTransitionTime":"2025-12-06T05:28:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:28:32 crc kubenswrapper[4958]: I1206 05:28:32.981957 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cxvtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48b90387-4ea9-47a3-8aa1-b8b19c5ed186\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d3ce1b34e648dbc34656fb0430468d6a998992cdb52f7858c3e9032781cabf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d3ce1b34e648dbc34656fb0430468d6a998992cdb52f7858c3e9032781cabf3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45259e88c3c2b9523b20a34f02fb59aa21e487dd207c8cb6e29ac571d89cd75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45259e88c3c2b9523b20a34f02fb59aa21e487dd207c8cb6e29ac571d89cd75c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5907422cf6b36b8d77741d339e507eee8a26c62b51c03458fbab243363441b49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5907422cf6b36b8d77741d339e507eee8a26c62b51c03458fbab243363441b49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},
{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cxvtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:33 crc kubenswrapper[4958]: I1206 05:28:33.001035 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c75c3b8-96d9-442e-b3c4-92d10ad33929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"message\\\":\\\"containers with unready 
status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overr
ides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"n
ame\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f16219fb96bfa08d1c4e1fbacac446f07ea75a221d1a6a4eddadd9e587f75ea9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f16219fb96bfa08d1c4e1fbacac446f07ea75a221d1a6a4eddadd9e587f75ea9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-f4swt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 
127.0.0.1:9743: connect: connection refused" Dec 06 05:28:33 crc kubenswrapper[4958]: I1206 05:28:33.014240 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:33 crc kubenswrapper[4958]: I1206 05:28:33.023913 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 05:28:33 crc kubenswrapper[4958]: I1206 05:28:33.024111 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 05:28:33 crc kubenswrapper[4958]: I1206 05:28:33.024162 4958 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 05:28:33 crc kubenswrapper[4958]: I1206 05:28:33.024205 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 05:28:33 crc kubenswrapper[4958]: I1206 05:28:33.024251 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 05:28:33 crc kubenswrapper[4958]: E1206 05:28:33.024344 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 05:28:37.024317602 +0000 UTC m=+27.558088375 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 05:28:33 crc kubenswrapper[4958]: E1206 05:28:33.024436 4958 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 06 05:28:33 crc kubenswrapper[4958]: E1206 05:28:33.024442 4958 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 06 05:28:33 crc kubenswrapper[4958]: E1206 05:28:33.024483 4958 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 06 05:28:33 crc kubenswrapper[4958]: E1206 05:28:33.024517 4958 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 06 05:28:33 crc kubenswrapper[4958]: E1206 05:28:33.024617 4958 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 06 05:28:33 crc kubenswrapper[4958]: E1206 05:28:33.024644 4958 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 06 05:28:33 crc kubenswrapper[4958]: E1206 05:28:33.024658 4958 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 06 05:28:33 crc kubenswrapper[4958]: E1206 05:28:33.024693 4958 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 06 05:28:33 crc kubenswrapper[4958]: E1206 05:28:33.024518 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-06 05:28:37.024502717 +0000 UTC m=+27.558273500 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 06 05:28:33 crc kubenswrapper[4958]: E1206 05:28:33.024795 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-06 05:28:37.024771193 +0000 UTC m=+27.558542056 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 06 05:28:33 crc kubenswrapper[4958]: E1206 05:28:33.024821 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-06 05:28:37.024810204 +0000 UTC m=+27.558581097 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 06 05:28:33 crc kubenswrapper[4958]: E1206 05:28:33.025010 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-06 05:28:37.024950628 +0000 UTC m=+27.558721461 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 06 05:28:33 crc kubenswrapper[4958]: I1206 05:28:33.027890 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:33 crc kubenswrapper[4958]: I1206 05:28:33.054525 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:33 crc kubenswrapper[4958]: I1206 05:28:33.083033 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:33 crc kubenswrapper[4958]: I1206 05:28:33.083084 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:33 crc kubenswrapper[4958]: I1206 05:28:33.083101 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:33 crc kubenswrapper[4958]: I1206 05:28:33.083122 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:33 crc kubenswrapper[4958]: I1206 05:28:33.083137 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:33Z","lastTransitionTime":"2025-12-06T05:28:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 06 05:28:33 crc kubenswrapper[4958]: I1206 05:28:33.097145 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3942544-663d-452b-851f-7ae1951315d0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c605d49963ffec3a03f9166f2a4b945adca772c1e6f9ef0c43e3b43498bb520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fa2e5229064a9479f641c0c6d934e8eda769ee88be13401c43ae136ce8bc24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0fa5b094aed0445fa892dd4d08fa83de565bda82c6ca9b9d536a3275df19b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9356ce9f50a037c7bbb99085c8d3a1cf6648fca53bed4eb86d1be5cb82386bfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bb5270cbb6f2e29e1393dc0724b38332f5d2b7bd2947352c848a22efbcd979c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4460d6ee5ad193a5fdd65c4ddf2e2837db3e1a0a5f8f32b84ad83cc74cc6c779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4460d6ee5ad193a5fdd65c4ddf2e2837db3e1a0a5f8f32b84ad83cc74cc6c779\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d0adb65980c4e04df9b38b248a2d6bf58ac214ec2fe6ee6f8fbee38dc114904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d0adb65980c4e04df9b38b248a2d6bf58ac214ec2fe6ee6f8fbee38dc114904\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b4a1b86d7480be2b839295c4c7dc60abd79e17d12514d7a7f90453498dd586e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4a1b86d7480be2b839295c4c7dc60abd79e17d12514d7a7f90453498dd586e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 06 05:28:33 crc kubenswrapper[4958]: I1206 05:28:33.140691 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:33 crc kubenswrapper[4958]: I1206 05:28:33.174669 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bxxwq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"802ce14e-baaf-4d0d-87e3-3457209d8cda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlz6h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bxxwq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:33 crc kubenswrapper[4958]: I1206 05:28:33.185350 4958 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:33 crc kubenswrapper[4958]: I1206 05:28:33.185399 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:33 crc kubenswrapper[4958]: I1206 05:28:33.185412 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:33 crc kubenswrapper[4958]: I1206 05:28:33.185433 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:33 crc kubenswrapper[4958]: I1206 05:28:33.185449 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:33Z","lastTransitionTime":"2025-12-06T05:28:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:33 crc kubenswrapper[4958]: I1206 05:28:33.210976 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:33 crc kubenswrapper[4958]: I1206 05:28:33.251329 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:33 crc kubenswrapper[4958]: I1206 05:28:33.287318 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:33 crc kubenswrapper[4958]: I1206 05:28:33.287365 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:33 crc kubenswrapper[4958]: I1206 05:28:33.287377 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:33 crc kubenswrapper[4958]: I1206 05:28:33.287394 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:33 crc kubenswrapper[4958]: I1206 05:28:33.287406 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:33Z","lastTransitionTime":"2025-12-06T05:28:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:28:33 crc kubenswrapper[4958]: I1206 05:28:33.291764 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5mx5v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ab56cd4-7270-4252-b6e6-cbc102b84d97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-448jg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5mx5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:33 crc kubenswrapper[4958]: I1206 05:28:33.332859 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wr7h5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24bb15c5aabe30ec2b8cea2cfd92c4a99c8e2170aecfe6cf33f745ecccf32c42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-778ks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wr7h5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:33 crc kubenswrapper[4958]: I1206 05:28:33.376369 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:33 crc kubenswrapper[4958]: I1206 05:28:33.389957 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:33 crc kubenswrapper[4958]: I1206 05:28:33.389996 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:33 crc kubenswrapper[4958]: I1206 05:28:33.390007 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:33 crc kubenswrapper[4958]: I1206 05:28:33.390024 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:33 crc kubenswrapper[4958]: I1206 05:28:33.390036 4958 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:33Z","lastTransitionTime":"2025-12-06T05:28:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:33 crc kubenswrapper[4958]: I1206 05:28:33.413270 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:33 crc kubenswrapper[4958]: I1206 05:28:33.451578 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5mx5v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ab56cd4-7270-4252-b6e6-cbc102b84d97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-448jg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5mx5v\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:33 crc kubenswrapper[4958]: I1206 05:28:33.492545 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:33 crc kubenswrapper[4958]: I1206 05:28:33.492581 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:33 crc kubenswrapper[4958]: I1206 05:28:33.492591 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:33 crc kubenswrapper[4958]: I1206 05:28:33.492604 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:33 crc kubenswrapper[4958]: I1206 05:28:33.492614 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:33Z","lastTransitionTime":"2025-12-06T05:28:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:33 crc kubenswrapper[4958]: I1206 05:28:33.495423 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wr7h5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24bb15c5aabe30ec2b8cea2cfd92c4a99c8e2170aecfe6cf33f745ecccf32c42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},
{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-778ks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wr7h5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 06 05:28:33 crc kubenswrapper[4958]: I1206 05:28:33.536063 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bbf3a5d-dfb7-4f26-a3a5-5a1198cffc78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ac8c174d2f2cc2b18fa67b1969e4ec3130a7a3ea2e1de302b6607334c8295a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a49f9fd54866385a2dd4c0218a2b5a828817807c2688301ea29004ba0578d556\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://799383b7dc23318bd8bdb994e83afadc90864baf5e694f5a7ef2208c6771660c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c9829c79fbd1ef57f374e8bf829162c2ae48074f4d02cc609d5f9b8791c61a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b2ffa013378c2ca81671e521b37d30af422fd6a418d17e7c2ca14b2da5557d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4001890a8c107a9365e884bae7b1d10f11ea1208ada74b61dcd4cf3db767f3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4001890a8c107a9365e884bae7b1d10f11ea1208ada74b61dcd4cf3db767f3fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 06 05:28:33 crc kubenswrapper[4958]: I1206 05:28:33.575814 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e7d0d99-10e2-4254-82b5-5490222f80fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f512fa9922310920769bab096c2bb8b2305a17dded545923c16b5e417db50b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dddeac64993fef3f326c165eeeb95bb4b5b9961f07f2b935e592b5aac9a51a32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50e80d39fcfd69db193b9e6c98c92a60983dd0c62b061a69f045193110d159c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc69c938c136778135bd2a90d1671af107b76c4571f8e52327c98f120665d7b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 06 05:28:33 crc kubenswrapper[4958]: I1206 05:28:33.595336 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 05:28:33 crc kubenswrapper[4958]: I1206 05:28:33.595375 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 05:28:33 crc kubenswrapper[4958]: I1206 05:28:33.595384 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 05:28:33 crc kubenswrapper[4958]: I1206 05:28:33.595397 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 05:28:33 crc kubenswrapper[4958]: I1206 05:28:33.595408 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:33Z","lastTransitionTime":"2025-12-06T05:28:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:28:33 crc kubenswrapper[4958]: I1206 05:28:33.628463 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c75c3b8-96d9-442e-b3c4-92d10ad33929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f16219fb96bfa08d1c4e1fbacac446f07ea75a221d1a6a4edd
add9e587f75ea9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f16219fb96bfa08d1c4e1fbacac446f07ea75a221d1a6a4eddadd9e587f75ea9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-f4swt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:33 crc kubenswrapper[4958]: I1206 05:28:33.661558 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:33 crc kubenswrapper[4958]: I1206 05:28:33.693052 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:33 crc kubenswrapper[4958]: I1206 05:28:33.697853 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:33 crc kubenswrapper[4958]: I1206 05:28:33.697904 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:33 crc kubenswrapper[4958]: I1206 05:28:33.697922 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:33 crc kubenswrapper[4958]: I1206 05:28:33.697945 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:33 crc kubenswrapper[4958]: I1206 05:28:33.697963 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:33Z","lastTransitionTime":"2025-12-06T05:28:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:33 crc kubenswrapper[4958]: I1206 05:28:33.739459 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:33 crc kubenswrapper[4958]: I1206 05:28:33.761165 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 05:28:33 crc kubenswrapper[4958]: E1206 05:28:33.761344 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 05:28:33 crc kubenswrapper[4958]: I1206 05:28:33.761810 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 05:28:33 crc kubenswrapper[4958]: I1206 05:28:33.761861 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 05:28:33 crc kubenswrapper[4958]: E1206 05:28:33.761956 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 05:28:33 crc kubenswrapper[4958]: E1206 05:28:33.762063 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 05:28:33 crc kubenswrapper[4958]: I1206 05:28:33.776936 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c13528c0-da5d-4d55-9155-2c29c33edfc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5ktnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:33 crc kubenswrapper[4958]: I1206 05:28:33.801061 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:33 crc kubenswrapper[4958]: I1206 05:28:33.801117 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:33 crc kubenswrapper[4958]: I1206 05:28:33.801132 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:33 crc kubenswrapper[4958]: I1206 05:28:33.801152 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:33 crc kubenswrapper[4958]: I1206 05:28:33.801168 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:33Z","lastTransitionTime":"2025-12-06T05:28:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:28:33 crc kubenswrapper[4958]: I1206 05:28:33.821399 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cxvtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48b90387-4ea9-47a3-8aa1-b8b19c5ed186\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d3ce1b34e648dbc34656fb0430468d6a998992cdb52f7858c3e9032781cabf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d3ce1b34e648dbc34656fb0430468d6a998992cdb52f7858c3e9032781cabf3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45259e88c3c2b9523b20a34f02fb59aa21e487dd207c8cb6e29ac571d89cd75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45259e88c3c2b9523b20a34f02fb59aa21e487dd207c8cb6e29ac571d89cd75c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5907422cf6b36b8d77741d339e507eee8a26c62b51c03458fbab243363441b49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5907422cf6b36b8d77741d339e507eee8a26c62b51c03458fbab243363441b49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},
{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cxvtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:33 crc kubenswrapper[4958]: I1206 05:28:33.873799 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3942544-663d-452b-851f-7ae1951315d0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c605d49963ffec3a03f9166f2a4b945adca772c1e6f9ef0c43e3b43498bb520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fa2e5229064a9479f641c0c6d934e8eda769ee88be13401c43ae136ce8bc24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0fa5b094aed0445fa892dd4d08fa83de565bda82c6ca9b9d536a3275df19b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9356ce9f50a037c7bbb99085c8d3a1cf6648fca
53bed4eb86d1be5cb82386bfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bb5270cbb6f2e29e1393dc0724b38332f5d2b7bd2947352c848a22efbcd979c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4460d6ee5ad193a5fdd65c4ddf2e2837db3e1a0a5f8f32b84ad83cc74cc6c779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4460d6ee5ad193a5fdd65c4ddf2e2837db3e1a0a5f8f32b84ad83cc74cc6c779\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d0adb65980c4e04df9b38b248a2d6bf58ac214ec2fe6ee6f8fbee38dc114904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d0adb65980c4e04df9b38b248a2d6bf58ac214ec2fe6ee6f8fbee38dc114904\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b4a1b86d7480be2b839295c4c7dc60abd79e17d12514d7a7f90453498dd586e5\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4a1b86d7480be2b839295c4c7dc60abd79e17d12514d7a7f90453498dd586e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:33 crc kubenswrapper[4958]: I1206 05:28:33.893757 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:33 crc kubenswrapper[4958]: I1206 05:28:33.903134 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:33 crc kubenswrapper[4958]: I1206 05:28:33.903179 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:33 crc kubenswrapper[4958]: I1206 05:28:33.903194 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:33 crc kubenswrapper[4958]: I1206 05:28:33.903220 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:33 crc kubenswrapper[4958]: I1206 05:28:33.903234 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:33Z","lastTransitionTime":"2025-12-06T05:28:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:28:33 crc kubenswrapper[4958]: I1206 05:28:33.934854 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bxxwq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"802ce14e-baaf-4d0d-87e3-3457209d8cda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://691435e89fcfa5b2dbd5ee54379f791a3e4f4df43c500a83bc61f7b20aca7ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlz6h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bxxwq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:33 crc kubenswrapper[4958]: I1206 05:28:33.938603 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" event={"ID":"4c75c3b8-96d9-442e-b3c4-92d10ad33929","Type":"ContainerStarted","Data":"90354a506bfb2c066060aaa6c9c48c8fe36acc7b3fb7f76905c7aa2622c8c93e"} Dec 06 05:28:33 crc kubenswrapper[4958]: I1206 05:28:33.940732 4958 generic.go:334] "Generic (PLEG): container finished" podID="48b90387-4ea9-47a3-8aa1-b8b19c5ed186" containerID="fdba082661b1dd38d5628fedac7afcea2e2960d502c61b9c21c723c0f6be8dab" exitCode=0 Dec 06 05:28:33 crc kubenswrapper[4958]: I1206 05:28:33.940777 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-cxvtt" 
event={"ID":"48b90387-4ea9-47a3-8aa1-b8b19c5ed186","Type":"ContainerDied","Data":"fdba082661b1dd38d5628fedac7afcea2e2960d502c61b9c21c723c0f6be8dab"} Dec 06 05:28:33 crc kubenswrapper[4958]: I1206 05:28:33.975175 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:34 crc kubenswrapper[4958]: I1206 05:28:34.006075 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:34 crc kubenswrapper[4958]: I1206 05:28:34.006116 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:34 crc kubenswrapper[4958]: I1206 05:28:34.006127 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:34 crc kubenswrapper[4958]: I1206 05:28:34.006144 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:34 crc kubenswrapper[4958]: I1206 05:28:34.006164 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:34Z","lastTransitionTime":"2025-12-06T05:28:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:34 crc kubenswrapper[4958]: I1206 05:28:34.014300 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bxxwq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"802ce14e-baaf-4d0d-87e3-3457209d8cda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://691435e89fcfa5b2dbd5ee54379f791a3e4f4df43c500a83bc61f7b20aca7ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlz6h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bxxwq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:34 crc kubenswrapper[4958]: I1206 05:28:34.058596 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3942544-663d-452b-851f-7ae1951315d0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c605d49963ffec3a03f9166f2a4b945adca772c1e6f9ef0c43e3b43498bb520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fa2e5229064a9479f641c0c6d934e8eda769ee88be13401c43ae136ce8bc24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0fa5b094aed0445fa892dd4d08fa83de565bda82c6ca9b9d536a3275df19b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9356ce9f50a037c7bbb99085c8d3a1cf6648fca
53bed4eb86d1be5cb82386bfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bb5270cbb6f2e29e1393dc0724b38332f5d2b7bd2947352c848a22efbcd979c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4460d6ee5ad193a5fdd65c4ddf2e2837db3e1a0a5f8f32b84ad83cc74cc6c779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4460d6ee5ad193a5fdd65c4ddf2e2837db3e1a0a5f8f32b84ad83cc74cc6c779\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d0adb65980c4e04df9b38b248a2d6bf58ac214ec2fe6ee6f8fbee38dc114904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d0adb65980c4e04df9b38b248a2d6bf58ac214ec2fe6ee6f8fbee38dc114904\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b4a1b86d7480be2b839295c4c7dc60abd79e17d12514d7a7f90453498dd586e5\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4a1b86d7480be2b839295c4c7dc60abd79e17d12514d7a7f90453498dd586e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:34 crc kubenswrapper[4958]: I1206 05:28:34.092811 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:34 crc kubenswrapper[4958]: I1206 05:28:34.108276 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:34 crc kubenswrapper[4958]: I1206 05:28:34.108335 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:34 crc kubenswrapper[4958]: I1206 05:28:34.108355 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:34 crc kubenswrapper[4958]: I1206 05:28:34.108379 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:34 crc kubenswrapper[4958]: I1206 05:28:34.108396 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:34Z","lastTransitionTime":"2025-12-06T05:28:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:28:34 crc kubenswrapper[4958]: I1206 05:28:34.131445 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5mx5v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ab56cd4-7270-4252-b6e6-cbc102b84d97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-448jg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5mx5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:34 crc kubenswrapper[4958]: I1206 05:28:34.173383 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wr7h5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24bb15c5aabe30ec2b8cea2cfd92c4a99c8e2170aecfe6cf33f745ecccf32c42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-778ks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wr7h5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:34 crc kubenswrapper[4958]: I1206 05:28:34.211371 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:34 crc kubenswrapper[4958]: I1206 05:28:34.211515 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:34 crc kubenswrapper[4958]: I1206 05:28:34.211540 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:34 crc kubenswrapper[4958]: I1206 05:28:34.211581 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:34 crc kubenswrapper[4958]: I1206 05:28:34.211627 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:34Z","lastTransitionTime":"2025-12-06T05:28:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:34 crc kubenswrapper[4958]: I1206 05:28:34.215492 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:34 crc kubenswrapper[4958]: I1206 05:28:34.254205 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e7d0d99-10e2-4254-82b5-5490222f80fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f512fa9922310920769bab096c2bb8b2305a17dded545923c16b5e417db50b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dddeac64993fef3f326c165eeeb95bb4b5b9961f07f2b935e592b5aac9a51a32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-po
d-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50e80d39fcfd69db193b9e6c98c92a60983dd0c62b061a69f045193110d159c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc69c938c136778135bd2a90d1671af107b76c4571f8e52327c98f120665d7b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:34 crc kubenswrapper[4958]: I1206 05:28:34.302061 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bbf3a5d-dfb7-4f26-a3a5-5a1198cffc78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ac8c174d2f2cc2b18fa67b1969e4ec3130a7a3ea2e1de302b6607334c8295a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a49f9fd54866385a2dd4c0218a2b5a828817807c2688301ea29004ba0578d556\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://799383b7dc23318bd8bdb994e83afadc90864baf5e694f5a7ef2208c6771660c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c9829c79fbd1ef57f374e8bf829162c2ae48074f4d02cc609d5f9b8791c61a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b2ffa013378c2ca81671e521b37d30af422fd6a418d17e7c2ca14b2da5557d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4001890a8c107a9365e884bae7b1d10f11ea1208ada74b61dcd4cf3db767f3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4001890a8c107a9365e884bae7b1d10f11ea1208ada74b61dcd4cf3db767f3fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:34 crc kubenswrapper[4958]: I1206 05:28:34.315893 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:34 crc kubenswrapper[4958]: I1206 05:28:34.315949 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:34 crc kubenswrapper[4958]: I1206 05:28:34.315963 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:34 crc kubenswrapper[4958]: I1206 05:28:34.315980 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:34 crc kubenswrapper[4958]: I1206 05:28:34.315992 4958 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:34Z","lastTransitionTime":"2025-12-06T05:28:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:34 crc kubenswrapper[4958]: I1206 05:28:34.333760 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:34 crc kubenswrapper[4958]: I1206 05:28:34.373188 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:34 crc kubenswrapper[4958]: I1206 05:28:34.413577 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:34 crc kubenswrapper[4958]: I1206 05:28:34.418398 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:34 crc kubenswrapper[4958]: I1206 05:28:34.418450 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:34 crc kubenswrapper[4958]: I1206 05:28:34.418465 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:34 crc kubenswrapper[4958]: I1206 05:28:34.418561 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:34 crc kubenswrapper[4958]: I1206 05:28:34.418578 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:34Z","lastTransitionTime":"2025-12-06T05:28:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:28:34 crc kubenswrapper[4958]: I1206 05:28:34.452313 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c13528c0-da5d-4d55-9155-2c29c33edfc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5ktnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:34 crc kubenswrapper[4958]: I1206 05:28:34.496013 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cxvtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48b90387-4ea9-47a3-8aa1-b8b19c5ed186\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d3ce1b34e648dbc34656fb0430468d6a998992cdb52f7858c3e9032781cabf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d3ce1b34e648dbc34656fb0430468d6a998992cdb52f7858c3e9032781cabf3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45259e88c3c2b9523b20a34f02fb59aa21e487dd207c8cb6e29ac571d89cd75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45259e88c3c2b9523b20a34f02fb59aa21e487dd207c8cb6e29ac571d89cd75c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5907422cf6b36b8d77741d339e507eee8a26c62b51c03458fbab243363441b49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5907422cf6b36b8d77741d339e507eee8a26c62b51c03458fbab243363441b49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdba082661b1dd38d5628fedac7afcea2e2960d502c61b9c21c723c0f6be8dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdba082661b1dd38d5628fedac7afcea2e2960d502c61b9c21c723c0f6be8dab
\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cxvtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:34 crc kubenswrapper[4958]: I1206 05:28:34.521969 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:34 crc kubenswrapper[4958]: I1206 05:28:34.522002 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:34 crc kubenswrapper[4958]: I1206 05:28:34.522010 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:34 crc kubenswrapper[4958]: I1206 05:28:34.522024 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:34 crc kubenswrapper[4958]: I1206 05:28:34.522033 4958 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:34Z","lastTransitionTime":"2025-12-06T05:28:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:34 crc kubenswrapper[4958]: I1206 05:28:34.538099 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c75c3b8-96d9-442e-b3c4-92d10ad33929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f16219fb96bfa08d1c4e1fbacac446f07ea75a221d1a6a4eddadd9e587f75ea9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f16219fb96bfa08d1c4e1fbacac446f07ea75a221d1a6a4eddadd9e587f75ea9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-f4swt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:34 crc kubenswrapper[4958]: I1206 05:28:34.624840 
4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:34 crc kubenswrapper[4958]: I1206 05:28:34.624908 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:34 crc kubenswrapper[4958]: I1206 05:28:34.624926 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:34 crc kubenswrapper[4958]: I1206 05:28:34.624948 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:34 crc kubenswrapper[4958]: I1206 05:28:34.624965 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:34Z","lastTransitionTime":"2025-12-06T05:28:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:34 crc kubenswrapper[4958]: I1206 05:28:34.728607 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:34 crc kubenswrapper[4958]: I1206 05:28:34.728691 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:34 crc kubenswrapper[4958]: I1206 05:28:34.728716 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:34 crc kubenswrapper[4958]: I1206 05:28:34.728746 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:34 crc kubenswrapper[4958]: I1206 05:28:34.728770 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:34Z","lastTransitionTime":"2025-12-06T05:28:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:34 crc kubenswrapper[4958]: I1206 05:28:34.832259 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:34 crc kubenswrapper[4958]: I1206 05:28:34.832338 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:34 crc kubenswrapper[4958]: I1206 05:28:34.832362 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:34 crc kubenswrapper[4958]: I1206 05:28:34.832390 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:34 crc kubenswrapper[4958]: I1206 05:28:34.832410 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:34Z","lastTransitionTime":"2025-12-06T05:28:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:28:34 crc kubenswrapper[4958]: I1206 05:28:34.935890 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:34 crc kubenswrapper[4958]: I1206 05:28:34.935940 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:34 crc kubenswrapper[4958]: I1206 05:28:34.935951 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:34 crc kubenswrapper[4958]: I1206 05:28:34.935968 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:34 crc kubenswrapper[4958]: I1206 05:28:34.935980 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:34Z","lastTransitionTime":"2025-12-06T05:28:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:34 crc kubenswrapper[4958]: I1206 05:28:34.947979 4958 generic.go:334] "Generic (PLEG): container finished" podID="48b90387-4ea9-47a3-8aa1-b8b19c5ed186" containerID="afdd138a15c3949f3eb5cdcac4d372fe8db3f1822f9fa33eb7f211ba93764903" exitCode=0 Dec 06 05:28:34 crc kubenswrapper[4958]: I1206 05:28:34.948035 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-cxvtt" event={"ID":"48b90387-4ea9-47a3-8aa1-b8b19c5ed186","Type":"ContainerDied","Data":"afdd138a15c3949f3eb5cdcac4d372fe8db3f1822f9fa33eb7f211ba93764903"} Dec 06 05:28:34 crc kubenswrapper[4958]: I1206 05:28:34.963501 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:34 crc kubenswrapper[4958]: I1206 05:28:34.975900 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:34 crc kubenswrapper[4958]: I1206 05:28:34.984490 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5mx5v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ab56cd4-7270-4252-b6e6-cbc102b84d97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-448jg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5mx5v\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:34 crc kubenswrapper[4958]: I1206 05:28:34.996883 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wr7h5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24bb15c5aabe30ec2b8cea2cfd92c4a99c8e2170aecfe6cf33f745ecccf32c42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-778ks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wr7h5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:35 crc kubenswrapper[4958]: I1206 05:28:35.008875 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bbf3a5d-dfb7-4f26-a3a5-5a1198cffc78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ac8c174d2f2cc2b18fa67b1969e4ec3130a7a3ea2e1de302b6607334c8295a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a49f9fd54866385a2dd4c0218a2b5a828817807c2688301ea29004ba0578d556\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://799383b7dc23318bd8bdb994e83afadc90864baf5e694f5a7ef2208c6771660c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\
\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c9829c79fbd1ef57f374e8bf829162c2ae48074f4d02cc609d5f9b8791c61a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b2ffa013378c2ca81671e521b37d30af422fd6a418d17e7c2ca14b2da5557d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4001890a8c107a9365e884bae7b1d10f11ea1208ada74b61dcd4cf3db767f3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4001890a8c107a9365e884bae7b1d10f11ea1208ada74b61dcd4cf3db767f3fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:35 crc kubenswrapper[4958]: 
I1206 05:28:35.019977 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e7d0d99-10e2-4254-82b5-5490222f80fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f512fa9922310920769bab096c2bb8b2305a17dded545923c16b5e417db50b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dddeac64993fef3f326c165eeeb95bb4b5b9961f07f2b935e592b5aac9a51a32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50e80d39fcfd69db193b9e6c98c92a60983dd0c62b061a69f045193110d159c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}
,{\\\"containerID\\\":\\\"cri-o://bc69c938c136778135bd2a90d1671af107b76c4571f8e52327c98f120665d7b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:35 crc kubenswrapper[4958]: I1206 05:28:35.034862 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cxvtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48b90387-4ea9-47a3-8aa1-b8b19c5ed186\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d3ce1b34e648dbc34656fb0430468d6a998992cdb52f7858c3e9032781cabf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d3ce1b34e648dbc34656fb0430468d6a998992cdb52f7858c3e9032781cabf3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45259e88c3c2b9523b20a34f02fb59aa21e487dd207c8cb6e29ac571d89cd75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45259e88c3c2b9523b20a34f02fb59aa21e487dd207c8cb6e29ac571d89cd75c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5907422cf6b36b8d77741d339e507eee8a26c62b51c03458fbab243363441b49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5907422cf6b36b8d77741d339e507eee8a26c62b51c03458fbab243363441b49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdba082661b1dd38d5628fedac7afcea2e2960d502c61b9c21c723c0f6be8dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdba082661b1dd38d5628fedac7afcea2e2960d502c61b9c21c723c0f6be8dab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afdd138a15c3949f3eb5cdcac4d372fe8db3f1822f9fa33eb7f211ba93764903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afdd138a15c3949f3eb5cdcac4d372fe8db3f1822f9fa33eb7f211ba93764903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cxvtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:35 crc kubenswrapper[4958]: I1206 05:28:35.038844 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:35 crc kubenswrapper[4958]: I1206 05:28:35.038875 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:35 crc kubenswrapper[4958]: I1206 05:28:35.038884 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:35 crc kubenswrapper[4958]: I1206 05:28:35.038900 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:35 crc kubenswrapper[4958]: I1206 05:28:35.038910 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:35Z","lastTransitionTime":"2025-12-06T05:28:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:28:35 crc kubenswrapper[4958]: I1206 05:28:35.053915 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c75c3b8-96d9-442e-b3c4-92d10ad33929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f16219fb96bfa08d1c4e1fbacac446f07ea75a221d1a6a4edd
add9e587f75ea9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f16219fb96bfa08d1c4e1fbacac446f07ea75a221d1a6a4eddadd9e587f75ea9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-f4swt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:35 crc kubenswrapper[4958]: I1206 05:28:35.067318 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:35 crc kubenswrapper[4958]: I1206 05:28:35.078315 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:35 crc kubenswrapper[4958]: I1206 05:28:35.087404 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:35 crc kubenswrapper[4958]: I1206 05:28:35.094927 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c13528c0-da5d-4d55-9155-2c29c33edfc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5ktnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:35 crc kubenswrapper[4958]: I1206 05:28:35.120718 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3942544-663d-452b-851f-7ae1951315d0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c605d49963ffec3a03f9166f2a4b945adca772c1e6f9ef0c43e3b43498bb520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fa2e5229064a9479f641c0c6d934e8eda769ee88be13401c43ae136ce8bc24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0fa5b094aed0445fa892dd4d08fa83de565bda82c6ca9b9d536a3275df19b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9356ce9f50a037c7bbb99085c8d3a1cf6648fca
53bed4eb86d1be5cb82386bfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bb5270cbb6f2e29e1393dc0724b38332f5d2b7bd2947352c848a22efbcd979c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4460d6ee5ad193a5fdd65c4ddf2e2837db3e1a0a5f8f32b84ad83cc74cc6c779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4460d6ee5ad193a5fdd65c4ddf2e2837db3e1a0a5f8f32b84ad83cc74cc6c779\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d0adb65980c4e04df9b38b248a2d6bf58ac214ec2fe6ee6f8fbee38dc114904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d0adb65980c4e04df9b38b248a2d6bf58ac214ec2fe6ee6f8fbee38dc114904\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b4a1b86d7480be2b839295c4c7dc60abd79e17d12514d7a7f90453498dd586e5\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4a1b86d7480be2b839295c4c7dc60abd79e17d12514d7a7f90453498dd586e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:35 crc kubenswrapper[4958]: I1206 05:28:35.131427 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:35 crc kubenswrapper[4958]: I1206 05:28:35.141732 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:35 crc kubenswrapper[4958]: I1206 05:28:35.141792 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:35 crc kubenswrapper[4958]: I1206 05:28:35.141801 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:35 crc kubenswrapper[4958]: I1206 05:28:35.141836 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:35 crc kubenswrapper[4958]: I1206 05:28:35.141849 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:35Z","lastTransitionTime":"2025-12-06T05:28:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:28:35 crc kubenswrapper[4958]: I1206 05:28:35.142528 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bxxwq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"802ce14e-baaf-4d0d-87e3-3457209d8cda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://691435e89fcfa5b2dbd5ee54379f791a3e4f4df43c500a83bc61f7b20aca7ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlz6h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bxxwq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:35 crc kubenswrapper[4958]: I1206 05:28:35.244891 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:35 crc kubenswrapper[4958]: I1206 05:28:35.244929 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:35 crc kubenswrapper[4958]: I1206 05:28:35.244938 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:35 crc kubenswrapper[4958]: I1206 05:28:35.244952 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:35 crc kubenswrapper[4958]: I1206 05:28:35.244961 4958 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:35Z","lastTransitionTime":"2025-12-06T05:28:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:35 crc kubenswrapper[4958]: I1206 05:28:35.347105 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:35 crc kubenswrapper[4958]: I1206 05:28:35.347166 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:35 crc kubenswrapper[4958]: I1206 05:28:35.347183 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:35 crc kubenswrapper[4958]: I1206 05:28:35.347209 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:35 crc kubenswrapper[4958]: I1206 05:28:35.347231 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:35Z","lastTransitionTime":"2025-12-06T05:28:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:35 crc kubenswrapper[4958]: I1206 05:28:35.449508 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:35 crc kubenswrapper[4958]: I1206 05:28:35.449540 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:35 crc kubenswrapper[4958]: I1206 05:28:35.449551 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:35 crc kubenswrapper[4958]: I1206 05:28:35.449566 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:35 crc kubenswrapper[4958]: I1206 05:28:35.449578 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:35Z","lastTransitionTime":"2025-12-06T05:28:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:28:35 crc kubenswrapper[4958]: I1206 05:28:35.552621 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:35 crc kubenswrapper[4958]: I1206 05:28:35.552682 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:35 crc kubenswrapper[4958]: I1206 05:28:35.552700 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:35 crc kubenswrapper[4958]: I1206 05:28:35.552724 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:35 crc kubenswrapper[4958]: I1206 05:28:35.552742 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:35Z","lastTransitionTime":"2025-12-06T05:28:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:35 crc kubenswrapper[4958]: I1206 05:28:35.654637 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:35 crc kubenswrapper[4958]: I1206 05:28:35.654677 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:35 crc kubenswrapper[4958]: I1206 05:28:35.654687 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:35 crc kubenswrapper[4958]: I1206 05:28:35.654701 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:35 crc kubenswrapper[4958]: I1206 05:28:35.654712 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:35Z","lastTransitionTime":"2025-12-06T05:28:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:35 crc kubenswrapper[4958]: I1206 05:28:35.756982 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:35 crc kubenswrapper[4958]: I1206 05:28:35.757029 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:35 crc kubenswrapper[4958]: I1206 05:28:35.757043 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:35 crc kubenswrapper[4958]: I1206 05:28:35.757063 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:35 crc kubenswrapper[4958]: I1206 05:28:35.757078 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:35Z","lastTransitionTime":"2025-12-06T05:28:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:28:35 crc kubenswrapper[4958]: I1206 05:28:35.761613 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 05:28:35 crc kubenswrapper[4958]: I1206 05:28:35.761653 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 05:28:35 crc kubenswrapper[4958]: I1206 05:28:35.761677 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 05:28:35 crc kubenswrapper[4958]: E1206 05:28:35.761721 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 05:28:35 crc kubenswrapper[4958]: E1206 05:28:35.761834 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 05:28:35 crc kubenswrapper[4958]: E1206 05:28:35.761892 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 05:28:35 crc kubenswrapper[4958]: I1206 05:28:35.859800 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:35 crc kubenswrapper[4958]: I1206 05:28:35.859871 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:35 crc kubenswrapper[4958]: I1206 05:28:35.859884 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:35 crc kubenswrapper[4958]: I1206 05:28:35.859903 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:35 crc kubenswrapper[4958]: I1206 05:28:35.859914 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:35Z","lastTransitionTime":"2025-12-06T05:28:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:28:35 crc kubenswrapper[4958]: I1206 05:28:35.954901 4958 generic.go:334] "Generic (PLEG): container finished" podID="48b90387-4ea9-47a3-8aa1-b8b19c5ed186" containerID="09ba0a78226b9b3b3b12202389168f7e17a1f2ba992a22dc9d436d29aac0d327" exitCode=0 Dec 06 05:28:35 crc kubenswrapper[4958]: I1206 05:28:35.954945 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-cxvtt" event={"ID":"48b90387-4ea9-47a3-8aa1-b8b19c5ed186","Type":"ContainerDied","Data":"09ba0a78226b9b3b3b12202389168f7e17a1f2ba992a22dc9d436d29aac0d327"} Dec 06 05:28:35 crc kubenswrapper[4958]: I1206 05:28:35.962144 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:35 crc kubenswrapper[4958]: I1206 05:28:35.962187 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:35 crc kubenswrapper[4958]: I1206 05:28:35.962201 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:35 crc kubenswrapper[4958]: I1206 05:28:35.962219 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:35 crc kubenswrapper[4958]: I1206 05:28:35.962232 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:35Z","lastTransitionTime":"2025-12-06T05:28:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:35 crc kubenswrapper[4958]: I1206 05:28:35.971052 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:35 crc kubenswrapper[4958]: I1206 05:28:35.982738 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bxxwq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"802ce14e-baaf-4d0d-87e3-3457209d8cda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://691435e89fcfa5b2dbd5ee54379f791a3e4f4df43c500a83bc61f7b20aca7ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlz6h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bxxwq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:36 crc kubenswrapper[4958]: I1206 05:28:36.010119 4958 
status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3942544-663d-452b-851f-7ae1951315d0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c605d49963ffec3a03f9166f2a4b945adca772c1e6f9ef0c43e3b43498bb520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fa2e5229064a9479f641c0c6d934e8eda769ee88be13401c43ae136ce8bc24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0fa5b094aed0445fa892dd4d08fa83de565bda82c6ca9b9d536a3275df19b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-
certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9356ce9f50a037c7bbb99085c8d3a1cf6648fca53bed4eb86d1be5cb82386bfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bb5270cbb6f2e29e1393dc0724b38332f5d2b7bd2947352c848a22efbcd979c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4460d6ee5ad193a5fdd65c4ddf2e2837db3e1a0a5f8f32b84ad83cc74cc6c779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4460d6ee5ad193a5fdd65c4ddf2e2837db3e1a0a5f8f32b84ad83cc74cc6c779\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d0adb65980c4e04df9b38b248a2d6bf58ac214ec2fe6ee6f8fbee38dc114904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d0adb65980c4e04df9b38b248a2d6bf58ac214ec2fe6ee6f8fbee38dc114904\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:
28:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b4a1b86d7480be2b839295c4c7dc60abd79e17d12514d7a7f90453498dd586e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4a1b86d7480be2b839295c4c7dc60abd79e17d12514d7a7f90453498dd586e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:36 crc kubenswrapper[4958]: I1206 05:28:36.020950 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:36 crc kubenswrapper[4958]: I1206 05:28:36.029930 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5mx5v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ab56cd4-7270-4252-b6e6-cbc102b84d97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-448jg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5mx5v\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:36 crc kubenswrapper[4958]: I1206 05:28:36.040800 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wr7h5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24bb15c5aabe30ec2b8cea2cfd92c4a99c8e2170aecfe6cf33f745ecccf32c42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-778ks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wr7h5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:36 crc kubenswrapper[4958]: I1206 05:28:36.050572 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:36 crc kubenswrapper[4958]: I1206 05:28:36.064025 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e7d0d99-10e2-4254-82b5-5490222f80fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f512fa9922310920769bab096c2bb8b2305a17dded545923c16b5e417db50b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dddeac64993fef3f326c165eeeb95bb4b5b9961f07f2b935e592b5aac9a51a32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-po
d-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50e80d39fcfd69db193b9e6c98c92a60983dd0c62b061a69f045193110d159c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc69c938c136778135bd2a90d1671af107b76c4571f8e52327c98f120665d7b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:36 crc kubenswrapper[4958]: I1206 05:28:36.065033 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:36 crc kubenswrapper[4958]: I1206 05:28:36.065135 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:36 crc kubenswrapper[4958]: I1206 05:28:36.065156 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:36 crc kubenswrapper[4958]: I1206 05:28:36.065190 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:36 crc kubenswrapper[4958]: I1206 05:28:36.065211 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:36Z","lastTransitionTime":"2025-12-06T05:28:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:36 crc kubenswrapper[4958]: I1206 05:28:36.076461 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bbf3a5d-dfb7-4f26-a3a5-5a1198cffc78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ac8c174d2f2cc2b18fa67b1969e4ec3130a7a3ea2e1de302b6607334c8295a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a49f9fd54866385a2dd4c0218a2b5a828817807c2688301ea29004ba0578d556\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://799383b7dc23318bd8bdb994e83afadc90864baf5e694f5a7ef2208c6771660c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\
\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c9829c79fbd1ef57f374e8bf829162c2ae48074f4d02cc609d5f9b8791c61a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b2ffa013378c2ca81671e521b37d30af422fd6a418d17e7c2ca14b2da5557d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4001890a8c107a9365e884bae7b1d10f11ea1208ada74b61dcd4cf3db767f3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4001890a8c107a9365e884bae7b1d10f11ea1208ada74b61dcd4cf3db767f3fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:36 crc kubenswrapper[4958]: I1206 05:28:36.086544 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:36 crc kubenswrapper[4958]: I1206 05:28:36.098607 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:36 crc kubenswrapper[4958]: I1206 05:28:36.111092 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:36 crc kubenswrapper[4958]: I1206 05:28:36.119027 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c13528c0-da5d-4d55-9155-2c29c33edfc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5ktnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:36 crc kubenswrapper[4958]: I1206 05:28:36.131272 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cxvtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48b90387-4ea9-47a3-8aa1-b8b19c5ed186\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d3ce1b34e648dbc34656fb0430468d6a998992cdb52f7858c3e9032781cabf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d3ce1b34e648dbc34656fb0430468d6a998992cdb52f7858c3e9032781cabf3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45259e88c3c2b9523b20a34f02fb59aa21e487dd207c8cb6e29ac571d89cd75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45259e88c3c2b9523b20a34f02fb59aa21e487dd207c8cb6e29ac571d89cd75c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5907422cf6b36b8d77741d339e507eee8a26c62b51c03458fbab243363441b49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5907422cf6b36b8d77741d339e507eee8a26c62b51c03458fbab243363441b49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdba082661b1dd38d5628fedac7afcea2e2960d502c61b9c21c723c0f6be8dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdba082661b1dd38d5628fedac7afcea2e2960d502c61b9c21c723c0f6be8dab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afdd138a15c3949f3eb5cdcac4d372fe8db3f1822f9fa33eb7f211ba93764903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afdd138a15c3949f3eb5cdcac4d372fe8db3f1822f9fa33eb7f211ba93764903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09ba0a78226b9b3b3b12202389168f7e17a1f2ba992a22dc9d436d29aac0d327\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09ba0a78226b9b3b3b12202389168f7e17a1f2ba992a22dc9d436d29aac0d327\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cxvtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:36 crc kubenswrapper[4958]: I1206 05:28:36.144509 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c75c3b8-96d9-442e-b3c4-92d10ad33929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f16219fb96bfa08d1c4e1fbacac446f07ea75a221d1a6a4eddadd9e587f75ea9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f16219fb96bfa08d1c4e1fbacac446f07ea75a221d1a6a4eddadd9e587f75ea9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-f4swt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:36 crc kubenswrapper[4958]: I1206 05:28:36.168329 
4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 05:28:36 crc kubenswrapper[4958]: I1206 05:28:36.168362 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 05:28:36 crc kubenswrapper[4958]: I1206 05:28:36.168373 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 05:28:36 crc kubenswrapper[4958]: I1206 05:28:36.168391 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 05:28:36 crc kubenswrapper[4958]: I1206 05:28:36.168404 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:36Z","lastTransitionTime":"2025-12-06T05:28:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 05:28:36 crc kubenswrapper[4958]: I1206 05:28:36.271165 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 05:28:36 crc kubenswrapper[4958]: I1206 05:28:36.271302 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 05:28:36 crc kubenswrapper[4958]: I1206 05:28:36.271316 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 05:28:36 crc kubenswrapper[4958]: I1206 05:28:36.271339 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 05:28:36 crc kubenswrapper[4958]: I1206 05:28:36.271353 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:36Z","lastTransitionTime":"2025-12-06T05:28:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 05:28:36 crc kubenswrapper[4958]: I1206 05:28:36.374098 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 05:28:36 crc kubenswrapper[4958]: I1206 05:28:36.374178 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 05:28:36 crc kubenswrapper[4958]: I1206 05:28:36.374196 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 05:28:36 crc kubenswrapper[4958]: I1206 05:28:36.374223 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 05:28:36 crc kubenswrapper[4958]: I1206 05:28:36.374241 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:36Z","lastTransitionTime":"2025-12-06T05:28:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 05:28:36 crc kubenswrapper[4958]: I1206 05:28:36.477361 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 05:28:36 crc kubenswrapper[4958]: I1206 05:28:36.477406 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 05:28:36 crc kubenswrapper[4958]: I1206 05:28:36.477415 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 05:28:36 crc kubenswrapper[4958]: I1206 05:28:36.477429 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 05:28:36 crc kubenswrapper[4958]: I1206 05:28:36.477444 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:36Z","lastTransitionTime":"2025-12-06T05:28:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 05:28:36 crc kubenswrapper[4958]: I1206 05:28:36.579530 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 05:28:36 crc kubenswrapper[4958]: I1206 05:28:36.579570 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 05:28:36 crc kubenswrapper[4958]: I1206 05:28:36.579583 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 05:28:36 crc kubenswrapper[4958]: I1206 05:28:36.579600 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 05:28:36 crc kubenswrapper[4958]: I1206 05:28:36.579613 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:36Z","lastTransitionTime":"2025-12-06T05:28:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 05:28:36 crc kubenswrapper[4958]: I1206 05:28:36.687254 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 05:28:36 crc kubenswrapper[4958]: I1206 05:28:36.687328 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 05:28:36 crc kubenswrapper[4958]: I1206 05:28:36.687347 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 05:28:36 crc kubenswrapper[4958]: I1206 05:28:36.687373 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 05:28:36 crc kubenswrapper[4958]: I1206 05:28:36.687390 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:36Z","lastTransitionTime":"2025-12-06T05:28:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 05:28:36 crc kubenswrapper[4958]: I1206 05:28:36.790159 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 05:28:36 crc kubenswrapper[4958]: I1206 05:28:36.790221 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 05:28:36 crc kubenswrapper[4958]: I1206 05:28:36.790243 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 05:28:36 crc kubenswrapper[4958]: I1206 05:28:36.790272 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 05:28:36 crc kubenswrapper[4958]: I1206 05:28:36.790294 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:36Z","lastTransitionTime":"2025-12-06T05:28:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 05:28:36 crc kubenswrapper[4958]: I1206 05:28:36.893832 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 05:28:36 crc kubenswrapper[4958]: I1206 05:28:36.893907 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 05:28:36 crc kubenswrapper[4958]: I1206 05:28:36.893931 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 05:28:36 crc kubenswrapper[4958]: I1206 05:28:36.893962 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 05:28:36 crc kubenswrapper[4958]: I1206 05:28:36.893982 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:36Z","lastTransitionTime":"2025-12-06T05:28:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 05:28:36 crc kubenswrapper[4958]: I1206 05:28:36.966335 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" event={"ID":"4c75c3b8-96d9-442e-b3c4-92d10ad33929","Type":"ContainerStarted","Data":"46f8c4a1bb980b8f4aed1959a29433c3e49207194ecb3a563a95441a99cceab2"}
Dec 06 05:28:36 crc kubenswrapper[4958]: I1206 05:28:36.967671 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-f4swt"
Dec 06 05:28:36 crc kubenswrapper[4958]: I1206 05:28:36.974198 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-cxvtt" event={"ID":"48b90387-4ea9-47a3-8aa1-b8b19c5ed186","Type":"ContainerStarted","Data":"ce9f0a37477909d730255694146829d5d4527e3c3c3928bdd475b58506bbd544"}
Dec 06 05:28:36 crc kubenswrapper[4958]: I1206 05:28:36.996825 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 05:28:36 crc kubenswrapper[4958]: I1206 05:28:36.996873 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 05:28:36 crc kubenswrapper[4958]: I1206 05:28:36.996888 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 05:28:36 crc kubenswrapper[4958]: I1206 05:28:36.996909 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 05:28:36 crc kubenswrapper[4958]: I1206 05:28:36.996923 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:36Z","lastTransitionTime":"2025-12-06T05:28:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
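
The repeated setters.go:603 entries above differ only in their timestamps: the kubelet re-records the same Ready=False node condition on every status sync until a CNI config file shows up under /etc/kubernetes/cni/net.d/. The condition payload is plain JSON embedded in each entry, so it can be collapsed to the distinct conditions with a few lines of Python; this is a minimal reading aid over the excerpt itself, not kubelet tooling, and the kubelet.log filename is an assumption:

    import json, re

    # Collapse the repeated "Node became not ready" entries into the
    # distinct Ready conditions they carry. Assumes this journal excerpt
    # was saved verbatim as kubelet.log (the filename is an assumption).
    pattern = re.compile(r'"Node became not ready" node="[^"]*" condition=(\{.*\})')
    seen = set()
    with open("kubelet.log", encoding="utf-8") as log:
        for line in log:
            match = pattern.search(line)
            if not match:
                continue
            try:
                cond = json.loads(match.group(1))
            except ValueError:
                continue  # entry was truncated by line wrapping
            key = (cond["reason"], cond["lastTransitionTime"])
            if key not in seen:  # the same condition repeats many times
                seen.add(key)
                print(key[1], key[0], "-", cond["message"])
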
Dec 06 05:28:37 crc kubenswrapper[4958]: I1206 05:28:37.003793 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3942544-663d-452b-851f-7ae1951315d0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c605d49963ffec3a03f9166f2a4b945adca772c1e6f9ef0c43e3b43498bb520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fa2e5229064a9479f641c0c6d934e8eda769ee88be13401c43ae136ce8bc24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0fa5b094aed0445fa892dd4d08fa83de565bda82c6ca9b9d536a3275df19b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9356ce9f50a037c7bbb99085c8d3a1cf6648fca53bed4eb86d1be5cb82386bfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bb5270cbb6f2e29e1393dc0724b38332f5d2b7bd2947352c848a22efbcd979c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4460d6ee5ad193a5fdd65c4ddf2e2837db3e1a0a5f8f32b84ad83cc74cc6c779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4460d6ee5ad193a5fdd65c4ddf2e2837db3e1a0a5f8f32b84ad83cc74cc6c779\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d0adb65980c4e04df9b38b248a2d6bf58ac214ec2fe6ee6f8fbee38dc114904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d0adb65980c4e04df9b38b248a2d6bf58ac214ec2fe6ee6f8fbee38dc114904\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2025-12-06T05:28:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b4a1b86d7480be2b839295c4c7dc60abd79e17d12514d7a7f90453498dd586e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4a1b86d7480be2b839295c4c7dc60abd79e17d12514d7a7f90453498dd586e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:37 crc kubenswrapper[4958]: I1206 05:28:37.045530 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" Dec 06 05:28:37 crc kubenswrapper[4958]: I1206 05:28:37.052092 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 06 05:28:37 crc kubenswrapper[4958]: I1206 05:28:37.060750 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 06 05:28:37 crc kubenswrapper[4958]: I1206 05:28:37.060873 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 06 05:28:37 crc kubenswrapper[4958]: I1206 05:28:37.060910 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 06 05:28:37 crc kubenswrapper[4958]: E1206 05:28:37.060963 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 05:28:45.060918968 +0000 UTC m=+35.594689731 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 05:28:37 crc kubenswrapper[4958]: E1206 05:28:37.061048 4958 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 06 05:28:37 crc kubenswrapper[4958]: I1206 05:28:37.061052 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 06 05:28:37 crc kubenswrapper[4958]: E1206 05:28:37.061070 4958 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 06 05:28:37 crc kubenswrapper[4958]: E1206 05:28:37.061152 4958 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Dec 06 05:28:37 crc kubenswrapper[4958]: E1206 05:28:37.061204 4958 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Dec 06 05:28:37 crc kubenswrapper[4958]: E1206 05:28:37.061103 4958 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Dec 06 05:28:37 crc kubenswrapper[4958]: E1206 05:28:37.061225 4958 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 06 05:28:37 crc kubenswrapper[4958]: E1206 05:28:37.061273 4958 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Dec 06 05:28:37 crc kubenswrapper[4958]: E1206 05:28:37.061201 4958 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 06 05:28:37 crc kubenswrapper[4958]: I1206 05:28:37.061146 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
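
The nestedpendingoperations.go:348 entries around this point each reschedule a failed volume operation with a doubling delay, which is why the log has reached durationBeforeRetry 8s by this attempt. A minimal sketch of that backoff shape is below, assuming the upstream kubelet defaults of a 0.5 s initial delay, a factor of 2, and a cap of roughly two minutes; none of those constants are recorded in this log, only the 8 s value is:

    # Illustrative exponential backoff matching the "durationBeforeRetry 8s"
    # entries above: the delay doubles from an assumed 0.5 s start up to an
    # assumed 122 s cap (kubelet defaults as understood, not logged values).
    def backoff_delays(initial=0.5, factor=2.0, cap=122.0, attempts=10):
        delay = initial
        for _ in range(attempts):
            yield delay
            delay = min(delay * factor, cap)

    print(list(backoff_delays()))
    # [0.5, 1.0, 2.0, 4.0, 8.0, 16.0, 32.0, 64.0, 122.0, 122.0]
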
Dec 06 05:28:37 crc kubenswrapper[4958]: E1206 05:28:37.061305 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-06 05:28:45.061273387 +0000 UTC m=+35.595044160 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 06 05:28:37 crc kubenswrapper[4958]: E1206 05:28:37.061338 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-06 05:28:45.061323268 +0000 UTC m=+35.595094291 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Dec 06 05:28:37 crc kubenswrapper[4958]: E1206 05:28:37.061391 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-06 05:28:45.061370359 +0000 UTC m=+35.595141122 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Dec 06 05:28:37 crc kubenswrapper[4958]: E1206 05:28:37.061411 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-06 05:28:45.06140438 +0000 UTC m=+35.595175143 (durationBeforeRetry 8s).
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 06 05:28:37 crc kubenswrapper[4958]: I1206 05:28:37.061395 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bxxwq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"802ce14e-baaf-4d0d-87e3-3457209d8cda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://691435e89fcfa5b2dbd5ee54379f791a3e4f4df43c500a83bc61f7b20aca7ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlz6h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bxxwq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:37 crc kubenswrapper[4958]: I1206 05:28:37.077266 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:37 crc kubenswrapper[4958]: I1206 05:28:37.090264 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
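
Every failed status patch in this stretch dies on the same dial tcp 127.0.0.1:9743: connect: connection refused, meaning nothing is listening yet where the pod.network-node-identity.openshift.io webhook should be serving. A bare TCP probe of the kind sketched here would reproduce that refusal; the script is purely illustrative and the two-second timeout is an arbitrary choice, while the host and port come from the log lines themselves:

    import socket

    # Probe the webhook endpoint the kubelet keeps failing to reach.
    # Host and port are taken from the log; the timeout is an assumption.
    def probe(host="127.0.0.1", port=9743, timeout=2.0):
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return "open"
        except ConnectionRefusedError:
            return "connection refused"  # the state logged above
        except OSError as exc:
            return f"unreachable: {exc}"

    print(probe())
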
Dec 06 05:28:37 crc kubenswrapper[4958]: I1206 05:28:37.099618 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 05:28:37 crc kubenswrapper[4958]: I1206 05:28:37.099665 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 05:28:37 crc kubenswrapper[4958]: I1206 05:28:37.099682 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 05:28:37 crc kubenswrapper[4958]: I1206 05:28:37.099705 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 05:28:37 crc kubenswrapper[4958]: I1206 05:28:37.099722 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:37Z","lastTransitionTime":"2025-12-06T05:28:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 05:28:37 crc kubenswrapper[4958]: I1206 05:28:37.101163 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5mx5v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ab56cd4-7270-4252-b6e6-cbc102b84d97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-448jg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5mx5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 06 05:28:37 crc kubenswrapper[4958]: I1206 05:28:37.114567 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wr7h5" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24bb15c5aabe30ec2b8cea2cfd92c4a99c8e2170aecfe6cf33f745ecccf32c42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-778ks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wr7h5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:37 crc kubenswrapper[4958]: I1206 05:28:37.131286 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bbf3a5d-dfb7-4f26-a3a5-5a1198cffc78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ac8c174d2f2cc2b18fa67b1969e4ec3130a7a3ea2e1de302b6607334c8295a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a49f9fd54866385a2dd4c0218a2b5a828817807c2688301ea29004ba0578d556\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://799383b7dc23318bd8bdb994e83afadc90864baf5e694f5a7ef2208c6771660c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\
\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c9829c79fbd1ef57f374e8bf829162c2ae48074f4d02cc609d5f9b8791c61a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b2ffa013378c2ca81671e521b37d30af422fd6a418d17e7c2ca14b2da5557d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4001890a8c107a9365e884bae7b1d10f11ea1208ada74b61dcd4cf3db767f3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4001890a8c107a9365e884bae7b1d10f11ea1208ada74b61dcd4cf3db767f3fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:37 crc kubenswrapper[4958]: I1206 05:28:37.142054 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e7d0d99-10e2-4254-82b5-5490222f80fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f512fa9922310920769bab096c2bb8b2305a17dded545923c16b5e417db50b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dddeac64993fef3f326c165eeeb95bb4b5b9961f07f2b935e592b5aac9a51a32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50e80d39fcfd69db193b9e6c98c92a60983dd0c62b061a69f045193110d159c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc69c938c136778135bd2a90d1671af107b76c4571f8e52327c98f120665d7b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:37 crc kubenswrapper[4958]: I1206 05:28:37.155741 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:37 crc kubenswrapper[4958]: I1206 05:28:37.167128 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c13528c0-da5d-4d55-9155-2c29c33edfc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5ktnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:37 crc kubenswrapper[4958]: I1206 05:28:37.177888 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cxvtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48b90387-4ea9-47a3-8aa1-b8b19c5ed186\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d3ce1b34e648dbc34656fb0430468d6a998992cdb52f7858c3e9032781cabf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d3ce1b34e648dbc34656fb0430468d6a998992cdb52f7858c3e9032781cabf3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45259e88c3c2b9523b20a34f02fb59aa21e487dd207c8cb6e29ac571d89cd75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45259e88c3c2b9523b20a34f02fb59aa21e487dd207c8cb6e29ac571d89cd75c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5907422cf6b36b8d77741d339e507eee8a26c62b51c03458fbab243363441b49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5907422cf6b36b8d77741d339e507eee8a26c62b51c03458fbab243363441b49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdba082661b1dd38d5628fedac7afcea2e2960d502c61b9c21c723c0f6be8dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdba082661b1dd38d5628fedac7afcea2e2960d502c61b9c21c723c0f6be8dab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afdd138a15c3949f3eb5cdcac4d372fe8db3f1822f9fa33eb7f211ba93764903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afdd138a15c3949f3eb5cdcac4d372fe8db3f1822f9fa33eb7f211ba93764903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09ba0a78226b9b3b3b12202389168f7e17a1f2ba992a22dc9d436d29aac0d327\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09ba0a78226b9b3b3b12202389168f7e17a1f2ba992a22dc9d436d29aac0d327\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cxvtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:37 crc kubenswrapper[4958]: I1206 05:28:37.191787 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c75c3b8-96d9-442e-b3c4-92d10ad33929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c7cf12b8a7e5da199c6d8c5bdeb41243c911cef6cab0bac81e407502a8ff144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6e7454c626f79f7ef231e33e2917e2a65265fc21ccf0bb078c9b7f5b3cec240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f87617d1f4b9c7ad893a539ddf123f3e213d38e44c084441f565d6535078c7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f6d7aa9cdfe8211f420e0a25851c58e125bd09b9d90ad4724703cc1112e45b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://473d669d19d2507144f95718dc331ea994a36254f7f7b6e2b262a0e9cb46bfc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ac1117aba254b7e048d9dd3feff2587d218f6d7a6f4ccbe455eb2b19e534aa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f8c4a1bb980b8f4aed1959a29433c3e4920719
4ecb3a563a95441a99cceab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90354a506bfb2c066060aaa6c9c48c8fe36acc7b3fb7f76905c7aa2622c8c93e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccou
nt\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f16219fb96bfa08d1c4e1fbacac446f07ea75a221d1a6a4eddadd9e587f75ea9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f16219fb96bfa08d1c4e1fbacac446f07ea75a221d1a6a4eddadd9e587f75ea9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-f4swt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:37 crc kubenswrapper[4958]: I1206 05:28:37.202320 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:37 crc kubenswrapper[4958]: I1206 05:28:37.202380 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:37 crc kubenswrapper[4958]: I1206 05:28:37.202394 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:37 crc kubenswrapper[4958]: I1206 05:28:37.202413 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:37 crc kubenswrapper[4958]: I1206 05:28:37.202425 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:37Z","lastTransitionTime":"2025-12-06T05:28:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:28:37 crc kubenswrapper[4958]: I1206 05:28:37.204496 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:37 crc kubenswrapper[4958]: I1206 05:28:37.214952 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:37 crc kubenswrapper[4958]: I1206 05:28:37.235092 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3942544-663d-452b-851f-7ae1951315d0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c605d49963ffec3a03f9166f2a4b945adca772c1e6f9ef0c43e3b43498bb520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fa2e5229064a9479f641c0c6d934e8eda769ee88be13401c43ae136ce8bc24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877
441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0fa5b094aed0445fa892dd4d08fa83de565bda82c6ca9b9d536a3275df19b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9356ce9f50a037c7bbb99085c8d3a1cf6648fca53bed4eb86d1be5cb82386bfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bb5270cbb6f2e29e1393dc0724b38332f5d2b7bd2947352c848a22efbcd979c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4460d6ee5ad193a5fdd65c4ddf2e2837db3e1a0a5f8f32b84ad83cc74cc6c779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\
"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4460d6ee5ad193a5fdd65c4ddf2e2837db3e1a0a5f8f32b84ad83cc74cc6c779\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d0adb65980c4e04df9b38b248a2d6bf58ac214ec2fe6ee6f8fbee38dc114904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d0adb65980c4e04df9b38b248a2d6bf58ac214ec2fe6ee6f8fbee38dc114904\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b4a1b86d7480be2b839295c4c7dc60abd79e17d12514d7a7f90453498dd586e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4a1b86d7480be2b839295c4c7dc60abd79e17d12514d7a7f90453498dd586e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:37 crc kubenswrapper[4958]: I1206 05:28:37.245984 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:37 crc kubenswrapper[4958]: I1206 05:28:37.258131 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bxxwq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"802ce14e-baaf-4d0d-87e3-3457209d8cda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://691435e89fcfa5b2dbd5ee54379f791a3e4f4df43c500a83bc61f7b20aca7ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlz6h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bxxwq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:37 crc kubenswrapper[4958]: I1206 05:28:37.273708 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:37 crc kubenswrapper[4958]: I1206 05:28:37.285384 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:37 crc kubenswrapper[4958]: I1206 05:28:37.296828 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5mx5v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ab56cd4-7270-4252-b6e6-cbc102b84d97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-448jg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5mx5v\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:37 crc kubenswrapper[4958]: I1206 05:28:37.304875 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:37 crc kubenswrapper[4958]: I1206 05:28:37.304943 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:37 crc kubenswrapper[4958]: I1206 05:28:37.304971 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:37 crc kubenswrapper[4958]: I1206 05:28:37.305003 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:37 crc kubenswrapper[4958]: I1206 05:28:37.305026 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:37Z","lastTransitionTime":"2025-12-06T05:28:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:37 crc kubenswrapper[4958]: I1206 05:28:37.313594 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wr7h5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24bb15c5aabe30ec2b8cea2cfd92c4a99c8e2170aecfe6cf33f745ecccf32c42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},
{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-778ks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wr7h5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:37 crc kubenswrapper[4958]: I1206 05:28:37.331269 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bbf3a5d-dfb7-4f26-a3a5-5a1198cffc78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ac8c174d2f2cc2b18fa67b1969e4ec3130a7a3ea2e1de302b6607334c8295a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a49f9fd54866385a2dd4c0218a2b5a828817807c2688301ea29004ba0578d556\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://799383b7dc23318bd8bdb994e83afadc90864baf5e694f5a7ef2208c6771660c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c9829c79fbd1ef57f374e8bf829162c2ae48074f4d02cc609d5f9b8791c61a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b2ffa013378c2ca81671e521b37d30af422fd6a418d17e7c2ca14b2da5557d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4001890a8c107a9365e884bae7b1d10f11ea1208ada74b61dcd4cf3db767f3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4001890a8c107a9365e884bae7b1d10f11ea1208ada74b61dcd4cf3db767f3fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:37 crc kubenswrapper[4958]: I1206 05:28:37.347116 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e7d0d99-10e2-4254-82b5-5490222f80fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f512fa9922310920769bab096c2bb8b2305a17dded545923c16b5e417db50b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dddeac64993fef3f326c165eeeb95bb4b5b9961f07f2b935e592b5aac9a51a32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50e80d39fcfd69db193b9e6c98c92a60983dd0c62b061a69f045193110d159c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc69c938c136778135bd2a90d1671af107b76c4571f8e52327c98f120665d7b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:37 crc kubenswrapper[4958]: I1206 05:28:37.363681 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:37 crc kubenswrapper[4958]: I1206 05:28:37.380053 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c13528c0-da5d-4d55-9155-2c29c33edfc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5ktnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:37 crc kubenswrapper[4958]: I1206 05:28:37.404103 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cxvtt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"48b90387-4ea9-47a3-8aa1-b8b19c5ed186\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce9f0a37477909d730255694146829d5d4527e3c3c3928bdd475b58506bbd544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d3ce1b34e648dbc34656fb0430468d6a998992cdb52f7858c3e9032781cabf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d3ce1b34e648dbc34656fb0430468d6a998992cdb52f7858c3e9032781cabf3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45259e88c3c2b9523b20a34f02fb59aa21e487dd207c8cb6e29ac571d89cd75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45259e88c3c2b9523b20a34f02fb59aa21e487dd207c8cb6e29ac571d89cd75c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5907422cf6b36b8d77741d339e507eee8a26c62b51c03458fbab243363441b49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5907422cf6b36b8d77741d339e507eee8a26c62b51c03458fbab243363441b49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdba082661b1dd38d5628fedac7afcea2e2960d502c61b9c21c723c0f6be8dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdba082661b1dd38d5628fedac7afcea2e2960d502c61b9c21c723c0f6be8dab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afdd138a15c3949f3eb5cdcac4d372fe8db3f1822f9fa33eb7f211ba93764903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afdd138a15c3949f3eb5cdcac4d372fe8db3f1822f9fa33eb7f211ba93764903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09ba0a78226b9b3b3b12202389168f7e17a1f2ba992a22dc9d436d29aac0d327\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09ba0a78226b9b3b3b12202389168f7e17a1f2ba992a22dc9d436d29aac0d327\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cxvtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:37 crc kubenswrapper[4958]: I1206 05:28:37.408516 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:37 crc kubenswrapper[4958]: I1206 05:28:37.408588 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:37 crc kubenswrapper[4958]: I1206 05:28:37.408609 4958 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 06 05:28:37 crc kubenswrapper[4958]: I1206 05:28:37.408639 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:37 crc kubenswrapper[4958]: I1206 05:28:37.408663 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:37Z","lastTransitionTime":"2025-12-06T05:28:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:37 crc kubenswrapper[4958]: I1206 05:28:37.428892 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c75c3b8-96d9-442e-b3c4-92d10ad33929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c7cf12b8a7e5da199c6d8c5bdeb41243c911cef6cab0bac81e407502a8ff144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6e7454c626f79f7ef231e33e2917e2a65265fc21ccf0bb078c9b7f5b3cec240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f87617d1f4b9c7ad893a539ddf123f3e213d38e44c084441f565d6535078c7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f6d7aa9cdfe8211f420e0a25851c58e125bd09b9d90ad4724703cc1112e45b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://473d669d19d2507144f95718dc331ea994a36254f7f7b6e2b262a0e9cb46bfc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ac1117aba254b7e048d9dd3feff2587d218f6d7a6f4ccbe455eb2b19e534aa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f8c4a1bb980b8f4aed1959a29433c3e4920719
4ecb3a563a95441a99cceab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90354a506bfb2c066060aaa6c9c48c8fe36acc7b3fb7f76905c7aa2622c8c93e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f16219fb96bfa08d1c4e1fbacac446f07ea75a221d1a6a4eddadd9e587f75ea9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f16219fb96bfa08d1c4e1fbacac446f07ea75a221d1a6a4eddadd9e587f75ea9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-f4swt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:37 crc kubenswrapper[4958]: I1206 05:28:37.447783 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:37 crc kubenswrapper[4958]: I1206 05:28:37.463648 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:37 crc kubenswrapper[4958]: I1206 05:28:37.511280 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:37 crc kubenswrapper[4958]: I1206 05:28:37.511337 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:37 crc kubenswrapper[4958]: I1206 05:28:37.511356 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:37 crc kubenswrapper[4958]: I1206 05:28:37.511381 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:37 crc kubenswrapper[4958]: I1206 05:28:37.511399 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:37Z","lastTransitionTime":"2025-12-06T05:28:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:37 crc kubenswrapper[4958]: I1206 05:28:37.614755 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:37 crc kubenswrapper[4958]: I1206 05:28:37.614827 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:37 crc kubenswrapper[4958]: I1206 05:28:37.614846 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:37 crc kubenswrapper[4958]: I1206 05:28:37.614872 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:37 crc kubenswrapper[4958]: I1206 05:28:37.614893 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:37Z","lastTransitionTime":"2025-12-06T05:28:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:28:37 crc kubenswrapper[4958]: I1206 05:28:37.718350 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:37 crc kubenswrapper[4958]: I1206 05:28:37.718819 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:37 crc kubenswrapper[4958]: I1206 05:28:37.719075 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:37 crc kubenswrapper[4958]: I1206 05:28:37.719154 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:37 crc kubenswrapper[4958]: I1206 05:28:37.719230 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:37Z","lastTransitionTime":"2025-12-06T05:28:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:37 crc kubenswrapper[4958]: I1206 05:28:37.762092 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 05:28:37 crc kubenswrapper[4958]: I1206 05:28:37.762154 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 05:28:37 crc kubenswrapper[4958]: E1206 05:28:37.762735 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 05:28:37 crc kubenswrapper[4958]: I1206 05:28:37.762155 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 05:28:37 crc kubenswrapper[4958]: E1206 05:28:37.763037 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 05:28:37 crc kubenswrapper[4958]: E1206 05:28:37.762943 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 05:28:37 crc kubenswrapper[4958]: I1206 05:28:37.822306 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:37 crc kubenswrapper[4958]: I1206 05:28:37.822353 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:37 crc kubenswrapper[4958]: I1206 05:28:37.822366 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:37 crc kubenswrapper[4958]: I1206 05:28:37.822388 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:37 crc kubenswrapper[4958]: I1206 05:28:37.822401 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:37Z","lastTransitionTime":"2025-12-06T05:28:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:37 crc kubenswrapper[4958]: I1206 05:28:37.924529 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:37 crc kubenswrapper[4958]: I1206 05:28:37.924570 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:37 crc kubenswrapper[4958]: I1206 05:28:37.924579 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:37 crc kubenswrapper[4958]: I1206 05:28:37.924596 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:37 crc kubenswrapper[4958]: I1206 05:28:37.924607 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:37Z","lastTransitionTime":"2025-12-06T05:28:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:28:37 crc kubenswrapper[4958]: I1206 05:28:37.977146 4958 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 06 05:28:37 crc kubenswrapper[4958]: I1206 05:28:37.977613 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" Dec 06 05:28:37 crc kubenswrapper[4958]: I1206 05:28:37.997671 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" Dec 06 05:28:38 crc kubenswrapper[4958]: I1206 05:28:38.007461 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:38 crc kubenswrapper[4958]: I1206 05:28:38.014718 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bxxwq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"802ce14e-baaf-4d0d-87e3-3457209d8cda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://691435e89fcfa5b2dbd5ee54379f791a3e4f4df43c500a83bc61f7b20aca7ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlz6h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bxxwq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:38 crc kubenswrapper[4958]: I1206 05:28:38.026616 4958 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:38 crc kubenswrapper[4958]: I1206 05:28:38.026666 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:38 crc kubenswrapper[4958]: I1206 05:28:38.026678 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:38 crc kubenswrapper[4958]: I1206 05:28:38.026694 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:38 crc kubenswrapper[4958]: I1206 05:28:38.026709 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:38Z","lastTransitionTime":"2025-12-06T05:28:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:38 crc kubenswrapper[4958]: I1206 05:28:38.031066 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3942544-663d-452b-851f-7ae1951315d0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c605d49963ffec3a03f9166f2a4b945adca772c1e6f9ef0c43e3b43498bb520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fa2e5229064a9479f641c0c6d934e8eda769ee88be13401c43ae136ce8bc24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\
\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0fa5b094aed0445fa892dd4d08fa83de565bda82c6ca9b9d536a3275df19b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9356ce9f50a037c7bbb99085c8d3a1cf6648fca53bed4eb86d1be5cb82386bfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bb5270cbb6f2e29e1393dc0724b38332f5d2b7bd2947352c848a22efbcd979c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4460d6ee5ad193a5fdd65c4ddf2e2837db3e1a0a5f8f32b84ad83cc74cc6c779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\
"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4460d6ee5ad193a5fdd65c4ddf2e2837db3e1a0a5f8f32b84ad83cc74cc6c779\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d0adb65980c4e04df9b38b248a2d6bf58ac214ec2fe6ee6f8fbee38dc114904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d0adb65980c4e04df9b38b248a2d6bf58ac214ec2fe6ee6f8fbee38dc114904\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b4a1b86d7480be2b839295c4c7dc60abd79e17d12514d7a7f90453498dd586e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4a1b86d7480be2b839295c4c7dc60abd79e17d12514d7a7f90453498dd586e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:38 crc kubenswrapper[4958]: I1206 05:28:38.043950 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:38 crc kubenswrapper[4958]: I1206 05:28:38.050164 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5mx5v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ab56cd4-7270-4252-b6e6-cbc102b84d97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services 
have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-448jg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5mx5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:38 crc kubenswrapper[4958]: I1206 05:28:38.060126 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wr7h5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24bb15c5aabe30ec2b8cea2cfd92c4a99c8e2170aecfe6cf33f745ecccf32c42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\
"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-778ks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wr7h5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:38 crc kubenswrapper[4958]: I1206 05:28:38.070289 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:38 crc kubenswrapper[4958]: I1206 05:28:38.081620 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e7d0d99-10e2-4254-82b5-5490222f80fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f512fa9922310920769bab096c2bb8b2305a17dded545923c16b5e417db50b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dddeac64993fef3f326c165eeeb95bb4b5b9961f07f2b935e592b5aac9a51a32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-po
d-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50e80d39fcfd69db193b9e6c98c92a60983dd0c62b061a69f045193110d159c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc69c938c136778135bd2a90d1671af107b76c4571f8e52327c98f120665d7b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:38 crc kubenswrapper[4958]: I1206 05:28:38.093645 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bbf3a5d-dfb7-4f26-a3a5-5a1198cffc78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ac8c174d2f2cc2b18fa67b1969e4ec3130a7a3ea2e1de302b6607334c8295a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a49f9fd54866385a2dd4c0218a2b5a828817807c2688301ea29004ba0578d556\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://799383b7dc23318bd8bdb994e83afadc90864baf5e694f5a7ef2208c6771660c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c9829c79fbd1ef57f374e8bf829162c2ae48074f4d02cc609d5f9b8791c61a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b2ffa013378c2ca81671e521b37d30af422fd6a418d17e7c2ca14b2da5557d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4001890a8c107a9365e884bae7b1d10f11ea1208ada74b61dcd4cf3db767f3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4001890a8c107a9365e884bae7b1d10f11ea1208ada74b61dcd4cf3db767f3fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:38 crc kubenswrapper[4958]: I1206 05:28:38.103556 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:38 crc kubenswrapper[4958]: I1206 05:28:38.118749 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:38 crc kubenswrapper[4958]: I1206 05:28:38.128038 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:38 crc kubenswrapper[4958]: I1206 05:28:38.129844 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:38 crc kubenswrapper[4958]: I1206 05:28:38.129907 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:38 crc kubenswrapper[4958]: I1206 05:28:38.129930 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:38 crc kubenswrapper[4958]: I1206 05:28:38.129960 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:38 crc kubenswrapper[4958]: I1206 05:28:38.129988 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:38Z","lastTransitionTime":"2025-12-06T05:28:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:28:38 crc kubenswrapper[4958]: I1206 05:28:38.136754 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c13528c0-da5d-4d55-9155-2c29c33edfc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5ktnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:38 crc kubenswrapper[4958]: I1206 05:28:38.150571 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cxvtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48b90387-4ea9-47a3-8aa1-b8b19c5ed186\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce9f0a37477909d730255694146829d5d4527e3c3c3928bdd475b58506bbd544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d3ce1b34e648dbc34656fb0430468d6a998992cdb52f7858c3e9032781cabf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d3ce1b34e648dbc34656fb0430468d6a998992cdb52f7858c3e9032781cabf3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4525
9e88c3c2b9523b20a34f02fb59aa21e487dd207c8cb6e29ac571d89cd75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45259e88c3c2b9523b20a34f02fb59aa21e487dd207c8cb6e29ac571d89cd75c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5907422cf6b36b8d77741d339e507eee8a26c62b51c03458fbab243363441b49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5907422cf6b36b8d77741d339e507eee8a26c62b51c03458fbab243363441b49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdba082661b1dd38d5628fedac7afcea2e2960d502c61b9c21c723c0f6be8dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdba082661b1dd38d5628fedac7afcea2e2960d502c61b9c21c723c0f6be8dab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\
\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afdd138a15c3949f3eb5cdcac4d372fe8db3f1822f9fa33eb7f211ba93764903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afdd138a15c3949f3eb5cdcac4d372fe8db3f1822f9fa33eb7f211ba93764903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09ba0a78226b9b3b3b12202389168f7e17a1f2ba992a22dc9d436d29aac0d327\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09ba0a78226b9b3b3b12202389168f7e17a1f2ba992a22dc9d436d29aac0d327\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cxvtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:38 crc kubenswrapper[4958]: I1206 05:28:38.171965 4958 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c75c3b8-96d9-442e-b3c4-92d10ad33929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c7cf12b8a7e5da199c6d8c5bdeb41243c911cef6cab0bac81e407502a8ff144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6e7454c626f79f7ef231e33e2917e2a65265fc21ccf0bb078c9b7f5b3cec240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f87617d1f4b9c7ad893a539ddf123f3e213d38e44c084441f565d6535078c7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36c
dd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f6d7aa9cdfe8211f420e0a25851c58e125bd09b9d90ad4724703cc1112e45b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://473d669d19d2507144f95718dc331ea994a36254f7f7b6e2b262a0e9cb46bfc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ac1117aba254b7e048d9dd3feff2587d218f6d7a6f4ccbe455eb2b19e534aa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-con
troller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f8c4a1bb980b8f4aed1959a29433c3e49207194ecb3a563a95441a99cceab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"k
ube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90354a506bfb2c066060aaa6c9c48c8fe36acc7b3fb7f76905c7aa2622c8c93e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f16219fb96bfa08d1c4e1fbacac446f07ea75a221d1a6a4eddadd9e587f75ea9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f16219fb96bfa08d1c4e1fbacac446f07ea75a221d1a6a4eddadd9e587f75ea9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-f4swt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:38 crc kubenswrapper[4958]: I1206 05:28:38.232639 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:38 crc kubenswrapper[4958]: I1206 05:28:38.233057 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:38 crc kubenswrapper[4958]: I1206 05:28:38.233223 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:38 crc kubenswrapper[4958]: I1206 05:28:38.233402 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Dec 06 05:28:38 crc kubenswrapper[4958]: I1206 05:28:38.233695 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:38Z","lastTransitionTime":"2025-12-06T05:28:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:38 crc kubenswrapper[4958]: I1206 05:28:38.336643 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:38 crc kubenswrapper[4958]: I1206 05:28:38.336709 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:38 crc kubenswrapper[4958]: I1206 05:28:38.336732 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:38 crc kubenswrapper[4958]: I1206 05:28:38.336761 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:38 crc kubenswrapper[4958]: I1206 05:28:38.336790 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:38Z","lastTransitionTime":"2025-12-06T05:28:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:38 crc kubenswrapper[4958]: I1206 05:28:38.438982 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:38 crc kubenswrapper[4958]: I1206 05:28:38.439370 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:38 crc kubenswrapper[4958]: I1206 05:28:38.439585 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:38 crc kubenswrapper[4958]: I1206 05:28:38.439755 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:38 crc kubenswrapper[4958]: I1206 05:28:38.439956 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:38Z","lastTransitionTime":"2025-12-06T05:28:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:28:38 crc kubenswrapper[4958]: I1206 05:28:38.544368 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:38 crc kubenswrapper[4958]: I1206 05:28:38.544448 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:38 crc kubenswrapper[4958]: I1206 05:28:38.544506 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:38 crc kubenswrapper[4958]: I1206 05:28:38.544540 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:38 crc kubenswrapper[4958]: I1206 05:28:38.544563 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:38Z","lastTransitionTime":"2025-12-06T05:28:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:38 crc kubenswrapper[4958]: I1206 05:28:38.648460 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:38 crc kubenswrapper[4958]: I1206 05:28:38.648851 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:38 crc kubenswrapper[4958]: I1206 05:28:38.649020 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:38 crc kubenswrapper[4958]: I1206 05:28:38.649164 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:38 crc kubenswrapper[4958]: I1206 05:28:38.649308 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:38Z","lastTransitionTime":"2025-12-06T05:28:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:38 crc kubenswrapper[4958]: I1206 05:28:38.751803 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:38 crc kubenswrapper[4958]: I1206 05:28:38.751859 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:38 crc kubenswrapper[4958]: I1206 05:28:38.751877 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:38 crc kubenswrapper[4958]: I1206 05:28:38.751900 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:38 crc kubenswrapper[4958]: I1206 05:28:38.751920 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:38Z","lastTransitionTime":"2025-12-06T05:28:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:28:38 crc kubenswrapper[4958]: I1206 05:28:38.854583 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:38 crc kubenswrapper[4958]: I1206 05:28:38.854665 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:38 crc kubenswrapper[4958]: I1206 05:28:38.854681 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:38 crc kubenswrapper[4958]: I1206 05:28:38.854697 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:38 crc kubenswrapper[4958]: I1206 05:28:38.854708 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:38Z","lastTransitionTime":"2025-12-06T05:28:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:38 crc kubenswrapper[4958]: I1206 05:28:38.956916 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:38 crc kubenswrapper[4958]: I1206 05:28:38.956947 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:38 crc kubenswrapper[4958]: I1206 05:28:38.956955 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:38 crc kubenswrapper[4958]: I1206 05:28:38.956967 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:38 crc kubenswrapper[4958]: I1206 05:28:38.956980 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:38Z","lastTransitionTime":"2025-12-06T05:28:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:28:38 crc kubenswrapper[4958]: I1206 05:28:38.979550 4958 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 06 05:28:39 crc kubenswrapper[4958]: I1206 05:28:39.059968 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:39 crc kubenswrapper[4958]: I1206 05:28:39.060043 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:39 crc kubenswrapper[4958]: I1206 05:28:39.060064 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:39 crc kubenswrapper[4958]: I1206 05:28:39.060088 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:39 crc kubenswrapper[4958]: I1206 05:28:39.060106 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:39Z","lastTransitionTime":"2025-12-06T05:28:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:39 crc kubenswrapper[4958]: I1206 05:28:39.163006 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:39 crc kubenswrapper[4958]: I1206 05:28:39.163084 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:39 crc kubenswrapper[4958]: I1206 05:28:39.163102 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:39 crc kubenswrapper[4958]: I1206 05:28:39.163127 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:39 crc kubenswrapper[4958]: I1206 05:28:39.163146 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:39Z","lastTransitionTime":"2025-12-06T05:28:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:28:39 crc kubenswrapper[4958]: I1206 05:28:39.266530 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:39 crc kubenswrapper[4958]: I1206 05:28:39.266597 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:39 crc kubenswrapper[4958]: I1206 05:28:39.266614 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:39 crc kubenswrapper[4958]: I1206 05:28:39.266637 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:39 crc kubenswrapper[4958]: I1206 05:28:39.266653 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:39Z","lastTransitionTime":"2025-12-06T05:28:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:39 crc kubenswrapper[4958]: I1206 05:28:39.370314 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:39 crc kubenswrapper[4958]: I1206 05:28:39.370392 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:39 crc kubenswrapper[4958]: I1206 05:28:39.370415 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:39 crc kubenswrapper[4958]: I1206 05:28:39.370446 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:39 crc kubenswrapper[4958]: I1206 05:28:39.370511 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:39Z","lastTransitionTime":"2025-12-06T05:28:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:39 crc kubenswrapper[4958]: I1206 05:28:39.473323 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:39 crc kubenswrapper[4958]: I1206 05:28:39.473436 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:39 crc kubenswrapper[4958]: I1206 05:28:39.473534 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:39 crc kubenswrapper[4958]: I1206 05:28:39.473559 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:39 crc kubenswrapper[4958]: I1206 05:28:39.473576 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:39Z","lastTransitionTime":"2025-12-06T05:28:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:28:39 crc kubenswrapper[4958]: I1206 05:28:39.577075 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:39 crc kubenswrapper[4958]: I1206 05:28:39.577153 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:39 crc kubenswrapper[4958]: I1206 05:28:39.577175 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:39 crc kubenswrapper[4958]: I1206 05:28:39.577206 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:39 crc kubenswrapper[4958]: I1206 05:28:39.577227 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:39Z","lastTransitionTime":"2025-12-06T05:28:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:39 crc kubenswrapper[4958]: I1206 05:28:39.680723 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:39 crc kubenswrapper[4958]: I1206 05:28:39.680798 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:39 crc kubenswrapper[4958]: I1206 05:28:39.680811 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:39 crc kubenswrapper[4958]: I1206 05:28:39.680834 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:39 crc kubenswrapper[4958]: I1206 05:28:39.680849 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:39Z","lastTransitionTime":"2025-12-06T05:28:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:39 crc kubenswrapper[4958]: I1206 05:28:39.761801 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 05:28:39 crc kubenswrapper[4958]: I1206 05:28:39.761801 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 05:28:39 crc kubenswrapper[4958]: I1206 05:28:39.762135 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 05:28:39 crc kubenswrapper[4958]: E1206 05:28:39.762644 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 05:28:39 crc kubenswrapper[4958]: E1206 05:28:39.762872 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 05:28:39 crc kubenswrapper[4958]: E1206 05:28:39.762674 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 05:28:39 crc kubenswrapper[4958]: I1206 05:28:39.775285 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5mx5v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ab56cd4-7270-4252-b6e6-cbc102b84d97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-448jg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5mx5v\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:39 crc kubenswrapper[4958]: I1206 05:28:39.783603 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:39 crc kubenswrapper[4958]: I1206 05:28:39.783668 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:39 crc kubenswrapper[4958]: I1206 05:28:39.783692 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:39 crc kubenswrapper[4958]: I1206 05:28:39.783723 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:39 crc kubenswrapper[4958]: I1206 05:28:39.783746 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:39Z","lastTransitionTime":"2025-12-06T05:28:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:39 crc kubenswrapper[4958]: I1206 05:28:39.795649 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wr7h5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24bb15c5aabe30ec2b8cea2cfd92c4a99c8e2170aecfe6cf33f745ecccf32c42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/
multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-778ks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wr7h5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:39 crc kubenswrapper[4958]: I1206 05:28:39.809721 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:39 crc kubenswrapper[4958]: I1206 05:28:39.833555 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:39 crc kubenswrapper[4958]: I1206 05:28:39.847255 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bbf3a5d-dfb7-4f26-a3a5-5a1198cffc78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ac8c174d2f2cc2b18fa67b1969e4ec3130a7a3ea2e1de302b6607334c8295a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a49f9fd54866385a2dd4c0218a2b5a828817807c2688301ea29004ba0578d556\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-
regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://799383b7dc23318bd8bdb994e83afadc90864baf5e694f5a7ef2208c6771660c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c9829c79fbd1ef57f374e8bf829162c2ae48074f4d02cc609d5f9b8791c61a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b2ffa013378c2ca81671e521b37d30af422fd6a418d17e7c2ca14b2da5557d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4001890a8c107a9365e884bae7b1d10f11ea1208ada74b61dcd4cf3db767f3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4001890a8c107a9365e884bae7b1d10f11ea1208ada74b61dcd4cf3db767f3fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPat
h\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:39 crc kubenswrapper[4958]: I1206 05:28:39.862634 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e7d0d99-10e2-4254-82b5-5490222f80fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f512fa9922310920769bab096c2bb8b2305a17dded545923c16b5e417db50b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dddeac64993fef3f326c165eeeb95bb4b5b9961f07f2b935e592b5aac9a51a32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50e80d39fcfd69db193b9e6c98c92a60983dd0c62b061a69f045193110d159c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/o
penshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc69c938c136778135bd2a90d1671af107b76c4571f8e52327c98f120665d7b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:39 crc kubenswrapper[4958]: I1206 05:28:39.882972 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:39 crc kubenswrapper[4958]: I1206 05:28:39.887169 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:39 crc kubenswrapper[4958]: I1206 05:28:39.887225 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:39 crc kubenswrapper[4958]: I1206 05:28:39.887251 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:39 crc kubenswrapper[4958]: I1206 05:28:39.887283 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:39 crc kubenswrapper[4958]: I1206 05:28:39.887307 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:39Z","lastTransitionTime":"2025-12-06T05:28:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:28:39 crc kubenswrapper[4958]: I1206 05:28:39.895445 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:39 crc kubenswrapper[4958]: I1206 05:28:39.906544 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:39 crc kubenswrapper[4958]: I1206 05:28:39.918075 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c13528c0-da5d-4d55-9155-2c29c33edfc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5ktnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:39 crc kubenswrapper[4958]: I1206 05:28:39.931308 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cxvtt" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48b90387-4ea9-47a3-8aa1-b8b19c5ed186\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce9f0a37477909d730255694146829d5d4527e3c3c3928bdd475b58506bbd544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d3ce1b34e648dbc34656fb0430468d6a998992cdb52f7858c3e9032781cabf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d3ce1b34e648dbc34656fb0430468d6a998992cdb52f7858c3e9032781cabf3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45259e88c3c2b9523b20a34f02fb59aa21e487dd207c8cb6e29ac571d89cd75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:68
7fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45259e88c3c2b9523b20a34f02fb59aa21e487dd207c8cb6e29ac571d89cd75c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5907422cf6b36b8d77741d339e507eee8a26c62b51c03458fbab243363441b49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5907422cf6b36b8d77741d339e507eee8a26c62b51c03458fbab243363441b49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdba082661b1dd38d5628fedac7afcea2e2960d502c61b9c21c723c0f6be8dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdba082661b1dd38d5628fedac7afcea2e2960d502c61b9c21c723c0f6be8dab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"m
ountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afdd138a15c3949f3eb5cdcac4d372fe8db3f1822f9fa33eb7f211ba93764903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afdd138a15c3949f3eb5cdcac4d372fe8db3f1822f9fa33eb7f211ba93764903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09ba0a78226b9b3b3b12202389168f7e17a1f2ba992a22dc9d436d29aac0d327\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09ba0a78226b9b3b3b12202389168f7e17a1f2ba992a22dc9d436d29aac0d327\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cxvtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:39 crc kubenswrapper[4958]: I1206 05:28:39.947625 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c75c3b8-96d9-442e-b3c4-92d10ad33929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c7cf12b8a7e5da199c6d8c5bdeb41243c911cef6cab0bac81e407502a8ff144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6e7454c626f79f7ef231e33e2917e2a65265fc21ccf0bb078c9b7f5b3cec240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f87617d1f4b9c7ad893a539ddf123f3e213d38e44c084441f565d6535078c7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f6d7aa9cdfe8211f420e0a25851c58e125bd09b9d90ad4724703cc1112e45b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://473d669d19d2507144f95718dc331ea994a36254f7f7b6e2b262a0e9cb46bfc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ac1117aba254b7e048d9dd3feff2587d218f6d7a6f4ccbe455eb2b19e534aa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f8c4a1bb980b8f4aed1959a29433c3e49207194ecb3a563a95441a99cceab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90354a506bfb2c066060aaa6c9c48c8fe36acc7b3fb7f76905c7aa2622c8c93e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f16219fb96bfa08d1c4e1fbacac446f07ea75a221d1a6a4eddadd9e587f75ea9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f16219fb96bfa08d1c4e1fbacac446f07ea75a221d1a6a4eddadd9e587f75ea9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-f4swt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:39 crc kubenswrapper[4958]: I1206 05:28:39.955987 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bxxwq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"802ce14e-baaf-4d0d-87e3-3457209d8cda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://691435e89fcfa5b2dbd5ee54379f791a3e4f4df43c500a83bc61f7b20aca7ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlz6h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bxxwq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:39 crc kubenswrapper[4958]: I1206 05:28:39.971333 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3942544-663d-452b-851f-7ae1951315d0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c605d49963ffec3a03f9166f2a4b945adca772c1e6f9ef0c43e3b43498bb520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fa2e5229064a9479f641c0c6d934e8eda769ee88be13401c43ae136ce8bc24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0fa5b094aed0445fa892dd4d08fa83de565bda82c6ca9b9d536a3275df19b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9356ce9f50a037c7bbb99085c8d3a1cf6648fca
53bed4eb86d1be5cb82386bfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bb5270cbb6f2e29e1393dc0724b38332f5d2b7bd2947352c848a22efbcd979c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4460d6ee5ad193a5fdd65c4ddf2e2837db3e1a0a5f8f32b84ad83cc74cc6c779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4460d6ee5ad193a5fdd65c4ddf2e2837db3e1a0a5f8f32b84ad83cc74cc6c779\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d0adb65980c4e04df9b38b248a2d6bf58ac214ec2fe6ee6f8fbee38dc114904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d0adb65980c4e04df9b38b248a2d6bf58ac214ec2fe6ee6f8fbee38dc114904\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b4a1b86d7480be2b839295c4c7dc60abd79e17d12514d7a7f90453498dd586e5\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4a1b86d7480be2b839295c4c7dc60abd79e17d12514d7a7f90453498dd586e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:39 crc kubenswrapper[4958]: I1206 05:28:39.981883 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:39 crc kubenswrapper[4958]: I1206 05:28:39.982454 4958 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 06 05:28:39 crc kubenswrapper[4958]: I1206 05:28:39.989438 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:39 crc kubenswrapper[4958]: I1206 05:28:39.989529 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:39 crc kubenswrapper[4958]: I1206 05:28:39.989544 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:39 crc kubenswrapper[4958]: I1206 05:28:39.989559 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:39 crc kubenswrapper[4958]: I1206 05:28:39.989570 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:39Z","lastTransitionTime":"2025-12-06T05:28:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:40 crc kubenswrapper[4958]: I1206 05:28:40.055154 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" Dec 06 05:28:40 crc kubenswrapper[4958]: I1206 05:28:40.092537 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:40 crc kubenswrapper[4958]: I1206 05:28:40.092627 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:40 crc kubenswrapper[4958]: I1206 05:28:40.092655 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:40 crc kubenswrapper[4958]: I1206 05:28:40.092688 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:40 crc kubenswrapper[4958]: I1206 05:28:40.092710 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:40Z","lastTransitionTime":"2025-12-06T05:28:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:28:40 crc kubenswrapper[4958]: I1206 05:28:40.195342 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:40 crc kubenswrapper[4958]: I1206 05:28:40.195405 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:40 crc kubenswrapper[4958]: I1206 05:28:40.195423 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:40 crc kubenswrapper[4958]: I1206 05:28:40.195449 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:40 crc kubenswrapper[4958]: I1206 05:28:40.195467 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:40Z","lastTransitionTime":"2025-12-06T05:28:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:40 crc kubenswrapper[4958]: I1206 05:28:40.297976 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:40 crc kubenswrapper[4958]: I1206 05:28:40.298037 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:40 crc kubenswrapper[4958]: I1206 05:28:40.298061 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:40 crc kubenswrapper[4958]: I1206 05:28:40.298092 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:40 crc kubenswrapper[4958]: I1206 05:28:40.298115 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:40Z","lastTransitionTime":"2025-12-06T05:28:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:40 crc kubenswrapper[4958]: I1206 05:28:40.400766 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:40 crc kubenswrapper[4958]: I1206 05:28:40.401268 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:40 crc kubenswrapper[4958]: I1206 05:28:40.401365 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:40 crc kubenswrapper[4958]: I1206 05:28:40.401503 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:40 crc kubenswrapper[4958]: I1206 05:28:40.401605 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:40Z","lastTransitionTime":"2025-12-06T05:28:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:28:40 crc kubenswrapper[4958]: I1206 05:28:40.505055 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:40 crc kubenswrapper[4958]: I1206 05:28:40.505409 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:40 crc kubenswrapper[4958]: I1206 05:28:40.505698 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:40 crc kubenswrapper[4958]: I1206 05:28:40.505882 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:40 crc kubenswrapper[4958]: I1206 05:28:40.506037 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:40Z","lastTransitionTime":"2025-12-06T05:28:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:40 crc kubenswrapper[4958]: I1206 05:28:40.610233 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:40 crc kubenswrapper[4958]: I1206 05:28:40.610641 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:40 crc kubenswrapper[4958]: I1206 05:28:40.610776 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:40 crc kubenswrapper[4958]: I1206 05:28:40.610914 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:40 crc kubenswrapper[4958]: I1206 05:28:40.611048 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:40Z","lastTransitionTime":"2025-12-06T05:28:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:40 crc kubenswrapper[4958]: I1206 05:28:40.714373 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:40 crc kubenswrapper[4958]: I1206 05:28:40.714831 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:40 crc kubenswrapper[4958]: I1206 05:28:40.714973 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:40 crc kubenswrapper[4958]: I1206 05:28:40.715110 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:40 crc kubenswrapper[4958]: I1206 05:28:40.715250 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:40Z","lastTransitionTime":"2025-12-06T05:28:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:28:40 crc kubenswrapper[4958]: I1206 05:28:40.817364 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:40 crc kubenswrapper[4958]: I1206 05:28:40.817561 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:40 crc kubenswrapper[4958]: I1206 05:28:40.817674 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:40 crc kubenswrapper[4958]: I1206 05:28:40.817767 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:40 crc kubenswrapper[4958]: I1206 05:28:40.817869 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:40Z","lastTransitionTime":"2025-12-06T05:28:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:40 crc kubenswrapper[4958]: I1206 05:28:40.921542 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:40 crc kubenswrapper[4958]: I1206 05:28:40.921637 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:40 crc kubenswrapper[4958]: I1206 05:28:40.921658 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:40 crc kubenswrapper[4958]: I1206 05:28:40.921683 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:40 crc kubenswrapper[4958]: I1206 05:28:40.921703 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:40Z","lastTransitionTime":"2025-12-06T05:28:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:41 crc kubenswrapper[4958]: I1206 05:28:41.025327 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:41 crc kubenswrapper[4958]: I1206 05:28:41.025400 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:41 crc kubenswrapper[4958]: I1206 05:28:41.025414 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:41 crc kubenswrapper[4958]: I1206 05:28:41.025439 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:41 crc kubenswrapper[4958]: I1206 05:28:41.025456 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:41Z","lastTransitionTime":"2025-12-06T05:28:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:28:41 crc kubenswrapper[4958]: I1206 05:28:41.129424 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:41 crc kubenswrapper[4958]: I1206 05:28:41.129520 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:41 crc kubenswrapper[4958]: I1206 05:28:41.129543 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:41 crc kubenswrapper[4958]: I1206 05:28:41.129578 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:41 crc kubenswrapper[4958]: I1206 05:28:41.129596 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:41Z","lastTransitionTime":"2025-12-06T05:28:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:41 crc kubenswrapper[4958]: I1206 05:28:41.234087 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:41 crc kubenswrapper[4958]: I1206 05:28:41.234167 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:41 crc kubenswrapper[4958]: I1206 05:28:41.234177 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:41 crc kubenswrapper[4958]: I1206 05:28:41.234202 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:41 crc kubenswrapper[4958]: I1206 05:28:41.234214 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:41Z","lastTransitionTime":"2025-12-06T05:28:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:41 crc kubenswrapper[4958]: I1206 05:28:41.344525 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:41 crc kubenswrapper[4958]: I1206 05:28:41.344978 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:41 crc kubenswrapper[4958]: I1206 05:28:41.345149 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:41 crc kubenswrapper[4958]: I1206 05:28:41.345306 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:41 crc kubenswrapper[4958]: I1206 05:28:41.345455 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:41Z","lastTransitionTime":"2025-12-06T05:28:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:28:41 crc kubenswrapper[4958]: I1206 05:28:41.448743 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:41 crc kubenswrapper[4958]: I1206 05:28:41.449189 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:41 crc kubenswrapper[4958]: I1206 05:28:41.449369 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:41 crc kubenswrapper[4958]: I1206 05:28:41.449625 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:41 crc kubenswrapper[4958]: I1206 05:28:41.449818 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:41Z","lastTransitionTime":"2025-12-06T05:28:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:41 crc kubenswrapper[4958]: I1206 05:28:41.554653 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:41 crc kubenswrapper[4958]: I1206 05:28:41.555240 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:41 crc kubenswrapper[4958]: I1206 05:28:41.555258 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:41 crc kubenswrapper[4958]: I1206 05:28:41.555283 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:41 crc kubenswrapper[4958]: I1206 05:28:41.555300 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:41Z","lastTransitionTime":"2025-12-06T05:28:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:41 crc kubenswrapper[4958]: I1206 05:28:41.658043 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:41 crc kubenswrapper[4958]: I1206 05:28:41.658120 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:41 crc kubenswrapper[4958]: I1206 05:28:41.658131 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:41 crc kubenswrapper[4958]: I1206 05:28:41.658152 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:41 crc kubenswrapper[4958]: I1206 05:28:41.658165 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:41Z","lastTransitionTime":"2025-12-06T05:28:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:28:41 crc kubenswrapper[4958]: I1206 05:28:41.760996 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 05:28:41 crc kubenswrapper[4958]: E1206 05:28:41.761125 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 05:28:41 crc kubenswrapper[4958]: I1206 05:28:41.761443 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:41 crc kubenswrapper[4958]: I1206 05:28:41.761547 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 05:28:41 crc kubenswrapper[4958]: I1206 05:28:41.763825 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:41 crc kubenswrapper[4958]: I1206 05:28:41.764162 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:41 crc kubenswrapper[4958]: I1206 05:28:41.764246 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:41 crc kubenswrapper[4958]: I1206 05:28:41.764268 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:41Z","lastTransitionTime":"2025-12-06T05:28:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:41 crc kubenswrapper[4958]: I1206 05:28:41.761611 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 05:28:41 crc kubenswrapper[4958]: E1206 05:28:41.763972 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 05:28:41 crc kubenswrapper[4958]: E1206 05:28:41.764431 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 05:28:41 crc kubenswrapper[4958]: I1206 05:28:41.867270 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:41 crc kubenswrapper[4958]: I1206 05:28:41.867397 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:41 crc kubenswrapper[4958]: I1206 05:28:41.867419 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:41 crc kubenswrapper[4958]: I1206 05:28:41.867445 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:41 crc kubenswrapper[4958]: I1206 05:28:41.867466 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:41Z","lastTransitionTime":"2025-12-06T05:28:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:41 crc kubenswrapper[4958]: I1206 05:28:41.970437 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:41 crc kubenswrapper[4958]: I1206 05:28:41.970544 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:41 crc kubenswrapper[4958]: I1206 05:28:41.970563 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:41 crc kubenswrapper[4958]: I1206 05:28:41.970585 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:41 crc kubenswrapper[4958]: I1206 05:28:41.970601 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:41Z","lastTransitionTime":"2025-12-06T05:28:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:28:42 crc kubenswrapper[4958]: I1206 05:28:42.073681 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:42 crc kubenswrapper[4958]: I1206 05:28:42.073761 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:42 crc kubenswrapper[4958]: I1206 05:28:42.073795 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:42 crc kubenswrapper[4958]: I1206 05:28:42.073821 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:42 crc kubenswrapper[4958]: I1206 05:28:42.073841 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:42Z","lastTransitionTime":"2025-12-06T05:28:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:42 crc kubenswrapper[4958]: I1206 05:28:42.177270 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:42 crc kubenswrapper[4958]: I1206 05:28:42.177398 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:42 crc kubenswrapper[4958]: I1206 05:28:42.177415 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:42 crc kubenswrapper[4958]: I1206 05:28:42.177443 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:42 crc kubenswrapper[4958]: I1206 05:28:42.177463 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:42Z","lastTransitionTime":"2025-12-06T05:28:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:42 crc kubenswrapper[4958]: I1206 05:28:42.280430 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:42 crc kubenswrapper[4958]: I1206 05:28:42.280599 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:42 crc kubenswrapper[4958]: I1206 05:28:42.280625 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:42 crc kubenswrapper[4958]: I1206 05:28:42.280655 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:42 crc kubenswrapper[4958]: I1206 05:28:42.280679 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:42Z","lastTransitionTime":"2025-12-06T05:28:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:28:42 crc kubenswrapper[4958]: I1206 05:28:42.383349 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:42 crc kubenswrapper[4958]: I1206 05:28:42.383419 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:42 crc kubenswrapper[4958]: I1206 05:28:42.383443 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:42 crc kubenswrapper[4958]: I1206 05:28:42.383514 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:42 crc kubenswrapper[4958]: I1206 05:28:42.383547 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:42Z","lastTransitionTime":"2025-12-06T05:28:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:42 crc kubenswrapper[4958]: I1206 05:28:42.486271 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:42 crc kubenswrapper[4958]: I1206 05:28:42.486437 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:42 crc kubenswrapper[4958]: I1206 05:28:42.486464 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:42 crc kubenswrapper[4958]: I1206 05:28:42.486546 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:42 crc kubenswrapper[4958]: I1206 05:28:42.486571 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:42Z","lastTransitionTime":"2025-12-06T05:28:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:42 crc kubenswrapper[4958]: I1206 05:28:42.589245 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:42 crc kubenswrapper[4958]: I1206 05:28:42.589329 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:42 crc kubenswrapper[4958]: I1206 05:28:42.589348 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:42 crc kubenswrapper[4958]: I1206 05:28:42.589375 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:42 crc kubenswrapper[4958]: I1206 05:28:42.589399 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:42Z","lastTransitionTime":"2025-12-06T05:28:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:28:42 crc kubenswrapper[4958]: I1206 05:28:42.634994 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-j25xk"] Dec 06 05:28:42 crc kubenswrapper[4958]: I1206 05:28:42.636133 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-j25xk" Dec 06 05:28:42 crc kubenswrapper[4958]: I1206 05:28:42.639266 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Dec 06 05:28:42 crc kubenswrapper[4958]: I1206 05:28:42.639258 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Dec 06 05:28:42 crc kubenswrapper[4958]: I1206 05:28:42.663780 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f69a7b70-83e2-4775-872c-cc414f3d08ef-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-j25xk\" (UID: \"f69a7b70-83e2-4775-872c-cc414f3d08ef\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-j25xk" Dec 06 05:28:42 crc kubenswrapper[4958]: I1206 05:28:42.663891 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f69a7b70-83e2-4775-872c-cc414f3d08ef-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-j25xk\" (UID: \"f69a7b70-83e2-4775-872c-cc414f3d08ef\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-j25xk" Dec 06 05:28:42 crc kubenswrapper[4958]: I1206 05:28:42.663935 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f69a7b70-83e2-4775-872c-cc414f3d08ef-env-overrides\") pod \"ovnkube-control-plane-749d76644c-j25xk\" (UID: \"f69a7b70-83e2-4775-872c-cc414f3d08ef\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-j25xk" Dec 06 05:28:42 crc kubenswrapper[4958]: I1206 05:28:42.663967 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qktwz\" (UniqueName: \"kubernetes.io/projected/f69a7b70-83e2-4775-872c-cc414f3d08ef-kube-api-access-qktwz\") pod \"ovnkube-control-plane-749d76644c-j25xk\" (UID: \"f69a7b70-83e2-4775-872c-cc414f3d08ef\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-j25xk" Dec 06 05:28:42 crc kubenswrapper[4958]: I1206 05:28:42.674048 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3942544-663d-452b-851f-7ae1951315d0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c605d49963ffec3a03f9166f2a4b945adca772c1e6f9ef0c43e3b43498bb520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fa2e5229064a9479f641c0c6d934e8eda769ee88be13401c43ae136ce8bc24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0fa5b094aed0445fa892dd4d08fa83de565bda82c6ca9b9d536a3275df19b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9356ce9f50a037c7bbb99085c8d3a1cf6648fca
53bed4eb86d1be5cb82386bfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bb5270cbb6f2e29e1393dc0724b38332f5d2b7bd2947352c848a22efbcd979c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4460d6ee5ad193a5fdd65c4ddf2e2837db3e1a0a5f8f32b84ad83cc74cc6c779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4460d6ee5ad193a5fdd65c4ddf2e2837db3e1a0a5f8f32b84ad83cc74cc6c779\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d0adb65980c4e04df9b38b248a2d6bf58ac214ec2fe6ee6f8fbee38dc114904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d0adb65980c4e04df9b38b248a2d6bf58ac214ec2fe6ee6f8fbee38dc114904\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b4a1b86d7480be2b839295c4c7dc60abd79e17d12514d7a7f90453498dd586e5\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4a1b86d7480be2b839295c4c7dc60abd79e17d12514d7a7f90453498dd586e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:42 crc kubenswrapper[4958]: I1206 05:28:42.691166 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:42 crc kubenswrapper[4958]: I1206 05:28:42.691464 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:42 crc kubenswrapper[4958]: I1206 05:28:42.691513 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:42 crc kubenswrapper[4958]: I1206 05:28:42.691521 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:42 crc kubenswrapper[4958]: I1206 05:28:42.691534 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:42 crc kubenswrapper[4958]: I1206 05:28:42.691543 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:42Z","lastTransitionTime":"2025-12-06T05:28:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:28:42 crc kubenswrapper[4958]: I1206 05:28:42.702909 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bxxwq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"802ce14e-baaf-4d0d-87e3-3457209d8cda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://691435e89fcfa5b2dbd5ee54379f791a3e4f4df43c500a83bc61f7b20aca7ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlz6h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bxxwq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:42 crc kubenswrapper[4958]: I1206 05:28:42.716658 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-j25xk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f69a7b70-83e2-4775-872c-cc414f3d08ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qktwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qktwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-j25xk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:42 crc kubenswrapper[4958]: I1206 05:28:42.731723 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:42 crc kubenswrapper[4958]: I1206 05:28:42.741174 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:42 crc kubenswrapper[4958]: I1206 05:28:42.749595 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5mx5v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ab56cd4-7270-4252-b6e6-cbc102b84d97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-448jg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod 
\"openshift-dns\"/\"node-resolver-5mx5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:42 crc kubenswrapper[4958]: I1206 05:28:42.760452 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wr7h5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24bb15c5aabe30ec2b8cea2cfd92c4a99c8e2170aecfe6cf33f745ecccf32c42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-778ks
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wr7h5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:42 crc kubenswrapper[4958]: I1206 05:28:42.765320 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f69a7b70-83e2-4775-872c-cc414f3d08ef-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-j25xk\" (UID: \"f69a7b70-83e2-4775-872c-cc414f3d08ef\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-j25xk" Dec 06 05:28:42 crc kubenswrapper[4958]: I1206 05:28:42.765360 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f69a7b70-83e2-4775-872c-cc414f3d08ef-env-overrides\") pod \"ovnkube-control-plane-749d76644c-j25xk\" (UID: \"f69a7b70-83e2-4775-872c-cc414f3d08ef\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-j25xk" Dec 06 05:28:42 crc kubenswrapper[4958]: I1206 05:28:42.765386 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qktwz\" (UniqueName: \"kubernetes.io/projected/f69a7b70-83e2-4775-872c-cc414f3d08ef-kube-api-access-qktwz\") pod \"ovnkube-control-plane-749d76644c-j25xk\" (UID: \"f69a7b70-83e2-4775-872c-cc414f3d08ef\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-j25xk" Dec 06 05:28:42 crc kubenswrapper[4958]: I1206 05:28:42.765451 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f69a7b70-83e2-4775-872c-cc414f3d08ef-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-j25xk\" (UID: \"f69a7b70-83e2-4775-872c-cc414f3d08ef\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-j25xk" Dec 06 05:28:42 crc kubenswrapper[4958]: I1206 05:28:42.767266 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/f69a7b70-83e2-4775-872c-cc414f3d08ef-env-overrides\") pod \"ovnkube-control-plane-749d76644c-j25xk\" (UID: \"f69a7b70-83e2-4775-872c-cc414f3d08ef\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-j25xk" Dec 06 05:28:42 crc kubenswrapper[4958]: I1206 05:28:42.767338 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/f69a7b70-83e2-4775-872c-cc414f3d08ef-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-j25xk\" (UID: \"f69a7b70-83e2-4775-872c-cc414f3d08ef\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-j25xk" Dec 06 05:28:42 crc kubenswrapper[4958]: I1206 05:28:42.773288 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/f69a7b70-83e2-4775-872c-cc414f3d08ef-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-j25xk\" (UID: \"f69a7b70-83e2-4775-872c-cc414f3d08ef\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-j25xk" Dec 06 05:28:42 crc kubenswrapper[4958]: I1206 05:28:42.775783 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bbf3a5d-dfb7-4f26-a3a5-5a1198cffc78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ac8c174d2f2cc2b18fa67b1969e4ec3130a7a3ea2e1de302b6607334c8295a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a49f9fd54866385a2dd4c0218a2b5a828817807c2688301ea29004ba0578d556\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://799383b7dc23318bd8bdb994e83afadc90864baf5e694f5a7ef2208c6771660c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\
"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c9829c79fbd1ef57f374e8bf829162c2ae48074f4d02cc609d5f9b8791c61a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b2ffa013378c2ca81671e521b37d30af422fd6a418d17e7c2ca14b2da5557d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4001890a8c107a9365e884bae7b1d10f11ea1208ada74b61dcd4cf3db767f3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4001890a8c107a9365e884bae7b1d10f11ea1208ada74b61dcd4cf3db767f3fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:42 crc kubenswrapper[4958]: I1206 05:28:42.794251 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e7d0d99-10e2-4254-82b5-5490222f80fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f512fa9922310920769bab096c2bb8b2305a17dded545923c16b5e417db50b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dddeac64993fef3f326c165eeeb95bb4b5b9961f07f2b935e592b5aac9a51a32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50e80d39fcfd69db193b9e6c98c92a60983dd0c62b061a69f045193110d159c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc69c938c136778135bd2a90d1671af107b76c4571f8e52327c98f120665d7b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:42 crc kubenswrapper[4958]: I1206 05:28:42.795538 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:42 crc kubenswrapper[4958]: I1206 05:28:42.795594 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:42 crc kubenswrapper[4958]: I1206 05:28:42.795611 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:42 crc kubenswrapper[4958]: I1206 05:28:42.795634 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:42 crc kubenswrapper[4958]: I1206 05:28:42.795653 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:42Z","lastTransitionTime":"2025-12-06T05:28:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:28:42 crc kubenswrapper[4958]: I1206 05:28:42.797830 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qktwz\" (UniqueName: \"kubernetes.io/projected/f69a7b70-83e2-4775-872c-cc414f3d08ef-kube-api-access-qktwz\") pod \"ovnkube-control-plane-749d76644c-j25xk\" (UID: \"f69a7b70-83e2-4775-872c-cc414f3d08ef\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-j25xk" Dec 06 05:28:42 crc kubenswrapper[4958]: I1206 05:28:42.806011 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:42 crc kubenswrapper[4958]: I1206 05:28:42.806069 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:42 crc kubenswrapper[4958]: I1206 05:28:42.806089 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:42 crc kubenswrapper[4958]: I1206 05:28:42.806113 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:42 crc kubenswrapper[4958]: I1206 05:28:42.806136 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:42Z","lastTransitionTime":"2025-12-06T05:28:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:42 crc kubenswrapper[4958]: E1206 05:28:42.821713 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:28:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:28:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:28:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:28:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ae601d9-1e3a-4939-b6d5-fbef7be2f380\\\",\\\"systemUUID\\\":\\\"d8c60597-c05a-4627-8199-844ddb77ec1c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:42 crc kubenswrapper[4958]: I1206 05:28:42.824747 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cxvtt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"48b90387-4ea9-47a3-8aa1-b8b19c5ed186\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce9f0a37477909d730255694146829d5d4527e3c3c3928bdd475b58506bbd544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d3ce1b34e648dbc34656fb0430468d6a998992cdb52f7858c3e9032781cabf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d3ce1b34e648dbc34656fb0430468d6a998992cdb52f7858c3e9032781cabf3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45259e88c3c2b9523b20a34f02fb59aa21e487dd207c8cb6e29ac571d89cd75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45259e88c3c2b9523b20a34f02fb59aa21e487dd207c8cb6e29ac571d89cd75c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5907422cf6b36b8d77741d339e507eee8a26c62b51c03458fbab243363441b49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5907422cf6b36b8d77741d339e507eee8a26c62b51c03458fbab243363441b49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdba082661b1dd38d5628fedac7afcea2e2960d502c61b9c21c723c0f6be8dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdba082661b1dd38d5628fedac7afcea2e2960d502c61b9c21c723c0f6be8dab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afdd138a15c3949f3eb5cdcac4d372fe8db3f1822f9fa33eb7f211ba93764903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afdd138a15c3949f3eb5cdcac4d372fe8db3f1822f9fa33eb7f211ba93764903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09ba0a78226b9b3b3b12202389168f7e17a1f2ba992a22dc9d436d29aac0d327\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09ba0a78226b9b3b3b12202389168f7e17a1f2ba992a22dc9d436d29aac0d327\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cxvtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:42 crc kubenswrapper[4958]: I1206 05:28:42.826735 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:42 crc kubenswrapper[4958]: I1206 05:28:42.826787 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:42 crc kubenswrapper[4958]: I1206 05:28:42.826803 4958 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 06 05:28:42 crc kubenswrapper[4958]: I1206 05:28:42.826823 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:42 crc kubenswrapper[4958]: I1206 05:28:42.826838 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:42Z","lastTransitionTime":"2025-12-06T05:28:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:42 crc kubenswrapper[4958]: E1206 05:28:42.843063 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:28:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:28:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:28:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:28:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ae601d9-1e3a-4939-b6d5-fbef7be2f380\\\",\\\"systemUUID\\\":\\\"d8c60597-c05a-4627-8199-844ddb77ec1c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:42 crc kubenswrapper[4958]: I1206 05:28:42.845171 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c75c3b8-96d9-442e-b3c4-92d10ad33929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c7cf12b8a7e5da199c6d8c5bdeb41243c911cef6cab0bac81e407502a8ff144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6e7454c626f79f7ef231e33e2917e2a65265fc21ccf0bb078c9b7f5b3cec240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f87617d1f4b9c7ad893a539ddf123f3e213d38e44c084441f565d6535078c7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f6d7aa9cdfe8211f420e0a25851c58e125bd09b9d90ad4724703cc1112e45b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://473d669d19d2507144f95718dc331ea994a36254f7f7b6e2b262a0e9cb46bfc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ac1117aba254b7e048d9dd3feff2587d218f6d7a6f4ccbe455eb2b19e534aa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f8c4a1bb980b8f4aed1959a29433c3e49207194ecb3a563a95441a99cceab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90354a506bfb2c066060aaa6c9c48c8fe36acc7b3fb7f76905c7aa2622c8c93e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f16219fb96bfa08d1c4e1fbacac446f07ea75a221d1a6a4eddadd9e587f75ea9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f16219fb96bfa08d1c4e1fbacac446f07ea75a221d1a6a4eddadd9e587f75ea9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-f4swt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:42 crc kubenswrapper[4958]: I1206 05:28:42.848555 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:42 crc kubenswrapper[4958]: I1206 05:28:42.848615 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:42 crc kubenswrapper[4958]: I1206 05:28:42.848665 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:42 crc kubenswrapper[4958]: I1206 05:28:42.848686 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:42 crc kubenswrapper[4958]: I1206 05:28:42.848701 
4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:42Z","lastTransitionTime":"2025-12-06T05:28:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:42 crc kubenswrapper[4958]: E1206 05:28:42.860999 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:28:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:28:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:28:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:28:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ae601d9-1e3a-4939-b6d5-fbef7be2f380\\\",\\\"systemUUID\\\":\\\"d8c60597-c05a-4627-8199-844ddb77ec1c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:42 crc kubenswrapper[4958]: I1206 05:28:42.863309 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:42 crc kubenswrapper[4958]: I1206 05:28:42.866548 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:42 crc kubenswrapper[4958]: I1206 05:28:42.866616 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:42 crc kubenswrapper[4958]: I1206 05:28:42.866642 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:42 crc kubenswrapper[4958]: I1206 05:28:42.866672 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:42 crc kubenswrapper[4958]: I1206 05:28:42.866697 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:42Z","lastTransitionTime":"2025-12-06T05:28:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:28:42 crc kubenswrapper[4958]: E1206 05:28:42.880182 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:28:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:28:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:28:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:28:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ae601d9-1e3a-4939-b6d5-fbef7be2f380\\\",\\\"systemUUID\\\":\\\"d8c60597-c05a-4627-8199-844ddb77ec1c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:42 crc kubenswrapper[4958]: I1206 05:28:42.882116 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:42 crc kubenswrapper[4958]: I1206 05:28:42.887055 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:42 crc kubenswrapper[4958]: I1206 05:28:42.887090 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:42 crc kubenswrapper[4958]: I1206 05:28:42.887101 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:42 crc kubenswrapper[4958]: I1206 05:28:42.887115 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:42 crc kubenswrapper[4958]: I1206 05:28:42.887126 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:42Z","lastTransitionTime":"2025-12-06T05:28:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:28:42 crc kubenswrapper[4958]: I1206 05:28:42.900337 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:42 crc kubenswrapper[4958]: E1206 05:28:42.903144 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:28:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:28:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:42Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:28:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:42Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:28:42Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:42Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ae601d9-1e3a-4939-b6d5-fbef7be2f380\\\",\\\"systemUUID\\\":\\\"d8c60597-c05a-4627-8199-844ddb77ec1c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:42 crc kubenswrapper[4958]: E1206 05:28:42.903815 4958 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 06 05:28:42 crc kubenswrapper[4958]: I1206 05:28:42.905983 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:42 crc kubenswrapper[4958]: I1206 05:28:42.906011 4958 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:42 crc kubenswrapper[4958]: I1206 05:28:42.906022 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:42 crc kubenswrapper[4958]: I1206 05:28:42.906038 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:42 crc kubenswrapper[4958]: I1206 05:28:42.906050 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:42Z","lastTransitionTime":"2025-12-06T05:28:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:42 crc kubenswrapper[4958]: I1206 05:28:42.913027 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c13528c0-da5d-4d55-9155-2c29c33edfc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5ktnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:42 crc kubenswrapper[4958]: I1206 05:28:42.951212 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-j25xk" Dec 06 05:28:42 crc kubenswrapper[4958]: W1206 05:28:42.970076 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf69a7b70_83e2_4775_872c_cc414f3d08ef.slice/crio-4106add08931697825ac9b31e7ccab5548763535d1e536bedb2f5fe9c52f7b41 WatchSource:0}: Error finding container 4106add08931697825ac9b31e7ccab5548763535d1e536bedb2f5fe9c52f7b41: Status 404 returned error can't find the container with id 4106add08931697825ac9b31e7ccab5548763535d1e536bedb2f5fe9c52f7b41 Dec 06 05:28:42 crc kubenswrapper[4958]: I1206 05:28:42.996082 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-j25xk" event={"ID":"f69a7b70-83e2-4775-872c-cc414f3d08ef","Type":"ContainerStarted","Data":"4106add08931697825ac9b31e7ccab5548763535d1e536bedb2f5fe9c52f7b41"} Dec 06 05:28:43 crc kubenswrapper[4958]: I1206 05:28:43.008631 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:43 crc kubenswrapper[4958]: I1206 05:28:43.008746 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:43 crc kubenswrapper[4958]: I1206 05:28:43.008772 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:43 crc kubenswrapper[4958]: I1206 05:28:43.008804 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:43 crc kubenswrapper[4958]: I1206 05:28:43.008829 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:43Z","lastTransitionTime":"2025-12-06T05:28:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:43 crc kubenswrapper[4958]: I1206 05:28:43.111751 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:43 crc kubenswrapper[4958]: I1206 05:28:43.111792 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:43 crc kubenswrapper[4958]: I1206 05:28:43.111806 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:43 crc kubenswrapper[4958]: I1206 05:28:43.111823 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:43 crc kubenswrapper[4958]: I1206 05:28:43.111834 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:43Z","lastTransitionTime":"2025-12-06T05:28:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:28:43 crc kubenswrapper[4958]: I1206 05:28:43.214596 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:43 crc kubenswrapper[4958]: I1206 05:28:43.214651 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:43 crc kubenswrapper[4958]: I1206 05:28:43.214670 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:43 crc kubenswrapper[4958]: I1206 05:28:43.214693 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:43 crc kubenswrapper[4958]: I1206 05:28:43.214710 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:43Z","lastTransitionTime":"2025-12-06T05:28:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:43 crc kubenswrapper[4958]: I1206 05:28:43.317317 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:43 crc kubenswrapper[4958]: I1206 05:28:43.317381 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:43 crc kubenswrapper[4958]: I1206 05:28:43.317397 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:43 crc kubenswrapper[4958]: I1206 05:28:43.317422 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:43 crc kubenswrapper[4958]: I1206 05:28:43.317439 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:43Z","lastTransitionTime":"2025-12-06T05:28:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:43 crc kubenswrapper[4958]: I1206 05:28:43.354116 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-kb98t"] Dec 06 05:28:43 crc kubenswrapper[4958]: I1206 05:28:43.354943 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kb98t" Dec 06 05:28:43 crc kubenswrapper[4958]: E1206 05:28:43.355132 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kb98t" podUID="2c09fca2-7d91-412a-9814-64370d35b3e9" Dec 06 05:28:43 crc kubenswrapper[4958]: I1206 05:28:43.370090 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e7d0d99-10e2-4254-82b5-5490222f80fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f512fa9922310920769bab096c2bb8b2305a17dded545923c16b5e417db50b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dddeac64993fef3f326c165eeeb95bb4b5b9961f07f2b935e592b5aac9a51a32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50e80d39fcfd69db193b9e6c98c92a60983dd0c62b061a69f045193110d159c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-re
sources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc69c938c136778135bd2a90d1671af107b76c4571f8e52327c98f120665d7b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:43 crc kubenswrapper[4958]: I1206 05:28:43.370606 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pscv\" (UniqueName: \"kubernetes.io/projected/2c09fca2-7d91-412a-9814-64370d35b3e9-kube-api-access-2pscv\") pod \"network-metrics-daemon-kb98t\" (UID: \"2c09fca2-7d91-412a-9814-64370d35b3e9\") " pod="openshift-multus/network-metrics-daemon-kb98t" Dec 06 05:28:43 crc kubenswrapper[4958]: I1206 05:28:43.370709 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2c09fca2-7d91-412a-9814-64370d35b3e9-metrics-certs\") pod \"network-metrics-daemon-kb98t\" (UID: \"2c09fca2-7d91-412a-9814-64370d35b3e9\") " pod="openshift-multus/network-metrics-daemon-kb98t" Dec 06 05:28:43 crc kubenswrapper[4958]: I1206 05:28:43.384562 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bbf3a5d-dfb7-4f26-a3a5-5a1198cffc78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ac8c174d2f2cc2b18fa67b1969e4ec3130a7a3ea2e1de302b6607334c8295a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a49f9fd54866385a2dd4c0218a2b5a828817807c2688301ea29004ba0578d556\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://799383b7dc23318bd8bdb994e83afadc90864baf5e694f5a7ef2208c6771660c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c9829c79fbd1ef57f374e8bf829162c2ae48074f4d02cc609d5f9b8791c61a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b2ffa013378c2ca81671e521b37d30af422fd6a418d17e7c2ca14b2da5557d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4001890a8c107a9365e884bae7b1d10f11ea1208ada74b61dcd4cf3db767f3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4001890a8c107a9365e884bae7b1d10f11ea1208ada74b61dcd4cf3db767f3fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:43 crc kubenswrapper[4958]: I1206 05:28:43.402211 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:43 crc kubenswrapper[4958]: I1206 05:28:43.415630 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:43 crc kubenswrapper[4958]: I1206 05:28:43.419843 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:43 crc kubenswrapper[4958]: I1206 05:28:43.419886 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:43 crc kubenswrapper[4958]: I1206 05:28:43.419898 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:43 crc kubenswrapper[4958]: I1206 05:28:43.419915 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:43 crc kubenswrapper[4958]: I1206 05:28:43.419927 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:43Z","lastTransitionTime":"2025-12-06T05:28:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:43 crc kubenswrapper[4958]: I1206 05:28:43.433892 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:43 crc kubenswrapper[4958]: I1206 05:28:43.447752 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c13528c0-da5d-4d55-9155-2c29c33edfc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5ktnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:43 crc kubenswrapper[4958]: I1206 05:28:43.463984 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cxvtt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"48b90387-4ea9-47a3-8aa1-b8b19c5ed186\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce9f0a37477909d730255694146829d5d4527e3c3c3928bdd475b58506bbd544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d3ce1b34e648dbc34656fb0430468d6a998992cdb52f7858c3e9032781cabf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d3ce1b34e648dbc34656fb0430468d6a998992cdb52f7858c3e9032781cabf3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45259e88c3c2b9523b20a34f02fb59aa21e487dd207c8cb6e29ac571d89cd75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45259e88c3c2b9523b20a34f02fb59aa21e487dd207c8cb6e29ac571d89cd75c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5907422cf6b36b8d77741d339e507eee8a26c62b51c03458fbab243363441b49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5907422cf6b36b8d77741d339e507eee8a26c62b51c03458fbab243363441b49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdba082661b1dd38d5628fedac7afcea2e2960d502c61b9c21c723c0f6be8dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdba082661b1dd38d5628fedac7afcea2e2960d502c61b9c21c723c0f6be8dab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afdd138a15c3949f3eb5cdcac4d372fe8db3f1822f9fa33eb7f211ba93764903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afdd138a15c3949f3eb5cdcac4d372fe8db3f1822f9fa33eb7f211ba93764903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09ba0a78226b9b3b3b12202389168f7e17a1f2ba992a22dc9d436d29aac0d327\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09ba0a78226b9b3b3b12202389168f7e17a1f2ba992a22dc9d436d29aac0d327\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cxvtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:43 crc kubenswrapper[4958]: I1206 05:28:43.474113 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2c09fca2-7d91-412a-9814-64370d35b3e9-metrics-certs\") pod \"network-metrics-daemon-kb98t\" (UID: \"2c09fca2-7d91-412a-9814-64370d35b3e9\") " pod="openshift-multus/network-metrics-daemon-kb98t" Dec 06 05:28:43 crc kubenswrapper[4958]: I1206 05:28:43.474353 4958 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pscv\" (UniqueName: \"kubernetes.io/projected/2c09fca2-7d91-412a-9814-64370d35b3e9-kube-api-access-2pscv\") pod \"network-metrics-daemon-kb98t\" (UID: \"2c09fca2-7d91-412a-9814-64370d35b3e9\") " pod="openshift-multus/network-metrics-daemon-kb98t" Dec 06 05:28:43 crc kubenswrapper[4958]: E1206 05:28:43.475040 4958 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 06 05:28:43 crc kubenswrapper[4958]: E1206 05:28:43.475136 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2c09fca2-7d91-412a-9814-64370d35b3e9-metrics-certs podName:2c09fca2-7d91-412a-9814-64370d35b3e9 nodeName:}" failed. No retries permitted until 2025-12-06 05:28:43.975101373 +0000 UTC m=+34.508872176 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2c09fca2-7d91-412a-9814-64370d35b3e9-metrics-certs") pod "network-metrics-daemon-kb98t" (UID: "2c09fca2-7d91-412a-9814-64370d35b3e9") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 06 05:28:43 crc kubenswrapper[4958]: I1206 05:28:43.481117 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c75c3b8-96d9-442e-b3c4-92d10ad33929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c7cf12b8a7e5da199c6d8c5bdeb41243c911cef6cab0bac81e407502a8ff144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6e7454c626f79f7ef231e33e2917e2a65265fc21ccf0bb078c9b7f5b3cec240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f87617d1f4b9c7ad893a539ddf123f3e213d38e44c084441f565d6535078c7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f6d7aa9cdfe8211f420e0a25851c58e125bd09b9d90ad4724703cc1112e45b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://473d669d19d2507144f95718dc331ea994a36254f7f7b6e2b262a0e9cb46bfc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ac1117aba254b7e048d9dd3feff2587d218f6d7a6f4ccbe455eb2b19e534aa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f8c4a1bb980b8f4aed1959a29433c3e4920719
4ecb3a563a95441a99cceab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90354a506bfb2c066060aaa6c9c48c8fe36acc7b3fb7f76905c7aa2622c8c93e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f16219fb96bfa08d1c4e1fbacac446f07ea75a221d1a6a4eddadd9e587f75ea9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f16219fb96bfa08d1c4e1fbacac446f07ea75a221d1a6a4eddadd9e587f75ea9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-f4swt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:43 crc kubenswrapper[4958]: I1206 05:28:43.490873 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kb98t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c09fca2-7d91-412a-9814-64370d35b3e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pscv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pscv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kb98t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:43 crc kubenswrapper[4958]: I1206 05:28:43.493233 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pscv\" (UniqueName: \"kubernetes.io/projected/2c09fca2-7d91-412a-9814-64370d35b3e9-kube-api-access-2pscv\") pod \"network-metrics-daemon-kb98t\" (UID: \"2c09fca2-7d91-412a-9814-64370d35b3e9\") " pod="openshift-multus/network-metrics-daemon-kb98t" Dec 06 05:28:43 crc kubenswrapper[4958]: I1206 05:28:43.501748 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:43 crc kubenswrapper[4958]: I1206 05:28:43.508595 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bxxwq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"802ce14e-baaf-4d0d-87e3-3457209d8cda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://691435e89fcfa5b2dbd5ee54379f791a3e4f4df43c500a83bc61f7b20aca7ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlz6h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"p
odIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bxxwq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:43 crc kubenswrapper[4958]: I1206 05:28:43.523459 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:43 crc kubenswrapper[4958]: I1206 05:28:43.523538 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:43 crc kubenswrapper[4958]: I1206 05:28:43.523462 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-j25xk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f69a7b70-83e2-4775-872c-cc414f3d08ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qktwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qktwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-j25xk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:43 crc kubenswrapper[4958]: I1206 05:28:43.523552 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:43 crc kubenswrapper[4958]: I1206 05:28:43.524696 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:43 crc kubenswrapper[4958]: I1206 05:28:43.524790 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:43Z","lastTransitionTime":"2025-12-06T05:28:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:28:43 crc kubenswrapper[4958]: I1206 05:28:43.546979 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3942544-663d-452b-851f-7ae1951315d0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c605d49963ffec3a03f9166f2a4b945adca772c1e6f9ef0c43e3b43498bb520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fa2e5229064a9479f641c0c6d934e8eda769ee88be13401c43ae136ce8bc24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0fa5b094aed0445fa892dd4d08fa83de565bda82c6ca9b9d536a3275df19b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9356ce9f50a037c7bbb99085c8d3a1cf6648fca53bed4eb86d1be5cb82386bfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bb5270cbb6f2e29e1393dc0724b38332f5d2b7bd2947352c848a22efbcd979c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4460d6ee5ad193a5fdd65c4ddf2e2837db3e1a0a5f8f32b84ad83cc74cc6c779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4460d6ee5ad193a5fdd65c4ddf2e2837db3e1a0a5f8f32b84ad83cc74cc6c779\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d0adb65980c4e04df9b38b248a2d6bf58ac214ec2fe6ee6f8fbee38dc114904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d0adb65980c4e04df9b38b248a2d6bf58ac214ec2fe6ee6f8fbee38dc114904\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2025-12-06T05:28:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b4a1b86d7480be2b839295c4c7dc60abd79e17d12514d7a7f90453498dd586e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4a1b86d7480be2b839295c4c7dc60abd79e17d12514d7a7f90453498dd586e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:43 crc kubenswrapper[4958]: I1206 05:28:43.557650 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:43 crc kubenswrapper[4958]: I1206 05:28:43.566114 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5mx5v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ab56cd4-7270-4252-b6e6-cbc102b84d97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-448jg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5mx5v\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:43 crc kubenswrapper[4958]: I1206 05:28:43.576631 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wr7h5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24bb15c5aabe30ec2b8cea2cfd92c4a99c8e2170aecfe6cf33f745ecccf32c42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-778ks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wr7h5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:43 crc kubenswrapper[4958]: I1206 05:28:43.586042 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 06 05:28:43 crc kubenswrapper[4958]: I1206 05:28:43.627925 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:43 crc kubenswrapper[4958]: I1206 05:28:43.627974 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:43 crc kubenswrapper[4958]: I1206 05:28:43.627985 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:43 crc kubenswrapper[4958]: I1206 05:28:43.628003 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:43 crc kubenswrapper[4958]: I1206 05:28:43.628015 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:43Z","lastTransitionTime":"2025-12-06T05:28:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:43 crc kubenswrapper[4958]: I1206 05:28:43.730868 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:43 crc kubenswrapper[4958]: I1206 05:28:43.730912 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:43 crc kubenswrapper[4958]: I1206 05:28:43.730927 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:43 crc kubenswrapper[4958]: I1206 05:28:43.730947 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:43 crc kubenswrapper[4958]: I1206 05:28:43.730963 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:43Z","lastTransitionTime":"2025-12-06T05:28:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:43 crc kubenswrapper[4958]: I1206 05:28:43.761418 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 05:28:43 crc kubenswrapper[4958]: I1206 05:28:43.761536 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 05:28:43 crc kubenswrapper[4958]: I1206 05:28:43.761591 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 05:28:43 crc kubenswrapper[4958]: E1206 05:28:43.761704 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 05:28:43 crc kubenswrapper[4958]: E1206 05:28:43.761801 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 05:28:43 crc kubenswrapper[4958]: E1206 05:28:43.761922 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 05:28:43 crc kubenswrapper[4958]: I1206 05:28:43.834024 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:43 crc kubenswrapper[4958]: I1206 05:28:43.834089 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:43 crc kubenswrapper[4958]: I1206 05:28:43.834109 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:43 crc kubenswrapper[4958]: I1206 05:28:43.834132 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:43 crc kubenswrapper[4958]: I1206 05:28:43.834147 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:43Z","lastTransitionTime":"2025-12-06T05:28:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:28:43 crc kubenswrapper[4958]: I1206 05:28:43.937290 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:43 crc kubenswrapper[4958]: I1206 05:28:43.937354 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:43 crc kubenswrapper[4958]: I1206 05:28:43.937369 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:43 crc kubenswrapper[4958]: I1206 05:28:43.937386 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:43 crc kubenswrapper[4958]: I1206 05:28:43.937408 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:43Z","lastTransitionTime":"2025-12-06T05:28:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:43 crc kubenswrapper[4958]: I1206 05:28:43.980402 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2c09fca2-7d91-412a-9814-64370d35b3e9-metrics-certs\") pod \"network-metrics-daemon-kb98t\" (UID: \"2c09fca2-7d91-412a-9814-64370d35b3e9\") " pod="openshift-multus/network-metrics-daemon-kb98t" Dec 06 05:28:43 crc kubenswrapper[4958]: E1206 05:28:43.980654 4958 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 06 05:28:43 crc kubenswrapper[4958]: E1206 05:28:43.980764 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2c09fca2-7d91-412a-9814-64370d35b3e9-metrics-certs podName:2c09fca2-7d91-412a-9814-64370d35b3e9 nodeName:}" failed. No retries permitted until 2025-12-06 05:28:44.980740516 +0000 UTC m=+35.514511299 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2c09fca2-7d91-412a-9814-64370d35b3e9-metrics-certs") pod "network-metrics-daemon-kb98t" (UID: "2c09fca2-7d91-412a-9814-64370d35b3e9") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 06 05:28:44 crc kubenswrapper[4958]: I1206 05:28:44.040055 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:44 crc kubenswrapper[4958]: I1206 05:28:44.040161 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:44 crc kubenswrapper[4958]: I1206 05:28:44.040173 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:44 crc kubenswrapper[4958]: I1206 05:28:44.040190 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:44 crc kubenswrapper[4958]: I1206 05:28:44.040202 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:44Z","lastTransitionTime":"2025-12-06T05:28:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:44 crc kubenswrapper[4958]: I1206 05:28:44.143216 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:44 crc kubenswrapper[4958]: I1206 05:28:44.143256 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:44 crc kubenswrapper[4958]: I1206 05:28:44.143267 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:44 crc kubenswrapper[4958]: I1206 05:28:44.143281 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:44 crc kubenswrapper[4958]: I1206 05:28:44.143290 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:44Z","lastTransitionTime":"2025-12-06T05:28:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:28:44 crc kubenswrapper[4958]: I1206 05:28:44.246809 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:44 crc kubenswrapper[4958]: I1206 05:28:44.246868 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:44 crc kubenswrapper[4958]: I1206 05:28:44.246883 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:44 crc kubenswrapper[4958]: I1206 05:28:44.246909 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:44 crc kubenswrapper[4958]: I1206 05:28:44.246923 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:44Z","lastTransitionTime":"2025-12-06T05:28:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:44 crc kubenswrapper[4958]: I1206 05:28:44.350103 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:44 crc kubenswrapper[4958]: I1206 05:28:44.350159 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:44 crc kubenswrapper[4958]: I1206 05:28:44.350171 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:44 crc kubenswrapper[4958]: I1206 05:28:44.350193 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:44 crc kubenswrapper[4958]: I1206 05:28:44.350212 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:44Z","lastTransitionTime":"2025-12-06T05:28:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:44 crc kubenswrapper[4958]: I1206 05:28:44.453069 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:44 crc kubenswrapper[4958]: I1206 05:28:44.453132 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:44 crc kubenswrapper[4958]: I1206 05:28:44.453158 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:44 crc kubenswrapper[4958]: I1206 05:28:44.453194 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:44 crc kubenswrapper[4958]: I1206 05:28:44.453214 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:44Z","lastTransitionTime":"2025-12-06T05:28:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:28:44 crc kubenswrapper[4958]: I1206 05:28:44.563393 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:44 crc kubenswrapper[4958]: I1206 05:28:44.563523 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:44 crc kubenswrapper[4958]: I1206 05:28:44.563546 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:44 crc kubenswrapper[4958]: I1206 05:28:44.563610 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:44 crc kubenswrapper[4958]: I1206 05:28:44.563629 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:44Z","lastTransitionTime":"2025-12-06T05:28:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:44 crc kubenswrapper[4958]: I1206 05:28:44.666565 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:44 crc kubenswrapper[4958]: I1206 05:28:44.666607 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:44 crc kubenswrapper[4958]: I1206 05:28:44.666624 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:44 crc kubenswrapper[4958]: I1206 05:28:44.666644 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:44 crc kubenswrapper[4958]: I1206 05:28:44.666659 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:44Z","lastTransitionTime":"2025-12-06T05:28:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:44 crc kubenswrapper[4958]: I1206 05:28:44.768833 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:44 crc kubenswrapper[4958]: I1206 05:28:44.768868 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:44 crc kubenswrapper[4958]: I1206 05:28:44.768879 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:44 crc kubenswrapper[4958]: I1206 05:28:44.768927 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:44 crc kubenswrapper[4958]: I1206 05:28:44.768940 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:44Z","lastTransitionTime":"2025-12-06T05:28:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:28:44 crc kubenswrapper[4958]: I1206 05:28:44.872855 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:44 crc kubenswrapper[4958]: I1206 05:28:44.872899 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:44 crc kubenswrapper[4958]: I1206 05:28:44.872932 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:44 crc kubenswrapper[4958]: I1206 05:28:44.872950 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:44 crc kubenswrapper[4958]: I1206 05:28:44.872962 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:44Z","lastTransitionTime":"2025-12-06T05:28:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:44 crc kubenswrapper[4958]: I1206 05:28:44.976731 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:44 crc kubenswrapper[4958]: I1206 05:28:44.976785 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:44 crc kubenswrapper[4958]: I1206 05:28:44.976800 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:44 crc kubenswrapper[4958]: I1206 05:28:44.976819 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:44 crc kubenswrapper[4958]: I1206 05:28:44.976830 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:44Z","lastTransitionTime":"2025-12-06T05:28:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:44 crc kubenswrapper[4958]: I1206 05:28:44.995424 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2c09fca2-7d91-412a-9814-64370d35b3e9-metrics-certs\") pod \"network-metrics-daemon-kb98t\" (UID: \"2c09fca2-7d91-412a-9814-64370d35b3e9\") " pod="openshift-multus/network-metrics-daemon-kb98t" Dec 06 05:28:44 crc kubenswrapper[4958]: E1206 05:28:44.995669 4958 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 06 05:28:44 crc kubenswrapper[4958]: E1206 05:28:44.995761 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2c09fca2-7d91-412a-9814-64370d35b3e9-metrics-certs podName:2c09fca2-7d91-412a-9814-64370d35b3e9 nodeName:}" failed. No retries permitted until 2025-12-06 05:28:46.995738212 +0000 UTC m=+37.529508985 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2c09fca2-7d91-412a-9814-64370d35b3e9-metrics-certs") pod "network-metrics-daemon-kb98t" (UID: "2c09fca2-7d91-412a-9814-64370d35b3e9") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 06 05:28:45 crc kubenswrapper[4958]: I1206 05:28:45.004628 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-5mx5v" event={"ID":"4ab56cd4-7270-4252-b6e6-cbc102b84d97","Type":"ContainerStarted","Data":"16ff54e7650f6ee3df33d0394fb87b8021695f1697bdf30239e59ab458bea64c"} Dec 06 05:28:45 crc kubenswrapper[4958]: I1206 05:28:45.006670 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" event={"ID":"c13528c0-da5d-4d55-9155-2c29c33edfc4","Type":"ContainerStarted","Data":"a5cd9b0b70572f52b50e426a747be79d9f46795a9f5e890f4c6837ba67a34188"} Dec 06 05:28:45 crc kubenswrapper[4958]: I1206 05:28:45.006730 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" event={"ID":"c13528c0-da5d-4d55-9155-2c29c33edfc4","Type":"ContainerStarted","Data":"fd2256c5c47e4b4d5c0d27e0a687c531d3d4e2162c3439a57acb90784599b441"} Dec 06 05:28:45 crc kubenswrapper[4958]: I1206 05:28:45.009227 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-j25xk" event={"ID":"f69a7b70-83e2-4775-872c-cc414f3d08ef","Type":"ContainerStarted","Data":"c146f2ee32c0a80c9bb07f7b61d094a36ba1bdd250b578479193dc8e50dc7cad"} Dec 06 05:28:45 crc kubenswrapper[4958]: I1206 05:28:45.009268 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-j25xk" event={"ID":"f69a7b70-83e2-4775-872c-cc414f3d08ef","Type":"ContainerStarted","Data":"3077eaf7e49467b9443569425f7666246ccd40207193f35407bd5ca69799be6b"} Dec 06 05:28:45 crc kubenswrapper[4958]: I1206 05:28:45.012707 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"85e394f19d7c9a454876e48621768d9f956790321aeea3140db17586f532e28c"} Dec 06 05:28:45 crc kubenswrapper[4958]: I1206 05:28:45.012747 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"f7cba9186696fbc52dacdd8c02c28a44f8e4f11e2bd39065212cddf3acdd7f2d"} Dec 06 05:28:45 crc kubenswrapper[4958]: I1206 05:28:45.022235 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:45Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:45 crc kubenswrapper[4958]: I1206 05:28:45.041667 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:45Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:45 crc kubenswrapper[4958]: I1206 05:28:45.069449 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:45Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:45 crc kubenswrapper[4958]: I1206 05:28:45.079142 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:45 crc kubenswrapper[4958]: I1206 05:28:45.079201 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:45 crc kubenswrapper[4958]: I1206 05:28:45.079214 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:45 crc kubenswrapper[4958]: I1206 05:28:45.079234 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:45 crc kubenswrapper[4958]: I1206 05:28:45.079252 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:45Z","lastTransitionTime":"2025-12-06T05:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:28:45 crc kubenswrapper[4958]: I1206 05:28:45.086825 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c13528c0-da5d-4d55-9155-2c29c33edfc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5ktnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:45Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:45 crc kubenswrapper[4958]: I1206 05:28:45.095733 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 05:28:45 crc kubenswrapper[4958]: I1206 05:28:45.095869 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 05:28:45 crc kubenswrapper[4958]: E1206 05:28:45.095964 4958 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 06 05:28:45 crc kubenswrapper[4958]: E1206 05:28:45.096446 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 05:29:01.095950766 +0000 UTC m=+51.629721539 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 05:28:45 crc kubenswrapper[4958]: E1206 05:28:45.096600 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-06 05:29:01.096583753 +0000 UTC m=+51.630354536 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 06 05:28:45 crc kubenswrapper[4958]: E1206 05:28:45.096830 4958 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 06 05:28:45 crc kubenswrapper[4958]: E1206 05:28:45.096849 4958 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 06 05:28:45 crc kubenswrapper[4958]: E1206 05:28:45.096860 4958 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 06 05:28:45 crc kubenswrapper[4958]: E1206 05:28:45.096925 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-06 05:29:01.096916181 +0000 UTC m=+51.630686944 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 06 05:28:45 crc kubenswrapper[4958]: I1206 05:28:45.097129 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 05:28:45 crc kubenswrapper[4958]: E1206 05:28:45.097398 4958 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 06 05:28:45 crc kubenswrapper[4958]: E1206 05:28:45.097416 4958 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 06 05:28:45 crc kubenswrapper[4958]: E1206 05:28:45.097424 4958 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 06 05:28:45 crc kubenswrapper[4958]: E1206 05:28:45.097454 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr 
podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-06 05:29:01.097446985 +0000 UTC m=+51.631217748 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 06 05:28:45 crc kubenswrapper[4958]: I1206 05:28:45.097219 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 05:28:45 crc kubenswrapper[4958]: I1206 05:28:45.097681 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 05:28:45 crc kubenswrapper[4958]: E1206 05:28:45.097846 4958 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 06 05:28:45 crc kubenswrapper[4958]: E1206 05:28:45.098155 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-06 05:29:01.098120463 +0000 UTC m=+51.631891236 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 06 05:28:45 crc kubenswrapper[4958]: I1206 05:28:45.102843 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cxvtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48b90387-4ea9-47a3-8aa1-b8b19c5ed186\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce9f0a37477909d730255694146829d5d4527e3c3c3928bdd475b58506bbd544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d3ce1b34e648dbc34656fb0430468d6a998992cdb52f7858c3e9032781cabf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d3ce1b34e648dbc34656fb0430468d6a998992cdb52f7858c3e9032781cabf3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},
{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45259e88c3c2b9523b20a34f02fb59aa21e487dd207c8cb6e29ac571d89cd75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45259e88c3c2b9523b20a34f02fb59aa21e487dd207c8cb6e29ac571d89cd75c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5907422cf6b36b8d77741d339e507eee8a26c62b51c03458fbab243363441b49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5907422cf6b36b8d77741d339e507eee8a26c62b51c03458fbab243363441b49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdba082661b1dd38d5628fedac7afcea2e2960d502c61b9c21c723c0f6be8dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdba082661b1dd38d5628fedac7afcea2e2960d5
02c61b9c21c723c0f6be8dab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afdd138a15c3949f3eb5cdcac4d372fe8db3f1822f9fa33eb7f211ba93764903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afdd138a15c3949f3eb5cdcac4d372fe8db3f1822f9fa33eb7f211ba93764903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09ba0a78226b9b3b3b12202389168f7e17a1f2ba992a22dc9d436d29aac0d327\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09ba0a78226b9b3b3b12202389168f7e17a1f2ba992a22dc9d436d29aac0d327\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cxvtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:45Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:45 crc kubenswrapper[4958]: I1206 05:28:45.120011 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c75c3b8-96d9-442e-b3c4-92d10ad33929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c7cf12b8a7e5da199c6d8c5bdeb41243c911cef6cab0bac81e407502a8ff144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6e7454c626f79f7ef231e33e2917e2a65265fc21ccf0bb078c9b7f5b3cec240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f87617d1f4b9c7ad893a539ddf123f3e213d38e44c084441f565d6535078c7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f6d7aa9cdfe8211f420e0a25851c58e125bd09b9d90ad4724703cc1112e45b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://473d669d19d2507144f95718dc331ea994a36254f7f7b6e2b262a0e9cb46bfc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ac1117aba254b7e048d9dd3feff2587d218f6d7a6f4ccbe455eb2b19e534aa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f8c4a1bb980b8f4aed1959a29433c3e49207194ecb3a563a95441a99cceab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\
\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90354a506bfb2c066060aaa6c9c48c8fe36acc7b3fb7f76905c7aa2622c8c93e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f16219fb96bfa08d1c4e1fbacac446f07ea75a221d1a6a4eddadd9e587f75ea9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f16219fb96bfa08d1c4e1fbacac446f07ea75a221d1a6a4eddadd9e587f75ea9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-f4swt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:45Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:45 crc kubenswrapper[4958]: I1206 05:28:45.132463 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kb98t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c09fca2-7d91-412a-9814-64370d35b3e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pscv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pscv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kb98t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:45Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:45 crc kubenswrapper[4958]: I1206 05:28:45.159493 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3942544-663d-452b-851f-7ae1951315d0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c605d49963ffec3a03f9166f2a4b945adca772c1e6f9ef0c43e3b43498bb520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fa2e5229064a9479f641c0c6d934e8eda769ee88be13401c43ae136ce8bc24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0fa5b094aed0445fa892dd4d08fa83de565bda82c6ca9b9d536a3275df19b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9356ce9f50a037c7bbb99085c8d3a1cf6648fca
53bed4eb86d1be5cb82386bfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bb5270cbb6f2e29e1393dc0724b38332f5d2b7bd2947352c848a22efbcd979c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4460d6ee5ad193a5fdd65c4ddf2e2837db3e1a0a5f8f32b84ad83cc74cc6c779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4460d6ee5ad193a5fdd65c4ddf2e2837db3e1a0a5f8f32b84ad83cc74cc6c779\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d0adb65980c4e04df9b38b248a2d6bf58ac214ec2fe6ee6f8fbee38dc114904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d0adb65980c4e04df9b38b248a2d6bf58ac214ec2fe6ee6f8fbee38dc114904\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b4a1b86d7480be2b839295c4c7dc60abd79e17d12514d7a7f90453498dd586e5\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4a1b86d7480be2b839295c4c7dc60abd79e17d12514d7a7f90453498dd586e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:45Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:45 crc kubenswrapper[4958]: I1206 05:28:45.171792 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:45Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:45 crc kubenswrapper[4958]: I1206 05:28:45.181990 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:45 crc kubenswrapper[4958]: I1206 05:28:45.182043 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:45 crc kubenswrapper[4958]: I1206 05:28:45.182055 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:45 crc kubenswrapper[4958]: I1206 05:28:45.182083 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:45 crc kubenswrapper[4958]: I1206 05:28:45.182096 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:45Z","lastTransitionTime":"2025-12-06T05:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:28:45 crc kubenswrapper[4958]: I1206 05:28:45.182933 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bxxwq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"802ce14e-baaf-4d0d-87e3-3457209d8cda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://691435e89fcfa5b2dbd5ee54379f791a3e4f4df43c500a83bc61f7b20aca7ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlz6h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bxxwq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:45Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:45 crc kubenswrapper[4958]: I1206 05:28:45.194321 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-j25xk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f69a7b70-83e2-4775-872c-cc414f3d08ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:42Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qktwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qktwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-j25xk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:45Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:45 crc kubenswrapper[4958]: I1206 05:28:45.207010 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:45Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:45 crc kubenswrapper[4958]: I1206 05:28:45.216676 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:45Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:45 crc kubenswrapper[4958]: I1206 05:28:45.229281 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5mx5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ab56cd4-7270-4252-b6e6-cbc102b84d97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16ff54e7650f6ee3df33d0394fb87b8021695f1697bdf30239e59ab458bea64c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-448jg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5mx5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:45Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:45 crc kubenswrapper[4958]: I1206 05:28:45.246311 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wr7h5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24bb15c5aabe30ec2b8cea2cfd92c4a99c8e2170aecfe6cf33f745ecccf32c42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-778ks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wr7h5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:45Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:45 crc kubenswrapper[4958]: I1206 05:28:45.258039 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bbf3a5d-dfb7-4f26-a3a5-5a1198cffc78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ac8c174d2f2cc2b18fa67b1969e4ec3130a7a3ea2e1de302b6607334c8295a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a49f9fd54866385a2dd4c0218a2b5a828817807c2688301ea29004ba0578d556\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://799383b7dc23318bd8bdb994e83afadc90864baf5e694f5a7ef2208c6771660c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c9829c79fbd1ef57f374e8bf829162c2ae48074f4d02cc609d5f9b8791c61a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b2ffa013378c2ca81671e521b37d30af422fd6a418d17e7c2ca14b2da5557d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4001890a8c107a9365e884bae7b1d10f11ea1208ada74b61dcd4cf3db767f3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4001890a8c107a9365e884bae7b1d10f11ea1208ada74b61dcd4cf3db767f3fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:45Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:45 crc kubenswrapper[4958]: I1206 05:28:45.268701 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e7d0d99-10e2-4254-82b5-5490222f80fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f512fa9922310920769bab096c2bb8b2305a17dded545923c16b5e417db50b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dddeac64993fef3f326c165eeeb95bb4b5b9961f07f2b935e592b5aac9a51a32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50e80d39fcfd69db193b9e6c98c92a60983dd0c62b061a69f045193110d159c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc69c938c136778135bd2a90d1671af107b76c4571f8e52327c98f120665d7b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:45Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:45 crc kubenswrapper[4958]: I1206 05:28:45.283776 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:45Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:45 crc kubenswrapper[4958]: I1206 05:28:45.286502 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:45 crc kubenswrapper[4958]: I1206 05:28:45.286536 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:45 crc kubenswrapper[4958]: I1206 05:28:45.286547 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:45 crc kubenswrapper[4958]: I1206 05:28:45.286564 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:45 crc kubenswrapper[4958]: I1206 05:28:45.286575 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:45Z","lastTransitionTime":"2025-12-06T05:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:28:45 crc kubenswrapper[4958]: I1206 05:28:45.296602 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85e394f19d7c9a454876e48621768d9f956790321aeea3140db17586f532e28c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7cba9186696fbc52dacdd8c02c28a44f8e4f11e2bd39065212cddf3acdd7f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:45Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:45 crc kubenswrapper[4958]: I1206 05:28:45.310841 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c13528c0-da5d-4d55-9155-2c29c33edfc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5cd9b0b70572f52b50e426a747be79d9f46795a9f5e890f4c6837ba67a34188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd2256c5c47e4b4d5c0d27e0a687c531d3d4e2162c3439a57acb90784599b441\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5ktnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:45Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:45 crc kubenswrapper[4958]: I1206 05:28:45.327959 4958 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cxvtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48b90387-4ea9-47a3-8aa1-b8b19c5ed186\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce9f0a37477909d730255694146829d5d4527e3c3c3928bdd475b58506bbd544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d3ce1b34e648dbc34656fb0430468d6a998992cdb52f7858c3e9032781cabf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d3ce1b34e648dbc34656fb0430468d6a998992cdb52f7858c3e9032781cabf3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45259e88c3c2b9523b20a34f02fb59aa21e487dd207c8cb6e29ac571d89cd75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45259e88c3c2b9523b20a34f02fb59aa21e487dd207c8cb6e29ac571d89cd75c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5907422cf6b36b8d77741d339e507eee8a26c62b51c03458fbab243363441b49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5907422cf6b36b8d77741d339e507eee8a26c62b51c03458fbab243363441b49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdba082661b1dd38d5628fedac7afcea2e2960d502c61b9c21c723c0f6be8dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdba082661b1dd38d5628fedac7afcea2e2960d502c61b9c21c723c0f6be8dab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afdd138a15c3949f3eb5cdcac4d372fe8db3f1822f9fa33eb7f211ba93764903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afdd138a15c3949f3eb5cdcac4d372fe8db3f1822f9fa33eb7f211ba93764903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09ba0a78226b9b3b3b12202389168f7e17a1f2ba992a22dc9d436d29aac0d327\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09ba0a78226b9b3b3b12202389168f7e17a1f2ba992a22dc9d436d29aac0d327\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cxvtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:45Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:45 crc kubenswrapper[4958]: I1206 05:28:45.349610 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c75c3b8-96d9-442e-b3c4-92d10ad33929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c7cf12b8a7e5da199c6d8c5bdeb41243c911cef6cab0bac81e407502a8ff144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6e7454c626f79f7ef231e33e2917e2a65265fc21ccf0bb078c9b7f5b3cec240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f87617d1f4b9c7ad893a539ddf123f3e213d38e44c084441f565d6535078c7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f6d7aa9cdfe8211f420e0a25851c58e125bd09b9d90ad4724703cc1112e45b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://473d669d19d2507144f95718dc331ea994a36254f7f7b6e2b262a0e9cb46bfc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ac1117aba254b7e048d9dd3feff2587d218f6d7a6f4ccbe455eb2b19e534aa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f8c4a1bb980b8f4aed1959a29433c3e49207194ecb3a563a95441a99cceab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90354a506bfb2c066060aaa6c9c48c8fe36acc7b3fb7f76905c7aa2622c8c93e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f16219fb96bfa08d1c4e1fbacac446f07ea75a221d1a6a4eddadd9e587f75ea9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f16219fb96bfa08d1c4e1fbacac446f07ea75a221d1a6a4eddadd9e587f75ea9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-f4swt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:45Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:45 crc kubenswrapper[4958]: I1206 05:28:45.361957 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:45Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:45 crc kubenswrapper[4958]: I1206 05:28:45.372173 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kb98t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c09fca2-7d91-412a-9814-64370d35b3e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pscv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pscv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kb98t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:45Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:45 crc kubenswrapper[4958]: I1206 05:28:45.384331 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-j25xk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f69a7b70-83e2-4775-872c-cc414f3d08ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3077eaf7e49467b9443569425f7666246ccd40207193f35407bd5ca69799be6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qktwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c146f2ee32c0a80c9bb07f7b61d094a36ba1bdd250b578479193dc8e50dc7cad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qktwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-j25xk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:45Z is after 2025-08-24T17:21:41Z" Dec 06 
05:28:45 crc kubenswrapper[4958]: I1206 05:28:45.389346 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:45 crc kubenswrapper[4958]: I1206 05:28:45.389392 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:45 crc kubenswrapper[4958]: I1206 05:28:45.389402 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:45 crc kubenswrapper[4958]: I1206 05:28:45.389415 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:45 crc kubenswrapper[4958]: I1206 05:28:45.389425 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:45Z","lastTransitionTime":"2025-12-06T05:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:45 crc kubenswrapper[4958]: I1206 05:28:45.405641 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3942544-663d-452b-851f-7ae1951315d0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c605d49963ffec3a03f9166f2a4b945adca772c1e6f9ef0c43e3b43498bb520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fa2e5229064a9479f641c0c6d934e8eda769ee88be13401c43ae136ce8bc24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b9009
2272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0fa5b094aed0445fa892dd4d08fa83de565bda82c6ca9b9d536a3275df19b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9356ce9f50a037c7bbb99085c8d3a1cf6648fca53bed4eb86d1be5cb82386bfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bb5270cbb6f2e29e1393dc0724b38332f5d2b7bd2947352c848a22efbcd979c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4460d6ee5ad193a5fdd65c4ddf2e2837db3e1a0a5f8f32b84ad83cc74cc6c779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\
":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4460d6ee5ad193a5fdd65c4ddf2e2837db3e1a0a5f8f32b84ad83cc74cc6c779\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d0adb65980c4e04df9b38b248a2d6bf58ac214ec2fe6ee6f8fbee38dc114904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d0adb65980c4e04df9b38b248a2d6bf58ac214ec2fe6ee6f8fbee38dc114904\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b4a1b86d7480be2b839295c4c7dc60abd79e17d12514d7a7f90453498dd586e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4a1b86d7480be2b839295c4c7dc60abd79e17d12514d7a7f90453498dd586e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:45Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:45 crc kubenswrapper[4958]: I1206 05:28:45.417685 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:45Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:45 crc kubenswrapper[4958]: I1206 05:28:45.428886 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bxxwq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"802ce14e-baaf-4d0d-87e3-3457209d8cda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://691435e89fcfa5b2dbd5ee54379f791a3e4f4df43c500a83bc61f7b20aca7ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlz6h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bxxwq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:45Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:45 crc kubenswrapper[4958]: I1206 05:28:45.442629 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wr7h5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24bb15c5aabe30ec2b8cea2cfd92c4a99c8e2170aecfe6cf33f745ecccf32c42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-778ks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wr7h5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:45Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:45 crc kubenswrapper[4958]: I1206 05:28:45.456383 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:45Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:45 crc kubenswrapper[4958]: I1206 05:28:45.469668 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:45Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:45 crc kubenswrapper[4958]: I1206 05:28:45.480212 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5mx5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ab56cd4-7270-4252-b6e6-cbc102b84d97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16ff54e7650f6ee3df33d0394fb87b8021695f1697bdf30239e59ab458bea64c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-448jg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5mx5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:45Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:45 crc kubenswrapper[4958]: I1206 05:28:45.498450 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:45 crc kubenswrapper[4958]: I1206 05:28:45.498496 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:45 crc kubenswrapper[4958]: I1206 05:28:45.498506 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:45 crc kubenswrapper[4958]: I1206 05:28:45.498522 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:45 crc kubenswrapper[4958]: I1206 05:28:45.498533 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:45Z","lastTransitionTime":"2025-12-06T05:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:45 crc kubenswrapper[4958]: I1206 05:28:45.498534 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bbf3a5d-dfb7-4f26-a3a5-5a1198cffc78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ac8c174d2f2cc2b18fa67b1969e4ec3130a7a3ea2e1de302b6607334c8295a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a49f9fd54866385a2dd4c0218a2b5a828817807c2688301ea29004ba0578d556\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://799383b7dc23318bd8bdb994e83afadc90864baf5e694f5a7ef2208c6771660c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name
\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c9829c79fbd1ef57f374e8bf829162c2ae48074f4d02cc609d5f9b8791c61a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b2ffa013378c2ca81671e521b37d30af422fd6a418d17e7c2ca14b2da5557d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4001890a8c107a9365e884bae7b1d10f11ea1208ada74b61dcd4cf3db767f3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4001890a8c107a9365e884bae7b1d10f11ea1208ada74b61dcd4cf3db767f3fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:45Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:45 crc kubenswrapper[4958]: I1206 05:28:45.515620 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e7d0d99-10e2-4254-82b5-5490222f80fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f512fa9922310920769bab096c2bb8b2305a17dded545923c16b5e417db50b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dddeac64993fef3f326c165eeeb95bb4b5b9961f07f2b935e592b5aac9a51a32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50e80d39fcfd69db193b9e6c98c92a60983dd0c62b061a69f045193110d159c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc69c938c136778135bd2a90d1671af107b76c4571f8e52327c98f120665d7b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:45Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:45 crc kubenswrapper[4958]: I1206 05:28:45.600827 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:45 crc kubenswrapper[4958]: I1206 05:28:45.600881 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:45 crc kubenswrapper[4958]: I1206 05:28:45.600891 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:45 crc kubenswrapper[4958]: I1206 05:28:45.600904 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:45 crc kubenswrapper[4958]: I1206 05:28:45.600913 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:45Z","lastTransitionTime":"2025-12-06T05:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:28:45 crc kubenswrapper[4958]: I1206 05:28:45.704911 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:45 crc kubenswrapper[4958]: I1206 05:28:45.704966 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:45 crc kubenswrapper[4958]: I1206 05:28:45.704982 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:45 crc kubenswrapper[4958]: I1206 05:28:45.705007 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:45 crc kubenswrapper[4958]: I1206 05:28:45.705027 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:45Z","lastTransitionTime":"2025-12-06T05:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:45 crc kubenswrapper[4958]: I1206 05:28:45.761148 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 05:28:45 crc kubenswrapper[4958]: E1206 05:28:45.761343 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 05:28:45 crc kubenswrapper[4958]: I1206 05:28:45.761437 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 05:28:45 crc kubenswrapper[4958]: I1206 05:28:45.761496 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kb98t" Dec 06 05:28:45 crc kubenswrapper[4958]: I1206 05:28:45.761510 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 05:28:45 crc kubenswrapper[4958]: E1206 05:28:45.761755 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kb98t" podUID="2c09fca2-7d91-412a-9814-64370d35b3e9" Dec 06 05:28:45 crc kubenswrapper[4958]: E1206 05:28:45.761845 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 05:28:45 crc kubenswrapper[4958]: E1206 05:28:45.761985 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 05:28:45 crc kubenswrapper[4958]: I1206 05:28:45.807822 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:45 crc kubenswrapper[4958]: I1206 05:28:45.807865 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:45 crc kubenswrapper[4958]: I1206 05:28:45.807876 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:45 crc kubenswrapper[4958]: I1206 05:28:45.807892 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:45 crc kubenswrapper[4958]: I1206 05:28:45.807905 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:45Z","lastTransitionTime":"2025-12-06T05:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:45 crc kubenswrapper[4958]: I1206 05:28:45.920162 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:45 crc kubenswrapper[4958]: I1206 05:28:45.920417 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:45 crc kubenswrapper[4958]: I1206 05:28:45.920429 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:45 crc kubenswrapper[4958]: I1206 05:28:45.920446 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:45 crc kubenswrapper[4958]: I1206 05:28:45.920458 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:45Z","lastTransitionTime":"2025-12-06T05:28:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:28:46 crc kubenswrapper[4958]: I1206 05:28:46.020180 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-f4swt_4c75c3b8-96d9-442e-b3c4-92d10ad33929/ovnkube-controller/0.log" Dec 06 05:28:46 crc kubenswrapper[4958]: I1206 05:28:46.022261 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:46 crc kubenswrapper[4958]: I1206 05:28:46.022295 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:46 crc kubenswrapper[4958]: I1206 05:28:46.022309 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:46 crc kubenswrapper[4958]: I1206 05:28:46.022326 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:46 crc kubenswrapper[4958]: I1206 05:28:46.022340 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:46Z","lastTransitionTime":"2025-12-06T05:28:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:46 crc kubenswrapper[4958]: I1206 05:28:46.023542 4958 generic.go:334] "Generic (PLEG): container finished" podID="4c75c3b8-96d9-442e-b3c4-92d10ad33929" containerID="46f8c4a1bb980b8f4aed1959a29433c3e49207194ecb3a563a95441a99cceab2" exitCode=1 Dec 06 05:28:46 crc kubenswrapper[4958]: I1206 05:28:46.023630 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" event={"ID":"4c75c3b8-96d9-442e-b3c4-92d10ad33929","Type":"ContainerDied","Data":"46f8c4a1bb980b8f4aed1959a29433c3e49207194ecb3a563a95441a99cceab2"} Dec 06 05:28:46 crc kubenswrapper[4958]: I1206 05:28:46.024545 4958 scope.go:117] "RemoveContainer" containerID="46f8c4a1bb980b8f4aed1959a29433c3e49207194ecb3a563a95441a99cceab2" Dec 06 05:28:46 crc kubenswrapper[4958]: I1206 05:28:46.026686 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"22338e19d740d76b4395405aae08f7cc69ca43dcf680a2bac1323c989e74aa02"} Dec 06 05:28:46 crc kubenswrapper[4958]: I1206 05:28:46.034736 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bxxwq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"802ce14e-baaf-4d0d-87e3-3457209d8cda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://691435e89fcfa5b2dbd5ee54379f791a3e4f4df43c500a83bc61f7b20aca7ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlz6h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bxxwq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:46Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:46 crc kubenswrapper[4958]: I1206 05:28:46.045719 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-j25xk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f69a7b70-83e2-4775-872c-cc414f3d08ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3077eaf7e49467b9443569425f7666246ccd40207193f35407bd5ca69799be6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qktwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c146f2ee32c0a80c9bb07f7b61d094a36ba1bdd250b578479193dc8e50dc7cad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qktwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-j25xk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:46Z is after 2025-08-24T17:21:41Z" Dec 06 
05:28:46 crc kubenswrapper[4958]: I1206 05:28:46.066941 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3942544-663d-452b-851f-7ae1951315d0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c605d49963ffec3a03f9166f2a4b945adca772c1e6f9ef0c43e3b43498bb520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fa2e5229064a9479f641c0c6d934e8eda769ee88be13401c43ae136ce8bc24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0fa5b094aed0445fa892dd4d08fa83de565bda82c6ca9b9d536a3275df19b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"lo
g-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9356ce9f50a037c7bbb99085c8d3a1cf6648fca53bed4eb86d1be5cb82386bfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bb5270cbb6f2e29e1393dc0724b38332f5d2b7bd2947352c848a22efbcd979c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4460d6ee5ad193a5fdd65c4ddf2e2837db3e1a0a5f8f32b84ad83cc74cc6c779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4460d6ee5ad193a5fdd65c4ddf2e2837db3e1a0a5f8f32b84ad83cc74cc6c779\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d0adb65980c4e04df9b38b248a2d6bf58ac214ec2fe6ee6f8fbee38dc114904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d0adb65980c4e04df9b38b248a2d6bf58ac214ec2fe6ee6f8fbee38dc114904\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:12Z\\\",\\\"reas
on\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b4a1b86d7480be2b839295c4c7dc60abd79e17d12514d7a7f90453498dd586e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4a1b86d7480be2b839295c4c7dc60abd79e17d12514d7a7f90453498dd586e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:46Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:46 crc kubenswrapper[4958]: I1206 05:28:46.077817 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:46Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:46 crc kubenswrapper[4958]: I1206 05:28:46.087416 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5mx5v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ab56cd4-7270-4252-b6e6-cbc102b84d97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16ff54e7650f6ee3df33d0394fb87b8021695f1697bdf30239e59ab458bea64c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-448jg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5mx5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-06T05:28:46Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:46 crc kubenswrapper[4958]: I1206 05:28:46.106316 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wr7h5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24bb15c5aabe30ec2b8cea2cfd92c4a99c8e2170aecfe6cf33f745ecccf32c42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-778ks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":
\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wr7h5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:46Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:46 crc kubenswrapper[4958]: I1206 05:28:46.118739 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:46Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:46 crc kubenswrapper[4958]: I1206 05:28:46.124384 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:46 crc kubenswrapper[4958]: I1206 05:28:46.124418 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:46 crc kubenswrapper[4958]: I1206 05:28:46.124427 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:46 crc kubenswrapper[4958]: I1206 05:28:46.124442 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:46 crc kubenswrapper[4958]: I1206 05:28:46.124451 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:46Z","lastTransitionTime":"2025-12-06T05:28:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:28:46 crc kubenswrapper[4958]: I1206 05:28:46.135225 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:46Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:46 crc kubenswrapper[4958]: I1206 05:28:46.150733 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bbf3a5d-dfb7-4f26-a3a5-5a1198cffc78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ac8c174d2f2cc2b18fa67b1969e4ec3130a7a3ea2e1de302b6607334c8295a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a49f9fd54866385a2dd4c0218a2b5a828817807c2688301ea29004ba0578d556\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://799383b7dc23318bd8bdb994e83afadc90864baf5e694f5a7ef2208c6771660c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c9829c79fbd1ef57f374e8bf829162c2ae48074f4d02cc609d5f9b8791c61a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b2ffa013378c2ca81671e521b37d30af422fd6a418d17e7c2ca14b2da5557d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4001890a8c107a9365e884bae7b1d10f11ea1208ada74b61dcd4cf3db767f3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4001890a8c107a9365e884bae7b1d10f11ea1208ada74b61dcd4cf3db767f3fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:46Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:46 crc kubenswrapper[4958]: I1206 05:28:46.162051 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e7d0d99-10e2-4254-82b5-5490222f80fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f512fa9922310920769bab096c2bb8b2305a17dded545923c16b5e417db50b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dddeac64993fef3f326c165eeeb95bb4b5b9961f07f2b935e592b5aac9a51a32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50e80d39fcfd69db193b9e6c98c92a60983dd0c62b061a69f045193110d159c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc69c938c136778135bd2a90d1671af107b76c4571f8e52327c98f120665d7b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:46Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:46 crc kubenswrapper[4958]: I1206 05:28:46.182914 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:46Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:46 crc kubenswrapper[4958]: I1206 05:28:46.196007 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:46Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:46 crc kubenswrapper[4958]: I1206 05:28:46.208399 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85e394f19d7c9a454876e48621768d9f956790321aeea3140db17586f532e28c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7cba9186696fbc52dacdd8c02c28a44f8e4f11e2bd39065212cddf3acdd7f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:46Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:46 crc kubenswrapper[4958]: I1206 05:28:46.219545 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c13528c0-da5d-4d55-9155-2c29c33edfc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5cd9b0b70572f52b50e426a747be79d9f46795a9f5e890f4c6837ba67a34188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd2256c5c47e4b4d5c0d27e0a687c531d3d4e2162c3439a57acb90784599b441\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernete
s.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5ktnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:46Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:46 crc kubenswrapper[4958]: I1206 05:28:46.226079 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:46 crc kubenswrapper[4958]: I1206 05:28:46.226105 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:46 crc kubenswrapper[4958]: I1206 05:28:46.226115 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:46 crc kubenswrapper[4958]: I1206 05:28:46.226128 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:46 crc kubenswrapper[4958]: I1206 05:28:46.226137 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:46Z","lastTransitionTime":"2025-12-06T05:28:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:28:46 crc kubenswrapper[4958]: I1206 05:28:46.232023 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cxvtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48b90387-4ea9-47a3-8aa1-b8b19c5ed186\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce9f0a37477909d730255694146829d5d4527e3c3c3928bdd475b58506bbd544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d3ce1b34e648dbc34656fb0430468d6a998992cdb52f7858c3e9032781cabf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d3ce1b34e648dbc34656fb0430468d6a998992cdb52f7858c3e9032781cabf3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45259e88c3c2b9523b20a34f02fb59aa21e487dd207c8cb6e29ac571d89cd75c\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45259e88c3c2b9523b20a34f02fb59aa21e487dd207c8cb6e29ac571d89cd75c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5907422cf6b36b8d77741d339e507eee8a26c62b51c03458fbab243363441b49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5907422cf6b36b8d77741d339e507eee8a26c62b51c03458fbab243363441b49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdba082661b1dd38d5628fedac7afcea2e2960d502c61b9c21c723c0f6be8dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdba082661b1dd38d5628fedac7afcea2e2960d502c61b9c21c723c0f6be8dab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afdd138a15c3949f3eb5cdcac4d372fe8db3f1822f9fa33eb7f211ba93764903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afdd138a15c3949f3eb5cdcac4d372fe8db3f1822f9fa33eb7f211ba93764903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09ba0a78226b9b3b3b12202389168f7e17a1f2ba992a22dc9d436d29aac0d327\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09ba0a78226b9b3b3b12202389168f7e17a1f2ba992a22dc9d436d29aac0d327\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cxvtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:46Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:46 crc kubenswrapper[4958]: I1206 05:28:46.249520 4958 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c75c3b8-96d9-442e-b3c4-92d10ad33929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c7cf12b8a7e5da199c6d8c5bdeb41243c911cef6cab0bac81e407502a8ff144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6e7454c626f79f7ef231e33e2917e2a65265fc21ccf0bb078c9b7f5b3cec240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f87617d1f4b9c7ad893a539ddf123f3e213d38e44c084441f565d6535078c7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36c
dd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f6d7aa9cdfe8211f420e0a25851c58e125bd09b9d90ad4724703cc1112e45b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://473d669d19d2507144f95718dc331ea994a36254f7f7b6e2b262a0e9cb46bfc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ac1117aba254b7e048d9dd3feff2587d218f6d7a6f4ccbe455eb2b19e534aa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-con
troller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f8c4a1bb980b8f4aed1959a29433c3e49207194ecb3a563a95441a99cceab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46f8c4a1bb980b8f4aed1959a29433c3e49207194ecb3a563a95441a99cceab2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"message\\\":\\\"5.871971 5972 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1206 05:28:45.872049 5972 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1206 05:28:45.871840 5972 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1206 05:28:45.872090 5972 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1206 05:28:45.872119 5972 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1206 05:28:45.872139 5972 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1206 05:28:45.872156 5972 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1206 05:28:45.872320 5972 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1206 05:28:45.872624 5972 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1206 05:28:45.872836 5972 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1206 05:28:45.873156 5972 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90354a506bfb2c066060aaa6c9c48c8fe36acc7b3fb7f76905c7aa2622c8c93e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f16219fb96bfa08d1c4e1fbacac446f07ea75a221d1a6a4eddadd9e587f75ea9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099
482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f16219fb96bfa08d1c4e1fbacac446f07ea75a221d1a6a4eddadd9e587f75ea9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-f4swt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:46Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:46 crc kubenswrapper[4958]: I1206 05:28:46.258191 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kb98t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c09fca2-7d91-412a-9814-64370d35b3e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pscv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pscv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kb98t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:46Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:46 crc kubenswrapper[4958]: I1206 05:28:46.276404 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3942544-663d-452b-851f-7ae1951315d0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c605d49963ffec3a03f9166f2a4b945adca772c1e6f9ef0c43e3b43498bb520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fa2e5229064a9479f641c0c6d934e8eda769ee88be13401c43ae136ce8bc24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0fa5b094aed0445fa892dd4d08fa83de565bda82c6ca9b9d536a3275df19b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9356ce9f50a037c7bbb99085c8d3a1cf6648fca
53bed4eb86d1be5cb82386bfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bb5270cbb6f2e29e1393dc0724b38332f5d2b7bd2947352c848a22efbcd979c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4460d6ee5ad193a5fdd65c4ddf2e2837db3e1a0a5f8f32b84ad83cc74cc6c779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4460d6ee5ad193a5fdd65c4ddf2e2837db3e1a0a5f8f32b84ad83cc74cc6c779\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d0adb65980c4e04df9b38b248a2d6bf58ac214ec2fe6ee6f8fbee38dc114904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d0adb65980c4e04df9b38b248a2d6bf58ac214ec2fe6ee6f8fbee38dc114904\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b4a1b86d7480be2b839295c4c7dc60abd79e17d12514d7a7f90453498dd586e5\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4a1b86d7480be2b839295c4c7dc60abd79e17d12514d7a7f90453498dd586e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:46Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:46 crc kubenswrapper[4958]: I1206 05:28:46.288896 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:46Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:46 crc kubenswrapper[4958]: I1206 05:28:46.298834 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bxxwq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"802ce14e-baaf-4d0d-87e3-3457209d8cda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://691435e89fcfa5b2dbd5ee54379f791a3e4f4df43c500a83bc61f7b20aca7ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlz6h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bxxwq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:46Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:46 crc kubenswrapper[4958]: I1206 05:28:46.310402 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-j25xk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f69a7b70-83e2-4775-872c-cc414f3d08ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3077eaf7e49467b9443569425f7666246ccd40207193f35407bd5ca69799be6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qktwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c146f2ee32c0a80c9bb07f7b61d094a36ba1bdd250b578479193dc8e50dc7cad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qktwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:42Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-j25xk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:46Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:46 crc kubenswrapper[4958]: I1206 05:28:46.323962 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:46Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:46 crc kubenswrapper[4958]: I1206 05:28:46.328750 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:46 crc kubenswrapper[4958]: I1206 05:28:46.328781 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:46 crc kubenswrapper[4958]: I1206 05:28:46.328792 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:46 crc kubenswrapper[4958]: I1206 05:28:46.328808 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:46 crc kubenswrapper[4958]: I1206 05:28:46.328820 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:46Z","lastTransitionTime":"2025-12-06T05:28:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:28:46 crc kubenswrapper[4958]: I1206 05:28:46.337103 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:46Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:46 crc kubenswrapper[4958]: I1206 05:28:46.348201 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5mx5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ab56cd4-7270-4252-b6e6-cbc102b84d97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16ff54e7650f6ee3df33d0394fb87b8021695f1697bdf30239e59ab458bea64c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-448jg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5mx5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:46Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:46 crc kubenswrapper[4958]: I1206 05:28:46.362705 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wr7h5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24bb15c5aabe30ec2b8cea2cfd92c4a99c8e2170aecfe6cf33f745ecccf32c42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-778ks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wr7h5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:46Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:46 crc kubenswrapper[4958]: I1206 05:28:46.378572 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bbf3a5d-dfb7-4f26-a3a5-5a1198cffc78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ac8c174d2f2cc2b18fa67b1969e4ec3130a7a3ea2e1de302b6607334c8295a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a49f9fd54866385a2dd4c0218a2b5a828817807c2688301ea29004ba0578d556\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://799383b7dc23318bd8bdb994e83afadc90864baf5e694f5a7ef2208c6771660c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c9829c79fbd1ef57f374e8bf829162c2ae48074f4d02cc609d5f9b8791c61a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b2ffa013378c2ca81671e521b37d30af422fd6a418d17e7c2ca14b2da5557d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4001890a8c107a9365e884bae7b1d10f11ea1208ada74b61dcd4cf3db767f3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4001890a8c107a9365e884bae7b1d10f11ea1208ada74b61dcd4cf3db767f3fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:46Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:46 crc kubenswrapper[4958]: I1206 05:28:46.395231 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e7d0d99-10e2-4254-82b5-5490222f80fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f512fa9922310920769bab096c2bb8b2305a17dded545923c16b5e417db50b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dddeac64993fef3f326c165eeeb95bb4b5b9961f07f2b935e592b5aac9a51a32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50e80d39fcfd69db193b9e6c98c92a60983dd0c62b061a69f045193110d159c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc69c938c136778135bd2a90d1671af107b76c4571f8e52327c98f120665d7b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:46Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:46 crc kubenswrapper[4958]: I1206 05:28:46.412068 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cxvtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48b90387-4ea9-47a3-8aa1-b8b19c5ed186\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce9f0a37477909d730255694146829d5d4527e3c3c3928bdd475b58506bbd544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d3ce1b34e648dbc34656fb0430468d6a998992cdb52f7858c3e
9032781cabf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d3ce1b34e648dbc34656fb0430468d6a998992cdb52f7858c3e9032781cabf3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45259e88c3c2b9523b20a34f02fb59aa21e487dd207c8cb6e29ac571d89cd75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45259e88c3c2b9523b20a34f02fb59aa21e487dd207c8cb6e29ac571d89cd75c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5907422cf6b36b8d77741d339e507eee8a26c62b51c03458fbab243363441b49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5907422cf6b36b8d77741d339e507eee8a26c62b51c03458fbab243363441b49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-b
inary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdba082661b1dd38d5628fedac7afcea2e2960d502c61b9c21c723c0f6be8dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdba082661b1dd38d5628fedac7afcea2e2960d502c61b9c21c723c0f6be8dab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afdd138a15c3949f3eb5cdcac4d372fe8db3f1822f9fa33eb7f211ba93764903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afdd138a15c3949f3eb5cdcac4d372fe8db3f1822f9fa33eb7f211ba93764903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09ba0a78226b9b3b3b12202389168f7e17a1f2ba992a22dc9d436d29aac0d327\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termin
ated\\\":{\\\"containerID\\\":\\\"cri-o://09ba0a78226b9b3b3b12202389168f7e17a1f2ba992a22dc9d436d29aac0d327\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cxvtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:46Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:46 crc kubenswrapper[4958]: I1206 05:28:46.431377 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:46 crc kubenswrapper[4958]: I1206 05:28:46.431415 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:46 crc kubenswrapper[4958]: I1206 05:28:46.431425 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:46 crc kubenswrapper[4958]: I1206 05:28:46.431441 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:46 crc kubenswrapper[4958]: I1206 05:28:46.431453 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:46Z","lastTransitionTime":"2025-12-06T05:28:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:28:46 crc kubenswrapper[4958]: I1206 05:28:46.438849 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c75c3b8-96d9-442e-b3c4-92d10ad33929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c7cf12b8a7e5da199c6d8c5bdeb41243c911cef6cab0bac81e407502a8ff144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6e7454c626f79f7ef231e33e2917e2a65265fc21ccf0bb078c9b7f5b3cec240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://f87617d1f4b9c7ad893a539ddf123f3e213d38e44c084441f565d6535078c7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f6d7aa9cdfe8211f420e0a25851c58e125bd09b9d90ad4724703cc1112e45b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://473d669d19d2507144f95718dc331ea994a36254f7f7b6e2b262a0e9cb46bfc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ac1117aba254b7e048d9dd3feff2587d218f6d7a6f4ccbe455eb2b19e534aa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://46f8c4a1bb980b8f4aed1959a29433c3e49207194ecb3a563a95441a99cceab2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46f8c4a1bb980b8f4aed1959a29433c3e49207194ecb3a563a95441a99cceab2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"message\\\":\\\"5.871971 5972 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1206 05:28:45.872049 5972 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1206 05:28:45.871840 5972 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1206 05:28:45.872090 5972 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1206 05:28:45.872119 5972 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1206 05:28:45.872139 5972 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1206 05:28:45.872156 5972 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1206 05:28:45.872320 5972 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1206 05:28:45.872624 5972 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1206 05:28:45.872836 5972 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1206 05:28:45.873156 5972 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90354a506bfb2c066060aaa6c9c48c8fe36acc7b3fb7f76905c7aa2622c8c93e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f16219fb96bfa08d1c4e1fbacac446f07ea75a221d1a6a4eddadd9e587f75ea9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099
482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f16219fb96bfa08d1c4e1fbacac446f07ea75a221d1a6a4eddadd9e587f75ea9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-f4swt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:46Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:46 crc kubenswrapper[4958]: I1206 05:28:46.455019 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22338e19d740d76b4395405aae08f7cc69ca43dcf680a2bac1323c989e74aa02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:46Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:46 crc kubenswrapper[4958]: I1206 05:28:46.480576 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:46Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:46 crc kubenswrapper[4958]: I1206 05:28:46.493207 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85e394f19d7c9a454876e48621768d9f956790321aeea3140db17586f532e28c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7cba9186696fbc52dacdd8c02c28a44f8e4f11e2bd39065212cddf3acdd7f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:46Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:46 crc kubenswrapper[4958]: I1206 05:28:46.502943 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c13528c0-da5d-4d55-9155-2c29c33edfc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5cd9b0b70572f52b50e426a747be79d9f46795a9f5e890f4c6837ba67a34188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd2256c5c47e4b4d5c0d27e0a687c531d3d4e2162c3439a57acb90784599b441\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5ktnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:46Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:46 crc kubenswrapper[4958]: I1206 05:28:46.513731 4958 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-kb98t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c09fca2-7d91-412a-9814-64370d35b3e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pscv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pscv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kb98t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:46Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:46 crc kubenswrapper[4958]: I1206 05:28:46.533575 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:46 crc kubenswrapper[4958]: I1206 05:28:46.533612 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:46 crc kubenswrapper[4958]: I1206 
05:28:46.533623 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 05:28:46 crc kubenswrapper[4958]: I1206 05:28:46.533635 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 05:28:46 crc kubenswrapper[4958]: I1206 05:28:46.533643 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:46Z","lastTransitionTime":"2025-12-06T05:28:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 05:28:46 crc kubenswrapper[4958]: I1206 05:28:46.635898 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 05:28:46 crc kubenswrapper[4958]: I1206 05:28:46.635944 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 05:28:46 crc kubenswrapper[4958]: I1206 05:28:46.635957 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 05:28:46 crc kubenswrapper[4958]: I1206 05:28:46.635974 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 05:28:46 crc kubenswrapper[4958]: I1206 05:28:46.635985 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:46Z","lastTransitionTime":"2025-12-06T05:28:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 05:28:46 crc kubenswrapper[4958]: I1206 05:28:46.737946 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 05:28:46 crc kubenswrapper[4958]: I1206 05:28:46.737999 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 05:28:46 crc kubenswrapper[4958]: I1206 05:28:46.738012 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 05:28:46 crc kubenswrapper[4958]: I1206 05:28:46.738032 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 05:28:46 crc kubenswrapper[4958]: I1206 05:28:46.738044 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:46Z","lastTransitionTime":"2025-12-06T05:28:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 05:28:46 crc kubenswrapper[4958]: I1206 05:28:46.840718 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 05:28:46 crc kubenswrapper[4958]: I1206 05:28:46.840765 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 05:28:46 crc kubenswrapper[4958]: I1206 05:28:46.840779 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 05:28:46 crc kubenswrapper[4958]: I1206 05:28:46.840811 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 05:28:46 crc kubenswrapper[4958]: I1206 05:28:46.840827 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:46Z","lastTransitionTime":"2025-12-06T05:28:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 05:28:46 crc kubenswrapper[4958]: I1206 05:28:46.944224 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 05:28:46 crc kubenswrapper[4958]: I1206 05:28:46.944291 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 05:28:46 crc kubenswrapper[4958]: I1206 05:28:46.944308 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 05:28:46 crc kubenswrapper[4958]: I1206 05:28:46.944333 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 05:28:46 crc kubenswrapper[4958]: I1206 05:28:46.944348 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:46Z","lastTransitionTime":"2025-12-06T05:28:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 05:28:47 crc kubenswrapper[4958]: I1206 05:28:47.028820 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2c09fca2-7d91-412a-9814-64370d35b3e9-metrics-certs\") pod \"network-metrics-daemon-kb98t\" (UID: \"2c09fca2-7d91-412a-9814-64370d35b3e9\") " pod="openshift-multus/network-metrics-daemon-kb98t"
Dec 06 05:28:47 crc kubenswrapper[4958]: E1206 05:28:47.028966 4958 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 06 05:28:47 crc kubenswrapper[4958]: E1206 05:28:47.029037 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2c09fca2-7d91-412a-9814-64370d35b3e9-metrics-certs podName:2c09fca2-7d91-412a-9814-64370d35b3e9 nodeName:}" failed. No retries permitted until 2025-12-06 05:28:51.029019488 +0000 UTC m=+41.562790241 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2c09fca2-7d91-412a-9814-64370d35b3e9-metrics-certs") pod "network-metrics-daemon-kb98t" (UID: "2c09fca2-7d91-412a-9814-64370d35b3e9") : object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 06 05:28:47 crc kubenswrapper[4958]: I1206 05:28:47.035108 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-f4swt_4c75c3b8-96d9-442e-b3c4-92d10ad33929/ovnkube-controller/0.log"
Dec 06 05:28:47 crc kubenswrapper[4958]: I1206 05:28:47.039540 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" event={"ID":"4c75c3b8-96d9-442e-b3c4-92d10ad33929","Type":"ContainerStarted","Data":"db6efff3f2e029827d40e9e04671f6c8548165f0cc1b89261b7e826956c1b397"}
Dec 06 05:28:47 crc kubenswrapper[4958]: I1206 05:28:47.040110 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-f4swt"
Dec 06 05:28:47 crc kubenswrapper[4958]: I1206 05:28:47.046767 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 05:28:47 crc kubenswrapper[4958]: I1206 05:28:47.046850 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 05:28:47 crc kubenswrapper[4958]: I1206 05:28:47.046876 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 05:28:47 crc kubenswrapper[4958]: I1206 05:28:47.046908 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 05:28:47 crc kubenswrapper[4958]: I1206 05:28:47.046933 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:47Z","lastTransitionTime":"2025-12-06T05:28:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:28:47 crc kubenswrapper[4958]: I1206 05:28:47.055010 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:47Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:47 crc kubenswrapper[4958]: I1206 05:28:47.065042 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:47Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:47 crc kubenswrapper[4958]: I1206 05:28:47.073175 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5mx5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ab56cd4-7270-4252-b6e6-cbc102b84d97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16ff54e7650f6ee3df33d0394fb87b8021695f1697bdf30239e59ab458bea64c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-448jg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5mx5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:47Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:47 crc kubenswrapper[4958]: I1206 05:28:47.090406 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wr7h5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24bb15c5aabe30ec2b8cea2cfd92c4a99c8e2170aecfe6cf33f745ecccf32c42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-778ks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wr7h5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:47Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:47 crc kubenswrapper[4958]: I1206 05:28:47.111021 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bbf3a5d-dfb7-4f26-a3a5-5a1198cffc78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ac8c174d2f2cc2b18fa67b1969e4ec3130a7a3ea2e1de302b6607334c8295a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a49f9fd54866385a2dd4c0218a2b5a828817807c2688301ea29004ba0578d556\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://799383b7dc23318bd8bdb994e83afadc90864baf5e694f5a7ef2208c6771660c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c9829c79fbd1ef57f374e8bf829162c2ae48074f4d02cc609d5f9b8791c61a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b2ffa013378c2ca81671e521b37d30af422fd6a418d17e7c2ca14b2da5557d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4001890a8c107a9365e884bae7b1d10f11ea1208ada74b61dcd4cf3db767f3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4001890a8c107a9365e884bae7b1d10f11ea1208ada74b61dcd4cf3db767f3fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:47Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:47 crc kubenswrapper[4958]: I1206 05:28:47.124747 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e7d0d99-10e2-4254-82b5-5490222f80fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f512fa9922310920769bab096c2bb8b2305a17dded545923c16b5e417db50b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dddeac64993fef3f326c165eeeb95bb4b5b9961f07f2b935e592b5aac9a51a32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50e80d39fcfd69db193b9e6c98c92a60983dd0c62b061a69f045193110d159c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc69c938c136778135bd2a90d1671af107b76c4571f8e52327c98f120665d7b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:47Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:47 crc kubenswrapper[4958]: I1206 05:28:47.143169 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c13528c0-da5d-4d55-9155-2c29c33edfc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5cd9b0b70572f52b50e426a747be79d9f46795a9f5e890f4c6837ba67a34188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd2256c5c47e4b4d5c
0d27e0a687c531d3d4e2162c3439a57acb90784599b441\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5ktnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:47Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:47 crc kubenswrapper[4958]: I1206 05:28:47.149729 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:47 crc kubenswrapper[4958]: I1206 05:28:47.149758 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:47 crc kubenswrapper[4958]: I1206 05:28:47.149769 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:47 crc kubenswrapper[4958]: I1206 05:28:47.149785 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:47 crc kubenswrapper[4958]: I1206 05:28:47.149795 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:47Z","lastTransitionTime":"2025-12-06T05:28:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:28:47 crc kubenswrapper[4958]: I1206 05:28:47.167142 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cxvtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48b90387-4ea9-47a3-8aa1-b8b19c5ed186\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce9f0a37477909d730255694146829d5d4527e3c3c3928bdd475b58506bbd544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d3ce1b34e648dbc34656fb0430468d6a998992cdb52f7858c3e9032781cabf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d3ce1b34e648dbc34656fb0430468d6a998992cdb52f7858c3e9032781cabf3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45259e88c3c2b9523b20a34f02fb59aa21e487dd207c8cb6e29ac571d89cd75c\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45259e88c3c2b9523b20a34f02fb59aa21e487dd207c8cb6e29ac571d89cd75c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5907422cf6b36b8d77741d339e507eee8a26c62b51c03458fbab243363441b49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5907422cf6b36b8d77741d339e507eee8a26c62b51c03458fbab243363441b49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdba082661b1dd38d5628fedac7afcea2e2960d502c61b9c21c723c0f6be8dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdba082661b1dd38d5628fedac7afcea2e2960d502c61b9c21c723c0f6be8dab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afdd138a15c3949f3eb5cdcac4d372fe8db3f1822f9fa33eb7f211ba93764903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afdd138a15c3949f3eb5cdcac4d372fe8db3f1822f9fa33eb7f211ba93764903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09ba0a78226b9b3b3b12202389168f7e17a1f2ba992a22dc9d436d29aac0d327\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09ba0a78226b9b3b3b12202389168f7e17a1f2ba992a22dc9d436d29aac0d327\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cxvtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:47Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:47 crc kubenswrapper[4958]: I1206 05:28:47.192792 4958 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c75c3b8-96d9-442e-b3c4-92d10ad33929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c7cf12b8a7e5da199c6d8c5bdeb41243c911cef6cab0bac81e407502a8ff144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6e7454c626f79f7ef231e33e2917e2a65265fc21ccf0bb078c9b7f5b3cec240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f87617d1f4b9c7ad893a539ddf123f3e213d38e44c084441f565d6535078c7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36c
dd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f6d7aa9cdfe8211f420e0a25851c58e125bd09b9d90ad4724703cc1112e45b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://473d669d19d2507144f95718dc331ea994a36254f7f7b6e2b262a0e9cb46bfc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ac1117aba254b7e048d9dd3feff2587d218f6d7a6f4ccbe455eb2b19e534aa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-con
troller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db6efff3f2e029827d40e9e04671f6c8548165f0cc1b89261b7e826956c1b397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46f8c4a1bb980b8f4aed1959a29433c3e49207194ecb3a563a95441a99cceab2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"message\\\":\\\"5.871971 5972 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1206 05:28:45.872049 5972 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1206 05:28:45.871840 5972 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1206 05:28:45.872090 5972 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1206 05:28:45.872119 5972 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1206 05:28:45.872139 5972 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1206 05:28:45.872156 5972 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1206 05:28:45.872320 5972 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1206 05:28:45.872624 5972 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1206 05:28:45.872836 5972 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1206 05:28:45.873156 5972 factory.go:656] Stopping 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90354a506bfb2c066060aaa6c9c48c8fe36acc7b3fb7f76905c7aa2622c8c93e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"conta
inerID\\\":\\\"cri-o://f16219fb96bfa08d1c4e1fbacac446f07ea75a221d1a6a4eddadd9e587f75ea9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f16219fb96bfa08d1c4e1fbacac446f07ea75a221d1a6a4eddadd9e587f75ea9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-f4swt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:47Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:47 crc kubenswrapper[4958]: I1206 05:28:47.212592 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22338e19d740d76b4395405aae08f7cc69ca43dcf680a2bac1323c989e74aa02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for 
pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:47Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:47 crc kubenswrapper[4958]: I1206 05:28:47.233591 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:47Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:47 crc kubenswrapper[4958]: I1206 05:28:47.251432 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:47 crc kubenswrapper[4958]: I1206 05:28:47.251455 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:47 crc kubenswrapper[4958]: I1206 05:28:47.251463 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:47 crc kubenswrapper[4958]: I1206 05:28:47.251497 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:47 crc kubenswrapper[4958]: I1206 05:28:47.251506 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:47Z","lastTransitionTime":"2025-12-06T05:28:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:28:47 crc kubenswrapper[4958]: I1206 05:28:47.255718 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85e394f19d7c9a454876e48621768d9f956790321aeea3140db17586f532e28c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7cba9186696fbc52dacdd8c02c28a44f8e4f11e2bd39065212cddf3acdd7f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:47Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:47 crc kubenswrapper[4958]: I1206 05:28:47.271043 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kb98t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c09fca2-7d91-412a-9814-64370d35b3e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pscv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pscv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kb98t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:47Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:47 crc kubenswrapper[4958]: I1206 05:28:47.301426 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3942544-663d-452b-851f-7ae1951315d0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c605d49963ffec3a03f9166f2a4b945adca772c1e6f9ef0c43e3b43498bb520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fa2e5229064a9479f641c0c6d934e8eda769ee88be13401c43ae136ce8bc24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0fa5b094aed0445fa892dd4d08fa83de565bda82c6ca9b9d536a3275df19b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9356ce9f50a037c7bbb99085c8d3a1cf6648fca
53bed4eb86d1be5cb82386bfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bb5270cbb6f2e29e1393dc0724b38332f5d2b7bd2947352c848a22efbcd979c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4460d6ee5ad193a5fdd65c4ddf2e2837db3e1a0a5f8f32b84ad83cc74cc6c779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4460d6ee5ad193a5fdd65c4ddf2e2837db3e1a0a5f8f32b84ad83cc74cc6c779\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d0adb65980c4e04df9b38b248a2d6bf58ac214ec2fe6ee6f8fbee38dc114904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d0adb65980c4e04df9b38b248a2d6bf58ac214ec2fe6ee6f8fbee38dc114904\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b4a1b86d7480be2b839295c4c7dc60abd79e17d12514d7a7f90453498dd586e5\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4a1b86d7480be2b839295c4c7dc60abd79e17d12514d7a7f90453498dd586e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:47Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:47 crc kubenswrapper[4958]: I1206 05:28:47.316165 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:47Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:47 crc kubenswrapper[4958]: I1206 05:28:47.327852 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bxxwq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"802ce14e-baaf-4d0d-87e3-3457209d8cda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://691435e89fcfa5b2dbd5ee54379f791a3e4f4df43c500a83bc61f7b20aca7ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlz6h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bxxwq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:47Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:47 crc kubenswrapper[4958]: I1206 05:28:47.345664 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-j25xk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f69a7b70-83e2-4775-872c-cc414f3d08ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3077eaf7e49467b9443569425f7666246ccd40207193f35407bd5ca69799be6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qktwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c146f2ee32c0a80c9bb07f7b61d094a36ba1bdd250b578479193dc8e50dc7cad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qktwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:42Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-j25xk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:47Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:47 crc kubenswrapper[4958]: I1206 05:28:47.353442 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:47 crc kubenswrapper[4958]: I1206 05:28:47.353462 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:47 crc kubenswrapper[4958]: I1206 05:28:47.353576 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:47 crc kubenswrapper[4958]: I1206 05:28:47.353591 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:47 crc kubenswrapper[4958]: I1206 05:28:47.353599 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:47Z","lastTransitionTime":"2025-12-06T05:28:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:47 crc kubenswrapper[4958]: I1206 05:28:47.455209 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:47 crc kubenswrapper[4958]: I1206 05:28:47.455236 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:47 crc kubenswrapper[4958]: I1206 05:28:47.455245 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:47 crc kubenswrapper[4958]: I1206 05:28:47.455258 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:47 crc kubenswrapper[4958]: I1206 05:28:47.455282 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:47Z","lastTransitionTime":"2025-12-06T05:28:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:28:47 crc kubenswrapper[4958]: I1206 05:28:47.557363 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:47 crc kubenswrapper[4958]: I1206 05:28:47.557407 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:47 crc kubenswrapper[4958]: I1206 05:28:47.557417 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:47 crc kubenswrapper[4958]: I1206 05:28:47.557429 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:47 crc kubenswrapper[4958]: I1206 05:28:47.557437 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:47Z","lastTransitionTime":"2025-12-06T05:28:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:47 crc kubenswrapper[4958]: I1206 05:28:47.659803 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:47 crc kubenswrapper[4958]: I1206 05:28:47.659830 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:47 crc kubenswrapper[4958]: I1206 05:28:47.659855 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:47 crc kubenswrapper[4958]: I1206 05:28:47.659868 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:47 crc kubenswrapper[4958]: I1206 05:28:47.659876 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:47Z","lastTransitionTime":"2025-12-06T05:28:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:47 crc kubenswrapper[4958]: I1206 05:28:47.761097 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 05:28:47 crc kubenswrapper[4958]: I1206 05:28:47.761149 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kb98t" Dec 06 05:28:47 crc kubenswrapper[4958]: I1206 05:28:47.761222 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 05:28:47 crc kubenswrapper[4958]: I1206 05:28:47.761323 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 05:28:47 crc kubenswrapper[4958]: E1206 05:28:47.761316 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 05:28:47 crc kubenswrapper[4958]: E1206 05:28:47.761453 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kb98t" podUID="2c09fca2-7d91-412a-9814-64370d35b3e9" Dec 06 05:28:47 crc kubenswrapper[4958]: E1206 05:28:47.761754 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 05:28:47 crc kubenswrapper[4958]: E1206 05:28:47.761949 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 05:28:47 crc kubenswrapper[4958]: I1206 05:28:47.762566 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:47 crc kubenswrapper[4958]: I1206 05:28:47.762619 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:47 crc kubenswrapper[4958]: I1206 05:28:47.762638 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:47 crc kubenswrapper[4958]: I1206 05:28:47.762665 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:47 crc kubenswrapper[4958]: I1206 05:28:47.762682 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:47Z","lastTransitionTime":"2025-12-06T05:28:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:28:47 crc kubenswrapper[4958]: I1206 05:28:47.865509 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:47 crc kubenswrapper[4958]: I1206 05:28:47.865565 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:47 crc kubenswrapper[4958]: I1206 05:28:47.865583 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:47 crc kubenswrapper[4958]: I1206 05:28:47.865607 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:47 crc kubenswrapper[4958]: I1206 05:28:47.865625 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:47Z","lastTransitionTime":"2025-12-06T05:28:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:47 crc kubenswrapper[4958]: I1206 05:28:47.968228 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:47 crc kubenswrapper[4958]: I1206 05:28:47.968293 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:47 crc kubenswrapper[4958]: I1206 05:28:47.968314 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:47 crc kubenswrapper[4958]: I1206 05:28:47.968342 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:47 crc kubenswrapper[4958]: I1206 05:28:47.968360 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:47Z","lastTransitionTime":"2025-12-06T05:28:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:28:48 crc kubenswrapper[4958]: I1206 05:28:48.044449 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-f4swt_4c75c3b8-96d9-442e-b3c4-92d10ad33929/ovnkube-controller/1.log" Dec 06 05:28:48 crc kubenswrapper[4958]: I1206 05:28:48.044983 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-f4swt_4c75c3b8-96d9-442e-b3c4-92d10ad33929/ovnkube-controller/0.log" Dec 06 05:28:48 crc kubenswrapper[4958]: I1206 05:28:48.048381 4958 generic.go:334] "Generic (PLEG): container finished" podID="4c75c3b8-96d9-442e-b3c4-92d10ad33929" containerID="db6efff3f2e029827d40e9e04671f6c8548165f0cc1b89261b7e826956c1b397" exitCode=1 Dec 06 05:28:48 crc kubenswrapper[4958]: I1206 05:28:48.048449 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" event={"ID":"4c75c3b8-96d9-442e-b3c4-92d10ad33929","Type":"ContainerDied","Data":"db6efff3f2e029827d40e9e04671f6c8548165f0cc1b89261b7e826956c1b397"} Dec 06 05:28:48 crc kubenswrapper[4958]: I1206 05:28:48.048509 4958 scope.go:117] "RemoveContainer" containerID="46f8c4a1bb980b8f4aed1959a29433c3e49207194ecb3a563a95441a99cceab2" Dec 06 05:28:48 crc kubenswrapper[4958]: I1206 05:28:48.049166 4958 scope.go:117] "RemoveContainer" containerID="db6efff3f2e029827d40e9e04671f6c8548165f0cc1b89261b7e826956c1b397" Dec 06 05:28:48 crc kubenswrapper[4958]: E1206 05:28:48.049312 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-f4swt_openshift-ovn-kubernetes(4c75c3b8-96d9-442e-b3c4-92d10ad33929)\"" pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" podUID="4c75c3b8-96d9-442e-b3c4-92d10ad33929" Dec 06 05:28:48 crc kubenswrapper[4958]: I1206 05:28:48.067569 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5mx5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ab56cd4-7270-4252-b6e6-cbc102b84d97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16ff54e7650f6ee3df33d0394fb87b8021695f1697bdf30239e59ab458bea64c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-448jg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5mx5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:48Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:48 crc kubenswrapper[4958]: I1206 05:28:48.070863 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:48 crc kubenswrapper[4958]: I1206 05:28:48.070895 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:48 crc kubenswrapper[4958]: I1206 05:28:48.070909 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:48 crc kubenswrapper[4958]: I1206 05:28:48.070926 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:48 crc kubenswrapper[4958]: I1206 05:28:48.070941 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:48Z","lastTransitionTime":"2025-12-06T05:28:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:48 crc kubenswrapper[4958]: I1206 05:28:48.081356 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wr7h5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24bb15c5aabe30ec2b8cea2cfd92c4a99c8e2170aecfe6cf33f745ecccf32c42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-778ks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\
\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wr7h5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:48Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:48 crc kubenswrapper[4958]: I1206 05:28:48.093606 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:48Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:48 crc kubenswrapper[4958]: I1206 05:28:48.105437 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:48Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:48 crc kubenswrapper[4958]: I1206 05:28:48.117654 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bbf3a5d-dfb7-4f26-a3a5-5a1198cffc78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ac8c174d2f2cc2b18fa67b1969e4ec3130a7a3ea2e1de302b6607334c8295a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a49f9fd54866385a2dd4c0218a2b5a828817807c2688301ea29004ba0578d556\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814
a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://799383b7dc23318bd8bdb994e83afadc90864baf5e694f5a7ef2208c6771660c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c9829c79fbd1ef57f374e8bf829162c2ae48074f4d02cc609d5f9b8791c61a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b2ffa013378c2ca81671e521b37d30af422fd6a418d17e7c2ca14b2da5557d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4001890a8c107a9365e884bae7b1d10f11ea1208ada74b61dcd4cf3db767f3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4001890a8c107a9365e884bae7b1d10f11ea1208ada74b61dcd4cf3db767f3fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"reason\\\":\\\"C
ompleted\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:48Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:48 crc kubenswrapper[4958]: I1206 05:28:48.133176 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e7d0d99-10e2-4254-82b5-5490222f80fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f512fa9922310920769bab096c2bb8b2305a17dded545923c16b5e417db50b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dddeac64993fef3f326c165eeeb95bb4b5b9961f07f2b935e592b5aac9a51a32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50e80d39fcfd69db193b9e6c98c92a60983dd0c62b061a69f045193110d159c9\\\",\\\"im
age\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc69c938c136778135bd2a90d1671af107b76c4571f8e52327c98f120665d7b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:48Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:48 crc kubenswrapper[4958]: I1206 05:28:48.148824 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22338e19d740d76b4395405aae08f7cc69ca43dcf680a2bac1323c989e74aa02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:48Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:48 crc kubenswrapper[4958]: I1206 05:28:48.168601 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:48Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:48 crc kubenswrapper[4958]: I1206 05:28:48.174030 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:48 crc kubenswrapper[4958]: I1206 05:28:48.174070 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:48 crc kubenswrapper[4958]: I1206 05:28:48.174084 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:48 crc kubenswrapper[4958]: I1206 05:28:48.174104 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:48 crc kubenswrapper[4958]: I1206 05:28:48.174118 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:48Z","lastTransitionTime":"2025-12-06T05:28:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:28:48 crc kubenswrapper[4958]: I1206 05:28:48.187566 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85e394f19d7c9a454876e48621768d9f956790321aeea3140db17586f532e28c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7cba9186696fbc52dacdd8c02c28a44f8e4f11e2bd39065212cddf3acdd7f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:48Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:48 crc kubenswrapper[4958]: I1206 05:28:48.202130 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c13528c0-da5d-4d55-9155-2c29c33edfc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5cd9b0b70572f52b50e426a747be79d9f46795a9f5e890f4c6837ba67a34188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd2256c5c47e4b4d5c0d27e0a687c531d3d4e2162c3439a57acb90784599b441\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5ktnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:48Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:48 crc kubenswrapper[4958]: I1206 05:28:48.222820 4958 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cxvtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48b90387-4ea9-47a3-8aa1-b8b19c5ed186\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce9f0a37477909d730255694146829d5d4527e3c3c3928bdd475b58506bbd544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d3ce1b34e648dbc34656fb0430468d6a998992cdb52f7858c3e9032781cabf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d3ce1b34e648dbc34656fb0430468d6a998992cdb52f7858c3e9032781cabf3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45259e88c3c2b9523b20a34f02fb59aa21e487dd207c8cb6e29ac571d89cd75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45259e88c3c2b9523b20a34f02fb59aa21e487dd207c8cb6e29ac571d89cd75c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5907422cf6b36b8d77741d339e507eee8a26c62b51c03458fbab243363441b49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5907422cf6b36b8d77741d339e507eee8a26c62b51c03458fbab243363441b49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdba082661b1dd38d5628fedac7afcea2e2960d502c61b9c21c723c0f6be8dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdba082661b1dd38d5628fedac7afcea2e2960d502c61b9c21c723c0f6be8dab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afdd138a15c3949f3eb5cdcac4d372fe8db3f1822f9fa33eb7f211ba93764903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afdd138a15c3949f3eb5cdcac4d372fe8db3f1822f9fa33eb7f211ba93764903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09ba0a78226b9b3b3b12202389168f7e17a1f2ba992a22dc9d436d29aac0d327\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09ba0a78226b9b3b3b12202389168f7e17a1f2ba992a22dc9d436d29aac0d327\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cxvtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:48Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:48 crc kubenswrapper[4958]: I1206 05:28:48.250268 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c75c3b8-96d9-442e-b3c4-92d10ad33929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c7cf12b8a7e5da199c6d8c5bdeb41243c911cef6cab0bac81e407502a8ff144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6e7454c626f79f7ef231e33e2917e2a65265fc21ccf0bb078c9b7f5b3cec240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f87617d1f4b9c7ad893a539ddf123f3e213d38e44c084441f565d6535078c7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f6d7aa9cdfe8211f420e0a25851c58e125bd09b9d90ad4724703cc1112e45b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://473d669d19d2507144f95718dc331ea994a36254f7f7b6e2b262a0e9cb46bfc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ac1117aba254b7e048d9dd3feff2587d218f6d7a6f4ccbe455eb2b19e534aa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db6efff3f2e029827d40e9e04671f6c8548165f0cc1b89261b7e826956c1b397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46f8c4a1bb980b8f4aed1959a29433c3e49207194ecb3a563a95441a99cceab2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"message\\\":\\\"5.871971 5972 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1206 05:28:45.872049 5972 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1206 05:28:45.871840 5972 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1206 05:28:45.872090 5972 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1206 05:28:45.872119 5972 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1206 05:28:45.872139 5972 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1206 05:28:45.872156 5972 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1206 05:28:45.872320 5972 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1206 05:28:45.872624 5972 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1206 05:28:45.872836 5972 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1206 05:28:45.873156 5972 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db6efff3f2e029827d40e9e04671f6c8548165f0cc1b89261b7e826956c1b397\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T05:28:47Z\\\",\\\"message\\\":\\\" Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1206 05:28:46.902286 6355 obj_retry.go:365] Adding new object: *v1.Pod 
openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1206 05:28:46.902303 6355 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nI1206 05:28:46.902324 6355 obj_retry.go:303] Retry object setup: *v1.Pod openshift-etcd/etcd-crc\\\\nI1206 05:28:46.902340 6355 obj_retry.go:365] Adding new object: *v1.Pod openshift-etcd/etcd-crc\\\\nI1206 05:28:46.902344 6355 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-source-55646444c4-trplf] creating logical port openshift-network-diagnostics_network-check-source-55646444c4-trplf for pod on switch crc\\\\nI1206 05:28:46.902355 6355 ovn.go:134] Ensuring zone local for Pod openshift-etcd/etcd-crc in node crc\\\\nI1206 05:28:46.902367 6355 obj_retry.go:386] Retry successful for *v1.Pod openshift-etcd/etcd-crc after 0 failed attempt(s)\\\\nI1206 05:28:46.902377 6355 default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nF1206 05:28:46.902429 6355 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90354a506bfb2c066060aaa6c9c48c8fe36acc7b3fb7f76905c7aa2622c8c93e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f16219fb96bfa08d1c4e1fbacac446f07ea75a221d1a6a4eddadd9e587f75ea9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f16219fb96bfa08d1c4e1fbacac446f07ea75a221d1a6a4eddadd9e587f75ea9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-f4swt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:48Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:48 crc kubenswrapper[4958]: I1206 05:28:48.262772 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kb98t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c09fca2-7d91-412a-9814-64370d35b3e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pscv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pscv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kb98t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:48Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:48 crc kubenswrapper[4958]: I1206 05:28:48.275623 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bxxwq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"802ce14e-baaf-4d0d-87e3-3457209d8cda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://691435e89fcfa5b2dbd5ee54379f791a3e4f4df43c500a83bc61f7b20aca7ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlz6h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bxxwq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:48Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:48 crc kubenswrapper[4958]: I1206 05:28:48.278164 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:48 crc kubenswrapper[4958]: I1206 05:28:48.278212 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:48 crc kubenswrapper[4958]: I1206 05:28:48.278229 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:48 crc kubenswrapper[4958]: I1206 05:28:48.278252 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:48 crc kubenswrapper[4958]: I1206 05:28:48.278269 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:48Z","lastTransitionTime":"2025-12-06T05:28:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:48 crc kubenswrapper[4958]: I1206 05:28:48.294328 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-j25xk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f69a7b70-83e2-4775-872c-cc414f3d08ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3077eaf7e49467b9443569425f7666246ccd40207193f35407bd5ca69799be6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qktwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c146f2ee32c0a80c9bb07f7b61d094a36ba1bdd250b578479193dc8e50dc7cad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qktwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:42Z\\\"}}\" 
for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-j25xk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:48Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:48 crc kubenswrapper[4958]: I1206 05:28:48.327034 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3942544-663d-452b-851f-7ae1951315d0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c605d49963ffec3a03f9166f2a4b945adca772c1e6f9ef0c43e3b43498bb520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fa2e5229064a9479f641c0c6d934e8eda769ee88be13401c43ae136ce8bc24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0fa5b094aed0445fa892dd4d08fa83de565bda82c6ca9b9d536a3275df19b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9356ce9f50a037c7bbb99085c8d3a1cf6648fca53bed4eb86d1be5cb82386bfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bb5270cbb6f2e29e1393dc0724b38332f5d2b7bd2947352c848a22efbcd979c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4460d6ee5ad193a5fdd65c4ddf2e2837db3e1a0a5f8f32b84ad83cc74cc6c779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4460d6ee5ad193a5fdd65c4ddf2e2837db3e1a0a5f8f32b84ad83cc74cc6c779\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d0adb65980c4e04df9b38b248a2d6bf58ac214ec2fe6ee6f8fbee38dc114904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b
7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d0adb65980c4e04df9b38b248a2d6bf58ac214ec2fe6ee6f8fbee38dc114904\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b4a1b86d7480be2b839295c4c7dc60abd79e17d12514d7a7f90453498dd586e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4a1b86d7480be2b839295c4c7dc60abd79e17d12514d7a7f90453498dd586e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:48Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:48 crc kubenswrapper[4958]: I1206 05:28:48.349665 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:48Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:48 crc kubenswrapper[4958]: I1206 05:28:48.381959 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:48 crc kubenswrapper[4958]: I1206 05:28:48.382033 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:48 crc kubenswrapper[4958]: I1206 05:28:48.382055 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:48 crc kubenswrapper[4958]: I1206 05:28:48.382085 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:48 crc kubenswrapper[4958]: I1206 05:28:48.382110 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:48Z","lastTransitionTime":"2025-12-06T05:28:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:48 crc kubenswrapper[4958]: I1206 05:28:48.484756 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:48 crc kubenswrapper[4958]: I1206 05:28:48.484834 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:48 crc kubenswrapper[4958]: I1206 05:28:48.484862 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:48 crc kubenswrapper[4958]: I1206 05:28:48.484891 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:48 crc kubenswrapper[4958]: I1206 05:28:48.484910 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:48Z","lastTransitionTime":"2025-12-06T05:28:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:28:48 crc kubenswrapper[4958]: I1206 05:28:48.587851 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:48 crc kubenswrapper[4958]: I1206 05:28:48.588105 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:48 crc kubenswrapper[4958]: I1206 05:28:48.588178 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:48 crc kubenswrapper[4958]: I1206 05:28:48.588249 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:48 crc kubenswrapper[4958]: I1206 05:28:48.588305 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:48Z","lastTransitionTime":"2025-12-06T05:28:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:48 crc kubenswrapper[4958]: I1206 05:28:48.691409 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:48 crc kubenswrapper[4958]: I1206 05:28:48.691690 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:48 crc kubenswrapper[4958]: I1206 05:28:48.691810 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:48 crc kubenswrapper[4958]: I1206 05:28:48.691909 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:48 crc kubenswrapper[4958]: I1206 05:28:48.691972 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:48Z","lastTransitionTime":"2025-12-06T05:28:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:48 crc kubenswrapper[4958]: I1206 05:28:48.795448 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:48 crc kubenswrapper[4958]: I1206 05:28:48.795530 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:48 crc kubenswrapper[4958]: I1206 05:28:48.795549 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:48 crc kubenswrapper[4958]: I1206 05:28:48.795571 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:48 crc kubenswrapper[4958]: I1206 05:28:48.795587 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:48Z","lastTransitionTime":"2025-12-06T05:28:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:28:48 crc kubenswrapper[4958]: I1206 05:28:48.897718 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:48 crc kubenswrapper[4958]: I1206 05:28:48.897763 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:48 crc kubenswrapper[4958]: I1206 05:28:48.897773 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:48 crc kubenswrapper[4958]: I1206 05:28:48.897786 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:48 crc kubenswrapper[4958]: I1206 05:28:48.897795 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:48Z","lastTransitionTime":"2025-12-06T05:28:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:49 crc kubenswrapper[4958]: I1206 05:28:49.000457 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:49 crc kubenswrapper[4958]: I1206 05:28:49.000512 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:49 crc kubenswrapper[4958]: I1206 05:28:49.000525 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:49 crc kubenswrapper[4958]: I1206 05:28:49.000539 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:49 crc kubenswrapper[4958]: I1206 05:28:49.000548 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:49Z","lastTransitionTime":"2025-12-06T05:28:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:28:49 crc kubenswrapper[4958]: I1206 05:28:49.062866 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"12ef2ec44908dbfbd456d6a97cb50054adb79c14bc062daacdd1be9ec59404b4"} Dec 06 05:28:49 crc kubenswrapper[4958]: I1206 05:28:49.066308 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-f4swt_4c75c3b8-96d9-442e-b3c4-92d10ad33929/ovnkube-controller/1.log" Dec 06 05:28:49 crc kubenswrapper[4958]: I1206 05:28:49.071889 4958 scope.go:117] "RemoveContainer" containerID="db6efff3f2e029827d40e9e04671f6c8548165f0cc1b89261b7e826956c1b397" Dec 06 05:28:49 crc kubenswrapper[4958]: E1206 05:28:49.072129 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-f4swt_openshift-ovn-kubernetes(4c75c3b8-96d9-442e-b3c4-92d10ad33929)\"" pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" podUID="4c75c3b8-96d9-442e-b3c4-92d10ad33929" Dec 06 05:28:49 crc kubenswrapper[4958]: I1206 05:28:49.102888 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3942544-663d-452b-851f-7ae1951315d0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c605d49963ffec3a03f9166f2a4b945adca772c1e6f9ef0c43e3b43498bb520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fa2e5229064a9479f641c0c6d934e8eda769ee88be13401c43ae136ce8bc24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-
dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0fa5b094aed0445fa892dd4d08fa83de565bda82c6ca9b9d536a3275df19b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9356ce9f50a037c7bbb99085c8d3a1cf6648fca53bed4eb86d1be5cb82386bfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bb5270cbb6f2e29e1393dc0724b38332f5d2b7bd2947352c848a22efbcd979c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4460d6ee5ad193a5fdd65c4ddf2e2837db3e1a0a5f8f32b84ad83cc74cc6c779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b
4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4460d6ee5ad193a5fdd65c4ddf2e2837db3e1a0a5f8f32b84ad83cc74cc6c779\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d0adb65980c4e04df9b38b248a2d6bf58ac214ec2fe6ee6f8fbee38dc114904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d0adb65980c4e04df9b38b248a2d6bf58ac214ec2fe6ee6f8fbee38dc114904\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b4a1b86d7480be2b839295c4c7dc60abd79e17d12514d7a7f90453498dd586e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4a1b86d7480be2b839295c4c7dc60abd79e17d12514d7a7f90453498dd586e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:49Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:49 crc kubenswrapper[4958]: I1206 05:28:49.103018 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:49 crc kubenswrapper[4958]: I1206 05:28:49.103096 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:49 crc kubenswrapper[4958]: I1206 05:28:49.103122 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:49 crc kubenswrapper[4958]: I1206 05:28:49.103150 4958 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:49 crc kubenswrapper[4958]: I1206 05:28:49.103167 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:49Z","lastTransitionTime":"2025-12-06T05:28:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:49 crc kubenswrapper[4958]: I1206 05:28:49.121878 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:49Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:49 crc kubenswrapper[4958]: I1206 05:28:49.136399 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bxxwq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"802ce14e-baaf-4d0d-87e3-3457209d8cda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://691435e89fcfa5b2dbd5ee54379f791a3e4f4df43c500a83bc61f7b20aca7ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlz6h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bxxwq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:49Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:49 crc kubenswrapper[4958]: I1206 05:28:49.152511 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-j25xk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f69a7b70-83e2-4775-872c-cc414f3d08ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3077eaf7e49467b9443569425f7666246ccd40207193f35407bd5ca69799be6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qktwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c146f2ee32c0a80c9bb07f7b61d094a36ba1bdd250b578479193dc8e50dc7cad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qktwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:42Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-j25xk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:49Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:49 crc kubenswrapper[4958]: I1206 05:28:49.173117 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:49Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:49 crc kubenswrapper[4958]: I1206 05:28:49.191351 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12ef2ec44908dbfbd456d6a97cb50054adb79c14bc062daacdd1be9ec59404b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:49Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:49 crc kubenswrapper[4958]: I1206 05:28:49.204091 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5mx5v" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ab56cd4-7270-4252-b6e6-cbc102b84d97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16ff54e7650f6ee3df33d0394fb87b8021695f1697bdf30239e59ab458bea64c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-448jg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5mx5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:49Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:49 crc kubenswrapper[4958]: I1206 05:28:49.206011 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:49 crc kubenswrapper[4958]: I1206 05:28:49.206085 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:49 crc kubenswrapper[4958]: I1206 05:28:49.206106 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:49 crc kubenswrapper[4958]: I1206 05:28:49.206134 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:49 crc kubenswrapper[4958]: I1206 05:28:49.206154 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:49Z","lastTransitionTime":"2025-12-06T05:28:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:49 crc kubenswrapper[4958]: I1206 05:28:49.223417 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wr7h5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24bb15c5aabe30ec2b8cea2cfd92c4a99c8e2170aecfe6cf33f745ecccf32c42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-778ks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\
\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wr7h5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:49Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:49 crc kubenswrapper[4958]: I1206 05:28:49.240303 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bbf3a5d-dfb7-4f26-a3a5-5a1198cffc78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ac8c174d2f2cc2b18fa67b1969e4ec3130a7a3ea2e1de302b6607334c8295a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a49f9fd54866385a2dd4c0218a2b5a828817807c2688301ea29004ba0578d556\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://799383b7dc23318bd8bdb994e83afadc90864baf5e694f5a7ef2208c6771660c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\
"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c9829c79fbd1ef57f374e8bf829162c2ae48074f4d02cc609d5f9b8791c61a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b2ffa013378c2ca81671e521b37d30af422fd6a418d17e7c2ca14b2da5557d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4001890a8c107a9365e884bae7b1d10f11ea1208ada74b61dcd4cf3db767f3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4001890a8c107a9365e884bae7b1d10f11ea1208ada74b61dcd4cf3db767f3fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-06T05:28:49Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:49 crc kubenswrapper[4958]: I1206 05:28:49.255530 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e7d0d99-10e2-4254-82b5-5490222f80fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f512fa9922310920769bab096c2bb8b2305a17dded545923c16b5e417db50b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dddeac64993fef3f326c165eeeb95bb4b5b9961f07f2b935e592b5aac9a51a32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50e80d39fcfd69db193b9e6c98c92a60983dd0c62b061a69f045193110d159c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}
,{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc69c938c136778135bd2a90d1671af107b76c4571f8e52327c98f120665d7b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:49Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:49 crc kubenswrapper[4958]: I1206 05:28:49.271524 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cxvtt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"48b90387-4ea9-47a3-8aa1-b8b19c5ed186\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce9f0a37477909d730255694146829d5d4527e3c3c3928bdd475b58506bbd544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d3ce1b34e648dbc34656fb0430468d6a998992cdb52f7858c3e9032781cabf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d3ce1b34e648dbc34656fb0430468d6a998992cdb52f7858c3e9032781cabf3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45259e88c3c2b9523b20a34f02fb59aa21e487dd207c8cb6e29ac571d89cd75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45259e88c3c2b9523b20a34f02fb59aa21e487dd207c8cb6e29ac571d89cd75c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5907422cf6b36b8d77741d339e507eee8a26c62b51c03458fbab243363441b49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5907422cf6b36b8d77741d339e507eee8a26c62b51c03458fbab243363441b49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdba082661b1dd38d5628fedac7afcea2e2960d502c61b9c21c723c0f6be8dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdba082661b1dd38d5628fedac7afcea2e2960d502c61b9c21c723c0f6be8dab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afdd138a15c3949f3eb5cdcac4d372fe8db3f1822f9fa33eb7f211ba93764903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afdd138a15c3949f3eb5cdcac4d372fe8db3f1822f9fa33eb7f211ba93764903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09ba0a78226b9b3b3b12202389168f7e17a1f2ba992a22dc9d436d29aac0d327\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09ba0a78226b9b3b3b12202389168f7e17a1f2ba992a22dc9d436d29aac0d327\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cxvtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:49Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:49 crc kubenswrapper[4958]: I1206 05:28:49.293282 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c75c3b8-96d9-442e-b3c4-92d10ad33929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c7cf12b8a7e5da199c6d8c5bdeb41243c911cef6cab0bac81e407502a8ff144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6e7454c626f79f7ef231e33e2917e2a65265fc21ccf0bb078c9b7f5b3cec240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f87617d1f4b9c7ad893a539ddf123f3e213d38e44c084441f565d6535078c7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f6d7aa9cdfe8211f420e0a25851c58e125bd09b9d90ad4724703cc1112e45b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://473d669d19d2507144f95718dc331ea994a36254f7f7b6e2b262a0e9cb46bfc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ac1117aba254b7e048d9dd3feff2587d218f6d7a6f4ccbe455eb2b19e534aa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db6efff3f2e029827d40e9e04671f6c8548165f0cc1b89261b7e826956c1b397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46f8c4a1bb980b8f4aed1959a29433c3e49207194ecb3a563a95441a99cceab2\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"message\\\":\\\"5.871971 5972 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1206 05:28:45.872049 5972 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI1206 05:28:45.871840 5972 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI1206 05:28:45.872090 5972 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI1206 05:28:45.872119 5972 handler.go:208] Removed *v1.Pod event handler 6\\\\nI1206 05:28:45.872139 5972 handler.go:208] Removed *v1.Pod event handler 3\\\\nI1206 05:28:45.872156 5972 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1206 05:28:45.872320 5972 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1206 05:28:45.872624 5972 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1206 05:28:45.872836 5972 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1206 05:28:45.873156 5972 factory.go:656] Stopping \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db6efff3f2e029827d40e9e04671f6c8548165f0cc1b89261b7e826956c1b397\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T05:28:47Z\\\",\\\"message\\\":\\\" Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1206 05:28:46.902286 6355 obj_retry.go:365] Adding new object: *v1.Pod 
openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1206 05:28:46.902303 6355 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nI1206 05:28:46.902324 6355 obj_retry.go:303] Retry object setup: *v1.Pod openshift-etcd/etcd-crc\\\\nI1206 05:28:46.902340 6355 obj_retry.go:365] Adding new object: *v1.Pod openshift-etcd/etcd-crc\\\\nI1206 05:28:46.902344 6355 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-source-55646444c4-trplf] creating logical port openshift-network-diagnostics_network-check-source-55646444c4-trplf for pod on switch crc\\\\nI1206 05:28:46.902355 6355 ovn.go:134] Ensuring zone local for Pod openshift-etcd/etcd-crc in node crc\\\\nI1206 05:28:46.902367 6355 obj_retry.go:386] Retry successful for *v1.Pod openshift-etcd/etcd-crc after 0 failed attempt(s)\\\\nI1206 05:28:46.902377 6355 default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nF1206 05:28:46.902429 6355 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90354a506bfb2c066060aaa6c9c48c8fe36acc7b3fb7f76905c7aa2622c8c93e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f16219fb96bfa08d1c4e1fbacac446f07ea75a221d1a6a4eddadd9e587f75ea9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f16219fb96bfa08d1c4e1fbacac446f07ea75a221d1a6a4eddadd9e587f75ea9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-f4swt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:49Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:49 crc kubenswrapper[4958]: I1206 05:28:49.307681 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22338e19d740d76b4395405aae08f7cc69ca43dcf680a2bac1323c989e74aa02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:49Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:49 crc kubenswrapper[4958]: I1206 05:28:49.309207 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:49 crc kubenswrapper[4958]: I1206 05:28:49.309241 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:49 crc kubenswrapper[4958]: I1206 05:28:49.309253 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:49 crc kubenswrapper[4958]: I1206 05:28:49.309270 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:49 crc kubenswrapper[4958]: I1206 05:28:49.309281 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:49Z","lastTransitionTime":"2025-12-06T05:28:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:28:49 crc kubenswrapper[4958]: I1206 05:28:49.324286 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:49Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:49 crc kubenswrapper[4958]: I1206 05:28:49.340939 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85e394f19d7c9a454876e48621768d9f956790321aeea3140db17586f532e28c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7cba9186696fbc52dacdd8c02c28a44f8e4f11e2bd39065212cddf3acdd7f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:49Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:49 crc kubenswrapper[4958]: I1206 05:28:49.355913 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c13528c0-da5d-4d55-9155-2c29c33edfc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5cd9b0b70572f52b50e426a747be79d9f46795a9f5e890f4c6837ba67a34188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd2256c5c47e4b4d5c0d27e0a687c531d3d4e2162c3439a57acb90784599b441\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5ktnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:49Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:49 crc kubenswrapper[4958]: I1206 05:28:49.374919 4958 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-kb98t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c09fca2-7d91-412a-9814-64370d35b3e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pscv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pscv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kb98t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:49Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:49 crc kubenswrapper[4958]: I1206 05:28:49.387957 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:49Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:49 crc kubenswrapper[4958]: I1206 05:28:49.399267 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bxxwq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"802ce14e-baaf-4d0d-87e3-3457209d8cda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://691435e89fcfa5b2dbd5ee54379f791a3e4f4df43c500a83bc61f7b20aca7ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlz6h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bxxwq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:49Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:49 crc kubenswrapper[4958]: I1206 05:28:49.413105 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:49 crc kubenswrapper[4958]: I1206 05:28:49.413164 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:49 crc kubenswrapper[4958]: I1206 05:28:49.413180 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:49 crc kubenswrapper[4958]: I1206 05:28:49.413205 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:49 crc kubenswrapper[4958]: I1206 05:28:49.413222 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:49Z","lastTransitionTime":"2025-12-06T05:28:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:49 crc kubenswrapper[4958]: I1206 05:28:49.415765 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-j25xk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f69a7b70-83e2-4775-872c-cc414f3d08ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3077eaf7e49467b9443569425f7666246ccd40207193f35407bd5ca69799be6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qktwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c146f2ee32c0a80c9bb07f7b61d094a36ba1bdd250b578479193dc8e50dc7cad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qktwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:42Z\\\"}}\" 
for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-j25xk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:49Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:49 crc kubenswrapper[4958]: I1206 05:28:49.440219 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3942544-663d-452b-851f-7ae1951315d0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c605d49963ffec3a03f9166f2a4b945adca772c1e6f9ef0c43e3b43498bb520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fa2e5229064a9479f641c0c6d934e8eda769ee88be13401c43ae136ce8bc24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0fa5b094aed0445fa892dd4d08fa83de565bda82c6ca9b9d536a3275df19b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9356ce9f50a037c7bbb99085c8d3a1cf6648fca53bed4eb86d1be5cb82386bfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bb5270cbb6f2e29e1393dc0724b38332f5d2b7bd2947352c848a22efbcd979c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4460d6ee5ad193a5fdd65c4ddf2e2837db3e1a0a5f8f32b84ad83cc74cc6c779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4460d6ee5ad193a5fdd65c4ddf2e2837db3e1a0a5f8f32b84ad83cc74cc6c779\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d0adb65980c4e04df9b38b248a2d6bf58ac214ec2fe6ee6f8fbee38dc114904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b
7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d0adb65980c4e04df9b38b248a2d6bf58ac214ec2fe6ee6f8fbee38dc114904\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b4a1b86d7480be2b839295c4c7dc60abd79e17d12514d7a7f90453498dd586e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4a1b86d7480be2b839295c4c7dc60abd79e17d12514d7a7f90453498dd586e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:49Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:49 crc kubenswrapper[4958]: I1206 05:28:49.459727 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12ef2ec44908dbfbd456d6a97cb50054adb79c14bc062daacdd1be9ec59404b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:49Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:49 crc kubenswrapper[4958]: I1206 05:28:49.476651 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5mx5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ab56cd4-7270-4252-b6e6-cbc102b84d97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16ff54e7650f6ee3df33d0394fb87b8021695f1697bdf30239e59ab458bea64c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-448jg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5mx5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:49Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:49 crc kubenswrapper[4958]: I1206 05:28:49.498388 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wr7h5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24bb15c5aabe30ec2b8cea2cfd92c4a99c8e2170aecfe6cf33f745ecccf32c42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-778ks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wr7h5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:49Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:49 crc kubenswrapper[4958]: I1206 05:28:49.515160 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:49 crc kubenswrapper[4958]: I1206 05:28:49.515228 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:49 crc kubenswrapper[4958]: I1206 05:28:49.515247 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:49 crc kubenswrapper[4958]: I1206 05:28:49.515278 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:49 crc kubenswrapper[4958]: I1206 05:28:49.515299 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:49Z","lastTransitionTime":"2025-12-06T05:28:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:49 crc kubenswrapper[4958]: I1206 05:28:49.518400 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:49Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:49 crc kubenswrapper[4958]: I1206 05:28:49.534639 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e7d0d99-10e2-4254-82b5-5490222f80fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f512fa9922310920769bab096c2bb8b2305a17dded545923c16b5e417db50b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dddeac64993fef3f326c165eeeb95bb4b5b9961f07f2b935e592b5aac9a51a32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50e80d39fcfd69db193b9e6c98c92a60983dd0c62b061a69f045193110d159c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc69c938c136778135bd2a90d1671af107b76c4571f8e52327c98f120665d7b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:49Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:49 crc kubenswrapper[4958]: I1206 05:28:49.555661 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bbf3a5d-dfb7-4f26-a3a5-5a1198cffc78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ac8c174d2f2cc2b18fa67b1969e4ec3130a7a3ea2e1de302b6607334c8295a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a49f9fd54866385a2dd4c0218a2b5a828817807c2688301ea29004ba0578d556\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://799383b7dc23318bd8bdb994e83afadc90864baf5e694f5a7ef2208c6771660c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c9829c79fbd1ef57f374e8bf829162c2ae48074f4d02cc609d5f9b8791c61a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b2ffa013378c2ca81671e521b37d30af422fd6a418d17e7c2ca14b2da5557d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4001890a8c107a9365e884bae7b1d10f11ea1208ada74b61dcd4cf3db767f3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4001890a8c107a9365e884bae7b1d10f11ea1208ada74b61dcd4cf3db767f3fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:49Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:49 crc kubenswrapper[4958]: I1206 05:28:49.576911 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22338e19d740d76b4395405aae08f7cc69ca43dcf680a2bac1323c989e74aa02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:49Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:49 crc kubenswrapper[4958]: I1206 05:28:49.594728 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:49Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:49 crc kubenswrapper[4958]: I1206 05:28:49.613340 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85e394f19d7c9a454876e48621768d9f956790321aeea3140db17586f532e28c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7cba9186696fbc52dacdd8c02c28a44f8e4f11e2bd39065212cddf3acdd7f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\
"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:49Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:49 crc kubenswrapper[4958]: I1206 05:28:49.618321 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:49 crc kubenswrapper[4958]: I1206 05:28:49.618362 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:49 crc kubenswrapper[4958]: I1206 05:28:49.618377 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:49 crc kubenswrapper[4958]: I1206 05:28:49.618399 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:49 crc kubenswrapper[4958]: I1206 05:28:49.618413 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:49Z","lastTransitionTime":"2025-12-06T05:28:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:28:49 crc kubenswrapper[4958]: I1206 05:28:49.631144 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c13528c0-da5d-4d55-9155-2c29c33edfc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5cd9b0b70572f52b50e426a747be79d9f46795a9f5e890f4c6837ba67a34188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd2256c5c47e4b4d5c0d27e0a687c531d3d4e2162c3439a57acb90784599b441\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5ktnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:49Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:49 crc kubenswrapper[4958]: I1206 05:28:49.657612 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cxvtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48b90387-4ea9-47a3-8aa1-b8b19c5ed186\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce9f0a37477909d730255694146829d5d4527e3c3c3928bdd475b58506bbd544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d3ce1b34e648dbc34656fb0430468d6a998992cdb52f7858c3e9032781cabf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d3ce1b34e648dbc34656fb0430468d6a998992cdb52f7858c3e9032781cabf3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45259e88c3c2b9523b20a34f02fb59aa21e487dd207c8cb6e29ac571d89cd75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45259e88c3c2b9523b20a34f02fb59aa21e487dd207c8cb6e29ac571d89cd75c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5907422cf6b36b8d77741d339e507eee8a26c62b51c03458fbab243363441b49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5907422cf6b36b8d77741d339e507eee8a26c62b51c03458fbab243363441b49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdba082661b1dd38d5628fedac7afcea2e2960d502c61b9c21c723c0f6be8dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdba082661b1dd38d5628fedac7afcea2e2960d502c61b9c21c723c0f6be8dab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:33Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afdd138a15c3949f3eb5cdcac4d372fe8db3f1822f9fa33eb7f211ba93764903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afdd138a15c3949f3eb5cdcac4d372fe8db3f1822f9fa33eb7f211ba93764903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09ba0a78226b9b3b3b12202389168f7e17a1f2ba992a22dc9d436d29aac0d327\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09ba0a78226b9b3b3b12202389168f7e17a1f2ba992a22dc9d436d29aac0d327\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cxvtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:49Z is after 
2025-08-24T17:21:41Z" Dec 06 05:28:49 crc kubenswrapper[4958]: I1206 05:28:49.690664 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c75c3b8-96d9-442e-b3c4-92d10ad33929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c7cf12b8a7e5da199c6d8c5bdeb41243c911cef6cab0bac81e407502a8ff144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6e7454c626f79f7ef231e33e2917e2a65265fc21ccf0bb078c9b7f5b3cec240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f
87617d1f4b9c7ad893a539ddf123f3e213d38e44c084441f565d6535078c7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f6d7aa9cdfe8211f420e0a25851c58e125bd09b9d90ad4724703cc1112e45b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://473d669d19d2507144f95718dc331ea994a36254f7f7b6e2b262a0e9cb46bfc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ac1117aba254b7e048d9dd3feff2587d218f6d7a6f4ccbe455eb2b19e534aa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db6efff3f2e029827d40e9e04671f6c8548165f0cc1b89261b7e826956c1b397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db6efff3f2e029827d40e9e04671f6c8548165f0cc1b89261b7e826956c1b397\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T05:28:47Z\\\",\\\"message\\\":\\\" Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1206 05:28:46.902286 6355 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1206 05:28:46.902303 6355 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nI1206 05:28:46.902324 6355 obj_retry.go:303] Retry object setup: *v1.Pod openshift-etcd/etcd-crc\\\\nI1206 05:28:46.902340 6355 obj_retry.go:365] Adding new object: *v1.Pod openshift-etcd/etcd-crc\\\\nI1206 05:28:46.902344 6355 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-source-55646444c4-trplf] creating logical port openshift-network-diagnostics_network-check-source-55646444c4-trplf for pod on switch crc\\\\nI1206 05:28:46.902355 6355 ovn.go:134] Ensuring zone local for Pod openshift-etcd/etcd-crc in node crc\\\\nI1206 05:28:46.902367 6355 obj_retry.go:386] Retry successful for *v1.Pod openshift-etcd/etcd-crc after 0 failed attempt(s)\\\\nI1206 05:28:46.902377 6355 default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nF1206 05:28:46.902429 6355 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-f4swt_openshift-ovn-kubernetes(4c75c3b8-96d9-442e-b3c4-92d10ad33929)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90354a506bfb2c066060aaa6c9c48c8fe36acc7b3fb7f76905c7aa2622c8c93e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f16219fb96bfa08d1c4e1fbacac446f07ea75a221d1a6a4eddadd9e587f75ea9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f16219fb96bfa08d1c4e1fbacac446f07ea75a221d1a6a4eddadd9e587f75ea9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-f4swt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:49Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:49 crc kubenswrapper[4958]: I1206 05:28:49.707141 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kb98t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c09fca2-7d91-412a-9814-64370d35b3e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pscv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pscv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kb98t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:49Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:49 crc kubenswrapper[4958]: I1206 05:28:49.721447 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:49 crc kubenswrapper[4958]: I1206 05:28:49.721555 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:49 crc kubenswrapper[4958]: I1206 05:28:49.721575 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:49 crc kubenswrapper[4958]: I1206 05:28:49.721602 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:49 crc kubenswrapper[4958]: I1206 05:28:49.721621 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:49Z","lastTransitionTime":"2025-12-06T05:28:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:49 crc kubenswrapper[4958]: I1206 05:28:49.761189 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 05:28:49 crc kubenswrapper[4958]: I1206 05:28:49.761196 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 05:28:49 crc kubenswrapper[4958]: I1206 05:28:49.761341 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 05:28:49 crc kubenswrapper[4958]: E1206 05:28:49.761386 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 05:28:49 crc kubenswrapper[4958]: I1206 05:28:49.761196 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kb98t" Dec 06 05:28:49 crc kubenswrapper[4958]: E1206 05:28:49.761509 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 05:28:49 crc kubenswrapper[4958]: E1206 05:28:49.761594 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 05:28:49 crc kubenswrapper[4958]: E1206 05:28:49.761738 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kb98t" podUID="2c09fca2-7d91-412a-9814-64370d35b3e9" Dec 06 05:28:49 crc kubenswrapper[4958]: I1206 05:28:49.780508 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:49Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:49 crc kubenswrapper[4958]: I1206 05:28:49.796155 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12ef2ec44908dbfbd456d6a97cb50054adb79c14bc062daacdd1be9ec59404b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:49Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:49 crc kubenswrapper[4958]: I1206 05:28:49.810329 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5mx5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ab56cd4-7270-4252-b6e6-cbc102b84d97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16ff54e7650f6ee3df33d0394fb87b8021695f1697bdf30239e59ab458bea64c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-448jg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5mx5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:49Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:49 crc kubenswrapper[4958]: I1206 05:28:49.824064 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:49 crc kubenswrapper[4958]: I1206 05:28:49.824112 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:49 crc kubenswrapper[4958]: I1206 05:28:49.824128 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:49 crc kubenswrapper[4958]: I1206 05:28:49.824149 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:49 crc kubenswrapper[4958]: I1206 05:28:49.824166 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:49Z","lastTransitionTime":"2025-12-06T05:28:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:49 crc kubenswrapper[4958]: I1206 05:28:49.853520 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wr7h5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24bb15c5aabe30ec2b8cea2cfd92c4a99c8e2170aecfe6cf33f745ecccf32c42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-778ks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\
\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wr7h5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:49Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:49 crc kubenswrapper[4958]: I1206 05:28:49.883641 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bbf3a5d-dfb7-4f26-a3a5-5a1198cffc78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ac8c174d2f2cc2b18fa67b1969e4ec3130a7a3ea2e1de302b6607334c8295a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a49f9fd54866385a2dd4c0218a2b5a828817807c2688301ea29004ba0578d556\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://799383b7dc23318bd8bdb994e83afadc90864baf5e694f5a7ef2208c6771660c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c9829c79fbd1ef57f374e8bf829162c2ae48074f4d02cc609d5f9b8791c61a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b2ffa013378c2ca81671e521b37d30af422fd6a418d17e7c2ca14b2da5557d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4001890a8c107a9365e884bae7b1d10f11ea1208ada74b61dcd4cf3db767f3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4001890a8c107a9365e884bae7b1d10f11ea1208ada74b61dcd4cf3db767f3fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-06T05:28:49Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:49 crc kubenswrapper[4958]: I1206 05:28:49.898975 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e7d0d99-10e2-4254-82b5-5490222f80fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f512fa9922310920769bab096c2bb8b2305a17dded545923c16b5e417db50b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dddeac64993fef3f326c165eeeb95bb4b5b9961f07f2b935e592b5aac9a51a32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50e80d39fcfd69db193b9e6c98c92a60983dd0c62b061a69f045193110d159c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}
,{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc69c938c136778135bd2a90d1671af107b76c4571f8e52327c98f120665d7b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:49Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:49 crc kubenswrapper[4958]: I1206 05:28:49.918258 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c13528c0-da5d-4d55-9155-2c29c33edfc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5cd9b0b70572f52b50e426a747be79d9f46795a9f5e890f4c6837ba67a34188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd2256c5c47e4b4d5c0d27e0a687c531d3d4e2162c3439a57acb90784599b441\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5ktnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:49Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:49 crc kubenswrapper[4958]: I1206 05:28:49.926592 4958 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:49 crc kubenswrapper[4958]: I1206 05:28:49.926637 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:49 crc kubenswrapper[4958]: I1206 05:28:49.926649 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:49 crc kubenswrapper[4958]: I1206 05:28:49.926667 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:49 crc kubenswrapper[4958]: I1206 05:28:49.926678 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:49Z","lastTransitionTime":"2025-12-06T05:28:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:49 crc kubenswrapper[4958]: I1206 05:28:49.937078 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cxvtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48b90387-4ea9-47a3-8aa1-b8b19c5ed186\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce9f0a37477909d730255694146829d5d4527e3c3c3928bdd475b58506bbd544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d3ce1b34e648dbc34656fb0430468d6a998992cdb52f7858c3e9032781cabf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2
c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d3ce1b34e648dbc34656fb0430468d6a998992cdb52f7858c3e9032781cabf3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45259e88c3c2b9523b20a34f02fb59aa21e487dd207c8cb6e29ac571d89cd75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45259e88c3c2b9523b20a34f02fb59aa21e487dd207c8cb6e29ac571d89cd75c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5907422cf6b36b8d77741d339e507eee8a26c62b51c03458fbab243363441b49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5907422cf6b36b8d77741d339e507eee8a26c62b51c03458fbab243363441b49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/
secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdba082661b1dd38d5628fedac7afcea2e2960d502c61b9c21c723c0f6be8dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdba082661b1dd38d5628fedac7afcea2e2960d502c61b9c21c723c0f6be8dab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afdd138a15c3949f3eb5cdcac4d372fe8db3f1822f9fa33eb7f211ba93764903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afdd138a15c3949f3eb5cdcac4d372fe8db3f1822f9fa33eb7f211ba93764903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09ba0a78226b9b3b3b12202389168f7e17a1f2ba992a22dc9d436d29aac0d327\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09ba0a78226b9b3b3b12202389168f7e17a1f2ba992a22dc9d436d29aac0d327\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:35Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cxvtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:49Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:49 crc kubenswrapper[4958]: I1206 05:28:49.958073 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c75c3b8-96d9-442e-b3c4-92d10ad33929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c7cf12b8a7e5da199c6d8c5bdeb41243c911cef6cab0bac81e407502a8ff144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6e7454c626f79f7ef231e33e2917e2a65265fc21ccf0bb078c9b7f5b3cec240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f87617d1f4b9c7ad893a539ddf123f3e213d38e44c084441f565d6535078c7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f6d7aa9cdfe8211f420e0a25851c58e125bd09b9d90ad4724703cc1112e45b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://473d669d19d2507144f95718dc331ea994a36254f7f7b6e2b262a0e9cb46bfc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ac1117aba254b7e048d9dd3feff2587d218f6d7a6f4ccbe455eb2b19e534aa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db6efff3f2e029827d40e9e04671f6c8548165f0
cc1b89261b7e826956c1b397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db6efff3f2e029827d40e9e04671f6c8548165f0cc1b89261b7e826956c1b397\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T05:28:47Z\\\",\\\"message\\\":\\\" Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1206 05:28:46.902286 6355 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1206 05:28:46.902303 6355 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nI1206 05:28:46.902324 6355 obj_retry.go:303] Retry object setup: *v1.Pod openshift-etcd/etcd-crc\\\\nI1206 05:28:46.902340 6355 obj_retry.go:365] Adding new object: *v1.Pod openshift-etcd/etcd-crc\\\\nI1206 05:28:46.902344 6355 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-source-55646444c4-trplf] creating logical port openshift-network-diagnostics_network-check-source-55646444c4-trplf for pod on switch crc\\\\nI1206 05:28:46.902355 6355 ovn.go:134] Ensuring zone local for Pod openshift-etcd/etcd-crc in node crc\\\\nI1206 05:28:46.902367 6355 obj_retry.go:386] Retry successful for *v1.Pod openshift-etcd/etcd-crc after 0 failed attempt(s)\\\\nI1206 05:28:46.902377 6355 default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nF1206 05:28:46.902429 6355 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-f4swt_openshift-ovn-kubernetes(4c75c3b8-96d9-442e-b3c4-92d10ad33929)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90354a506bfb2c066060aaa6c9c48c8fe36acc7b3fb7f76905c7aa2622c8c93e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f16219fb96bfa08d1c4e1fbacac446f07ea75a221d1a6a4eddadd9e587f75ea9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f16219fb96bfa08d1c4e1fbacac446f07ea75a221d1a6a4eddadd9e587f75ea9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-f4swt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:49Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:49 crc kubenswrapper[4958]: I1206 05:28:49.973018 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22338e19d740d76b4395405aae08f7cc69ca43dcf680a2bac1323c989e74aa02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:49Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:49 crc kubenswrapper[4958]: I1206 05:28:49.982623 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:49Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:49 crc kubenswrapper[4958]: I1206 05:28:49.992983 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85e394f19d7c9a454876e48621768d9f956790321aeea3140db17586f532e28c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7cba9186696fbc52dacdd8c02c28a44f8e4f11e2bd39065212cddf3acdd7f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:49Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:50 crc kubenswrapper[4958]: I1206 05:28:50.001390 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kb98t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c09fca2-7d91-412a-9814-64370d35b3e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pscv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pscv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kb98t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:49Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:50 crc kubenswrapper[4958]: I1206 05:28:50.023250 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3942544-663d-452b-851f-7ae1951315d0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c605d49963ffec3a03f9166f2a4b945adca772c1e6f9ef0c43e3b43498bb520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fa2e5229064a9479f641c0c6d934e8eda769ee88be13401c43ae136ce8bc24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0fa5b094aed0445fa892dd4d08fa83de565bda82c6ca9b9d536a3275df19b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9356ce9f50a037c7bbb99085c8d3a1cf6648fca
53bed4eb86d1be5cb82386bfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bb5270cbb6f2e29e1393dc0724b38332f5d2b7bd2947352c848a22efbcd979c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4460d6ee5ad193a5fdd65c4ddf2e2837db3e1a0a5f8f32b84ad83cc74cc6c779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4460d6ee5ad193a5fdd65c4ddf2e2837db3e1a0a5f8f32b84ad83cc74cc6c779\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d0adb65980c4e04df9b38b248a2d6bf58ac214ec2fe6ee6f8fbee38dc114904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d0adb65980c4e04df9b38b248a2d6bf58ac214ec2fe6ee6f8fbee38dc114904\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b4a1b86d7480be2b839295c4c7dc60abd79e17d12514d7a7f90453498dd586e5\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4a1b86d7480be2b839295c4c7dc60abd79e17d12514d7a7f90453498dd586e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:50Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:50 crc kubenswrapper[4958]: I1206 05:28:50.028701 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:50 crc kubenswrapper[4958]: I1206 05:28:50.028725 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:50 crc kubenswrapper[4958]: I1206 05:28:50.028733 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:50 crc kubenswrapper[4958]: I1206 05:28:50.028745 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:50 crc kubenswrapper[4958]: I1206 05:28:50.028755 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:50Z","lastTransitionTime":"2025-12-06T05:28:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:28:50 crc kubenswrapper[4958]: I1206 05:28:50.037957 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:50Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:50 crc kubenswrapper[4958]: I1206 05:28:50.047031 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bxxwq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"802ce14e-baaf-4d0d-87e3-3457209d8cda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://691435e89fcfa5b2dbd5ee54379f791a3e4f4df43c500a83bc61f7b20aca7ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlz6h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bxxwq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:50Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:50 crc kubenswrapper[4958]: I1206 05:28:50.057651 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-j25xk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f69a7b70-83e2-4775-872c-cc414f3d08ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3077eaf7e49467b9443569425f7666246ccd40207193f35407bd5ca69799be6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qktwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c146f2ee32c0a80c9bb07f7b61d094a36ba1bdd250b578479193dc8e50dc7cad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qktwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-j25xk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:50Z is after 2025-08-24T17:21:41Z" Dec 06 
05:28:50 crc kubenswrapper[4958]: I1206 05:28:50.131275 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 05:28:50 crc kubenswrapper[4958]: I1206 05:28:50.131315 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 05:28:50 crc kubenswrapper[4958]: I1206 05:28:50.131327 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 05:28:50 crc kubenswrapper[4958]: I1206 05:28:50.131343 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 05:28:50 crc kubenswrapper[4958]: I1206 05:28:50.131357 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:50Z","lastTransitionTime":"2025-12-06T05:28:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[... the same five-record burst repeats at 05:28:50.236, 05:28:50.339, 05:28:50.441, 05:28:50.544, 05:28:50.646, 05:28:50.749, 05:28:50.852, 05:28:50.955 and 05:28:51.058; only the timestamps differ ...]
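The burst above is the kubelet's node-status sync loop: roughly every 100 ms it re-derives the Ready condition, and the container runtime keeps answering NetworkReady=false because nothing has yet written a CNI config into /etc/kubernetes/cni/net.d/. A minimal stand-alone sketch of that readiness gate in Go, assuming the conf dir from the log and the standard CNI config extensions; this is an illustration of the check behind the message, not kubelet's actual code:

```go
// Sketch of the readiness gate behind "no CNI configuration file in
// /etc/kubernetes/cni/net.d/": network readiness is only reported once a
// usable network config appears in the CNI conf dir.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// hasCNIConfig reports whether confDir contains at least one CNI network
// configuration file (.conf, .conflist or .json) — the condition whose
// absence produces the NetworkPluginNotReady records in this log.
func hasCNIConfig(confDir string) (bool, error) {
	entries, err := os.ReadDir(confDir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ready, err := hasCNIConfig("/etc/kubernetes/cni/net.d")
	if err != nil || !ready {
		// Mirrors the condition the kubelet keeps publishing above.
		fmt.Println("NetworkReady=false reason:NetworkPluginNotReady err:", err)
		return
	}
	fmt.Println("NetworkReady=true")
}
```

Once the network plugin (OVN-Kubernetes on this cluster) writes its config into that directory, the same check flips to true and the NodeNotReady bursts stop.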
Dec 06 05:28:51 crc kubenswrapper[4958]: I1206 05:28:51.071443 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2c09fca2-7d91-412a-9814-64370d35b3e9-metrics-certs\") pod \"network-metrics-daemon-kb98t\" (UID: \"2c09fca2-7d91-412a-9814-64370d35b3e9\") " pod="openshift-multus/network-metrics-daemon-kb98t"
Dec 06 05:28:51 crc kubenswrapper[4958]: E1206 05:28:51.071680 4958 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 06 05:28:51 crc kubenswrapper[4958]: E1206 05:28:51.071806 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2c09fca2-7d91-412a-9814-64370d35b3e9-metrics-certs podName:2c09fca2-7d91-412a-9814-64370d35b3e9 nodeName:}" failed. No retries permitted until 2025-12-06 05:28:59.071772305 +0000 UTC m=+49.605543108 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2c09fca2-7d91-412a-9814-64370d35b3e9-metrics-certs") pod "network-metrics-daemon-kb98t" (UID: "2c09fca2-7d91-412a-9814-64370d35b3e9") : object "openshift-multus"/"metrics-daemon-secret" not registered
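The three records above show why the metrics-certs mount goes quiet for stretches: the volume manager's nestedpendingoperations layer refuses immediate retries and backs off exponentially — durationBeforeRetry has already grown to 8 s, so the next attempt is not permitted before 05:28:59. A sketch of that retry shape; the initial delay and the cap are illustrative assumptions, not values read out of this kubelet:

```go
// Sketch of per-operation exponential backoff like the one visible in the
// nestedpendingoperations record: each failed MountVolume.SetUp roughly
// doubles the wait before the next attempt is permitted.
package main

import (
	"errors"
	"fmt"
	"time"
)

type expBackoff struct {
	delay time.Duration // wait before the next retry
	max   time.Duration // ceiling for the doubling
}

// next returns the current delay and doubles it for the following failure,
// capped at b.max.
func (b *expBackoff) next() time.Duration {
	d := b.delay
	b.delay *= 2
	if b.delay > b.max {
		b.delay = b.max
	}
	return d
}

func main() {
	mount := func() error { // stand-in for MountVolume.SetUp
		return errors.New(`object "openshift-multus"/"metrics-daemon-secret" not registered`)
	}
	b := &expBackoff{delay: 500 * time.Millisecond, max: 2 * time.Minute}
	for attempt := 1; attempt <= 5; attempt++ {
		if err := mount(); err != nil {
			d := b.next()
			fmt.Printf("attempt %d failed: %v; no retries permitted for %v\n", attempt, err, d)
			time.Sleep(d) // kubelet schedules the retry instead of blocking
		}
	}
}
```

Each failure doubles the wait, which is why these mount errors become progressively less frequent while the underlying secret stays unregistered.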
[... the NodeHasSufficientMemory/NodeHasNoDiskPressure/NodeHasSufficientPID/NodeNotReady burst repeats at 05:28:51.161, 05:28:51.264, 05:28:51.371, 05:28:51.473, 05:28:51.576 and 05:28:51.679 ...]
Dec 06 05:28:51 crc kubenswrapper[4958]: I1206 05:28:51.760994 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 06 05:28:51 crc kubenswrapper[4958]: I1206 05:28:51.761055 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 06 05:28:51 crc kubenswrapper[4958]: I1206 05:28:51.761122 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kb98t"
Dec 06 05:28:51 crc kubenswrapper[4958]: E1206 05:28:51.761184 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 06 05:28:51 crc kubenswrapper[4958]: I1206 05:28:51.761205 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 06 05:28:51 crc kubenswrapper[4958]: E1206 05:28:51.761353 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kb98t" podUID="2c09fca2-7d91-412a-9814-64370d35b3e9"
Dec 06 05:28:51 crc kubenswrapper[4958]: E1206 05:28:51.761668 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 06 05:28:51 crc kubenswrapper[4958]: E1206 05:28:51.761761 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
[... the heartbeat burst repeats at 05:28:51.781, 05:28:51.886, 05:28:51.989, 05:28:52.092, 05:28:52.194, 05:28:52.297, 05:28:52.400, 05:28:52.503, 05:28:52.607, 05:28:52.710, 05:28:52.814, 05:28:52.916, 05:28:53.018, 05:28:53.122 and 05:28:53.223; only the timestamps differ ...]
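Every failed status patch in this excerpt shares one root cause: the apiserver must consult the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743, and that webhook's serving certificate expired on 2025-08-24 while the node's clock reads 2025-12-06. The rejection is the ordinary x509 validity-window check; a small Go sketch of the same comparison (the certificate path here is hypothetical, chosen for illustration):

```go
// Sketch of the check that fails in every webhook call above: Go's TLS stack
// rejects a serving certificate whose validity window does not contain the
// current time ("x509: certificate has expired or is not yet valid").
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/tmp/webhook-serving.crt") // hypothetical path
	if err != nil {
		fmt.Println("read cert:", err)
		return
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Println("no PEM block found")
		return
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Println("parse cert:", err)
		return
	}
	now := time.Now()
	// Same comparison the verifier performs: here 2025-12-06T05:28:50Z is
	// after the NotAfter of 2025-08-24T17:21:41Z, so the handshake fails.
	if now.Before(cert.NotBefore) || now.After(cert.NotAfter) {
		fmt.Printf("certificate invalid: current time %s is outside [%s, %s]\n",
			now.UTC().Format(time.RFC3339),
			cert.NotBefore.Format(time.RFC3339),
			cert.NotAfter.Format(time.RFC3339))
		return
	}
	fmt.Println("certificate currently valid")
}
```

This pattern is typical of a CRC snapshot resumed months after its certificates were minted; until the certificate is rotated, every webhook-gated write — pod status and node status alike — fails the TLS handshake.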
Dec 06 05:28:53 crc kubenswrapper[4958]: E1206 05:28:53.243976 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:28:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:28:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:28:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:28:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ae601d9-1e3a-4939-b6d5-fbef7be2f380\\\",\\\"systemUUID\\\":\\\"d8c60597-c05a-4627-8199-844ddb77ec1c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:53Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:53 crc kubenswrapper[4958]: I1206 05:28:53.249931 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:53 crc kubenswrapper[4958]: I1206 05:28:53.249980 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 06 05:28:53 crc kubenswrapper[4958]: I1206 05:28:53.249992 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:53 crc kubenswrapper[4958]: I1206 05:28:53.250010 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:53 crc kubenswrapper[4958]: I1206 05:28:53.250022 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:53Z","lastTransitionTime":"2025-12-06T05:28:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:53 crc kubenswrapper[4958]: E1206 05:28:53.264457 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:28:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:28:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:28:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:28:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ae601d9-1e3a-4939-b6d5-fbef7be2f380\\\",\\\"systemUUID\\\":\\\"d8c60597-c05a-4627-8199-844ddb77ec1c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:53Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:53 crc kubenswrapper[4958]: I1206 05:28:53.268282 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:53 crc kubenswrapper[4958]: I1206 05:28:53.268351 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
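Every patch attempt above fails for the same root cause: the serving certificate of the network-node-identity webhook at https://127.0.0.1:9743 expired on 2025-08-24T17:21:41Z, while the node's clock reads 2025-12-06. The following is a minimal Go sketch of the same validity check the TLS handshake performs; the address and dates come from the log, the program itself is illustrative and not part of the kubelet.

// certcheck.go - probe the certificate the webhook serves and compare its
// validity window against the current time, mirroring the failing x509 check.
package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
		InsecureSkipVerify: true, // we only want to read the cert, not trust it
	})
	if err != nil {
		fmt.Println("dial:", err)
		return
	}
	defer conn.Close()

	cert := conn.ConnectionState().PeerCertificates[0]
	now := time.Now().UTC()
	fmt.Printf("subject:   %s\n", cert.Subject)
	fmt.Printf("notBefore: %s\n", cert.NotBefore.Format(time.RFC3339))
	fmt.Printf("notAfter:  %s\n", cert.NotAfter.Format(time.RFC3339))
	if now.After(cert.NotAfter) {
		// Matches the log: "current time ... is after 2025-08-24T17:21:41Z"
		fmt.Printf("EXPIRED: current time %s is after %s\n",
			now.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
	}
}

Run on the node itself, this would confirm whether the webhook endpoint is still presenting the expired certificate or has been rotated.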
event="NodeHasNoDiskPressure" Dec 06 05:28:53 crc kubenswrapper[4958]: I1206 05:28:53.268369 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:53 crc kubenswrapper[4958]: I1206 05:28:53.268395 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:53 crc kubenswrapper[4958]: I1206 05:28:53.268412 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:53Z","lastTransitionTime":"2025-12-06T05:28:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:53 crc kubenswrapper[4958]: E1206 05:28:53.288860 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:28:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:28:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:28:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:28:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ae601d9-1e3a-4939-b6d5-fbef7be2f380\\\",\\\"systemUUID\\\":\\\"d8c60597-c05a-4627-8199-844ddb77ec1c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:53Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:53 crc kubenswrapper[4958]: I1206 05:28:53.293180 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:53 crc kubenswrapper[4958]: I1206 05:28:53.293238 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
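Independently of the webhook failure, the Ready condition stays False because the container runtime finds no CNI network configuration. Below is a simplified sketch of that readiness test, assuming the directory named in the log (/etc/kubernetes/cni/net.d) and the standard libcni file extensions; the real runtime additionally parses and validates whatever it finds.

// cnicheck.go - simplified version of the check behind "no CNI configuration
// file in /etc/kubernetes/cni/net.d/": scan the conf directory for
// *.conf, *.conflist or *.json and treat the network as not ready until
// at least one file appears.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	confDir := "/etc/kubernetes/cni/net.d"
	var found []string
	for _, pattern := range []string{"*.conf", "*.conflist", "*.json"} {
		matches, err := filepath.Glob(filepath.Join(confDir, pattern))
		if err != nil {
			fmt.Println("glob:", err)
			os.Exit(1)
		}
		found = append(found, matches...)
	}
	if len(found) == 0 {
		// This is the state the kubelet keeps reporting above.
		fmt.Printf("NetworkReady=false: no CNI configuration file in %s\n", confDir)
		os.Exit(1)
	}
	fmt.Println("CNI configs:", found)
}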
event="NodeHasNoDiskPressure" Dec 06 05:28:53 crc kubenswrapper[4958]: I1206 05:28:53.293253 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:53 crc kubenswrapper[4958]: I1206 05:28:53.293276 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:53 crc kubenswrapper[4958]: I1206 05:28:53.293292 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:53Z","lastTransitionTime":"2025-12-06T05:28:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:53 crc kubenswrapper[4958]: E1206 05:28:53.310254 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:28:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:28:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:28:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:28:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ae601d9-1e3a-4939-b6d5-fbef7be2f380\\\",\\\"systemUUID\\\":\\\"d8c60597-c05a-4627-8199-844ddb77ec1c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:53Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:53 crc kubenswrapper[4958]: I1206 05:28:53.314732 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:53 crc kubenswrapper[4958]: I1206 05:28:53.314789 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
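Each retry embeds the same Ready condition object in its patch. The sketch below reconstructs that object with a local struct (a subset of Kubernetes' v1.NodeCondition, using plain strings for the timestamps purely for illustration); marshaling it reproduces, field for field, the condition block that setters.go:603 logs above.

// nodecond.go - illustrative reconstruction of the Ready condition the
// kubelet keeps trying to patch in the entries above.
package main

import (
	"encoding/json"
	"fmt"
)

type NodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	ready := NodeCondition{
		Type:               "Ready",
		Status:             "False",
		LastHeartbeatTime:  "2025-12-06T05:28:53Z",
		LastTransitionTime: "2025-12-06T05:28:53Z",
		Reason:             "KubeletNotReady",
		Message: "container runtime network not ready: NetworkReady=false " +
			"reason:NetworkPluginNotReady message:Network plugin returns error: " +
			"no CNI configuration file in /etc/kubernetes/cni/net.d/. " +
			"Has your network provider started?",
	}
	out, _ := json.Marshal(ready)
	fmt.Println(string(out))
}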
event="NodeHasNoDiskPressure" Dec 06 05:28:53 crc kubenswrapper[4958]: I1206 05:28:53.314806 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:53 crc kubenswrapper[4958]: I1206 05:28:53.314871 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:53 crc kubenswrapper[4958]: I1206 05:28:53.314889 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:53Z","lastTransitionTime":"2025-12-06T05:28:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:53 crc kubenswrapper[4958]: E1206 05:28:53.329008 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:28:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:28:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:28:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:28:53Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ae601d9-1e3a-4939-b6d5-fbef7be2f380\\\",\\\"systemUUID\\\":\\\"d8c60597-c05a-4627-8199-844ddb77ec1c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:53Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:53 crc kubenswrapper[4958]: E1206 05:28:53.329148 4958 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 06 05:28:53 crc kubenswrapper[4958]: I1206 05:28:53.330505 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
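After the fifth consecutive failure (05:28:53.24 through .329008 above) the kubelet gives up until the next sync period, which is the "update node status exceeds retry count" line. A minimal sketch of that bounded retry pattern follows; the upstream kubelet bounds the loop with a small constant (nodeStatusUpdateRetry, 5 at the time of writing), and tryUpdateNodeStatus here is only a stand-in for the PATCH the webhook keeps rejecting.

// retryloop.go - bounded retry shaped like the kubelet's node status update:
// five attempts, then a terminal "exceeds retry count" error.
package main

import (
	"errors"
	"fmt"
)

const nodeStatusUpdateRetry = 5

func tryUpdateNodeStatus(attempt int) error {
	// Stand-in for the status PATCH that the admission webhook rejects above.
	return errors.New("failed calling webhook: x509: certificate has expired")
}

func updateNodeStatus() error {
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		if err := tryUpdateNodeStatus(i); err != nil {
			fmt.Printf("Error updating node status, will retry: %v\n", err)
			continue
		}
		return nil
	}
	return fmt.Errorf("update node status exceeds retry count")
}

func main() {
	if err := updateNodeStatus(); err != nil {
		fmt.Println("Unable to update node status:", err)
	}
}

The five E-level "will retry" entries followed by one "Unable to update node status" entry in the log match this structure exactly.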
event="NodeHasSufficientMemory" Dec 06 05:28:53 crc kubenswrapper[4958]: I1206 05:28:53.330542 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:53 crc kubenswrapper[4958]: I1206 05:28:53.330554 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:53 crc kubenswrapper[4958]: I1206 05:28:53.330571 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:53 crc kubenswrapper[4958]: I1206 05:28:53.330582 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:53Z","lastTransitionTime":"2025-12-06T05:28:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:53 crc kubenswrapper[4958]: I1206 05:28:53.433862 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:53 crc kubenswrapper[4958]: I1206 05:28:53.433923 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:53 crc kubenswrapper[4958]: I1206 05:28:53.433947 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:53 crc kubenswrapper[4958]: I1206 05:28:53.433979 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:53 crc kubenswrapper[4958]: I1206 05:28:53.434005 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:53Z","lastTransitionTime":"2025-12-06T05:28:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:53 crc kubenswrapper[4958]: I1206 05:28:53.537367 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:53 crc kubenswrapper[4958]: I1206 05:28:53.537464 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:53 crc kubenswrapper[4958]: I1206 05:28:53.537513 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:53 crc kubenswrapper[4958]: I1206 05:28:53.537536 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:53 crc kubenswrapper[4958]: I1206 05:28:53.537553 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:53Z","lastTransitionTime":"2025-12-06T05:28:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:28:53 crc kubenswrapper[4958]: I1206 05:28:53.641789 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:53 crc kubenswrapper[4958]: I1206 05:28:53.641875 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:53 crc kubenswrapper[4958]: I1206 05:28:53.641898 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:53 crc kubenswrapper[4958]: I1206 05:28:53.641923 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:53 crc kubenswrapper[4958]: I1206 05:28:53.641945 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:53Z","lastTransitionTime":"2025-12-06T05:28:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:53 crc kubenswrapper[4958]: I1206 05:28:53.744963 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:53 crc kubenswrapper[4958]: I1206 05:28:53.745028 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:53 crc kubenswrapper[4958]: I1206 05:28:53.745052 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:53 crc kubenswrapper[4958]: I1206 05:28:53.745082 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:53 crc kubenswrapper[4958]: I1206 05:28:53.745107 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:53Z","lastTransitionTime":"2025-12-06T05:28:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:53 crc kubenswrapper[4958]: I1206 05:28:53.762209 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 05:28:53 crc kubenswrapper[4958]: E1206 05:28:53.765183 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 05:28:53 crc kubenswrapper[4958]: I1206 05:28:53.765273 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 05:28:53 crc kubenswrapper[4958]: I1206 05:28:53.765698 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kb98t" Dec 06 05:28:53 crc kubenswrapper[4958]: E1206 05:28:53.766043 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 05:28:53 crc kubenswrapper[4958]: I1206 05:28:53.766088 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 05:28:53 crc kubenswrapper[4958]: E1206 05:28:53.766370 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kb98t" podUID="2c09fca2-7d91-412a-9814-64370d35b3e9" Dec 06 05:28:53 crc kubenswrapper[4958]: E1206 05:28:53.766589 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 05:28:53 crc kubenswrapper[4958]: I1206 05:28:53.847892 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:53 crc kubenswrapper[4958]: I1206 05:28:53.847941 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:53 crc kubenswrapper[4958]: I1206 05:28:53.847957 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:53 crc kubenswrapper[4958]: I1206 05:28:53.847980 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:53 crc kubenswrapper[4958]: I1206 05:28:53.847997 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:53Z","lastTransitionTime":"2025-12-06T05:28:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:28:53 crc kubenswrapper[4958]: I1206 05:28:53.951003 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:53 crc kubenswrapper[4958]: I1206 05:28:53.951049 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:53 crc kubenswrapper[4958]: I1206 05:28:53.951060 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:53 crc kubenswrapper[4958]: I1206 05:28:53.951077 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:53 crc kubenswrapper[4958]: I1206 05:28:53.951090 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:53Z","lastTransitionTime":"2025-12-06T05:28:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:54 crc kubenswrapper[4958]: I1206 05:28:54.054044 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:54 crc kubenswrapper[4958]: I1206 05:28:54.054080 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:54 crc kubenswrapper[4958]: I1206 05:28:54.054093 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:54 crc kubenswrapper[4958]: I1206 05:28:54.054142 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:54 crc kubenswrapper[4958]: I1206 05:28:54.054158 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:54Z","lastTransitionTime":"2025-12-06T05:28:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:54 crc kubenswrapper[4958]: I1206 05:28:54.157670 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:54 crc kubenswrapper[4958]: I1206 05:28:54.157747 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:54 crc kubenswrapper[4958]: I1206 05:28:54.157771 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:54 crc kubenswrapper[4958]: I1206 05:28:54.157804 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:54 crc kubenswrapper[4958]: I1206 05:28:54.157828 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:54Z","lastTransitionTime":"2025-12-06T05:28:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:28:54 crc kubenswrapper[4958]: I1206 05:28:54.260935 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:54 crc kubenswrapper[4958]: I1206 05:28:54.260983 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:54 crc kubenswrapper[4958]: I1206 05:28:54.261006 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:54 crc kubenswrapper[4958]: I1206 05:28:54.261036 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:54 crc kubenswrapper[4958]: I1206 05:28:54.261064 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:54Z","lastTransitionTime":"2025-12-06T05:28:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:54 crc kubenswrapper[4958]: I1206 05:28:54.364631 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:54 crc kubenswrapper[4958]: I1206 05:28:54.364695 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:54 crc kubenswrapper[4958]: I1206 05:28:54.364712 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:54 crc kubenswrapper[4958]: I1206 05:28:54.364741 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:54 crc kubenswrapper[4958]: I1206 05:28:54.364758 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:54Z","lastTransitionTime":"2025-12-06T05:28:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:54 crc kubenswrapper[4958]: I1206 05:28:54.467793 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:54 crc kubenswrapper[4958]: I1206 05:28:54.467858 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:54 crc kubenswrapper[4958]: I1206 05:28:54.467877 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:54 crc kubenswrapper[4958]: I1206 05:28:54.467901 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:54 crc kubenswrapper[4958]: I1206 05:28:54.467917 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:54Z","lastTransitionTime":"2025-12-06T05:28:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:28:54 crc kubenswrapper[4958]: I1206 05:28:54.571033 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:54 crc kubenswrapper[4958]: I1206 05:28:54.571112 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:54 crc kubenswrapper[4958]: I1206 05:28:54.571137 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:54 crc kubenswrapper[4958]: I1206 05:28:54.571168 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:54 crc kubenswrapper[4958]: I1206 05:28:54.571192 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:54Z","lastTransitionTime":"2025-12-06T05:28:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:54 crc kubenswrapper[4958]: I1206 05:28:54.673710 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:54 crc kubenswrapper[4958]: I1206 05:28:54.673764 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:54 crc kubenswrapper[4958]: I1206 05:28:54.673775 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:54 crc kubenswrapper[4958]: I1206 05:28:54.673791 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:54 crc kubenswrapper[4958]: I1206 05:28:54.673801 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:54Z","lastTransitionTime":"2025-12-06T05:28:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:54 crc kubenswrapper[4958]: I1206 05:28:54.776073 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:54 crc kubenswrapper[4958]: I1206 05:28:54.776115 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:54 crc kubenswrapper[4958]: I1206 05:28:54.776128 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:54 crc kubenswrapper[4958]: I1206 05:28:54.776143 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:54 crc kubenswrapper[4958]: I1206 05:28:54.776152 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:54Z","lastTransitionTime":"2025-12-06T05:28:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:28:54 crc kubenswrapper[4958]: I1206 05:28:54.879224 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:54 crc kubenswrapper[4958]: I1206 05:28:54.879289 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:54 crc kubenswrapper[4958]: I1206 05:28:54.879300 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:54 crc kubenswrapper[4958]: I1206 05:28:54.879320 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:54 crc kubenswrapper[4958]: I1206 05:28:54.879331 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:54Z","lastTransitionTime":"2025-12-06T05:28:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:54 crc kubenswrapper[4958]: I1206 05:28:54.982282 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:54 crc kubenswrapper[4958]: I1206 05:28:54.982348 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:54 crc kubenswrapper[4958]: I1206 05:28:54.982365 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:54 crc kubenswrapper[4958]: I1206 05:28:54.982390 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:54 crc kubenswrapper[4958]: I1206 05:28:54.982412 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:54Z","lastTransitionTime":"2025-12-06T05:28:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:55 crc kubenswrapper[4958]: I1206 05:28:55.084499 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:55 crc kubenswrapper[4958]: I1206 05:28:55.084537 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:55 crc kubenswrapper[4958]: I1206 05:28:55.084547 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:55 crc kubenswrapper[4958]: I1206 05:28:55.084562 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:55 crc kubenswrapper[4958]: I1206 05:28:55.084572 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:55Z","lastTransitionTime":"2025-12-06T05:28:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:28:55 crc kubenswrapper[4958]: I1206 05:28:55.187530 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:55 crc kubenswrapper[4958]: I1206 05:28:55.187610 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:55 crc kubenswrapper[4958]: I1206 05:28:55.187635 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:55 crc kubenswrapper[4958]: I1206 05:28:55.187665 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:55 crc kubenswrapper[4958]: I1206 05:28:55.187687 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:55Z","lastTransitionTime":"2025-12-06T05:28:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:55 crc kubenswrapper[4958]: I1206 05:28:55.290308 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:55 crc kubenswrapper[4958]: I1206 05:28:55.290368 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:55 crc kubenswrapper[4958]: I1206 05:28:55.290385 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:55 crc kubenswrapper[4958]: I1206 05:28:55.290409 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:55 crc kubenswrapper[4958]: I1206 05:28:55.290426 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:55Z","lastTransitionTime":"2025-12-06T05:28:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:55 crc kubenswrapper[4958]: I1206 05:28:55.393405 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:55 crc kubenswrapper[4958]: I1206 05:28:55.393509 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:55 crc kubenswrapper[4958]: I1206 05:28:55.393531 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:55 crc kubenswrapper[4958]: I1206 05:28:55.393557 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:55 crc kubenswrapper[4958]: I1206 05:28:55.393614 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:55Z","lastTransitionTime":"2025-12-06T05:28:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:28:55 crc kubenswrapper[4958]: I1206 05:28:55.496967 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:55 crc kubenswrapper[4958]: I1206 05:28:55.497049 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:55 crc kubenswrapper[4958]: I1206 05:28:55.497070 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:55 crc kubenswrapper[4958]: I1206 05:28:55.497095 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:55 crc kubenswrapper[4958]: I1206 05:28:55.497114 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:55Z","lastTransitionTime":"2025-12-06T05:28:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:55 crc kubenswrapper[4958]: I1206 05:28:55.600031 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:55 crc kubenswrapper[4958]: I1206 05:28:55.600113 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:55 crc kubenswrapper[4958]: I1206 05:28:55.600133 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:55 crc kubenswrapper[4958]: I1206 05:28:55.600158 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:55 crc kubenswrapper[4958]: I1206 05:28:55.600175 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:55Z","lastTransitionTime":"2025-12-06T05:28:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:55 crc kubenswrapper[4958]: I1206 05:28:55.703178 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:55 crc kubenswrapper[4958]: I1206 05:28:55.703245 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:55 crc kubenswrapper[4958]: I1206 05:28:55.703261 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:55 crc kubenswrapper[4958]: I1206 05:28:55.703281 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:55 crc kubenswrapper[4958]: I1206 05:28:55.703295 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:55Z","lastTransitionTime":"2025-12-06T05:28:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:28:55 crc kubenswrapper[4958]: I1206 05:28:55.761339 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 05:28:55 crc kubenswrapper[4958]: I1206 05:28:55.761462 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kb98t" Dec 06 05:28:55 crc kubenswrapper[4958]: E1206 05:28:55.761615 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 05:28:55 crc kubenswrapper[4958]: I1206 05:28:55.761638 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 05:28:55 crc kubenswrapper[4958]: I1206 05:28:55.761665 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 05:28:55 crc kubenswrapper[4958]: E1206 05:28:55.761820 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kb98t" podUID="2c09fca2-7d91-412a-9814-64370d35b3e9" Dec 06 05:28:55 crc kubenswrapper[4958]: E1206 05:28:55.761984 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 05:28:55 crc kubenswrapper[4958]: E1206 05:28:55.762169 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 05:28:55 crc kubenswrapper[4958]: I1206 05:28:55.805827 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:55 crc kubenswrapper[4958]: I1206 05:28:55.805861 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:55 crc kubenswrapper[4958]: I1206 05:28:55.805874 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:55 crc kubenswrapper[4958]: I1206 05:28:55.805889 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:55 crc kubenswrapper[4958]: I1206 05:28:55.805900 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:55Z","lastTransitionTime":"2025-12-06T05:28:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:55 crc kubenswrapper[4958]: I1206 05:28:55.908795 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:55 crc kubenswrapper[4958]: I1206 05:28:55.908934 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:55 crc kubenswrapper[4958]: I1206 05:28:55.908956 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:55 crc kubenswrapper[4958]: I1206 05:28:55.908984 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:55 crc kubenswrapper[4958]: I1206 05:28:55.909043 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:55Z","lastTransitionTime":"2025-12-06T05:28:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:28:56 crc kubenswrapper[4958]: I1206 05:28:56.012141 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:56 crc kubenswrapper[4958]: I1206 05:28:56.012214 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:56 crc kubenswrapper[4958]: I1206 05:28:56.012232 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:56 crc kubenswrapper[4958]: I1206 05:28:56.012257 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:56 crc kubenswrapper[4958]: I1206 05:28:56.012278 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:56Z","lastTransitionTime":"2025-12-06T05:28:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:56 crc kubenswrapper[4958]: I1206 05:28:56.115663 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:56 crc kubenswrapper[4958]: I1206 05:28:56.115806 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:56 crc kubenswrapper[4958]: I1206 05:28:56.115829 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:56 crc kubenswrapper[4958]: I1206 05:28:56.115856 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:56 crc kubenswrapper[4958]: I1206 05:28:56.115878 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:56Z","lastTransitionTime":"2025-12-06T05:28:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:56 crc kubenswrapper[4958]: I1206 05:28:56.218725 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:56 crc kubenswrapper[4958]: I1206 05:28:56.218764 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:56 crc kubenswrapper[4958]: I1206 05:28:56.218776 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:56 crc kubenswrapper[4958]: I1206 05:28:56.218790 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:56 crc kubenswrapper[4958]: I1206 05:28:56.218800 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:56Z","lastTransitionTime":"2025-12-06T05:28:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:28:56 crc kubenswrapper[4958]: I1206 05:28:56.321919 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:56 crc kubenswrapper[4958]: I1206 05:28:56.321984 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:56 crc kubenswrapper[4958]: I1206 05:28:56.322005 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:56 crc kubenswrapper[4958]: I1206 05:28:56.322034 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:56 crc kubenswrapper[4958]: I1206 05:28:56.322055 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:56Z","lastTransitionTime":"2025-12-06T05:28:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:56 crc kubenswrapper[4958]: I1206 05:28:56.425298 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:56 crc kubenswrapper[4958]: I1206 05:28:56.425345 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:56 crc kubenswrapper[4958]: I1206 05:28:56.425358 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:56 crc kubenswrapper[4958]: I1206 05:28:56.425377 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:56 crc kubenswrapper[4958]: I1206 05:28:56.425390 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:56Z","lastTransitionTime":"2025-12-06T05:28:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:56 crc kubenswrapper[4958]: I1206 05:28:56.528088 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:56 crc kubenswrapper[4958]: I1206 05:28:56.528135 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:56 crc kubenswrapper[4958]: I1206 05:28:56.528148 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:56 crc kubenswrapper[4958]: I1206 05:28:56.528168 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:56 crc kubenswrapper[4958]: I1206 05:28:56.528181 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:56Z","lastTransitionTime":"2025-12-06T05:28:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:28:56 crc kubenswrapper[4958]: I1206 05:28:56.631091 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:56 crc kubenswrapper[4958]: I1206 05:28:56.631439 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:56 crc kubenswrapper[4958]: I1206 05:28:56.631620 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:56 crc kubenswrapper[4958]: I1206 05:28:56.631828 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:56 crc kubenswrapper[4958]: I1206 05:28:56.631958 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:56Z","lastTransitionTime":"2025-12-06T05:28:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:56 crc kubenswrapper[4958]: I1206 05:28:56.735297 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:56 crc kubenswrapper[4958]: I1206 05:28:56.735702 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:56 crc kubenswrapper[4958]: I1206 05:28:56.735980 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:56 crc kubenswrapper[4958]: I1206 05:28:56.736135 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:56 crc kubenswrapper[4958]: I1206 05:28:56.736259 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:56Z","lastTransitionTime":"2025-12-06T05:28:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:56 crc kubenswrapper[4958]: I1206 05:28:56.839665 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:56 crc kubenswrapper[4958]: I1206 05:28:56.839756 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:56 crc kubenswrapper[4958]: I1206 05:28:56.839779 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:56 crc kubenswrapper[4958]: I1206 05:28:56.839811 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:56 crc kubenswrapper[4958]: I1206 05:28:56.839834 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:56Z","lastTransitionTime":"2025-12-06T05:28:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:28:56 crc kubenswrapper[4958]: I1206 05:28:56.943151 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:56 crc kubenswrapper[4958]: I1206 05:28:56.943211 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:56 crc kubenswrapper[4958]: I1206 05:28:56.943281 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:56 crc kubenswrapper[4958]: I1206 05:28:56.943317 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:56 crc kubenswrapper[4958]: I1206 05:28:56.943336 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:56Z","lastTransitionTime":"2025-12-06T05:28:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:57 crc kubenswrapper[4958]: I1206 05:28:57.046259 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:57 crc kubenswrapper[4958]: I1206 05:28:57.046320 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:57 crc kubenswrapper[4958]: I1206 05:28:57.046338 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:57 crc kubenswrapper[4958]: I1206 05:28:57.046364 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:57 crc kubenswrapper[4958]: I1206 05:28:57.046385 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:57Z","lastTransitionTime":"2025-12-06T05:28:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:57 crc kubenswrapper[4958]: I1206 05:28:57.149830 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:57 crc kubenswrapper[4958]: I1206 05:28:57.149891 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:57 crc kubenswrapper[4958]: I1206 05:28:57.149916 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:57 crc kubenswrapper[4958]: I1206 05:28:57.149943 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:57 crc kubenswrapper[4958]: I1206 05:28:57.149962 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:57Z","lastTransitionTime":"2025-12-06T05:28:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:28:57 crc kubenswrapper[4958]: I1206 05:28:57.253460 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:57 crc kubenswrapper[4958]: I1206 05:28:57.253561 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:57 crc kubenswrapper[4958]: I1206 05:28:57.253579 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:57 crc kubenswrapper[4958]: I1206 05:28:57.253605 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:57 crc kubenswrapper[4958]: I1206 05:28:57.253622 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:57Z","lastTransitionTime":"2025-12-06T05:28:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:57 crc kubenswrapper[4958]: I1206 05:28:57.356425 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:57 crc kubenswrapper[4958]: I1206 05:28:57.356533 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:57 crc kubenswrapper[4958]: I1206 05:28:57.356572 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:57 crc kubenswrapper[4958]: I1206 05:28:57.356603 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:57 crc kubenswrapper[4958]: I1206 05:28:57.356636 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:57Z","lastTransitionTime":"2025-12-06T05:28:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:57 crc kubenswrapper[4958]: I1206 05:28:57.458934 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:57 crc kubenswrapper[4958]: I1206 05:28:57.458985 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:57 crc kubenswrapper[4958]: I1206 05:28:57.459004 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:57 crc kubenswrapper[4958]: I1206 05:28:57.459022 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:57 crc kubenswrapper[4958]: I1206 05:28:57.459035 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:57Z","lastTransitionTime":"2025-12-06T05:28:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:28:57 crc kubenswrapper[4958]: I1206 05:28:57.560556 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:57 crc kubenswrapper[4958]: I1206 05:28:57.560602 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:57 crc kubenswrapper[4958]: I1206 05:28:57.560613 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:57 crc kubenswrapper[4958]: I1206 05:28:57.560630 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:57 crc kubenswrapper[4958]: I1206 05:28:57.560644 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:57Z","lastTransitionTime":"2025-12-06T05:28:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:57 crc kubenswrapper[4958]: I1206 05:28:57.663667 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:57 crc kubenswrapper[4958]: I1206 05:28:57.663729 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:57 crc kubenswrapper[4958]: I1206 05:28:57.663747 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:57 crc kubenswrapper[4958]: I1206 05:28:57.663774 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:57 crc kubenswrapper[4958]: I1206 05:28:57.663791 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:57Z","lastTransitionTime":"2025-12-06T05:28:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:57 crc kubenswrapper[4958]: I1206 05:28:57.761721 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 05:28:57 crc kubenswrapper[4958]: I1206 05:28:57.761849 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 05:28:57 crc kubenswrapper[4958]: E1206 05:28:57.761931 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 05:28:57 crc kubenswrapper[4958]: I1206 05:28:57.761975 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kb98t" Dec 06 05:28:57 crc kubenswrapper[4958]: I1206 05:28:57.761859 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 05:28:57 crc kubenswrapper[4958]: E1206 05:28:57.762103 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 05:28:57 crc kubenswrapper[4958]: E1206 05:28:57.762275 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 05:28:57 crc kubenswrapper[4958]: E1206 05:28:57.762415 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kb98t" podUID="2c09fca2-7d91-412a-9814-64370d35b3e9" Dec 06 05:28:57 crc kubenswrapper[4958]: I1206 05:28:57.766925 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:57 crc kubenswrapper[4958]: I1206 05:28:57.767091 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:57 crc kubenswrapper[4958]: I1206 05:28:57.767131 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:57 crc kubenswrapper[4958]: I1206 05:28:57.767222 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:57 crc kubenswrapper[4958]: I1206 05:28:57.767298 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:57Z","lastTransitionTime":"2025-12-06T05:28:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:28:57 crc kubenswrapper[4958]: I1206 05:28:57.871196 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:57 crc kubenswrapper[4958]: I1206 05:28:57.871282 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:57 crc kubenswrapper[4958]: I1206 05:28:57.871302 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:57 crc kubenswrapper[4958]: I1206 05:28:57.871330 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:57 crc kubenswrapper[4958]: I1206 05:28:57.871349 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:57Z","lastTransitionTime":"2025-12-06T05:28:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:57 crc kubenswrapper[4958]: I1206 05:28:57.974421 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:57 crc kubenswrapper[4958]: I1206 05:28:57.974507 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:57 crc kubenswrapper[4958]: I1206 05:28:57.974526 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:57 crc kubenswrapper[4958]: I1206 05:28:57.974540 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:57 crc kubenswrapper[4958]: I1206 05:28:57.974549 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:57Z","lastTransitionTime":"2025-12-06T05:28:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:58 crc kubenswrapper[4958]: I1206 05:28:58.077175 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:58 crc kubenswrapper[4958]: I1206 05:28:58.077220 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:58 crc kubenswrapper[4958]: I1206 05:28:58.077235 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:58 crc kubenswrapper[4958]: I1206 05:28:58.077251 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:58 crc kubenswrapper[4958]: I1206 05:28:58.077264 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:58Z","lastTransitionTime":"2025-12-06T05:28:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:28:58 crc kubenswrapper[4958]: I1206 05:28:58.179848 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:58 crc kubenswrapper[4958]: I1206 05:28:58.179925 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:58 crc kubenswrapper[4958]: I1206 05:28:58.179985 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:58 crc kubenswrapper[4958]: I1206 05:28:58.180015 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:58 crc kubenswrapper[4958]: I1206 05:28:58.180040 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:58Z","lastTransitionTime":"2025-12-06T05:28:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:58 crc kubenswrapper[4958]: I1206 05:28:58.282896 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:58 crc kubenswrapper[4958]: I1206 05:28:58.282948 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:58 crc kubenswrapper[4958]: I1206 05:28:58.282963 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:58 crc kubenswrapper[4958]: I1206 05:28:58.282987 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:58 crc kubenswrapper[4958]: I1206 05:28:58.283026 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:58Z","lastTransitionTime":"2025-12-06T05:28:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:28:58 crc kubenswrapper[4958]: I1206 05:28:58.345414 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 06 05:28:58 crc kubenswrapper[4958]: I1206 05:28:58.362295 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Dec 06 05:28:58 crc kubenswrapper[4958]: I1206 05:28:58.374739 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22338e19d740d76b4395405aae08f7cc69ca43dcf680a2bac1323c989e74aa02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:58Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:58 crc kubenswrapper[4958]: I1206 05:28:58.387347 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:58 crc kubenswrapper[4958]: I1206 05:28:58.387426 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:58 crc kubenswrapper[4958]: I1206 05:28:58.387445 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:58 crc kubenswrapper[4958]: I1206 05:28:58.387513 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:58 crc kubenswrapper[4958]: I1206 05:28:58.387536 4958 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:58Z","lastTransitionTime":"2025-12-06T05:28:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:58 crc kubenswrapper[4958]: I1206 05:28:58.391642 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:58Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:58 crc kubenswrapper[4958]: I1206 05:28:58.403395 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85e394f19d7c9a454876e48621768d9f956790321aeea3140db17586f532e28c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7cba9186696fbc52dacdd8c02c28a44f8e4f11e2bd39065212cddf3acdd7f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:58Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:58 crc kubenswrapper[4958]: I1206 05:28:58.412898 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c13528c0-da5d-4d55-9155-2c29c33edfc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5cd9b0b70572f52b50e426a747be79d9f46795a9f5e890f4c6837ba67a34188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd2256c5c47e4b4d5c0d27e0a687c531d3d4e2162c3439a57acb90784599b441\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernete
s.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5ktnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:58Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:58 crc kubenswrapper[4958]: I1206 05:28:58.426675 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cxvtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48b90387-4ea9-47a3-8aa1-b8b19c5ed186\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce9f0a37477909d730255694146829d5d4527e3c3c3928bdd475b58506bbd544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d3ce1b34e648dbc34656fb0430468d6a998992cdb52f7858c3e9032781cabf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d3ce1b34e648dbc34656fb0430468d6a998992cdb52f7858c3e9032781cab
f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45259e88c3c2b9523b20a34f02fb59aa21e487dd207c8cb6e29ac571d89cd75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45259e88c3c2b9523b20a34f02fb59aa21e487dd207c8cb6e29ac571d89cd75c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5907422cf6b36b8d77741d339e507eee8a26c62b51c03458fbab243363441b49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5907422cf6b36b8d77741d339e507eee8a26c62b51c03458fbab243363441b49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdba082661b1dd38d5628fedac7afcea2e2960d502c61b9c21c723c0f6be8dab\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdba082661b1dd38d5628fedac7afcea2e2960d502c61b9c21c723c0f6be8dab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afdd138a15c3949f3eb5cdcac4d372fe8db3f1822f9fa33eb7f211ba93764903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afdd138a15c3949f3eb5cdcac4d372fe8db3f1822f9fa33eb7f211ba93764903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09ba0a78226b9b3b3b12202389168f7e17a1f2ba992a22dc9d436d29aac0d327\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09ba0a78226b9b3b3b12202389168f7e17a1f2ba992a22dc9d436d29aac0d327\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cxvtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:58Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:58 crc kubenswrapper[4958]: I1206 05:28:58.456330 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c75c3b8-96d9-442e-b3c4-92d10ad33929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c7cf12b8a7e5da199c6d8c5bdeb41243c911cef6cab0bac81e407502a8ff144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6e7454c626f79f7ef231e33e2917e2a65265fc21ccf0bb078c9b7f5b3cec240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f87617d1f4b9c7ad893a539ddf123f3e213d38e44c084441f565d6535078c7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f6d7aa9cdfe8211f420e0a25851c58e125bd09b9d90ad4724703cc1112e45b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://473d669d19d2507144f95718dc331ea994a36254f7f7b6e2b262a0e9cb46bfc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ac1117aba254b7e048d9dd3feff2587d218f6d7a6f4ccbe455eb2b19e534aa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db6efff3f2e029827d40e9e04671f6c8548165f0
cc1b89261b7e826956c1b397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db6efff3f2e029827d40e9e04671f6c8548165f0cc1b89261b7e826956c1b397\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T05:28:47Z\\\",\\\"message\\\":\\\" Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1206 05:28:46.902286 6355 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1206 05:28:46.902303 6355 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nI1206 05:28:46.902324 6355 obj_retry.go:303] Retry object setup: *v1.Pod openshift-etcd/etcd-crc\\\\nI1206 05:28:46.902340 6355 obj_retry.go:365] Adding new object: *v1.Pod openshift-etcd/etcd-crc\\\\nI1206 05:28:46.902344 6355 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-source-55646444c4-trplf] creating logical port openshift-network-diagnostics_network-check-source-55646444c4-trplf for pod on switch crc\\\\nI1206 05:28:46.902355 6355 ovn.go:134] Ensuring zone local for Pod openshift-etcd/etcd-crc in node crc\\\\nI1206 05:28:46.902367 6355 obj_retry.go:386] Retry successful for *v1.Pod openshift-etcd/etcd-crc after 0 failed attempt(s)\\\\nI1206 05:28:46.902377 6355 default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nF1206 05:28:46.902429 6355 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-f4swt_openshift-ovn-kubernetes(4c75c3b8-96d9-442e-b3c4-92d10ad33929)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90354a506bfb2c066060aaa6c9c48c8fe36acc7b3fb7f76905c7aa2622c8c93e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f16219fb96bfa08d1c4e1fbacac446f07ea75a221d1a6a4eddadd9e587f75ea9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f16219fb96bfa08d1c4e1fbacac446f07ea75a221d1a6a4eddadd9e587f75ea9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-f4swt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:58Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:58 crc kubenswrapper[4958]: I1206 05:28:58.466362 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kb98t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c09fca2-7d91-412a-9814-64370d35b3e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pscv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pscv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kb98t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:58Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:58 crc kubenswrapper[4958]: I1206 05:28:58.475738 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bxxwq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"802ce14e-baaf-4d0d-87e3-3457209d8cda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://691435e89fcfa5b2dbd5ee54379f791a3e4f4df43c500a83bc61f7b20aca7ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlz6h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bxxwq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:58Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:58 crc kubenswrapper[4958]: I1206 05:28:58.484465 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-j25xk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f69a7b70-83e2-4775-872c-cc414f3d08ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3077eaf7e49467b9443569425f7666246ccd40207193f35407bd5ca69799be6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qktwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c146f2ee32c0a80c9bb07f7b61d094a36ba1bdd250b578479193dc8e50dc7cad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qktwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-j25xk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:58Z is after 2025-08-24T17:21:41Z" Dec 06 
05:28:58 crc kubenswrapper[4958]: I1206 05:28:58.489466 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:58 crc kubenswrapper[4958]: I1206 05:28:58.489556 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:58 crc kubenswrapper[4958]: I1206 05:28:58.489575 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:58 crc kubenswrapper[4958]: I1206 05:28:58.489598 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:58 crc kubenswrapper[4958]: I1206 05:28:58.489614 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:58Z","lastTransitionTime":"2025-12-06T05:28:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:58 crc kubenswrapper[4958]: I1206 05:28:58.504773 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3942544-663d-452b-851f-7ae1951315d0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c605d49963ffec3a03f9166f2a4b945adca772c1e6f9ef0c43e3b43498bb520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fa2e5229064a9479f641c0c6d934e8eda769ee88be13401c43ae136ce8bc24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b9009
2272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0fa5b094aed0445fa892dd4d08fa83de565bda82c6ca9b9d536a3275df19b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9356ce9f50a037c7bbb99085c8d3a1cf6648fca53bed4eb86d1be5cb82386bfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bb5270cbb6f2e29e1393dc0724b38332f5d2b7bd2947352c848a22efbcd979c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4460d6ee5ad193a5fdd65c4ddf2e2837db3e1a0a5f8f32b84ad83cc74cc6c779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\
":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4460d6ee5ad193a5fdd65c4ddf2e2837db3e1a0a5f8f32b84ad83cc74cc6c779\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d0adb65980c4e04df9b38b248a2d6bf58ac214ec2fe6ee6f8fbee38dc114904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d0adb65980c4e04df9b38b248a2d6bf58ac214ec2fe6ee6f8fbee38dc114904\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b4a1b86d7480be2b839295c4c7dc60abd79e17d12514d7a7f90453498dd586e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4a1b86d7480be2b839295c4c7dc60abd79e17d12514d7a7f90453498dd586e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:58Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:58 crc kubenswrapper[4958]: I1206 05:28:58.518901 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:58Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:58 crc kubenswrapper[4958]: I1206 05:28:58.534455 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5mx5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ab56cd4-7270-4252-b6e6-cbc102b84d97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16ff54e7650f6ee3df33d0394fb87b8021695f1697bdf30239e59ab458bea64c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-448jg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5mx5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:58Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:58 crc kubenswrapper[4958]: I1206 05:28:58.557386 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wr7h5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24bb15c5aabe30ec2b8cea2cfd92c4a99c8e2170aecfe6cf33f745ecccf32c42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-778ks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wr7h5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:58Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:58 crc kubenswrapper[4958]: I1206 05:28:58.576674 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:58Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:58 crc kubenswrapper[4958]: I1206 05:28:58.590633 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12ef2ec44908dbfbd456d6a97cb50054adb79c14bc062daacdd1be9ec59404b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:58Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:58 crc kubenswrapper[4958]: I1206 05:28:58.591808 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:58 crc kubenswrapper[4958]: I1206 05:28:58.591937 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:58 crc kubenswrapper[4958]: I1206 05:28:58.592056 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:58 crc kubenswrapper[4958]: I1206 05:28:58.592146 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:58 crc kubenswrapper[4958]: I1206 05:28:58.592232 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:58Z","lastTransitionTime":"2025-12-06T05:28:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:28:58 crc kubenswrapper[4958]: I1206 05:28:58.612661 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bbf3a5d-dfb7-4f26-a3a5-5a1198cffc78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ac8c174d2f2cc2b18fa67b1969e4ec3130a7a3ea2e1de302b6607334c8295a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a49f9fd54866385a2dd4c0218a2b5a828817807c2688301ea29004ba0578d556\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://799383b7dc23318bd8bdb994e83afadc90864baf5e694f5a7ef2208c6771660c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c9829c79fbd1ef57f374e8bf829162c2ae48074f4d02cc609d5f9b8791c61a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b2ffa013378c2ca81671e521b37d30af422fd6a418d17e7c2ca14b2da5557d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4001890a8c107a9365e884bae7b1d10f11ea1208ada74b61dcd4cf3db767f3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4001890a8c107a9365e884bae7b1d10f11ea1208ada74b61dcd4cf3db767f3fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:58Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:58 crc kubenswrapper[4958]: I1206 05:28:58.632520 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e7d0d99-10e2-4254-82b5-5490222f80fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f512fa9922310920769bab096c2bb8b2305a17dded545923c16b5e417db50b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dddeac64993fef3f326c165eeeb95bb4b5b9961f07f2b935e592b5aac9a51a32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50e80d39fcfd69db193b9e6c98c92a60983dd0c62b061a69f045193110d159c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc69c938c136778135bd2a90d1671af107b76c4571f8e52327c98f120665d7b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:58Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:58 crc kubenswrapper[4958]: I1206 05:28:58.694820 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:58 crc kubenswrapper[4958]: I1206 05:28:58.694891 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:58 crc kubenswrapper[4958]: I1206 05:28:58.694911 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:58 crc kubenswrapper[4958]: I1206 05:28:58.694951 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:58 crc kubenswrapper[4958]: I1206 05:28:58.694968 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:58Z","lastTransitionTime":"2025-12-06T05:28:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:28:58 crc kubenswrapper[4958]: I1206 05:28:58.797788 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:58 crc kubenswrapper[4958]: I1206 05:28:58.797852 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:58 crc kubenswrapper[4958]: I1206 05:28:58.797876 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:58 crc kubenswrapper[4958]: I1206 05:28:58.797905 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:58 crc kubenswrapper[4958]: I1206 05:28:58.797927 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:58Z","lastTransitionTime":"2025-12-06T05:28:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:58 crc kubenswrapper[4958]: I1206 05:28:58.900589 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:58 crc kubenswrapper[4958]: I1206 05:28:58.900644 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:58 crc kubenswrapper[4958]: I1206 05:28:58.900661 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:58 crc kubenswrapper[4958]: I1206 05:28:58.900684 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:58 crc kubenswrapper[4958]: I1206 05:28:58.900701 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:58Z","lastTransitionTime":"2025-12-06T05:28:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:59 crc kubenswrapper[4958]: I1206 05:28:59.002953 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:59 crc kubenswrapper[4958]: I1206 05:28:59.003017 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:59 crc kubenswrapper[4958]: I1206 05:28:59.003036 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:59 crc kubenswrapper[4958]: I1206 05:28:59.003059 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:59 crc kubenswrapper[4958]: I1206 05:28:59.003078 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:59Z","lastTransitionTime":"2025-12-06T05:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:28:59 crc kubenswrapper[4958]: I1206 05:28:59.105440 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:59 crc kubenswrapper[4958]: I1206 05:28:59.105470 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:59 crc kubenswrapper[4958]: I1206 05:28:59.105518 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:59 crc kubenswrapper[4958]: I1206 05:28:59.105534 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:59 crc kubenswrapper[4958]: I1206 05:28:59.105543 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:59Z","lastTransitionTime":"2025-12-06T05:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:59 crc kubenswrapper[4958]: I1206 05:28:59.165273 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2c09fca2-7d91-412a-9814-64370d35b3e9-metrics-certs\") pod \"network-metrics-daemon-kb98t\" (UID: \"2c09fca2-7d91-412a-9814-64370d35b3e9\") " pod="openshift-multus/network-metrics-daemon-kb98t" Dec 06 05:28:59 crc kubenswrapper[4958]: E1206 05:28:59.165453 4958 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 06 05:28:59 crc kubenswrapper[4958]: E1206 05:28:59.165615 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2c09fca2-7d91-412a-9814-64370d35b3e9-metrics-certs podName:2c09fca2-7d91-412a-9814-64370d35b3e9 nodeName:}" failed. No retries permitted until 2025-12-06 05:29:15.165593119 +0000 UTC m=+65.699363922 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2c09fca2-7d91-412a-9814-64370d35b3e9-metrics-certs") pod "network-metrics-daemon-kb98t" (UID: "2c09fca2-7d91-412a-9814-64370d35b3e9") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 06 05:28:59 crc kubenswrapper[4958]: I1206 05:28:59.207972 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:59 crc kubenswrapper[4958]: I1206 05:28:59.208044 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:59 crc kubenswrapper[4958]: I1206 05:28:59.208070 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:59 crc kubenswrapper[4958]: I1206 05:28:59.208100 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:59 crc kubenswrapper[4958]: I1206 05:28:59.208124 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:59Z","lastTransitionTime":"2025-12-06T05:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:59 crc kubenswrapper[4958]: I1206 05:28:59.310787 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:59 crc kubenswrapper[4958]: I1206 05:28:59.310868 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:59 crc kubenswrapper[4958]: I1206 05:28:59.310891 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:59 crc kubenswrapper[4958]: I1206 05:28:59.310921 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:59 crc kubenswrapper[4958]: I1206 05:28:59.310943 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:59Z","lastTransitionTime":"2025-12-06T05:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:28:59 crc kubenswrapper[4958]: I1206 05:28:59.414180 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:59 crc kubenswrapper[4958]: I1206 05:28:59.414234 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:59 crc kubenswrapper[4958]: I1206 05:28:59.414246 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:59 crc kubenswrapper[4958]: I1206 05:28:59.414264 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:59 crc kubenswrapper[4958]: I1206 05:28:59.414276 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:59Z","lastTransitionTime":"2025-12-06T05:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:59 crc kubenswrapper[4958]: I1206 05:28:59.517017 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:59 crc kubenswrapper[4958]: I1206 05:28:59.517108 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:59 crc kubenswrapper[4958]: I1206 05:28:59.517231 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:59 crc kubenswrapper[4958]: I1206 05:28:59.517767 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:59 crc kubenswrapper[4958]: I1206 05:28:59.517831 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:59Z","lastTransitionTime":"2025-12-06T05:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:59 crc kubenswrapper[4958]: I1206 05:28:59.621587 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:59 crc kubenswrapper[4958]: I1206 05:28:59.621672 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:59 crc kubenswrapper[4958]: I1206 05:28:59.621698 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:59 crc kubenswrapper[4958]: I1206 05:28:59.621735 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:59 crc kubenswrapper[4958]: I1206 05:28:59.621759 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:59Z","lastTransitionTime":"2025-12-06T05:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:28:59 crc kubenswrapper[4958]: I1206 05:28:59.725037 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:59 crc kubenswrapper[4958]: I1206 05:28:59.725090 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:59 crc kubenswrapper[4958]: I1206 05:28:59.725104 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:59 crc kubenswrapper[4958]: I1206 05:28:59.725123 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:59 crc kubenswrapper[4958]: I1206 05:28:59.725135 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:59Z","lastTransitionTime":"2025-12-06T05:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:59 crc kubenswrapper[4958]: I1206 05:28:59.761067 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 05:28:59 crc kubenswrapper[4958]: I1206 05:28:59.761111 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kb98t" Dec 06 05:28:59 crc kubenswrapper[4958]: E1206 05:28:59.761620 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kb98t" podUID="2c09fca2-7d91-412a-9814-64370d35b3e9" Dec 06 05:28:59 crc kubenswrapper[4958]: I1206 05:28:59.761236 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 05:28:59 crc kubenswrapper[4958]: E1206 05:28:59.761722 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 05:28:59 crc kubenswrapper[4958]: I1206 05:28:59.761196 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 05:28:59 crc kubenswrapper[4958]: E1206 05:28:59.761631 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 05:28:59 crc kubenswrapper[4958]: E1206 05:28:59.761833 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 05:28:59 crc kubenswrapper[4958]: I1206 05:28:59.776126 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-j25xk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f69a7b70-83e2-4775-872c-cc414f3d08ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3077eaf7e49467b9443569425f7666246ccd40207193f35407bd5ca69799be6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qktwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c146f2ee32c0a80c9bb07f7b61d094a36ba1bdd250b578479193dc8e50dc7cad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qktwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-j25xk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:59Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:59 crc kubenswrapper[4958]: I1206 05:28:59.800431 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3942544-663d-452b-851f-7ae1951315d0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c605d49963ffec3a03f9166f2a4b945adca772c1e6f9ef0c43e3b43498bb520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fa2e5229064a9479f641c0c6d934e8eda769ee88be13401c43ae136ce8bc24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\
\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0fa5b094aed0445fa892dd4d08fa83de565bda82c6ca9b9d536a3275df19b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9356ce9f50a037c7bbb99085c8d3a1cf6648fca53bed4eb86d1be5cb82386bfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bb5270cbb6f2e29e1393dc0724b38332f5d2b7bd2947352c848a22efbcd979c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4460d6ee5ad193a5fdd65c4ddf2e2837db3e1a0a5f8f32b84ad83cc74cc6c779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4460d6ee5ad193a5fdd65c4ddf2e2837db3e1a0a5f8f32b84ad83cc74cc6c779\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"vol
umeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d0adb65980c4e04df9b38b248a2d6bf58ac214ec2fe6ee6f8fbee38dc114904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d0adb65980c4e04df9b38b248a2d6bf58ac214ec2fe6ee6f8fbee38dc114904\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b4a1b86d7480be2b839295c4c7dc60abd79e17d12514d7a7f90453498dd586e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4a1b86d7480be2b839295c4c7dc60abd79e17d12514d7a7f90453498dd586e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:59Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:59 crc kubenswrapper[4958]: I1206 05:28:59.816086 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:59Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:59 crc kubenswrapper[4958]: I1206 05:28:59.827410 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bxxwq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"802ce14e-baaf-4d0d-87e3-3457209d8cda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://691435e89fcfa5b2dbd5ee54379f791a3e4f4df43c500a83bc61f7b20aca7ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlz6h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126
.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bxxwq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:59Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:59 crc kubenswrapper[4958]: I1206 05:28:59.828513 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:59 crc kubenswrapper[4958]: I1206 05:28:59.828743 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:59 crc kubenswrapper[4958]: I1206 05:28:59.829050 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:59 crc kubenswrapper[4958]: I1206 05:28:59.829226 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:59 crc kubenswrapper[4958]: I1206 05:28:59.829380 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:59Z","lastTransitionTime":"2025-12-06T05:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:28:59 crc kubenswrapper[4958]: I1206 05:28:59.842513 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wr7h5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24bb15c5aabe30ec2b8cea2cfd92c4a99c8e2170aecfe6cf33f745ecccf32c42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-778ks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wr7h5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:59Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:59 crc kubenswrapper[4958]: I1206 05:28:59.854513 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:59Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:59 crc kubenswrapper[4958]: I1206 05:28:59.865633 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12ef2ec44908dbfbd456d6a97cb50054adb79c14bc062daacdd1be9ec59404b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:59Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:59 crc kubenswrapper[4958]: I1206 05:28:59.874134 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5mx5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ab56cd4-7270-4252-b6e6-cbc102b84d97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16ff54e7650f6ee3df33d0394fb87b8021695f1697bdf30239e59ab458bea64c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-448jg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5mx5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:59Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:59 crc kubenswrapper[4958]: I1206 05:28:59.890772 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bbf3a5d-dfb7-4f26-a3a5-5a1198cffc78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ac8c174d2f2cc2b18fa67b1969e4ec3130a7a3ea2e1de302b6607334c8295a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a49f9fd54866385a2dd4c0218a2b5a828817807c2688301ea29004ba0578d556\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://799383b7dc23318bd8bdb994e83afadc90864baf5e694f5a7ef2208c6771660c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c9829c79fbd1ef57f374e8bf829162c2ae48074f4d02cc609d5f9b8791c61a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b2ffa013378c2ca81671e521b37d30af422fd6a418d17e7c2ca14b2da5557d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4001890a8c107a9365e884bae7b1d10f11ea1208ada74b61dcd4cf3db767f3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4001890a8c107a9365e884bae7b1d10f11ea1208ada74b61dcd4cf3db767f3fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:59Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:59 crc kubenswrapper[4958]: I1206 05:28:59.902872 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e7d0d99-10e2-4254-82b5-5490222f80fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f512fa9922310920769bab096c2bb8b2305a17dded545923c16b5e417db50b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dddeac64993fef3f326c165eeeb95bb4b5b9961f07f2b935e592b5aac9a51a32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50e80d39fcfd69db193b9e6c98c92a60983dd0c62b061a69f045193110d159c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc69c938c136778135bd2a90d1671af107b76c4571f8e52327c98f120665d7b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:59Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:59 crc kubenswrapper[4958]: I1206 05:28:59.920989 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:59Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:59 crc kubenswrapper[4958]: I1206 05:28:59.932684 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:28:59 crc kubenswrapper[4958]: I1206 05:28:59.933004 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:28:59 crc kubenswrapper[4958]: I1206 05:28:59.933109 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:28:59 crc kubenswrapper[4958]: I1206 05:28:59.933200 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:28:59 crc kubenswrapper[4958]: I1206 05:28:59.933283 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:28:59Z","lastTransitionTime":"2025-12-06T05:28:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:28:59 crc kubenswrapper[4958]: I1206 05:28:59.941810 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85e394f19d7c9a454876e48621768d9f956790321aeea3140db17586f532e28c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7cba9186696fbc52dacdd8c02c28a44f8e4f11e2bd39065212cddf3acdd7f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:59Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:59 crc kubenswrapper[4958]: I1206 05:28:59.958264 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c13528c0-da5d-4d55-9155-2c29c33edfc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5cd9b0b70572f52b50e426a747be79d9f46795a9f5e890f4c6837ba67a34188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd2256c5c47e4b4d5c0d27e0a687c531d3d4e2162c3439a57acb90784599b441\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5ktnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:59Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:59 crc kubenswrapper[4958]: I1206 05:28:59.973358 4958 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cxvtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48b90387-4ea9-47a3-8aa1-b8b19c5ed186\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce9f0a37477909d730255694146829d5d4527e3c3c3928bdd475b58506bbd544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d3ce1b34e648dbc34656fb0430468d6a998992cdb52f7858c3e9032781cabf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d3ce1b34e648dbc34656fb0430468d6a998992cdb52f7858c3e9032781cabf3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45259e88c3c2b9523b20a34f02fb59aa21e487dd207c8cb6e29ac571d89cd75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45259e88c3c2b9523b20a34f02fb59aa21e487dd207c8cb6e29ac571d89cd75c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5907422cf6b36b8d77741d339e507eee8a26c62b51c03458fbab243363441b49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5907422cf6b36b8d77741d339e507eee8a26c62b51c03458fbab243363441b49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdba082661b1dd38d5628fedac7afcea2e2960d502c61b9c21c723c0f6be8dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdba082661b1dd38d5628fedac7afcea2e2960d502c61b9c21c723c0f6be8dab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afdd138a15c3949f3eb5cdcac4d372fe8db3f1822f9fa33eb7f211ba93764903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afdd138a15c3949f3eb5cdcac4d372fe8db3f1822f9fa33eb7f211ba93764903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09ba0a78226b9b3b3b12202389168f7e17a1f2ba992a22dc9d436d29aac0d327\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09ba0a78226b9b3b3b12202389168f7e17a1f2ba992a22dc9d436d29aac0d327\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cxvtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:59Z is after 2025-08-24T17:21:41Z" Dec 06 05:28:59 crc kubenswrapper[4958]: I1206 05:28:59.993852 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c75c3b8-96d9-442e-b3c4-92d10ad33929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c7cf12b8a7e5da199c6d8c5bdeb41243c911cef6cab0bac81e407502a8ff144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6e7454c626f79f7ef231e33e2917e2a65265fc21ccf0bb078c9b7f5b3cec240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f87617d1f4b9c7ad893a539ddf123f3e213d38e44c084441f565d6535078c7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f6d7aa9cdfe8211f420e0a25851c58e125bd09b9d90ad4724703cc1112e45b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://473d669d19d2507144f95718dc331ea994a36254f7f7b6e2b262a0e9cb46bfc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ac1117aba254b7e048d9dd3feff2587d218f6d7a6f4ccbe455eb2b19e534aa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://db6efff3f2e029827d40e9e04671f6c8548165f0cc1b89261b7e826956c1b397\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db6efff3f2e029827d40e9e04671f6c8548165f0cc1b89261b7e826956c1b397\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T05:28:47Z\\\",\\\"message\\\":\\\" Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1206 05:28:46.902286 6355 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1206 05:28:46.902303 6355 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nI1206 05:28:46.902324 6355 obj_retry.go:303] Retry object setup: *v1.Pod openshift-etcd/etcd-crc\\\\nI1206 05:28:46.902340 6355 obj_retry.go:365] Adding new object: *v1.Pod openshift-etcd/etcd-crc\\\\nI1206 05:28:46.902344 6355 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-source-55646444c4-trplf] creating logical port openshift-network-diagnostics_network-check-source-55646444c4-trplf for pod on switch crc\\\\nI1206 05:28:46.902355 6355 ovn.go:134] Ensuring zone local for Pod openshift-etcd/etcd-crc in node crc\\\\nI1206 05:28:46.902367 6355 obj_retry.go:386] Retry successful for *v1.Pod openshift-etcd/etcd-crc after 0 failed attempt(s)\\\\nI1206 05:28:46.902377 6355 default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nF1206 05:28:46.902429 6355 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-f4swt_openshift-ovn-kubernetes(4c75c3b8-96d9-442e-b3c4-92d10ad33929)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90354a506bfb2c066060aaa6c9c48c8fe36acc7b3fb7f76905c7aa2622c8c93e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f16219fb96bfa08d1c4e1fbacac446f07ea75a221d1a6a4eddadd9e587f75ea9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f16219fb96bfa08d1c4e1fbacac446f07ea75a221d1a6a4eddadd9e587f75ea9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-f4swt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:28:59Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:00 crc kubenswrapper[4958]: I1206 05:29:00.005592 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"326ec8f3-f884-4d81-85c0-0fb98ff16b80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ab8b97e84e2f0ddf99e989d15f84695ae263c722141d62885129f9eb48226a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfa163f9e3eb5b2418860c0fa7fcbf96990229ac20da3d72cf18d3c326f466b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c
97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15f70e4124ff7d2705704dab6bf80d6ee77c2181e9bb400f7af227e9c18f0d20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aabbafda3fdfc2434de99224b35e72d2d536ed8b4af7dbdfa16058ab7f638531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aabbafda3fdfc2434de99224b35e72d2d536ed8b4af7dbdfa16058ab7f638531\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:00Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:00 crc kubenswrapper[4958]: I1206 05:29:00.020076 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22338e19d740d76b4395405aae08f7cc69ca43dcf680a2bac1323c989e74aa02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:00Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:00 crc kubenswrapper[4958]: I1206 05:29:00.032080 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kb98t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c09fca2-7d91-412a-9814-64370d35b3e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pscv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pscv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kb98t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:00Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:00 crc kubenswrapper[4958]: I1206 05:29:00.035821 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:00 crc kubenswrapper[4958]: I1206 05:29:00.035844 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:00 crc kubenswrapper[4958]: I1206 05:29:00.035852 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:00 crc kubenswrapper[4958]: I1206 05:29:00.035872 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:00 crc kubenswrapper[4958]: I1206 05:29:00.035881 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:00Z","lastTransitionTime":"2025-12-06T05:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:00 crc kubenswrapper[4958]: I1206 05:29:00.138612 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:00 crc kubenswrapper[4958]: I1206 05:29:00.138670 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:00 crc kubenswrapper[4958]: I1206 05:29:00.138682 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:00 crc kubenswrapper[4958]: I1206 05:29:00.138700 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:00 crc kubenswrapper[4958]: I1206 05:29:00.138714 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:00Z","lastTransitionTime":"2025-12-06T05:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:00 crc kubenswrapper[4958]: I1206 05:29:00.242101 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:00 crc kubenswrapper[4958]: I1206 05:29:00.242146 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:00 crc kubenswrapper[4958]: I1206 05:29:00.242159 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:00 crc kubenswrapper[4958]: I1206 05:29:00.242174 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:00 crc kubenswrapper[4958]: I1206 05:29:00.242186 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:00Z","lastTransitionTime":"2025-12-06T05:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:00 crc kubenswrapper[4958]: I1206 05:29:00.344710 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:00 crc kubenswrapper[4958]: I1206 05:29:00.344784 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:00 crc kubenswrapper[4958]: I1206 05:29:00.344808 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:00 crc kubenswrapper[4958]: I1206 05:29:00.344836 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:00 crc kubenswrapper[4958]: I1206 05:29:00.344857 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:00Z","lastTransitionTime":"2025-12-06T05:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:00 crc kubenswrapper[4958]: I1206 05:29:00.446837 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:00 crc kubenswrapper[4958]: I1206 05:29:00.446912 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:00 crc kubenswrapper[4958]: I1206 05:29:00.446935 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:00 crc kubenswrapper[4958]: I1206 05:29:00.446965 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:00 crc kubenswrapper[4958]: I1206 05:29:00.446988 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:00Z","lastTransitionTime":"2025-12-06T05:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:00 crc kubenswrapper[4958]: I1206 05:29:00.549838 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:00 crc kubenswrapper[4958]: I1206 05:29:00.549935 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:00 crc kubenswrapper[4958]: I1206 05:29:00.549952 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:00 crc kubenswrapper[4958]: I1206 05:29:00.549981 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:00 crc kubenswrapper[4958]: I1206 05:29:00.550004 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:00Z","lastTransitionTime":"2025-12-06T05:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:00 crc kubenswrapper[4958]: I1206 05:29:00.653976 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:00 crc kubenswrapper[4958]: I1206 05:29:00.654036 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:00 crc kubenswrapper[4958]: I1206 05:29:00.654059 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:00 crc kubenswrapper[4958]: I1206 05:29:00.654093 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:00 crc kubenswrapper[4958]: I1206 05:29:00.654116 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:00Z","lastTransitionTime":"2025-12-06T05:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:00 crc kubenswrapper[4958]: I1206 05:29:00.757543 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:00 crc kubenswrapper[4958]: I1206 05:29:00.757586 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:00 crc kubenswrapper[4958]: I1206 05:29:00.757603 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:00 crc kubenswrapper[4958]: I1206 05:29:00.757627 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:00 crc kubenswrapper[4958]: I1206 05:29:00.757645 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:00Z","lastTransitionTime":"2025-12-06T05:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:00 crc kubenswrapper[4958]: I1206 05:29:00.860186 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:00 crc kubenswrapper[4958]: I1206 05:29:00.860246 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:00 crc kubenswrapper[4958]: I1206 05:29:00.860265 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:00 crc kubenswrapper[4958]: I1206 05:29:00.860290 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:00 crc kubenswrapper[4958]: I1206 05:29:00.860308 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:00Z","lastTransitionTime":"2025-12-06T05:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:00 crc kubenswrapper[4958]: I1206 05:29:00.963231 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:00 crc kubenswrapper[4958]: I1206 05:29:00.963306 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:00 crc kubenswrapper[4958]: I1206 05:29:00.963330 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:00 crc kubenswrapper[4958]: I1206 05:29:00.963362 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:00 crc kubenswrapper[4958]: I1206 05:29:00.963385 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:00Z","lastTransitionTime":"2025-12-06T05:29:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:01 crc kubenswrapper[4958]: I1206 05:29:01.066610 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:01 crc kubenswrapper[4958]: I1206 05:29:01.066651 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:01 crc kubenswrapper[4958]: I1206 05:29:01.066659 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:01 crc kubenswrapper[4958]: I1206 05:29:01.066674 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:01 crc kubenswrapper[4958]: I1206 05:29:01.066684 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:01Z","lastTransitionTime":"2025-12-06T05:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:01 crc kubenswrapper[4958]: I1206 05:29:01.168278 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:01 crc kubenswrapper[4958]: I1206 05:29:01.168316 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:01 crc kubenswrapper[4958]: I1206 05:29:01.168326 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:01 crc kubenswrapper[4958]: I1206 05:29:01.168338 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:01 crc kubenswrapper[4958]: I1206 05:29:01.168347 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:01Z","lastTransitionTime":"2025-12-06T05:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:01 crc kubenswrapper[4958]: I1206 05:29:01.184344 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 05:29:01 crc kubenswrapper[4958]: I1206 05:29:01.184551 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 05:29:01 crc kubenswrapper[4958]: E1206 05:29:01.184634 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-06 05:29:33.18459969 +0000 UTC m=+83.718370503 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 05:29:01 crc kubenswrapper[4958]: I1206 05:29:01.184700 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 05:29:01 crc kubenswrapper[4958]: E1206 05:29:01.184753 4958 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 06 05:29:01 crc kubenswrapper[4958]: I1206 05:29:01.184778 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 05:29:01 crc kubenswrapper[4958]: E1206 05:29:01.184788 4958 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 06 05:29:01 crc kubenswrapper[4958]: E1206 05:29:01.184880 4958 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 06 05:29:01 crc kubenswrapper[4958]: I1206 05:29:01.184872 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 05:29:01 crc kubenswrapper[4958]: E1206 05:29:01.184923 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-06 05:29:33.184912048 +0000 UTC m=+83.718682811 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 06 05:29:01 crc kubenswrapper[4958]: E1206 05:29:01.184823 4958 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 06 05:29:01 crc kubenswrapper[4958]: E1206 05:29:01.185011 4958 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 06 05:29:01 crc kubenswrapper[4958]: E1206 05:29:01.185030 4958 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 06 05:29:01 crc kubenswrapper[4958]: E1206 05:29:01.185101 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-06 05:29:33.185081863 +0000 UTC m=+83.718852666 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 06 05:29:01 crc kubenswrapper[4958]: E1206 05:29:01.185037 4958 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 06 05:29:01 crc kubenswrapper[4958]: E1206 05:29:01.185163 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-06 05:29:33.185150394 +0000 UTC m=+83.718921187 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 06 05:29:01 crc kubenswrapper[4958]: E1206 05:29:01.184853 4958 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 06 05:29:01 crc kubenswrapper[4958]: E1206 05:29:01.185207 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2025-12-06 05:29:33.185197386 +0000 UTC m=+83.718968179 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 06 05:29:01 crc kubenswrapper[4958]: I1206 05:29:01.271440 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:01 crc kubenswrapper[4958]: I1206 05:29:01.271554 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:01 crc kubenswrapper[4958]: I1206 05:29:01.271581 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:01 crc kubenswrapper[4958]: I1206 05:29:01.271629 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:01 crc kubenswrapper[4958]: I1206 05:29:01.271650 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:01Z","lastTransitionTime":"2025-12-06T05:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:01 crc kubenswrapper[4958]: I1206 05:29:01.375060 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:01 crc kubenswrapper[4958]: I1206 05:29:01.375112 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:01 crc kubenswrapper[4958]: I1206 05:29:01.375130 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:01 crc kubenswrapper[4958]: I1206 05:29:01.375154 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:01 crc kubenswrapper[4958]: I1206 05:29:01.375170 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:01Z","lastTransitionTime":"2025-12-06T05:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:01 crc kubenswrapper[4958]: I1206 05:29:01.478895 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:01 crc kubenswrapper[4958]: I1206 05:29:01.478943 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:01 crc kubenswrapper[4958]: I1206 05:29:01.478966 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:01 crc kubenswrapper[4958]: I1206 05:29:01.478989 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:01 crc kubenswrapper[4958]: I1206 05:29:01.479006 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:01Z","lastTransitionTime":"2025-12-06T05:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:01 crc kubenswrapper[4958]: I1206 05:29:01.582265 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:01 crc kubenswrapper[4958]: I1206 05:29:01.582342 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:01 crc kubenswrapper[4958]: I1206 05:29:01.582363 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:01 crc kubenswrapper[4958]: I1206 05:29:01.582387 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:01 crc kubenswrapper[4958]: I1206 05:29:01.582405 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:01Z","lastTransitionTime":"2025-12-06T05:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:01 crc kubenswrapper[4958]: I1206 05:29:01.685910 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:01 crc kubenswrapper[4958]: I1206 05:29:01.686037 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:01 crc kubenswrapper[4958]: I1206 05:29:01.686114 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:01 crc kubenswrapper[4958]: I1206 05:29:01.686145 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:01 crc kubenswrapper[4958]: I1206 05:29:01.686221 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:01Z","lastTransitionTime":"2025-12-06T05:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:01 crc kubenswrapper[4958]: I1206 05:29:01.761876 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 05:29:01 crc kubenswrapper[4958]: I1206 05:29:01.762059 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kb98t" Dec 06 05:29:01 crc kubenswrapper[4958]: I1206 05:29:01.762412 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 05:29:01 crc kubenswrapper[4958]: I1206 05:29:01.762451 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 05:29:01 crc kubenswrapper[4958]: E1206 05:29:01.762671 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 05:29:01 crc kubenswrapper[4958]: E1206 05:29:01.762846 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 05:29:01 crc kubenswrapper[4958]: E1206 05:29:01.763054 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kb98t" podUID="2c09fca2-7d91-412a-9814-64370d35b3e9" Dec 06 05:29:01 crc kubenswrapper[4958]: I1206 05:29:01.763106 4958 scope.go:117] "RemoveContainer" containerID="db6efff3f2e029827d40e9e04671f6c8548165f0cc1b89261b7e826956c1b397" Dec 06 05:29:01 crc kubenswrapper[4958]: E1206 05:29:01.763218 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 05:29:01 crc kubenswrapper[4958]: I1206 05:29:01.788519 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:01 crc kubenswrapper[4958]: I1206 05:29:01.788563 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:01 crc kubenswrapper[4958]: I1206 05:29:01.788575 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:01 crc kubenswrapper[4958]: I1206 05:29:01.788591 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:01 crc kubenswrapper[4958]: I1206 05:29:01.788603 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:01Z","lastTransitionTime":"2025-12-06T05:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:01 crc kubenswrapper[4958]: I1206 05:29:01.891374 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:01 crc kubenswrapper[4958]: I1206 05:29:01.891745 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:01 crc kubenswrapper[4958]: I1206 05:29:01.891762 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:01 crc kubenswrapper[4958]: I1206 05:29:01.891780 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:01 crc kubenswrapper[4958]: I1206 05:29:01.891792 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:01Z","lastTransitionTime":"2025-12-06T05:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:01 crc kubenswrapper[4958]: I1206 05:29:01.994603 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:01 crc kubenswrapper[4958]: I1206 05:29:01.994645 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:01 crc kubenswrapper[4958]: I1206 05:29:01.994656 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:01 crc kubenswrapper[4958]: I1206 05:29:01.994673 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:01 crc kubenswrapper[4958]: I1206 05:29:01.994686 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:01Z","lastTransitionTime":"2025-12-06T05:29:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:02 crc kubenswrapper[4958]: I1206 05:29:02.096823 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:02 crc kubenswrapper[4958]: I1206 05:29:02.096874 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:02 crc kubenswrapper[4958]: I1206 05:29:02.096889 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:02 crc kubenswrapper[4958]: I1206 05:29:02.096912 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:02 crc kubenswrapper[4958]: I1206 05:29:02.096928 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:02Z","lastTransitionTime":"2025-12-06T05:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:02 crc kubenswrapper[4958]: I1206 05:29:02.118685 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-f4swt_4c75c3b8-96d9-442e-b3c4-92d10ad33929/ovnkube-controller/1.log" Dec 06 05:29:02 crc kubenswrapper[4958]: I1206 05:29:02.122441 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" event={"ID":"4c75c3b8-96d9-442e-b3c4-92d10ad33929","Type":"ContainerStarted","Data":"52ffe1b33745a706abdd0f69fb10d2eab95f9526c0206d20c04b1e4c098295db"} Dec 06 05:29:02 crc kubenswrapper[4958]: I1206 05:29:02.123886 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" Dec 06 05:29:02 crc kubenswrapper[4958]: I1206 05:29:02.138906 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cxvtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48b90387-4ea9-47a3-8aa1-b8b19c5ed186\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce9f0a37477909d730255694146829d5d4527e3c3c3928bdd475b58506bbd544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d3ce1b34e648dbc34656fb0430468d6a998992cdb52f7858c3e9032781cabf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d3ce1b34e648dbc34656fb0430468d6a998992cdb52f7858c3e9032781cabf3\\\",\\\"exitCode\\
\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45259e88c3c2b9523b20a34f02fb59aa21e487dd207c8cb6e29ac571d89cd75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45259e88c3c2b9523b20a34f02fb59aa21e487dd207c8cb6e29ac571d89cd75c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5907422cf6b36b8d77741d339e507eee8a26c62b51c03458fbab243363441b49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5907422cf6b36b8d77741d339e507eee8a26c62b51c03458fbab243363441b49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdba082661b1dd38d5628fedac7afcea2e2960d502c61b9c21c723c0f6be8dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-
dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdba082661b1dd38d5628fedac7afcea2e2960d502c61b9c21c723c0f6be8dab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afdd138a15c3949f3eb5cdcac4d372fe8db3f1822f9fa33eb7f211ba93764903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afdd138a15c3949f3eb5cdcac4d372fe8db3f1822f9fa33eb7f211ba93764903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09ba0a78226b9b3b3b12202389168f7e17a1f2ba992a22dc9d436d29aac0d327\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09ba0a78226b9b3b3b12202389168f7e17a1f2ba992a22dc9d436d29aac0d327\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"r
ecursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cxvtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:02Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:02 crc kubenswrapper[4958]: I1206 05:29:02.157728 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c75c3b8-96d9-442e-b3c4-92d10ad33929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c7cf12b8a7e5da199c6d8c5bdeb41243c911cef6cab0bac81e407502a8ff144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6e7454c626f79f7ef231e33e2917e2a65265fc21ccf0bb078c9b7f5b3cec240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f87617d1f4b9c7ad893a539ddf123f3e213d38e44c084441f565d6535078c7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f6d7aa9cdfe8211f420e0a25851c58e125bd09b9d90ad4724703cc1112e45b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://473d669d19d2507144f95718dc331ea994a36254f7f7b6e2b262a0e9cb46bfc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ac1117aba254b7e048d9dd3feff2587d218f6d7a6f4ccbe455eb2b19e534aa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ffe1b33745a706abdd0f69fb10d2eab95f9526
c0206d20c04b1e4c098295db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db6efff3f2e029827d40e9e04671f6c8548165f0cc1b89261b7e826956c1b397\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T05:28:47Z\\\",\\\"message\\\":\\\" Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1206 05:28:46.902286 6355 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1206 05:28:46.902303 6355 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nI1206 05:28:46.902324 6355 obj_retry.go:303] Retry object setup: *v1.Pod openshift-etcd/etcd-crc\\\\nI1206 05:28:46.902340 6355 obj_retry.go:365] Adding new object: *v1.Pod openshift-etcd/etcd-crc\\\\nI1206 05:28:46.902344 6355 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-source-55646444c4-trplf] creating logical port openshift-network-diagnostics_network-check-source-55646444c4-trplf for pod on switch crc\\\\nI1206 05:28:46.902355 6355 ovn.go:134] Ensuring zone local for Pod openshift-etcd/etcd-crc in node crc\\\\nI1206 05:28:46.902367 6355 obj_retry.go:386] Retry successful for *v1.Pod openshift-etcd/etcd-crc after 0 failed attempt(s)\\\\nI1206 05:28:46.902377 6355 default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nF1206 05:28:46.902429 6355 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90354a506bfb2c066060aaa6c9c48c8fe36acc7b3fb7f76905c7aa2622c8c93e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://f16219fb96bfa08d1c4e1fbacac446f07ea75a221d1a6a4eddadd9e587f75ea9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f16219fb96bfa08d1c4e1fbacac446f07ea75a221d1a6a4eddadd9e587f75ea9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-f4swt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:02Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:02 crc kubenswrapper[4958]: I1206 05:29:02.171151 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"326ec8f3-f884-4d81-85c0-0fb98ff16b80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ab8b97e84e2f0ddf99e989d15f84695ae263c722141d62885129f9eb48226a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfa163f9e3eb5b2418860c0fa7fcbf96990229ac20da3d72cf18d3c326f466b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15f70e4124ff7d2705704dab6bf80d6ee77c2181e9bb400f7af227e9c18f0d20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aabbafda3fdfc2434de99224b35e72d2d536ed8b4af7dbdfa16058ab7f638531\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aabbafda3fdfc2434de99224b35e72d2d536ed8b4af7dbdfa16058ab7f638531\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:02Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:02 crc kubenswrapper[4958]: I1206 05:29:02.187770 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22338e19d740d76b4395405aae08f7cc69ca43dcf680a2bac1323c989e74aa02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:02Z is after 
2025-08-24T17:21:41Z" Dec 06 05:29:02 crc kubenswrapper[4958]: I1206 05:29:02.199202 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:02 crc kubenswrapper[4958]: I1206 05:29:02.199228 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:02 crc kubenswrapper[4958]: I1206 05:29:02.199237 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:02 crc kubenswrapper[4958]: I1206 05:29:02.199251 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:02 crc kubenswrapper[4958]: I1206 05:29:02.199259 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:02Z","lastTransitionTime":"2025-12-06T05:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:02 crc kubenswrapper[4958]: I1206 05:29:02.207536 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:02Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:02 crc kubenswrapper[4958]: I1206 05:29:02.224456 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85e394f19d7c9a454876e48621768d9f956790321aeea3140db17586f532e28c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7cba9186696fbc52dacdd8c02c28a44f8e4f11e2bd39065212cddf3acdd7f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:02Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:02 crc kubenswrapper[4958]: I1206 05:29:02.240561 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c13528c0-da5d-4d55-9155-2c29c33edfc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5cd9b0b70572f52b50e426a747be79d9f46795a9f5e890f4c6837ba67a34188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd2256c5c47e4b4d5c0d27e0a687c531d3d4e2162c3439a57acb90784599b441\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernete
s.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5ktnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:02Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:02 crc kubenswrapper[4958]: I1206 05:29:02.256024 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kb98t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c09fca2-7d91-412a-9814-64370d35b3e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pscv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pscv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kb98t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:02Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:02 crc kubenswrapper[4958]: I1206 05:29:02.284913 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3942544-663d-452b-851f-7ae1951315d0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c605d49963ffec3a03f9166f2a4b945adca772c1e6f9ef0c43e3b43498bb520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fa2e5229064a9479f641c0c6d934e8eda769ee88be13401c43ae136ce8bc24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0fa5b094aed0445fa892dd4d08fa83de565bda82c6ca9b9d536a3275df19b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9356ce9f50a037c7bbb99085c8d3a1cf6648fca
53bed4eb86d1be5cb82386bfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bb5270cbb6f2e29e1393dc0724b38332f5d2b7bd2947352c848a22efbcd979c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4460d6ee5ad193a5fdd65c4ddf2e2837db3e1a0a5f8f32b84ad83cc74cc6c779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4460d6ee5ad193a5fdd65c4ddf2e2837db3e1a0a5f8f32b84ad83cc74cc6c779\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d0adb65980c4e04df9b38b248a2d6bf58ac214ec2fe6ee6f8fbee38dc114904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d0adb65980c4e04df9b38b248a2d6bf58ac214ec2fe6ee6f8fbee38dc114904\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b4a1b86d7480be2b839295c4c7dc60abd79e17d12514d7a7f90453498dd586e5\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4a1b86d7480be2b839295c4c7dc60abd79e17d12514d7a7f90453498dd586e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:02Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:02 crc kubenswrapper[4958]: I1206 05:29:02.340293 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:02 crc kubenswrapper[4958]: I1206 05:29:02.340350 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:02 crc kubenswrapper[4958]: I1206 05:29:02.340367 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:02 crc kubenswrapper[4958]: I1206 05:29:02.340391 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:02 crc kubenswrapper[4958]: I1206 05:29:02.340410 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:02Z","lastTransitionTime":"2025-12-06T05:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:02 crc kubenswrapper[4958]: I1206 05:29:02.350665 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:02Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:02 crc kubenswrapper[4958]: I1206 05:29:02.370634 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bxxwq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"802ce14e-baaf-4d0d-87e3-3457209d8cda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://691435e89fcfa5b2dbd5ee54379f791a3e4f4df43c500a83bc61f7b20aca7ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlz6h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bxxwq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:02Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:02 crc kubenswrapper[4958]: I1206 05:29:02.383981 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-j25xk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f69a7b70-83e2-4775-872c-cc414f3d08ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3077eaf7e49467b9443569425f7666246ccd40207193f35407bd5ca69799be6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qktwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c146f2ee32c0a80c9bb07f7b61d094a36ba1bdd250b578479193dc8e50dc7cad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qktwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-j25xk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:02Z is after 2025-08-24T17:21:41Z" Dec 06 
05:29:02 crc kubenswrapper[4958]: I1206 05:29:02.399807 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:02Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:02 crc kubenswrapper[4958]: I1206 05:29:02.410569 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12ef2ec44908dbfbd456d6a97cb50054adb79c14bc062daacdd1be9ec59404b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:02Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:02 crc kubenswrapper[4958]: I1206 05:29:02.420343 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5mx5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ab56cd4-7270-4252-b6e6-cbc102b84d97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16ff54e7650f6ee3df33d0394fb87b8021695f1697bdf30239e59ab458bea64c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-448jg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5mx5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:02Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:02 crc kubenswrapper[4958]: I1206 05:29:02.432001 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wr7h5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24bb15c5aabe30ec2b8cea2cfd92c4a99c8e2170aecfe6cf33f745ecccf32c42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-778ks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wr7h5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:02Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:02 crc kubenswrapper[4958]: I1206 05:29:02.444014 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:02 crc kubenswrapper[4958]: I1206 05:29:02.444070 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:02 crc kubenswrapper[4958]: I1206 05:29:02.444091 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:02 crc kubenswrapper[4958]: I1206 05:29:02.444117 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:02 crc kubenswrapper[4958]: I1206 05:29:02.444138 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:02Z","lastTransitionTime":"2025-12-06T05:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:02 crc kubenswrapper[4958]: I1206 05:29:02.447727 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bbf3a5d-dfb7-4f26-a3a5-5a1198cffc78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ac8c174d2f2cc2b18fa67b1969e4ec3130a7a3ea2e1de302b6607334c8295a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a49f9fd54866385a2dd4c0218a2b5a828817807c2688301ea29004ba0578d556\\\",\\\"image\\\":\\\"quay.io/cr
cont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://799383b7dc23318bd8bdb994e83afadc90864baf5e694f5a7ef2208c6771660c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c9829c79fbd1ef57f374e8bf829162c2ae48074f4d02cc609d5f9b8791c61a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b2ffa013378c2ca81671e521b37d30af422fd6a418d17e7c2ca14b2da5557d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4001890a8c107a9365e884bae7b1d10f11ea1208ada74b61dcd4cf3db767f3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4001890a8c107a9365e884bae7b1d10f11ea1208ada74b61dcd4cf3db767f3fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:02Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:02 crc kubenswrapper[4958]: I1206 05:29:02.459177 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e7d0d99-10e2-4254-82b5-5490222f80fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f512fa9922310920769bab096c2bb8b2305a17dded545923c16b5e417db50b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dddeac64993fef3f326c165eeeb95bb4b5b9961f07f2b935e592b5aac9a51a32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resou
rces\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50e80d39fcfd69db193b9e6c98c92a60983dd0c62b061a69f045193110d159c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc69c938c136778135bd2a90d1671af107b76c4571f8e52327c98f120665d7b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:02Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:02 crc kubenswrapper[4958]: I1206 05:29:02.546502 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:02 crc kubenswrapper[4958]: I1206 05:29:02.546564 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:02 crc kubenswrapper[4958]: I1206 05:29:02.546576 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:02 crc kubenswrapper[4958]: I1206 05:29:02.546591 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:02 crc kubenswrapper[4958]: I1206 05:29:02.546599 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:02Z","lastTransitionTime":"2025-12-06T05:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:02 crc kubenswrapper[4958]: I1206 05:29:02.649138 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:02 crc kubenswrapper[4958]: I1206 05:29:02.649186 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:02 crc kubenswrapper[4958]: I1206 05:29:02.649197 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:02 crc kubenswrapper[4958]: I1206 05:29:02.649219 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:02 crc kubenswrapper[4958]: I1206 05:29:02.649231 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:02Z","lastTransitionTime":"2025-12-06T05:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:02 crc kubenswrapper[4958]: I1206 05:29:02.751790 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:02 crc kubenswrapper[4958]: I1206 05:29:02.751826 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:02 crc kubenswrapper[4958]: I1206 05:29:02.751841 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:02 crc kubenswrapper[4958]: I1206 05:29:02.751861 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:02 crc kubenswrapper[4958]: I1206 05:29:02.751877 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:02Z","lastTransitionTime":"2025-12-06T05:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
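[annotation] Between the patch failures the kubelet re-records the same NodeNotReady condition roughly every 100 ms. A rough sketch for confirming that this spam is one condition repeating rather than several distinct ones, assuming the journal has been dumped to stdin (the `journalctl` invocation in the comment is illustrative):

```python
import json
import re
import sys
from collections import Counter

# Matches: setters.go:603] "Node became not ready" node="crc" condition={...}
pat = re.compile(r'"Node became not ready" node="(?P<node>[^"]+)" condition=(?P<cond>\{.*\})')

counts = Counter()
for line in sys.stdin:  # e.g. journalctl -u kubelet --no-pager | python3 tally.py
    m = pat.search(line)
    if not m:
        continue
    cond = json.loads(m.group("cond"))  # the condition is plain JSON in the log line
    counts[(m.group("node"), cond["reason"], cond["message"])] += 1

for (node, reason, message), n in counts.most_common():
    print(f"{n:6d}  {node}  {reason}  {message[:60]}")
```

For this section every entry should collapse to a single (crc, KubeletNotReady, "container runtime network not ready ...") bucket.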
Has your network provider started?"} Dec 06 05:29:02 crc kubenswrapper[4958]: I1206 05:29:02.853607 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:02 crc kubenswrapper[4958]: I1206 05:29:02.853672 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:02 crc kubenswrapper[4958]: I1206 05:29:02.853694 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:02 crc kubenswrapper[4958]: I1206 05:29:02.853719 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:02 crc kubenswrapper[4958]: I1206 05:29:02.853740 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:02Z","lastTransitionTime":"2025-12-06T05:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:02 crc kubenswrapper[4958]: I1206 05:29:02.955818 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:02 crc kubenswrapper[4958]: I1206 05:29:02.955885 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:02 crc kubenswrapper[4958]: I1206 05:29:02.955902 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:02 crc kubenswrapper[4958]: I1206 05:29:02.955926 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:02 crc kubenswrapper[4958]: I1206 05:29:02.955944 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:02Z","lastTransitionTime":"2025-12-06T05:29:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:03 crc kubenswrapper[4958]: I1206 05:29:03.058423 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:03 crc kubenswrapper[4958]: I1206 05:29:03.058491 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:03 crc kubenswrapper[4958]: I1206 05:29:03.058505 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:03 crc kubenswrapper[4958]: I1206 05:29:03.058523 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:03 crc kubenswrapper[4958]: I1206 05:29:03.058535 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:03Z","lastTransitionTime":"2025-12-06T05:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:03 crc kubenswrapper[4958]: I1206 05:29:03.161160 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:03 crc kubenswrapper[4958]: I1206 05:29:03.161194 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:03 crc kubenswrapper[4958]: I1206 05:29:03.161204 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:03 crc kubenswrapper[4958]: I1206 05:29:03.161219 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:03 crc kubenswrapper[4958]: I1206 05:29:03.161229 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:03Z","lastTransitionTime":"2025-12-06T05:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:03 crc kubenswrapper[4958]: I1206 05:29:03.264303 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:03 crc kubenswrapper[4958]: I1206 05:29:03.264354 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:03 crc kubenswrapper[4958]: I1206 05:29:03.264375 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:03 crc kubenswrapper[4958]: I1206 05:29:03.264401 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:03 crc kubenswrapper[4958]: I1206 05:29:03.264419 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:03Z","lastTransitionTime":"2025-12-06T05:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:03 crc kubenswrapper[4958]: I1206 05:29:03.367454 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:03 crc kubenswrapper[4958]: I1206 05:29:03.367560 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:03 crc kubenswrapper[4958]: I1206 05:29:03.367583 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:03 crc kubenswrapper[4958]: I1206 05:29:03.367612 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:03 crc kubenswrapper[4958]: I1206 05:29:03.367643 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:03Z","lastTransitionTime":"2025-12-06T05:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:03 crc kubenswrapper[4958]: I1206 05:29:03.470500 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:03 crc kubenswrapper[4958]: I1206 05:29:03.470556 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:03 crc kubenswrapper[4958]: I1206 05:29:03.470575 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:03 crc kubenswrapper[4958]: I1206 05:29:03.470597 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:03 crc kubenswrapper[4958]: I1206 05:29:03.470614 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:03Z","lastTransitionTime":"2025-12-06T05:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:03 crc kubenswrapper[4958]: I1206 05:29:03.500452 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:03 crc kubenswrapper[4958]: I1206 05:29:03.500538 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:03 crc kubenswrapper[4958]: I1206 05:29:03.500558 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:03 crc kubenswrapper[4958]: I1206 05:29:03.500581 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:03 crc kubenswrapper[4958]: I1206 05:29:03.500600 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:03Z","lastTransitionTime":"2025-12-06T05:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
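[annotation] The node-status patch that follows uses strategic-merge-patch directives: `$setElementOrder/conditions` pins the ordering of the `conditions` list, whose elements are merged by their `type` key, so only changed conditions need full bodies. A reconstruction of that shape (field values are illustrative, not copied from the payload):

```python
import json

# Skeleton of the kubelet's node-status patch as seen in the escaped
# payload below; the $setElementOrder key is a strategic-merge directive,
# not a regular field.
patch = {
    "status": {
        "$setElementOrder/conditions": [
            {"type": "MemoryPressure"},
            {"type": "DiskPressure"},
            {"type": "PIDPressure"},
            {"type": "Ready"},
        ],
        "conditions": [
            # merged into the existing list by matching "type"
            {"type": "Ready", "status": "False", "reason": "KubeletNotReady"},
        ],
    },
}
print(json.dumps(patch, indent=2))
```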
Has your network provider started?"} Dec 06 05:29:03 crc kubenswrapper[4958]: E1206 05:29:03.526947 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
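[annotation] The same patch body carries the node's `allocatable`/`capacity` figures in Kubernetes resource-quantity notation: `11800m` is 11.8 CPU cores, `32404560Ki` is kibibytes, and the bare `76396645454` is bytes. A toy converter covering only the suffixes that appear in this log (deliberately not a general quantity parser):

```python
def parse_quantity(q: str) -> float:
    """Convert the quantity forms seen in this log: 'Ki', 'm', or bare."""
    if q.endswith("Ki"):
        return float(q[:-2]) * 1024   # kibibytes -> bytes
    if q.endswith("m"):
        return float(q[:-1]) / 1000   # millicores -> cores
    return float(q)                   # already in base units

# allocatable values copied from the patch above
allocatable = {"cpu": "11800m", "ephemeral-storage": "76396645454", "memory": "32404560Ki"}
for name, raw in allocatable.items():
    print(f"{name}: {raw} -> {parse_quantity(raw):,.1f}")
```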
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ae601d9-1e3a-4939-b6d5-fbef7be2f380\\\",\\\"systemUUID\\\":\\\"d8c60597-c05a-4627-8199-844ddb77ec1c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:03Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:03 crc kubenswrapper[4958]: I1206 05:29:03.534062 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:03 crc kubenswrapper[4958]: I1206 05:29:03.534122 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 06 05:29:03 crc kubenswrapper[4958]: I1206 05:29:03.534228 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:03 crc kubenswrapper[4958]: I1206 05:29:03.534295 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:03 crc kubenswrapper[4958]: I1206 05:29:03.534316 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:03Z","lastTransitionTime":"2025-12-06T05:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:03 crc kubenswrapper[4958]: E1206 05:29:03.555456 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ae601d9-1e3a-4939-b6d5-fbef7be2f380\\\",\\\"systemUUID\\\":\\\"d8c60597-c05a-4627-8199-844ddb77ec1c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:03Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:03 crc kubenswrapper[4958]: I1206 05:29:03.560456 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:03 crc kubenswrapper[4958]: I1206 05:29:03.560555 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 06 05:29:03 crc kubenswrapper[4958]: I1206 05:29:03.560579 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:03 crc kubenswrapper[4958]: I1206 05:29:03.560606 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:03 crc kubenswrapper[4958]: I1206 05:29:03.560628 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:03Z","lastTransitionTime":"2025-12-06T05:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:03 crc kubenswrapper[4958]: E1206 05:29:03.581100 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ae601d9-1e3a-4939-b6d5-fbef7be2f380\\\",\\\"systemUUID\\\":\\\"d8c60597-c05a-4627-8199-844ddb77ec1c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:03Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:03 crc kubenswrapper[4958]: I1206 05:29:03.586344 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:03 crc kubenswrapper[4958]: I1206 05:29:03.586588 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 06 05:29:03 crc kubenswrapper[4958]: I1206 05:29:03.586727 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:03 crc kubenswrapper[4958]: I1206 05:29:03.586889 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:03 crc kubenswrapper[4958]: I1206 05:29:03.587023 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:03Z","lastTransitionTime":"2025-12-06T05:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:03 crc kubenswrapper[4958]: E1206 05:29:03.608175 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ae601d9-1e3a-4939-b6d5-fbef7be2f380\\\",\\\"systemUUID\\\":\\\"d8c60597-c05a-4627-8199-844ddb77ec1c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:03Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:03 crc kubenswrapper[4958]: I1206 05:29:03.613832 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:03 crc kubenswrapper[4958]: I1206 05:29:03.613901 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 06 05:29:03 crc kubenswrapper[4958]: I1206 05:29:03.613926 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:03 crc kubenswrapper[4958]: I1206 05:29:03.613956 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:03 crc kubenswrapper[4958]: I1206 05:29:03.613977 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:03Z","lastTransitionTime":"2025-12-06T05:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:03 crc kubenswrapper[4958]: E1206 05:29:03.638565 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:03Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:03Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:03Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:03Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ae601d9-1e3a-4939-b6d5-fbef7be2f380\\\",\\\"systemUUID\\\":\\\"d8c60597-c05a-4627-8199-844ddb77ec1c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:03Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:03 crc kubenswrapper[4958]: E1206 05:29:03.638804 4958 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 06 05:29:03 crc kubenswrapper[4958]: I1206 05:29:03.641376 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 06 05:29:03 crc kubenswrapper[4958]: I1206 05:29:03.641440 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:03 crc kubenswrapper[4958]: I1206 05:29:03.641458 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:03 crc kubenswrapper[4958]: I1206 05:29:03.641537 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:03 crc kubenswrapper[4958]: I1206 05:29:03.641564 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:03Z","lastTransitionTime":"2025-12-06T05:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:03 crc kubenswrapper[4958]: I1206 05:29:03.744655 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:03 crc kubenswrapper[4958]: I1206 05:29:03.744728 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:03 crc kubenswrapper[4958]: I1206 05:29:03.744752 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:03 crc kubenswrapper[4958]: I1206 05:29:03.744781 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:03 crc kubenswrapper[4958]: I1206 05:29:03.744804 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:03Z","lastTransitionTime":"2025-12-06T05:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:03 crc kubenswrapper[4958]: I1206 05:29:03.761004 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 05:29:03 crc kubenswrapper[4958]: I1206 05:29:03.761117 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kb98t" Dec 06 05:29:03 crc kubenswrapper[4958]: E1206 05:29:03.761204 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 05:29:03 crc kubenswrapper[4958]: E1206 05:29:03.761355 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kb98t" podUID="2c09fca2-7d91-412a-9814-64370d35b3e9" Dec 06 05:29:03 crc kubenswrapper[4958]: I1206 05:29:03.761509 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 05:29:03 crc kubenswrapper[4958]: E1206 05:29:03.761612 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 05:29:03 crc kubenswrapper[4958]: I1206 05:29:03.761774 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 05:29:03 crc kubenswrapper[4958]: E1206 05:29:03.761877 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 05:29:03 crc kubenswrapper[4958]: I1206 05:29:03.848312 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:03 crc kubenswrapper[4958]: I1206 05:29:03.848377 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:03 crc kubenswrapper[4958]: I1206 05:29:03.848396 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:03 crc kubenswrapper[4958]: I1206 05:29:03.848422 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:03 crc kubenswrapper[4958]: I1206 05:29:03.848440 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:03Z","lastTransitionTime":"2025-12-06T05:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:03 crc kubenswrapper[4958]: I1206 05:29:03.951627 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:03 crc kubenswrapper[4958]: I1206 05:29:03.951694 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:03 crc kubenswrapper[4958]: I1206 05:29:03.951715 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:03 crc kubenswrapper[4958]: I1206 05:29:03.951741 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:03 crc kubenswrapper[4958]: I1206 05:29:03.951762 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:03Z","lastTransitionTime":"2025-12-06T05:29:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:04 crc kubenswrapper[4958]: I1206 05:29:04.054146 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:04 crc kubenswrapper[4958]: I1206 05:29:04.054182 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:04 crc kubenswrapper[4958]: I1206 05:29:04.054192 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:04 crc kubenswrapper[4958]: I1206 05:29:04.054203 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:04 crc kubenswrapper[4958]: I1206 05:29:04.054211 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:04Z","lastTransitionTime":"2025-12-06T05:29:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:04 crc kubenswrapper[4958]: I1206 05:29:04.130871 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-f4swt_4c75c3b8-96d9-442e-b3c4-92d10ad33929/ovnkube-controller/2.log" Dec 06 05:29:04 crc kubenswrapper[4958]: I1206 05:29:04.131850 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-f4swt_4c75c3b8-96d9-442e-b3c4-92d10ad33929/ovnkube-controller/1.log" Dec 06 05:29:04 crc kubenswrapper[4958]: I1206 05:29:04.134706 4958 generic.go:334] "Generic (PLEG): container finished" podID="4c75c3b8-96d9-442e-b3c4-92d10ad33929" containerID="52ffe1b33745a706abdd0f69fb10d2eab95f9526c0206d20c04b1e4c098295db" exitCode=1 Dec 06 05:29:04 crc kubenswrapper[4958]: I1206 05:29:04.134756 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" event={"ID":"4c75c3b8-96d9-442e-b3c4-92d10ad33929","Type":"ContainerDied","Data":"52ffe1b33745a706abdd0f69fb10d2eab95f9526c0206d20c04b1e4c098295db"} Dec 06 05:29:04 crc kubenswrapper[4958]: I1206 05:29:04.134789 4958 scope.go:117] "RemoveContainer" containerID="db6efff3f2e029827d40e9e04671f6c8548165f0cc1b89261b7e826956c1b397" Dec 06 05:29:04 crc kubenswrapper[4958]: I1206 05:29:04.135610 4958 scope.go:117] "RemoveContainer" containerID="52ffe1b33745a706abdd0f69fb10d2eab95f9526c0206d20c04b1e4c098295db" Dec 06 05:29:04 crc kubenswrapper[4958]: E1206 05:29:04.135836 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-f4swt_openshift-ovn-kubernetes(4c75c3b8-96d9-442e-b3c4-92d10ad33929)\"" pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" podUID="4c75c3b8-96d9-442e-b3c4-92d10ad33929" Dec 06 05:29:04 crc kubenswrapper[4958]: I1206 05:29:04.148988 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e7d0d99-10e2-4254-82b5-5490222f80fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f512fa9922310920769bab096c2bb8b2305a17dded545923c16b5e417db50b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dddeac64993fef3f326c165eeeb95bb4b5b9961f07f2b935e592b5aac9a51a32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50e80d39fcfd69db193b9e6c98c92a60983dd0c62b061a69f045193110d159c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc69c938c136778135bd2a90d1671af107b76c4571f8e52327c98f120665d7b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:04Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:04 crc kubenswrapper[4958]: I1206 05:29:04.156807 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:04 crc kubenswrapper[4958]: I1206 05:29:04.156863 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:04 crc kubenswrapper[4958]: I1206 05:29:04.156875 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:04 crc kubenswrapper[4958]: I1206 05:29:04.156892 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:04 crc kubenswrapper[4958]: I1206 05:29:04.156902 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:04Z","lastTransitionTime":"2025-12-06T05:29:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:04 crc kubenswrapper[4958]: I1206 05:29:04.164084 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bbf3a5d-dfb7-4f26-a3a5-5a1198cffc78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ac8c174d2f2cc2b18fa67b1969e4ec3130a7a3ea2e1de302b6607334c8295a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a49f9fd54866385a2dd4c0218a2b5a828817807c2688301ea29004ba0578d556\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://799383b7dc23318bd8bdb994e83afadc90864baf5e694f5a7ef2208c6771660c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c9829c79fbd1ef57f374e8bf829162c2ae48074f4d02cc609d5f9b8791c61a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b2ffa013378c2ca81671e521b37d30af422fd6a418d17e7c2ca14b2da5557d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4001890a8c107a9365e884bae7b1d10f11ea1208ada74b61dcd4cf3db767f3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4001890a8c107a9365e884bae7b1d10f11ea1208ada74b61dcd4cf3db767f3fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:04Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:04 crc kubenswrapper[4958]: I1206 05:29:04.175331 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"326ec8f3-f884-4d81-85c0-0fb98ff16b80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ab8b97e84e2f0ddf99e989d15f84695ae263c722141d62885129f9eb48226a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfa163f9e3eb5b2418860c0fa7fcbf96990229ac20da3d72cf18d3c326f466b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15f70e4124ff7d2705704dab6bf80d6ee77c2181e9bb400f7af227e9c18f0d20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aabbafda3fdfc2434de99224b35e72d2d536ed8b4af7dbdfa16058ab7f638531\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aabbafda3fdfc2434de99224b35e72d2d536ed8b4af7dbdfa16058ab7f638531\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:04Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:04 crc kubenswrapper[4958]: I1206 05:29:04.187311 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22338e19d740d76b4395405aae08f7cc69ca43dcf680a2bac1323c989e74aa02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:04Z is after 
2025-08-24T17:21:41Z" Dec 06 05:29:04 crc kubenswrapper[4958]: I1206 05:29:04.199063 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:04Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:04 crc kubenswrapper[4958]: I1206 05:29:04.212582 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85e394f19d7c9a454876e48621768d9f956790321aeea3140db17586f532e28c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7cba9186696fbc52dacdd8c02c28a44f8e4f11e2bd39065212cddf3acdd7f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:04Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:04 crc kubenswrapper[4958]: I1206 05:29:04.237010 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c13528c0-da5d-4d55-9155-2c29c33edfc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5cd9b0b70572f52b50e426a747be79d9f46795a9f5e890f4c6837ba67a34188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd2256c5c47e4b4d5c0d27e0a687c531d3d4e2162c3439a57acb90784599b441\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5ktnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:04Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:04 crc kubenswrapper[4958]: I1206 05:29:04.259919 4958 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:04 crc kubenswrapper[4958]: I1206 05:29:04.259962 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:04 crc kubenswrapper[4958]: I1206 05:29:04.259973 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:04 crc kubenswrapper[4958]: I1206 05:29:04.259991 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:04 crc kubenswrapper[4958]: I1206 05:29:04.260004 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:04Z","lastTransitionTime":"2025-12-06T05:29:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:04 crc kubenswrapper[4958]: I1206 05:29:04.261510 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cxvtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48b90387-4ea9-47a3-8aa1-b8b19c5ed186\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce9f0a37477909d730255694146829d5d4527e3c3c3928bdd475b58506bbd544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d3ce1b34e648dbc34656fb0430468d6a998992cdb52f7858c3e9032781cabf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2
c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d3ce1b34e648dbc34656fb0430468d6a998992cdb52f7858c3e9032781cabf3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45259e88c3c2b9523b20a34f02fb59aa21e487dd207c8cb6e29ac571d89cd75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45259e88c3c2b9523b20a34f02fb59aa21e487dd207c8cb6e29ac571d89cd75c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5907422cf6b36b8d77741d339e507eee8a26c62b51c03458fbab243363441b49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5907422cf6b36b8d77741d339e507eee8a26c62b51c03458fbab243363441b49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/
secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdba082661b1dd38d5628fedac7afcea2e2960d502c61b9c21c723c0f6be8dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdba082661b1dd38d5628fedac7afcea2e2960d502c61b9c21c723c0f6be8dab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afdd138a15c3949f3eb5cdcac4d372fe8db3f1822f9fa33eb7f211ba93764903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afdd138a15c3949f3eb5cdcac4d372fe8db3f1822f9fa33eb7f211ba93764903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09ba0a78226b9b3b3b12202389168f7e17a1f2ba992a22dc9d436d29aac0d327\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09ba0a78226b9b3b3b12202389168f7e17a1f2ba992a22dc9d436d29aac0d327\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:35Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cxvtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:04Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:04 crc kubenswrapper[4958]: I1206 05:29:04.281427 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c75c3b8-96d9-442e-b3c4-92d10ad33929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c7cf12b8a7e5da199c6d8c5bdeb41243c911cef6cab0bac81e407502a8ff144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6e7454c626f79f7ef231e33e2917e2a65265fc21ccf0bb078c9b7f5b3cec240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f87617d1f4b9c7ad893a539ddf123f3e213d38e44c084441f565d6535078c7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f6d7aa9cdfe8211f420e0a25851c58e125bd09b9d90ad4724703cc1112e45b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://473d669d19d2507144f95718dc331ea994a36254f7f7b6e2b262a0e9cb46bfc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ac1117aba254b7e048d9dd3feff2587d218f6d7a6f4ccbe455eb2b19e534aa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ffe1b33745a706abdd0f69fb10d2eab95f9526
c0206d20c04b1e4c098295db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db6efff3f2e029827d40e9e04671f6c8548165f0cc1b89261b7e826956c1b397\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T05:28:47Z\\\",\\\"message\\\":\\\" Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1206 05:28:46.902286 6355 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1206 05:28:46.902303 6355 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nI1206 05:28:46.902324 6355 obj_retry.go:303] Retry object setup: *v1.Pod openshift-etcd/etcd-crc\\\\nI1206 05:28:46.902340 6355 obj_retry.go:365] Adding new object: *v1.Pod openshift-etcd/etcd-crc\\\\nI1206 05:28:46.902344 6355 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-source-55646444c4-trplf] creating logical port openshift-network-diagnostics_network-check-source-55646444c4-trplf for pod on switch crc\\\\nI1206 05:28:46.902355 6355 ovn.go:134] Ensuring zone local for Pod openshift-etcd/etcd-crc in node crc\\\\nI1206 05:28:46.902367 6355 obj_retry.go:386] Retry successful for *v1.Pod openshift-etcd/etcd-crc after 0 failed attempt(s)\\\\nI1206 05:28:46.902377 6355 default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nF1206 05:28:46.902429 6355 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52ffe1b33745a706abdd0f69fb10d2eab95f9526c0206d20c04b1e4c098295db\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T05:29:03Z\\\",\\\"message\\\":\\\"t:0,AppProtocol:nil,},ServicePort{Name:metrics,Protocol:TCP,Port:9154,TargetPort:{1 0 metrics},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{dns.operator.openshift.io/daemonset-dns: default,},ClusterIP:10.217.4.10,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.10],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF1206 05:29:02.740208 6561 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network 
controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": fa\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T05:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90354a506bfb2c066060aaa6c9c48c8fe36acc7b3fb7f76905c7aa2622c8c93e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f16219f
b96bfa08d1c4e1fbacac446f07ea75a221d1a6a4eddadd9e587f75ea9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f16219fb96bfa08d1c4e1fbacac446f07ea75a221d1a6a4eddadd9e587f75ea9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-f4swt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:04Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:04 crc kubenswrapper[4958]: I1206 05:29:04.292173 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kb98t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c09fca2-7d91-412a-9814-64370d35b3e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pscv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pscv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kb98t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:04Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:04 crc kubenswrapper[4958]: I1206 05:29:04.303583 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:04Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:04 crc kubenswrapper[4958]: I1206 05:29:04.312784 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bxxwq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"802ce14e-baaf-4d0d-87e3-3457209d8cda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://691435e89fcfa5b2dbd5ee54379f791a3e4f4df43c500a83bc61f7b20aca7ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlz6h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bxxwq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:04Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:04 crc kubenswrapper[4958]: I1206 05:29:04.321863 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-j25xk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f69a7b70-83e2-4775-872c-cc414f3d08ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3077eaf7e49467b9443569425f7666246ccd40207193f35407bd5ca69799be6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qktwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c146f2ee32c0a80c9bb07f7b61d094a36ba1bdd250b578479193dc8e50dc7cad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qktwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:42Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-j25xk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:04Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:04 crc kubenswrapper[4958]: I1206 05:29:04.338521 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3942544-663d-452b-851f-7ae1951315d0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c605d49963ffec3a03f9166f2a4b945adca772c1e6f9ef0c43e3b43498bb520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fa2e5229064a9479f641c0c6d934e8eda769ee88be13401c43ae136ce8bc24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0fa5b094aed0445fa892dd4d08fa83de565bda82c6ca9b9d536a3275df19b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/
openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9356ce9f50a037c7bbb99085c8d3a1cf6648fca53bed4eb86d1be5cb82386bfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bb5270cbb6f2e29e1393dc0724b38332f5d2b7bd2947352c848a22efbcd979c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4460d6ee5ad193a5fdd65c4ddf2e2837db3e1a0a5f8f32b84ad83cc74cc6c779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4460d6ee5ad193a5fdd65c4ddf2e2837db3e1a0a5f8f32b84ad83cc74cc6c779\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d0adb65980c4e04df9b38b248a2d6bf58ac214ec2fe6ee6f8fbee38dc114904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c687744
1ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d0adb65980c4e04df9b38b248a2d6bf58ac214ec2fe6ee6f8fbee38dc114904\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b4a1b86d7480be2b839295c4c7dc60abd79e17d12514d7a7f90453498dd586e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4a1b86d7480be2b839295c4c7dc60abd79e17d12514d7a7f90453498dd586e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:04Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:04 crc kubenswrapper[4958]: I1206 05:29:04.354825 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12ef2ec44908dbfbd456d6a97cb50054adb79c14bc062daacdd1be9ec59404b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:04Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:04 crc kubenswrapper[4958]: I1206 05:29:04.362148 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:04 crc kubenswrapper[4958]: I1206 05:29:04.362181 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:04 crc kubenswrapper[4958]: I1206 05:29:04.362191 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:04 crc kubenswrapper[4958]: I1206 05:29:04.362206 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:04 crc kubenswrapper[4958]: I1206 05:29:04.362217 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:04Z","lastTransitionTime":"2025-12-06T05:29:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:04 crc kubenswrapper[4958]: I1206 05:29:04.365911 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5mx5v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ab56cd4-7270-4252-b6e6-cbc102b84d97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16ff54e7650f6ee3df33d0394fb87b8021695f1697bdf30239e59ab458bea64c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-448jg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5mx5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:04Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:04 crc kubenswrapper[4958]: I1206 05:29:04.378147 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wr7h5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24bb15c5aabe30ec2b8cea2cfd92c4a99c8e2170aecfe6cf33f745ecccf32c42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-778ks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wr7h5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:04Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:04 crc kubenswrapper[4958]: I1206 05:29:04.391071 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:04Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:04 crc kubenswrapper[4958]: I1206 05:29:04.465207 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:04 crc kubenswrapper[4958]: I1206 05:29:04.465252 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:04 crc kubenswrapper[4958]: I1206 05:29:04.465263 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:04 crc kubenswrapper[4958]: I1206 05:29:04.465279 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:04 crc kubenswrapper[4958]: I1206 05:29:04.465290 4958 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:04Z","lastTransitionTime":"2025-12-06T05:29:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:04 crc kubenswrapper[4958]: I1206 05:29:04.568244 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:04 crc kubenswrapper[4958]: I1206 05:29:04.568325 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:04 crc kubenswrapper[4958]: I1206 05:29:04.568345 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:04 crc kubenswrapper[4958]: I1206 05:29:04.568374 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:04 crc kubenswrapper[4958]: I1206 05:29:04.568394 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:04Z","lastTransitionTime":"2025-12-06T05:29:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:04 crc kubenswrapper[4958]: I1206 05:29:04.671274 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:04 crc kubenswrapper[4958]: I1206 05:29:04.671353 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:04 crc kubenswrapper[4958]: I1206 05:29:04.671371 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:04 crc kubenswrapper[4958]: I1206 05:29:04.671398 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:04 crc kubenswrapper[4958]: I1206 05:29:04.671440 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:04Z","lastTransitionTime":"2025-12-06T05:29:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:04 crc kubenswrapper[4958]: I1206 05:29:04.774921 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:04 crc kubenswrapper[4958]: I1206 05:29:04.775016 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:04 crc kubenswrapper[4958]: I1206 05:29:04.775039 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:04 crc kubenswrapper[4958]: I1206 05:29:04.775072 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:04 crc kubenswrapper[4958]: I1206 05:29:04.775094 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:04Z","lastTransitionTime":"2025-12-06T05:29:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:04 crc kubenswrapper[4958]: I1206 05:29:04.878313 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:04 crc kubenswrapper[4958]: I1206 05:29:04.878398 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:04 crc kubenswrapper[4958]: I1206 05:29:04.878421 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:04 crc kubenswrapper[4958]: I1206 05:29:04.878449 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:04 crc kubenswrapper[4958]: I1206 05:29:04.878513 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:04Z","lastTransitionTime":"2025-12-06T05:29:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:04 crc kubenswrapper[4958]: I1206 05:29:04.981529 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:04 crc kubenswrapper[4958]: I1206 05:29:04.981591 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:04 crc kubenswrapper[4958]: I1206 05:29:04.981612 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:04 crc kubenswrapper[4958]: I1206 05:29:04.981638 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:04 crc kubenswrapper[4958]: I1206 05:29:04.981654 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:04Z","lastTransitionTime":"2025-12-06T05:29:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:05 crc kubenswrapper[4958]: I1206 05:29:05.084945 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:05 crc kubenswrapper[4958]: I1206 05:29:05.085017 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:05 crc kubenswrapper[4958]: I1206 05:29:05.085036 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:05 crc kubenswrapper[4958]: I1206 05:29:05.085062 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:05 crc kubenswrapper[4958]: I1206 05:29:05.085079 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:05Z","lastTransitionTime":"2025-12-06T05:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:05 crc kubenswrapper[4958]: I1206 05:29:05.141359 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-f4swt_4c75c3b8-96d9-442e-b3c4-92d10ad33929/ovnkube-controller/2.log" Dec 06 05:29:05 crc kubenswrapper[4958]: I1206 05:29:05.188456 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:05 crc kubenswrapper[4958]: I1206 05:29:05.188532 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:05 crc kubenswrapper[4958]: I1206 05:29:05.188550 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:05 crc kubenswrapper[4958]: I1206 05:29:05.188573 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:05 crc kubenswrapper[4958]: I1206 05:29:05.188590 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:05Z","lastTransitionTime":"2025-12-06T05:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:05 crc kubenswrapper[4958]: I1206 05:29:05.298100 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:05 crc kubenswrapper[4958]: I1206 05:29:05.298182 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:05 crc kubenswrapper[4958]: I1206 05:29:05.298197 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:05 crc kubenswrapper[4958]: I1206 05:29:05.298554 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:05 crc kubenswrapper[4958]: I1206 05:29:05.298600 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:05Z","lastTransitionTime":"2025-12-06T05:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:05 crc kubenswrapper[4958]: I1206 05:29:05.402621 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:05 crc kubenswrapper[4958]: I1206 05:29:05.402697 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:05 crc kubenswrapper[4958]: I1206 05:29:05.402721 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:05 crc kubenswrapper[4958]: I1206 05:29:05.402752 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:05 crc kubenswrapper[4958]: I1206 05:29:05.402770 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:05Z","lastTransitionTime":"2025-12-06T05:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:05 crc kubenswrapper[4958]: I1206 05:29:05.506532 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:05 crc kubenswrapper[4958]: I1206 05:29:05.506594 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:05 crc kubenswrapper[4958]: I1206 05:29:05.506606 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:05 crc kubenswrapper[4958]: I1206 05:29:05.506628 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:05 crc kubenswrapper[4958]: I1206 05:29:05.506644 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:05Z","lastTransitionTime":"2025-12-06T05:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:05 crc kubenswrapper[4958]: I1206 05:29:05.609563 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:05 crc kubenswrapper[4958]: I1206 05:29:05.609622 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:05 crc kubenswrapper[4958]: I1206 05:29:05.609640 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:05 crc kubenswrapper[4958]: I1206 05:29:05.609665 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:05 crc kubenswrapper[4958]: I1206 05:29:05.609684 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:05Z","lastTransitionTime":"2025-12-06T05:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:05 crc kubenswrapper[4958]: I1206 05:29:05.712322 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:05 crc kubenswrapper[4958]: I1206 05:29:05.712386 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:05 crc kubenswrapper[4958]: I1206 05:29:05.712403 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:05 crc kubenswrapper[4958]: I1206 05:29:05.712426 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:05 crc kubenswrapper[4958]: I1206 05:29:05.712443 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:05Z","lastTransitionTime":"2025-12-06T05:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:05 crc kubenswrapper[4958]: I1206 05:29:05.761314 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 05:29:05 crc kubenswrapper[4958]: I1206 05:29:05.761405 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 05:29:05 crc kubenswrapper[4958]: I1206 05:29:05.761443 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kb98t" Dec 06 05:29:05 crc kubenswrapper[4958]: I1206 05:29:05.761409 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 05:29:05 crc kubenswrapper[4958]: E1206 05:29:05.761577 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 05:29:05 crc kubenswrapper[4958]: E1206 05:29:05.761957 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 05:29:05 crc kubenswrapper[4958]: E1206 05:29:05.762237 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kb98t" podUID="2c09fca2-7d91-412a-9814-64370d35b3e9" Dec 06 05:29:05 crc kubenswrapper[4958]: E1206 05:29:05.762336 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 05:29:05 crc kubenswrapper[4958]: I1206 05:29:05.815007 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:05 crc kubenswrapper[4958]: I1206 05:29:05.815045 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:05 crc kubenswrapper[4958]: I1206 05:29:05.815053 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:05 crc kubenswrapper[4958]: I1206 05:29:05.815067 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:05 crc kubenswrapper[4958]: I1206 05:29:05.815078 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:05Z","lastTransitionTime":"2025-12-06T05:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:05 crc kubenswrapper[4958]: I1206 05:29:05.916848 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:05 crc kubenswrapper[4958]: I1206 05:29:05.916890 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:05 crc kubenswrapper[4958]: I1206 05:29:05.916899 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:05 crc kubenswrapper[4958]: I1206 05:29:05.916913 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:05 crc kubenswrapper[4958]: I1206 05:29:05.916926 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:05Z","lastTransitionTime":"2025-12-06T05:29:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:06 crc kubenswrapper[4958]: I1206 05:29:06.020093 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:06 crc kubenswrapper[4958]: I1206 05:29:06.020163 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:06 crc kubenswrapper[4958]: I1206 05:29:06.020186 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:06 crc kubenswrapper[4958]: I1206 05:29:06.020221 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:06 crc kubenswrapper[4958]: I1206 05:29:06.020246 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:06Z","lastTransitionTime":"2025-12-06T05:29:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:06 crc kubenswrapper[4958]: I1206 05:29:06.122710 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:06 crc kubenswrapper[4958]: I1206 05:29:06.122807 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:06 crc kubenswrapper[4958]: I1206 05:29:06.122832 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:06 crc kubenswrapper[4958]: I1206 05:29:06.122863 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:06 crc kubenswrapper[4958]: I1206 05:29:06.122884 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:06Z","lastTransitionTime":"2025-12-06T05:29:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:06 crc kubenswrapper[4958]: I1206 05:29:06.225241 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:06 crc kubenswrapper[4958]: I1206 05:29:06.225312 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:06 crc kubenswrapper[4958]: I1206 05:29:06.225323 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:06 crc kubenswrapper[4958]: I1206 05:29:06.225343 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:06 crc kubenswrapper[4958]: I1206 05:29:06.225356 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:06Z","lastTransitionTime":"2025-12-06T05:29:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:06 crc kubenswrapper[4958]: I1206 05:29:06.328844 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:06 crc kubenswrapper[4958]: I1206 05:29:06.328913 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:06 crc kubenswrapper[4958]: I1206 05:29:06.328931 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:06 crc kubenswrapper[4958]: I1206 05:29:06.328955 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:06 crc kubenswrapper[4958]: I1206 05:29:06.328971 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:06Z","lastTransitionTime":"2025-12-06T05:29:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:06 crc kubenswrapper[4958]: I1206 05:29:06.431869 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:06 crc kubenswrapper[4958]: I1206 05:29:06.431942 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:06 crc kubenswrapper[4958]: I1206 05:29:06.431964 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:06 crc kubenswrapper[4958]: I1206 05:29:06.431992 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:06 crc kubenswrapper[4958]: I1206 05:29:06.432018 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:06Z","lastTransitionTime":"2025-12-06T05:29:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:06 crc kubenswrapper[4958]: I1206 05:29:06.534815 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:06 crc kubenswrapper[4958]: I1206 05:29:06.534865 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:06 crc kubenswrapper[4958]: I1206 05:29:06.534877 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:06 crc kubenswrapper[4958]: I1206 05:29:06.534895 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:06 crc kubenswrapper[4958]: I1206 05:29:06.534905 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:06Z","lastTransitionTime":"2025-12-06T05:29:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:06 crc kubenswrapper[4958]: I1206 05:29:06.638586 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:06 crc kubenswrapper[4958]: I1206 05:29:06.638670 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:06 crc kubenswrapper[4958]: I1206 05:29:06.638684 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:06 crc kubenswrapper[4958]: I1206 05:29:06.638709 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:06 crc kubenswrapper[4958]: I1206 05:29:06.638728 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:06Z","lastTransitionTime":"2025-12-06T05:29:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:06 crc kubenswrapper[4958]: I1206 05:29:06.741824 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:06 crc kubenswrapper[4958]: I1206 05:29:06.741883 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:06 crc kubenswrapper[4958]: I1206 05:29:06.741893 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:06 crc kubenswrapper[4958]: I1206 05:29:06.741913 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:06 crc kubenswrapper[4958]: I1206 05:29:06.741926 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:06Z","lastTransitionTime":"2025-12-06T05:29:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:06 crc kubenswrapper[4958]: I1206 05:29:06.846618 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:06 crc kubenswrapper[4958]: I1206 05:29:06.846818 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:06 crc kubenswrapper[4958]: I1206 05:29:06.846842 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:06 crc kubenswrapper[4958]: I1206 05:29:06.846880 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:06 crc kubenswrapper[4958]: I1206 05:29:06.846911 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:06Z","lastTransitionTime":"2025-12-06T05:29:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:06 crc kubenswrapper[4958]: I1206 05:29:06.951168 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:06 crc kubenswrapper[4958]: I1206 05:29:06.951223 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:06 crc kubenswrapper[4958]: I1206 05:29:06.951241 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:06 crc kubenswrapper[4958]: I1206 05:29:06.951266 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:06 crc kubenswrapper[4958]: I1206 05:29:06.951286 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:06Z","lastTransitionTime":"2025-12-06T05:29:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:07 crc kubenswrapper[4958]: I1206 05:29:07.054650 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:07 crc kubenswrapper[4958]: I1206 05:29:07.054748 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:07 crc kubenswrapper[4958]: I1206 05:29:07.054777 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:07 crc kubenswrapper[4958]: I1206 05:29:07.054802 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:07 crc kubenswrapper[4958]: I1206 05:29:07.054820 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:07Z","lastTransitionTime":"2025-12-06T05:29:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:07 crc kubenswrapper[4958]: I1206 05:29:07.158087 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:07 crc kubenswrapper[4958]: I1206 05:29:07.158158 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:07 crc kubenswrapper[4958]: I1206 05:29:07.158182 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:07 crc kubenswrapper[4958]: I1206 05:29:07.158211 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:07 crc kubenswrapper[4958]: I1206 05:29:07.158233 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:07Z","lastTransitionTime":"2025-12-06T05:29:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:07 crc kubenswrapper[4958]: I1206 05:29:07.261354 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:07 crc kubenswrapper[4958]: I1206 05:29:07.261400 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:07 crc kubenswrapper[4958]: I1206 05:29:07.261416 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:07 crc kubenswrapper[4958]: I1206 05:29:07.261437 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:07 crc kubenswrapper[4958]: I1206 05:29:07.261453 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:07Z","lastTransitionTime":"2025-12-06T05:29:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:07 crc kubenswrapper[4958]: I1206 05:29:07.364351 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:07 crc kubenswrapper[4958]: I1206 05:29:07.364384 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:07 crc kubenswrapper[4958]: I1206 05:29:07.364392 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:07 crc kubenswrapper[4958]: I1206 05:29:07.364404 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:07 crc kubenswrapper[4958]: I1206 05:29:07.364413 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:07Z","lastTransitionTime":"2025-12-06T05:29:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:07 crc kubenswrapper[4958]: I1206 05:29:07.468075 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:07 crc kubenswrapper[4958]: I1206 05:29:07.468181 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:07 crc kubenswrapper[4958]: I1206 05:29:07.468206 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:07 crc kubenswrapper[4958]: I1206 05:29:07.468252 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:07 crc kubenswrapper[4958]: I1206 05:29:07.468278 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:07Z","lastTransitionTime":"2025-12-06T05:29:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:07 crc kubenswrapper[4958]: I1206 05:29:07.571219 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:07 crc kubenswrapper[4958]: I1206 05:29:07.571266 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:07 crc kubenswrapper[4958]: I1206 05:29:07.571279 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:07 crc kubenswrapper[4958]: I1206 05:29:07.571295 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:07 crc kubenswrapper[4958]: I1206 05:29:07.571308 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:07Z","lastTransitionTime":"2025-12-06T05:29:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:07 crc kubenswrapper[4958]: I1206 05:29:07.674656 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:07 crc kubenswrapper[4958]: I1206 05:29:07.674730 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:07 crc kubenswrapper[4958]: I1206 05:29:07.674753 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:07 crc kubenswrapper[4958]: I1206 05:29:07.674784 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:07 crc kubenswrapper[4958]: I1206 05:29:07.674811 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:07Z","lastTransitionTime":"2025-12-06T05:29:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:07 crc kubenswrapper[4958]: I1206 05:29:07.761904 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kb98t" Dec 06 05:29:07 crc kubenswrapper[4958]: I1206 05:29:07.761964 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 05:29:07 crc kubenswrapper[4958]: I1206 05:29:07.762037 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 05:29:07 crc kubenswrapper[4958]: I1206 05:29:07.761899 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 05:29:07 crc kubenswrapper[4958]: E1206 05:29:07.762107 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kb98t" podUID="2c09fca2-7d91-412a-9814-64370d35b3e9" Dec 06 05:29:07 crc kubenswrapper[4958]: E1206 05:29:07.762253 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 05:29:07 crc kubenswrapper[4958]: E1206 05:29:07.762530 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 05:29:07 crc kubenswrapper[4958]: E1206 05:29:07.762602 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 05:29:07 crc kubenswrapper[4958]: I1206 05:29:07.777535 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:07 crc kubenswrapper[4958]: I1206 05:29:07.777592 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:07 crc kubenswrapper[4958]: I1206 05:29:07.777609 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:07 crc kubenswrapper[4958]: I1206 05:29:07.777634 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:07 crc kubenswrapper[4958]: I1206 05:29:07.777651 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:07Z","lastTransitionTime":"2025-12-06T05:29:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:07 crc kubenswrapper[4958]: I1206 05:29:07.880805 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:07 crc kubenswrapper[4958]: I1206 05:29:07.880879 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:07 crc kubenswrapper[4958]: I1206 05:29:07.880895 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:07 crc kubenswrapper[4958]: I1206 05:29:07.880919 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:07 crc kubenswrapper[4958]: I1206 05:29:07.880935 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:07Z","lastTransitionTime":"2025-12-06T05:29:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:07 crc kubenswrapper[4958]: I1206 05:29:07.983721 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:07 crc kubenswrapper[4958]: I1206 05:29:07.983800 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:07 crc kubenswrapper[4958]: I1206 05:29:07.983813 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:07 crc kubenswrapper[4958]: I1206 05:29:07.983827 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:07 crc kubenswrapper[4958]: I1206 05:29:07.983838 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:07Z","lastTransitionTime":"2025-12-06T05:29:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:08 crc kubenswrapper[4958]: I1206 05:29:08.086569 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:08 crc kubenswrapper[4958]: I1206 05:29:08.086618 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:08 crc kubenswrapper[4958]: I1206 05:29:08.086632 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:08 crc kubenswrapper[4958]: I1206 05:29:08.086650 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:08 crc kubenswrapper[4958]: I1206 05:29:08.086662 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:08Z","lastTransitionTime":"2025-12-06T05:29:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:08 crc kubenswrapper[4958]: I1206 05:29:08.191017 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:08 crc kubenswrapper[4958]: I1206 05:29:08.191255 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:08 crc kubenswrapper[4958]: I1206 05:29:08.191687 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:08 crc kubenswrapper[4958]: I1206 05:29:08.192086 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:08 crc kubenswrapper[4958]: I1206 05:29:08.192158 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:08Z","lastTransitionTime":"2025-12-06T05:29:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:08 crc kubenswrapper[4958]: I1206 05:29:08.295031 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:08 crc kubenswrapper[4958]: I1206 05:29:08.295082 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:08 crc kubenswrapper[4958]: I1206 05:29:08.295099 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:08 crc kubenswrapper[4958]: I1206 05:29:08.295125 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:08 crc kubenswrapper[4958]: I1206 05:29:08.295138 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:08Z","lastTransitionTime":"2025-12-06T05:29:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:08 crc kubenswrapper[4958]: I1206 05:29:08.398522 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:08 crc kubenswrapper[4958]: I1206 05:29:08.398594 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:08 crc kubenswrapper[4958]: I1206 05:29:08.398612 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:08 crc kubenswrapper[4958]: I1206 05:29:08.398640 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:08 crc kubenswrapper[4958]: I1206 05:29:08.398694 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:08Z","lastTransitionTime":"2025-12-06T05:29:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:08 crc kubenswrapper[4958]: I1206 05:29:08.501539 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:08 crc kubenswrapper[4958]: I1206 05:29:08.501598 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:08 crc kubenswrapper[4958]: I1206 05:29:08.501609 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:08 crc kubenswrapper[4958]: I1206 05:29:08.501627 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:08 crc kubenswrapper[4958]: I1206 05:29:08.501641 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:08Z","lastTransitionTime":"2025-12-06T05:29:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:08 crc kubenswrapper[4958]: I1206 05:29:08.604780 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:08 crc kubenswrapper[4958]: I1206 05:29:08.604839 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:08 crc kubenswrapper[4958]: I1206 05:29:08.604852 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:08 crc kubenswrapper[4958]: I1206 05:29:08.604876 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:08 crc kubenswrapper[4958]: I1206 05:29:08.604891 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:08Z","lastTransitionTime":"2025-12-06T05:29:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:08 crc kubenswrapper[4958]: I1206 05:29:08.708544 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:08 crc kubenswrapper[4958]: I1206 05:29:08.708605 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:08 crc kubenswrapper[4958]: I1206 05:29:08.708635 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:08 crc kubenswrapper[4958]: I1206 05:29:08.708653 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:08 crc kubenswrapper[4958]: I1206 05:29:08.708663 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:08Z","lastTransitionTime":"2025-12-06T05:29:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:08 crc kubenswrapper[4958]: I1206 05:29:08.812652 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:08 crc kubenswrapper[4958]: I1206 05:29:08.812742 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:08 crc kubenswrapper[4958]: I1206 05:29:08.812767 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:08 crc kubenswrapper[4958]: I1206 05:29:08.812797 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:08 crc kubenswrapper[4958]: I1206 05:29:08.812888 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:08Z","lastTransitionTime":"2025-12-06T05:29:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:08 crc kubenswrapper[4958]: I1206 05:29:08.915831 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:08 crc kubenswrapper[4958]: I1206 05:29:08.915878 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:08 crc kubenswrapper[4958]: I1206 05:29:08.915890 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:08 crc kubenswrapper[4958]: I1206 05:29:08.915913 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:08 crc kubenswrapper[4958]: I1206 05:29:08.915928 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:08Z","lastTransitionTime":"2025-12-06T05:29:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:09 crc kubenswrapper[4958]: I1206 05:29:09.018586 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:09 crc kubenswrapper[4958]: I1206 05:29:09.018649 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:09 crc kubenswrapper[4958]: I1206 05:29:09.018663 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:09 crc kubenswrapper[4958]: I1206 05:29:09.018685 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:09 crc kubenswrapper[4958]: I1206 05:29:09.018703 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:09Z","lastTransitionTime":"2025-12-06T05:29:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:09 crc kubenswrapper[4958]: I1206 05:29:09.123682 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:09 crc kubenswrapper[4958]: I1206 05:29:09.123802 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:09 crc kubenswrapper[4958]: I1206 05:29:09.123867 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:09 crc kubenswrapper[4958]: I1206 05:29:09.123900 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:09 crc kubenswrapper[4958]: I1206 05:29:09.124070 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:09Z","lastTransitionTime":"2025-12-06T05:29:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:09 crc kubenswrapper[4958]: I1206 05:29:09.226918 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:09 crc kubenswrapper[4958]: I1206 05:29:09.226967 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:09 crc kubenswrapper[4958]: I1206 05:29:09.226984 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:09 crc kubenswrapper[4958]: I1206 05:29:09.227002 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:09 crc kubenswrapper[4958]: I1206 05:29:09.227014 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:09Z","lastTransitionTime":"2025-12-06T05:29:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:09 crc kubenswrapper[4958]: I1206 05:29:09.329047 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:09 crc kubenswrapper[4958]: I1206 05:29:09.329084 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:09 crc kubenswrapper[4958]: I1206 05:29:09.329095 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:09 crc kubenswrapper[4958]: I1206 05:29:09.329108 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:09 crc kubenswrapper[4958]: I1206 05:29:09.329120 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:09Z","lastTransitionTime":"2025-12-06T05:29:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:09 crc kubenswrapper[4958]: I1206 05:29:09.433397 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:09 crc kubenswrapper[4958]: I1206 05:29:09.433445 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:09 crc kubenswrapper[4958]: I1206 05:29:09.433460 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:09 crc kubenswrapper[4958]: I1206 05:29:09.433502 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:09 crc kubenswrapper[4958]: I1206 05:29:09.433521 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:09Z","lastTransitionTime":"2025-12-06T05:29:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:09 crc kubenswrapper[4958]: I1206 05:29:09.536551 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:09 crc kubenswrapper[4958]: I1206 05:29:09.536597 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:09 crc kubenswrapper[4958]: I1206 05:29:09.536609 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:09 crc kubenswrapper[4958]: I1206 05:29:09.536627 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:09 crc kubenswrapper[4958]: I1206 05:29:09.536639 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:09Z","lastTransitionTime":"2025-12-06T05:29:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:09 crc kubenswrapper[4958]: I1206 05:29:09.638871 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:09 crc kubenswrapper[4958]: I1206 05:29:09.638927 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:09 crc kubenswrapper[4958]: I1206 05:29:09.638943 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:09 crc kubenswrapper[4958]: I1206 05:29:09.638966 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:09 crc kubenswrapper[4958]: I1206 05:29:09.638982 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:09Z","lastTransitionTime":"2025-12-06T05:29:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:09 crc kubenswrapper[4958]: I1206 05:29:09.742373 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:09 crc kubenswrapper[4958]: I1206 05:29:09.742425 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:09 crc kubenswrapper[4958]: I1206 05:29:09.742437 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:09 crc kubenswrapper[4958]: I1206 05:29:09.742456 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:09 crc kubenswrapper[4958]: I1206 05:29:09.742490 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:09Z","lastTransitionTime":"2025-12-06T05:29:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:09 crc kubenswrapper[4958]: I1206 05:29:09.761382 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kb98t" Dec 06 05:29:09 crc kubenswrapper[4958]: I1206 05:29:09.761446 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 05:29:09 crc kubenswrapper[4958]: I1206 05:29:09.761386 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 05:29:09 crc kubenswrapper[4958]: I1206 05:29:09.761534 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 05:29:09 crc kubenswrapper[4958]: E1206 05:29:09.761540 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kb98t" podUID="2c09fca2-7d91-412a-9814-64370d35b3e9" Dec 06 05:29:09 crc kubenswrapper[4958]: E1206 05:29:09.761653 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 05:29:09 crc kubenswrapper[4958]: E1206 05:29:09.761801 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 05:29:09 crc kubenswrapper[4958]: E1206 05:29:09.761868 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 05:29:09 crc kubenswrapper[4958]: I1206 05:29:09.780719 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3942544-663d-452b-851f-7ae1951315d0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c605d49963ffec3a03f9166f2a4b945adca772c1e6f9ef0c43e3b43498bb520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fa2e5229064a9479f641c0c6d934e8eda769ee88be13401c43ae136ce8bc24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0fa5b094aed0445fa892dd4d08fa83de565bda82c6ca9b9d536a3275df19b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\
":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9356ce9f50a037c7bbb99085c8d3a1cf6648fca53bed4eb86d1be5cb82386bfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bb5270cbb6f2e29e1393dc0724b38332f5d2b7bd2947352c848a22efbcd979c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4460d6ee5ad193a5fdd65c4ddf2e2837db3e1a0a5f8f32b84ad83cc74cc6c779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4460d6ee5ad193a5fdd65c4ddf2e2837db3e1a0a5f8f32b84ad83cc74cc6c779\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d0adb65980c4e04df9b38b248a2d6bf58ac214ec2fe6ee6f8fbee38dc114904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d0adb65980c4e04df9b38b248a2
d6bf58ac214ec2fe6ee6f8fbee38dc114904\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b4a1b86d7480be2b839295c4c7dc60abd79e17d12514d7a7f90453498dd586e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4a1b86d7480be2b839295c4c7dc60abd79e17d12514d7a7f90453498dd586e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:09Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:09 crc kubenswrapper[4958]: I1206 05:29:09.800834 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:09Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:09 crc kubenswrapper[4958]: I1206 05:29:09.815956 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bxxwq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"802ce14e-baaf-4d0d-87e3-3457209d8cda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://691435e89fcfa5b2dbd5ee54379f791a3e4f4df43c500a83bc61f7b20aca7ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlz6h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bxxwq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:09Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:09 crc kubenswrapper[4958]: I1206 05:29:09.835984 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-j25xk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f69a7b70-83e2-4775-872c-cc414f3d08ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3077eaf7e49467b9443569425f7666246ccd40207193f35407bd5ca69799be6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qktwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c146f2ee32c0a80c9bb07f7b61d094a36ba1bdd250b578479193dc8e50dc7cad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qktwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:42Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-j25xk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:09Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:09 crc kubenswrapper[4958]: I1206 05:29:09.844516 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:09 crc kubenswrapper[4958]: I1206 05:29:09.844549 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:09 crc kubenswrapper[4958]: I1206 05:29:09.844563 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:09 crc kubenswrapper[4958]: I1206 05:29:09.844581 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:09 crc kubenswrapper[4958]: I1206 05:29:09.844593 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:09Z","lastTransitionTime":"2025-12-06T05:29:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:09 crc kubenswrapper[4958]: I1206 05:29:09.853807 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:09Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:09 crc kubenswrapper[4958]: I1206 05:29:09.870944 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12ef2ec44908dbfbd456d6a97cb50054adb79c14bc062daacdd1be9ec59404b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:09Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:09 crc kubenswrapper[4958]: I1206 05:29:09.886004 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5mx5v" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ab56cd4-7270-4252-b6e6-cbc102b84d97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16ff54e7650f6ee3df33d0394fb87b8021695f1697bdf30239e59ab458bea64c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-448jg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5mx5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:09Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:09 crc kubenswrapper[4958]: I1206 05:29:09.903878 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wr7h5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24bb15c5aabe30ec2b8cea2cfd92c4a99c8e2170aecfe6cf33f745ecccf32c42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-778ks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wr7h5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:09Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:09 crc kubenswrapper[4958]: I1206 05:29:09.920878 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bbf3a5d-dfb7-4f26-a3a5-5a1198cffc78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ac8c174d2f2cc2b18fa67b1969e4ec3130a7a3ea2e1de302b6607334c8295a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a49f9fd54866385a2dd4c0218a2b5a828817807c2688301ea29004ba0578d556\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://799383b7dc23318bd8bdb994e83afadc90864baf5e694f5a7ef2208c6771660c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c9829c79fbd1ef57f374e8bf829162c2ae48074f4d02cc609d5f9b8791c61a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b2ffa013378c2ca81671e521b37d30af422fd6a418d17e7c2ca14b2da5557d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4001890a8c107a9365e884bae7b1d10f11ea1208ada74b61dcd4cf3db767f3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4001890a8c107a9365e884bae7b1d10f11ea1208ada74b61dcd4cf3db767f3fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:09Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:09 crc kubenswrapper[4958]: I1206 05:29:09.936456 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e7d0d99-10e2-4254-82b5-5490222f80fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f512fa9922310920769bab096c2bb8b2305a17dded545923c16b5e417db50b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dddeac64993fef3f326c165eeeb95bb4b5b9961f07f2b935e592b5aac9a51a32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50e80d39fcfd69db193b9e6c98c92a60983dd0c62b061a69f045193110d159c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc69c938c136778135bd2a90d1671af107b76c4571f8e52327c98f120665d7b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:09Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:09 crc kubenswrapper[4958]: I1206 05:29:09.947292 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:09 crc kubenswrapper[4958]: I1206 05:29:09.947358 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:09 crc kubenswrapper[4958]: I1206 05:29:09.947371 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:09 crc kubenswrapper[4958]: I1206 05:29:09.947388 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:09 crc kubenswrapper[4958]: I1206 05:29:09.947400 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:09Z","lastTransitionTime":"2025-12-06T05:29:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:09 crc kubenswrapper[4958]: I1206 05:29:09.951987 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85e394f19d7c9a454876e48621768d9f956790321aeea3140db17586f532e28c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7cba9186696fbc52dacdd8c02c28a44f8e4f11e2bd39065212cddf3acdd7f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:09Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:09 crc kubenswrapper[4958]: I1206 05:29:09.966867 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c13528c0-da5d-4d55-9155-2c29c33edfc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5cd9b0b70572f52b50e426a747be79d9f46795a9f5e890f4c6837ba67a34188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd2256c5c47e4b4d5c0d27e0a687c531d3d4e2162c3439a57acb90784599b441\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5ktnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:09Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:09 crc kubenswrapper[4958]: I1206 05:29:09.988088 4958 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cxvtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48b90387-4ea9-47a3-8aa1-b8b19c5ed186\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce9f0a37477909d730255694146829d5d4527e3c3c3928bdd475b58506bbd544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d3ce1b34e648dbc34656fb0430468d6a998992cdb52f7858c3e9032781cabf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d3ce1b34e648dbc34656fb0430468d6a998992cdb52f7858c3e9032781cabf3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45259e88c3c2b9523b20a34f02fb59aa21e487dd207c8cb6e29ac571d89cd75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45259e88c3c2b9523b20a34f02fb59aa21e487dd207c8cb6e29ac571d89cd75c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5907422cf6b36b8d77741d339e507eee8a26c62b51c03458fbab243363441b49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5907422cf6b36b8d77741d339e507eee8a26c62b51c03458fbab243363441b49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdba082661b1dd38d5628fedac7afcea2e2960d502c61b9c21c723c0f6be8dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdba082661b1dd38d5628fedac7afcea2e2960d502c61b9c21c723c0f6be8dab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afdd138a15c3949f3eb5cdcac4d372fe8db3f1822f9fa33eb7f211ba93764903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afdd138a15c3949f3eb5cdcac4d372fe8db3f1822f9fa33eb7f211ba93764903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09ba0a78226b9b3b3b12202389168f7e17a1f2ba992a22dc9d436d29aac0d327\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09ba0a78226b9b3b3b12202389168f7e17a1f2ba992a22dc9d436d29aac0d327\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cxvtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:09Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:10 crc kubenswrapper[4958]: I1206 05:29:10.016018 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c75c3b8-96d9-442e-b3c4-92d10ad33929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c7cf12b8a7e5da199c6d8c5bdeb41243c911cef6cab0bac81e407502a8ff144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6e7454c626f79f7ef231e33e2917e2a65265fc21ccf0bb078c9b7f5b3cec240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f87617d1f4b9c7ad893a539ddf123f3e213d38e44c084441f565d6535078c7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f6d7aa9cdfe8211f420e0a25851c58e125bd09b9d90ad4724703cc1112e45b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://473d669d19d2507144f95718dc331ea994a36254f7f7b6e2b262a0e9cb46bfc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ac1117aba254b7e048d9dd3feff2587d218f6d7a6f4ccbe455eb2b19e534aa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ffe1b33745a706abdd0f69fb10d2eab95f9526c0206d20c04b1e4c098295db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://db6efff3f2e029827d40e9e04671f6c8548165f0cc1b89261b7e826956c1b397\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T05:28:47Z\\\",\\\"message\\\":\\\" Retry object setup: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1206 05:28:46.902286 6355 obj_retry.go:365] Adding new object: *v1.Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf\\\\nI1206 05:28:46.902303 6355 ovn.go:134] Ensuring zone local for Pod openshift-network-diagnostics/network-check-source-55646444c4-trplf in node crc\\\\nI1206 05:28:46.902324 6355 obj_retry.go:303] Retry object setup: *v1.Pod openshift-etcd/etcd-crc\\\\nI1206 05:28:46.902340 6355 obj_retry.go:365] Adding new object: *v1.Pod openshift-etcd/etcd-crc\\\\nI1206 05:28:46.902344 6355 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-source-55646444c4-trplf] creating logical port openshift-network-diagnostics_network-check-source-55646444c4-trplf for pod on switch crc\\\\nI1206 05:28:46.902355 6355 ovn.go:134] Ensuring zone local for Pod openshift-etcd/etcd-crc in node crc\\\\nI1206 05:28:46.902367 6355 obj_retry.go:386] Retry successful for *v1.Pod openshift-etcd/etcd-crc after 0 failed attempt(s)\\\\nI1206 05:28:46.902377 6355 default_network_controller.go:776] Recording success event on pod openshift-etcd/etcd-crc\\\\nF1206 05:28:46.902429 6355 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:46Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52ffe1b33745a706abdd0f69fb10d2eab95f9526c0206d20c04b1e4c098295db\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T05:29:03Z\\\",\\\"message\\\":\\\"t:0,AppProtocol:nil,},ServicePort{Name:metrics,Protocol:TCP,Port:9154,TargetPort:{1 0 
metrics},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{dns.operator.openshift.io/daemonset-dns: default,},ClusterIP:10.217.4.10,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.10],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF1206 05:29:02.740208 6561 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": fa\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T05:29:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90354a506bfb2c066060aaa6c9c48c8fe36acc7b3fb7f76905c7aa2622c8c93e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f16219fb96bfa08d1c4e1fbacac446f07ea75a221d1a6a4eddadd9e587f75ea9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f16219fb96bfa08d1c4e1fbacac446f07ea75a221d1a6a4eddadd9e587f75ea9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-f4swt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:10Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:10 crc kubenswrapper[4958]: I1206 05:29:10.030863 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"326ec8f3-f884-4d81-85c0-0fb98ff16b80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ab8b97e84e2f0ddf99e989d15f84695ae263c722141d62885129f9eb48226a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfa163f9e3eb5b2418860c0fa7fcbf96990229ac20da3d72cf18d3c326f466b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15f70e4124ff7d2705704dab6bf80d6ee77c2181e9bb400f7af227e9c18f0d20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aabbafda3fdfc2434de99224b35e72d2d536ed8b4af7dbdfa16058ab7f638531\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aabbafda3fdfc2434de99224b35e72d2d536ed8b4af7dbdfa16058ab7f638531\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:10Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:10 crc kubenswrapper[4958]: I1206 05:29:10.048964 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22338e19d740d76b4395405aae08f7cc69ca43dcf680a2bac1323c989e74aa02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:10Z is after 
2025-08-24T17:21:41Z" Dec 06 05:29:10 crc kubenswrapper[4958]: I1206 05:29:10.050264 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:10 crc kubenswrapper[4958]: I1206 05:29:10.050318 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:10 crc kubenswrapper[4958]: I1206 05:29:10.050330 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:10 crc kubenswrapper[4958]: I1206 05:29:10.050351 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:10 crc kubenswrapper[4958]: I1206 05:29:10.050367 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:10Z","lastTransitionTime":"2025-12-06T05:29:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:10 crc kubenswrapper[4958]: I1206 05:29:10.066353 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:10Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:10 crc kubenswrapper[4958]: I1206 05:29:10.077532 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kb98t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c09fca2-7d91-412a-9814-64370d35b3e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pscv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pscv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kb98t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:10Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:10 crc kubenswrapper[4958]: I1206 05:29:10.154144 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:10 crc kubenswrapper[4958]: I1206 05:29:10.154213 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:10 crc kubenswrapper[4958]: I1206 05:29:10.154233 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:10 crc kubenswrapper[4958]: I1206 05:29:10.154261 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:10 crc kubenswrapper[4958]: I1206 05:29:10.154281 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:10Z","lastTransitionTime":"2025-12-06T05:29:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
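
Every status patch in this excerpt dies on the same root error: the node-identity webhook at https://127.0.0.1:9743 presents a serving certificate that expired 2025-08-24T17:21:41Z while the node clock reads 2025-12-06, which is typical of a CRC VM resumed long after its bundle was built. A minimal Go sketch to confirm the validity window independently; this is a hypothetical diagnostic, not OpenShift tooling, and it assumes the webhook is still listening on that port:

    // certcheck.go: dial the webhook endpoint named in the failed Post above
    // and print the serving certificate's validity window.
    package main

    import (
        "crypto/tls"
        "fmt"
        "time"
    )

    func main() {
        // Endpoint taken from the log messages; assumption, may not be listening.
        conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
            InsecureSkipVerify: true, // we want the cert even though verification fails
        })
        if err != nil {
            fmt.Println("dial failed:", err)
            return
        }
        defer conn.Close()
        cert := conn.ConnectionState().PeerCertificates[0] // leaf certificate
        fmt.Println("subject:  ", cert.Subject)
        fmt.Println("notBefore:", cert.NotBefore.UTC().Format(time.RFC3339))
        fmt.Println("notAfter: ", cert.NotAfter.UTC().Format(time.RFC3339))
        fmt.Println("expired:  ", time.Now().UTC().After(cert.NotAfter))
    }

Against this log, the sketch would print notAfter 2025-08-24T17:21:41Z and expired true, matching the x509 error quoted in every patch failure.
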
Has your network provider started?"} Dec 06 05:29:10 crc kubenswrapper[4958]: I1206 05:29:10.257646 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:10 crc kubenswrapper[4958]: I1206 05:29:10.257703 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:10 crc kubenswrapper[4958]: I1206 05:29:10.257719 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:10 crc kubenswrapper[4958]: I1206 05:29:10.257742 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:10 crc kubenswrapper[4958]: I1206 05:29:10.257760 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:10Z","lastTransitionTime":"2025-12-06T05:29:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:10 crc kubenswrapper[4958]: I1206 05:29:10.360768 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:10 crc kubenswrapper[4958]: I1206 05:29:10.361148 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:10 crc kubenswrapper[4958]: I1206 05:29:10.361386 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:10 crc kubenswrapper[4958]: I1206 05:29:10.361628 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:10 crc kubenswrapper[4958]: I1206 05:29:10.361814 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:10Z","lastTransitionTime":"2025-12-06T05:29:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:10 crc kubenswrapper[4958]: I1206 05:29:10.464613 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:10 crc kubenswrapper[4958]: I1206 05:29:10.464713 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:10 crc kubenswrapper[4958]: I1206 05:29:10.464731 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:10 crc kubenswrapper[4958]: I1206 05:29:10.464754 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:10 crc kubenswrapper[4958]: I1206 05:29:10.464771 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:10Z","lastTransitionTime":"2025-12-06T05:29:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:10 crc kubenswrapper[4958]: I1206 05:29:10.567914 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:10 crc kubenswrapper[4958]: I1206 05:29:10.567982 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:10 crc kubenswrapper[4958]: I1206 05:29:10.568036 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:10 crc kubenswrapper[4958]: I1206 05:29:10.568060 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:10 crc kubenswrapper[4958]: I1206 05:29:10.568078 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:10Z","lastTransitionTime":"2025-12-06T05:29:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:10 crc kubenswrapper[4958]: I1206 05:29:10.671164 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:10 crc kubenswrapper[4958]: I1206 05:29:10.671199 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:10 crc kubenswrapper[4958]: I1206 05:29:10.671210 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:10 crc kubenswrapper[4958]: I1206 05:29:10.671228 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:10 crc kubenswrapper[4958]: I1206 05:29:10.671244 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:10Z","lastTransitionTime":"2025-12-06T05:29:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:10 crc kubenswrapper[4958]: I1206 05:29:10.773939 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:10 crc kubenswrapper[4958]: I1206 05:29:10.774299 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:10 crc kubenswrapper[4958]: I1206 05:29:10.774612 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:10 crc kubenswrapper[4958]: I1206 05:29:10.774842 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:10 crc kubenswrapper[4958]: I1206 05:29:10.775042 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:10Z","lastTransitionTime":"2025-12-06T05:29:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:10 crc kubenswrapper[4958]: I1206 05:29:10.878095 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:10 crc kubenswrapper[4958]: I1206 05:29:10.878170 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:10 crc kubenswrapper[4958]: I1206 05:29:10.878192 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:10 crc kubenswrapper[4958]: I1206 05:29:10.878227 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:10 crc kubenswrapper[4958]: I1206 05:29:10.878249 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:10Z","lastTransitionTime":"2025-12-06T05:29:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:10 crc kubenswrapper[4958]: I1206 05:29:10.981546 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:10 crc kubenswrapper[4958]: I1206 05:29:10.981624 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:10 crc kubenswrapper[4958]: I1206 05:29:10.981648 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:10 crc kubenswrapper[4958]: I1206 05:29:10.981677 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:10 crc kubenswrapper[4958]: I1206 05:29:10.981700 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:10Z","lastTransitionTime":"2025-12-06T05:29:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:11 crc kubenswrapper[4958]: I1206 05:29:11.084192 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:11 crc kubenswrapper[4958]: I1206 05:29:11.084269 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:11 crc kubenswrapper[4958]: I1206 05:29:11.084293 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:11 crc kubenswrapper[4958]: I1206 05:29:11.084329 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:11 crc kubenswrapper[4958]: I1206 05:29:11.084353 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:11Z","lastTransitionTime":"2025-12-06T05:29:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:11 crc kubenswrapper[4958]: I1206 05:29:11.187223 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:11 crc kubenswrapper[4958]: I1206 05:29:11.187278 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:11 crc kubenswrapper[4958]: I1206 05:29:11.187294 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:11 crc kubenswrapper[4958]: I1206 05:29:11.187316 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:11 crc kubenswrapper[4958]: I1206 05:29:11.187333 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:11Z","lastTransitionTime":"2025-12-06T05:29:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:11 crc kubenswrapper[4958]: I1206 05:29:11.293625 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:11 crc kubenswrapper[4958]: I1206 05:29:11.293740 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:11 crc kubenswrapper[4958]: I1206 05:29:11.293765 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:11 crc kubenswrapper[4958]: I1206 05:29:11.293805 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:11 crc kubenswrapper[4958]: I1206 05:29:11.293826 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:11Z","lastTransitionTime":"2025-12-06T05:29:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:11 crc kubenswrapper[4958]: I1206 05:29:11.396526 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:11 crc kubenswrapper[4958]: I1206 05:29:11.396559 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:11 crc kubenswrapper[4958]: I1206 05:29:11.396568 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:11 crc kubenswrapper[4958]: I1206 05:29:11.396587 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:11 crc kubenswrapper[4958]: I1206 05:29:11.396596 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:11Z","lastTransitionTime":"2025-12-06T05:29:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:11 crc kubenswrapper[4958]: I1206 05:29:11.499090 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:11 crc kubenswrapper[4958]: I1206 05:29:11.499149 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:11 crc kubenswrapper[4958]: I1206 05:29:11.499165 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:11 crc kubenswrapper[4958]: I1206 05:29:11.499185 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:11 crc kubenswrapper[4958]: I1206 05:29:11.499201 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:11Z","lastTransitionTime":"2025-12-06T05:29:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:11 crc kubenswrapper[4958]: I1206 05:29:11.602671 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:11 crc kubenswrapper[4958]: I1206 05:29:11.602738 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:11 crc kubenswrapper[4958]: I1206 05:29:11.602749 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:11 crc kubenswrapper[4958]: I1206 05:29:11.602771 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:11 crc kubenswrapper[4958]: I1206 05:29:11.602785 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:11Z","lastTransitionTime":"2025-12-06T05:29:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:11 crc kubenswrapper[4958]: I1206 05:29:11.705865 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:11 crc kubenswrapper[4958]: I1206 05:29:11.705950 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:11 crc kubenswrapper[4958]: I1206 05:29:11.705971 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:11 crc kubenswrapper[4958]: I1206 05:29:11.705996 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:11 crc kubenswrapper[4958]: I1206 05:29:11.706013 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:11Z","lastTransitionTime":"2025-12-06T05:29:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:11 crc kubenswrapper[4958]: I1206 05:29:11.762049 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 05:29:11 crc kubenswrapper[4958]: E1206 05:29:11.762186 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 05:29:11 crc kubenswrapper[4958]: I1206 05:29:11.762218 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kb98t" Dec 06 05:29:11 crc kubenswrapper[4958]: E1206 05:29:11.762391 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kb98t" podUID="2c09fca2-7d91-412a-9814-64370d35b3e9" Dec 06 05:29:11 crc kubenswrapper[4958]: I1206 05:29:11.762049 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 05:29:11 crc kubenswrapper[4958]: I1206 05:29:11.762563 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 05:29:11 crc kubenswrapper[4958]: E1206 05:29:11.762620 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 05:29:11 crc kubenswrapper[4958]: E1206 05:29:11.762666 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 05:29:11 crc kubenswrapper[4958]: I1206 05:29:11.809345 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:11 crc kubenswrapper[4958]: I1206 05:29:11.809405 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:11 crc kubenswrapper[4958]: I1206 05:29:11.809414 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:11 crc kubenswrapper[4958]: I1206 05:29:11.809432 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:11 crc kubenswrapper[4958]: I1206 05:29:11.809456 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:11Z","lastTransitionTime":"2025-12-06T05:29:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:11 crc kubenswrapper[4958]: I1206 05:29:11.912573 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:11 crc kubenswrapper[4958]: I1206 05:29:11.912629 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:11 crc kubenswrapper[4958]: I1206 05:29:11.912647 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:11 crc kubenswrapper[4958]: I1206 05:29:11.912672 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:11 crc kubenswrapper[4958]: I1206 05:29:11.912691 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:11Z","lastTransitionTime":"2025-12-06T05:29:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:12 crc kubenswrapper[4958]: I1206 05:29:12.015428 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:12 crc kubenswrapper[4958]: I1206 05:29:12.015502 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:12 crc kubenswrapper[4958]: I1206 05:29:12.015519 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:12 crc kubenswrapper[4958]: I1206 05:29:12.015541 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:12 crc kubenswrapper[4958]: I1206 05:29:12.015557 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:12Z","lastTransitionTime":"2025-12-06T05:29:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:12 crc kubenswrapper[4958]: I1206 05:29:12.118893 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:12 crc kubenswrapper[4958]: I1206 05:29:12.118994 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:12 crc kubenswrapper[4958]: I1206 05:29:12.119015 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:12 crc kubenswrapper[4958]: I1206 05:29:12.119072 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:12 crc kubenswrapper[4958]: I1206 05:29:12.119092 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:12Z","lastTransitionTime":"2025-12-06T05:29:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:12 crc kubenswrapper[4958]: I1206 05:29:12.224571 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:12 crc kubenswrapper[4958]: I1206 05:29:12.224621 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:12 crc kubenswrapper[4958]: I1206 05:29:12.224640 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:12 crc kubenswrapper[4958]: I1206 05:29:12.224667 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:12 crc kubenswrapper[4958]: I1206 05:29:12.224688 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:12Z","lastTransitionTime":"2025-12-06T05:29:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:12 crc kubenswrapper[4958]: I1206 05:29:12.327750 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:12 crc kubenswrapper[4958]: I1206 05:29:12.327800 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:12 crc kubenswrapper[4958]: I1206 05:29:12.327813 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:12 crc kubenswrapper[4958]: I1206 05:29:12.327830 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:12 crc kubenswrapper[4958]: I1206 05:29:12.327843 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:12Z","lastTransitionTime":"2025-12-06T05:29:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:12 crc kubenswrapper[4958]: I1206 05:29:12.430813 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:12 crc kubenswrapper[4958]: I1206 05:29:12.430896 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:12 crc kubenswrapper[4958]: I1206 05:29:12.430923 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:12 crc kubenswrapper[4958]: I1206 05:29:12.430955 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:12 crc kubenswrapper[4958]: I1206 05:29:12.430978 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:12Z","lastTransitionTime":"2025-12-06T05:29:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:12 crc kubenswrapper[4958]: I1206 05:29:12.533185 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:12 crc kubenswrapper[4958]: I1206 05:29:12.533236 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:12 crc kubenswrapper[4958]: I1206 05:29:12.533249 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:12 crc kubenswrapper[4958]: I1206 05:29:12.533268 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:12 crc kubenswrapper[4958]: I1206 05:29:12.533282 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:12Z","lastTransitionTime":"2025-12-06T05:29:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} [Eleven further identical node-status event blocks ("NodeHasSufficientMemory", "NodeHasNoDiskPressure", "NodeHasSufficientPID", "NodeNotReady") with matching "Node became not ready" conditions, logged at roughly 100 ms intervals from I1206 05:29:12.637520 through I1206 05:29:13.668743, omitted; every repetition carries the same KubeletNotReady / "no CNI configuration file in /etc/kubernetes/cni/net.d/" message verbatim.] Dec 06 05:29:13 crc kubenswrapper[4958]: I1206 05:29:13.761371 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kb98t" Dec 06 05:29:13 crc kubenswrapper[4958]: E1206 05:29:13.761531 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kb98t" podUID="2c09fca2-7d91-412a-9814-64370d35b3e9" Dec 06 05:29:13 crc kubenswrapper[4958]: I1206 05:29:13.761803 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 05:29:13 crc kubenswrapper[4958]: E1206 05:29:13.761869 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 05:29:13 crc kubenswrapper[4958]: I1206 05:29:13.761911 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 05:29:13 crc kubenswrapper[4958]: E1206 05:29:13.761965 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 05:29:13 crc kubenswrapper[4958]: I1206 05:29:13.762427 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 05:29:13 crc kubenswrapper[4958]: E1206 05:29:13.762636 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 05:29:13 crc kubenswrapper[4958]: I1206 05:29:13.766915 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:13 crc kubenswrapper[4958]: I1206 05:29:13.766964 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:13 crc kubenswrapper[4958]: I1206 05:29:13.767012 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:13 crc kubenswrapper[4958]: I1206 05:29:13.767037 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:13 crc kubenswrapper[4958]: I1206 05:29:13.767057 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:13Z","lastTransitionTime":"2025-12-06T05:29:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:13 crc kubenswrapper[4958]: E1206 05:29:13.789576 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ae601d9-1e3a-4939-b6d5-fbef7be2f380\\\",\\\"systemUUID\\\":\\\"d8c60597-c05a-4627-8199-844ddb77ec1c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:13Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:13 crc kubenswrapper[4958]: I1206 05:29:13.794169 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:13 crc kubenswrapper[4958]: I1206 05:29:13.794307 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 06 05:29:13 crc kubenswrapper[4958]: I1206 05:29:13.794384 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:13 crc kubenswrapper[4958]: I1206 05:29:13.794458 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:13 crc kubenswrapper[4958]: I1206 05:29:13.794558 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:13Z","lastTransitionTime":"2025-12-06T05:29:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:13 crc kubenswrapper[4958]: E1206 05:29:13.807015 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ae601d9-1e3a-4939-b6d5-fbef7be2f380\\\",\\\"systemUUID\\\":\\\"d8c60597-c05a-4627-8199-844ddb77ec1c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:13Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:13 crc kubenswrapper[4958]: I1206 05:29:13.811315 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:13 crc kubenswrapper[4958]: I1206 05:29:13.811370 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 06 05:29:13 crc kubenswrapper[4958]: I1206 05:29:13.811383 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:13 crc kubenswrapper[4958]: I1206 05:29:13.811400 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:13 crc kubenswrapper[4958]: I1206 05:29:13.811412 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:13Z","lastTransitionTime":"2025-12-06T05:29:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:13 crc kubenswrapper[4958]: E1206 05:29:13.833103 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ae601d9-1e3a-4939-b6d5-fbef7be2f380\\\",\\\"systemUUID\\\":\\\"d8c60597-c05a-4627-8199-844ddb77ec1c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:13Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:13 crc kubenswrapper[4958]: I1206 05:29:13.837555 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:13 crc kubenswrapper[4958]: I1206 05:29:13.837601 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 06 05:29:13 crc kubenswrapper[4958]: I1206 05:29:13.837620 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:13 crc kubenswrapper[4958]: I1206 05:29:13.837644 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:13 crc kubenswrapper[4958]: I1206 05:29:13.837667 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:13Z","lastTransitionTime":"2025-12-06T05:29:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:13 crc kubenswrapper[4958]: E1206 05:29:13.855871 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ae601d9-1e3a-4939-b6d5-fbef7be2f380\\\",\\\"systemUUID\\\":\\\"d8c60597-c05a-4627-8199-844ddb77ec1c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:13Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:13 crc kubenswrapper[4958]: I1206 05:29:13.860042 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:13 crc kubenswrapper[4958]: I1206 05:29:13.860092 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 06 05:29:13 crc kubenswrapper[4958]: I1206 05:29:13.860111 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:13 crc kubenswrapper[4958]: I1206 05:29:13.860133 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:13 crc kubenswrapper[4958]: I1206 05:29:13.860150 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:13Z","lastTransitionTime":"2025-12-06T05:29:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:13 crc kubenswrapper[4958]: E1206 05:29:13.872259 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:13Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:13Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:13Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:13Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ae601d9-1e3a-4939-b6d5-fbef7be2f380\\\",\\\"systemUUID\\\":\\\"d8c60597-c05a-4627-8199-844ddb77ec1c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:13Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:13 crc kubenswrapper[4958]: E1206 05:29:13.872399 4958 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 06 05:29:13 crc kubenswrapper[4958]: I1206 05:29:13.873955 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 06 05:29:13 crc kubenswrapper[4958]: I1206 05:29:13.874005 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:13 crc kubenswrapper[4958]: I1206 05:29:13.874026 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:13 crc kubenswrapper[4958]: I1206 05:29:13.874047 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:13 crc kubenswrapper[4958]: I1206 05:29:13.874063 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:13Z","lastTransitionTime":"2025-12-06T05:29:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:13 crc kubenswrapper[4958]: I1206 05:29:13.977209 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:13 crc kubenswrapper[4958]: I1206 05:29:13.977452 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:13 crc kubenswrapper[4958]: I1206 05:29:13.977565 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:13 crc kubenswrapper[4958]: I1206 05:29:13.977645 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:13 crc kubenswrapper[4958]: I1206 05:29:13.977704 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:13Z","lastTransitionTime":"2025-12-06T05:29:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:14 crc kubenswrapper[4958]: I1206 05:29:14.079743 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:14 crc kubenswrapper[4958]: I1206 05:29:14.080288 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:14 crc kubenswrapper[4958]: I1206 05:29:14.080378 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:14 crc kubenswrapper[4958]: I1206 05:29:14.080494 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:14 crc kubenswrapper[4958]: I1206 05:29:14.080592 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:14Z","lastTransitionTime":"2025-12-06T05:29:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:14 crc kubenswrapper[4958]: I1206 05:29:14.183913 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:14 crc kubenswrapper[4958]: I1206 05:29:14.183942 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:14 crc kubenswrapper[4958]: I1206 05:29:14.183951 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:14 crc kubenswrapper[4958]: I1206 05:29:14.183966 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:14 crc kubenswrapper[4958]: I1206 05:29:14.183976 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:14Z","lastTransitionTime":"2025-12-06T05:29:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:14 crc kubenswrapper[4958]: I1206 05:29:14.286962 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:14 crc kubenswrapper[4958]: I1206 05:29:14.286995 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:14 crc kubenswrapper[4958]: I1206 05:29:14.287005 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:14 crc kubenswrapper[4958]: I1206 05:29:14.287021 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:14 crc kubenswrapper[4958]: I1206 05:29:14.287032 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:14Z","lastTransitionTime":"2025-12-06T05:29:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:14 crc kubenswrapper[4958]: I1206 05:29:14.389167 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:14 crc kubenswrapper[4958]: I1206 05:29:14.389196 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:14 crc kubenswrapper[4958]: I1206 05:29:14.389203 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:14 crc kubenswrapper[4958]: I1206 05:29:14.389216 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:14 crc kubenswrapper[4958]: I1206 05:29:14.389225 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:14Z","lastTransitionTime":"2025-12-06T05:29:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:14 crc kubenswrapper[4958]: I1206 05:29:14.491921 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:14 crc kubenswrapper[4958]: I1206 05:29:14.491960 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:14 crc kubenswrapper[4958]: I1206 05:29:14.491972 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:14 crc kubenswrapper[4958]: I1206 05:29:14.491990 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:14 crc kubenswrapper[4958]: I1206 05:29:14.492001 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:14Z","lastTransitionTime":"2025-12-06T05:29:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:14 crc kubenswrapper[4958]: I1206 05:29:14.594797 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:14 crc kubenswrapper[4958]: I1206 05:29:14.594865 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:14 crc kubenswrapper[4958]: I1206 05:29:14.594887 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:14 crc kubenswrapper[4958]: I1206 05:29:14.594917 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:14 crc kubenswrapper[4958]: I1206 05:29:14.594943 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:14Z","lastTransitionTime":"2025-12-06T05:29:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:14 crc kubenswrapper[4958]: I1206 05:29:14.698148 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:14 crc kubenswrapper[4958]: I1206 05:29:14.698502 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:14 crc kubenswrapper[4958]: I1206 05:29:14.698692 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:14 crc kubenswrapper[4958]: I1206 05:29:14.698889 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:14 crc kubenswrapper[4958]: I1206 05:29:14.699077 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:14Z","lastTransitionTime":"2025-12-06T05:29:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:14 crc kubenswrapper[4958]: I1206 05:29:14.761862 4958 scope.go:117] "RemoveContainer" containerID="52ffe1b33745a706abdd0f69fb10d2eab95f9526c0206d20c04b1e4c098295db" Dec 06 05:29:14 crc kubenswrapper[4958]: E1206 05:29:14.762189 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-f4swt_openshift-ovn-kubernetes(4c75c3b8-96d9-442e-b3c4-92d10ad33929)\"" pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" podUID="4c75c3b8-96d9-442e-b3c4-92d10ad33929" Dec 06 05:29:14 crc kubenswrapper[4958]: I1206 05:29:14.777400 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:14Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:14 crc kubenswrapper[4958]: I1206 05:29:14.788955 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12ef2ec44908dbfbd456d6a97cb50054adb79c14bc062daacdd1be9ec59404b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:14Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:14 crc kubenswrapper[4958]: I1206 05:29:14.797226 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5mx5v" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ab56cd4-7270-4252-b6e6-cbc102b84d97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16ff54e7650f6ee3df33d0394fb87b8021695f1697bdf30239e59ab458bea64c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-448jg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5mx5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:14Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:14 crc kubenswrapper[4958]: I1206 05:29:14.801964 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:14 crc kubenswrapper[4958]: I1206 05:29:14.802019 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:14 crc kubenswrapper[4958]: I1206 05:29:14.802072 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:14 crc kubenswrapper[4958]: I1206 05:29:14.802101 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:14 crc kubenswrapper[4958]: I1206 05:29:14.802122 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:14Z","lastTransitionTime":"2025-12-06T05:29:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:14 crc kubenswrapper[4958]: I1206 05:29:14.808664 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wr7h5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24bb15c5aabe30ec2b8cea2cfd92c4a99c8e2170aecfe6cf33f745ecccf32c42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-778ks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\
\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wr7h5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:14Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:14 crc kubenswrapper[4958]: I1206 05:29:14.821791 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bbf3a5d-dfb7-4f26-a3a5-5a1198cffc78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ac8c174d2f2cc2b18fa67b1969e4ec3130a7a3ea2e1de302b6607334c8295a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a49f9fd54866385a2dd4c0218a2b5a828817807c2688301ea29004ba0578d556\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://799383b7dc23318bd8bdb994e83afadc90864baf5e694f5a7ef2208c6771660c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\
"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c9829c79fbd1ef57f374e8bf829162c2ae48074f4d02cc609d5f9b8791c61a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b2ffa013378c2ca81671e521b37d30af422fd6a418d17e7c2ca14b2da5557d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4001890a8c107a9365e884bae7b1d10f11ea1208ada74b61dcd4cf3db767f3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4001890a8c107a9365e884bae7b1d10f11ea1208ada74b61dcd4cf3db767f3fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-12-06T05:29:14Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:14 crc kubenswrapper[4958]: I1206 05:29:14.834673 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e7d0d99-10e2-4254-82b5-5490222f80fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f512fa9922310920769bab096c2bb8b2305a17dded545923c16b5e417db50b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dddeac64993fef3f326c165eeeb95bb4b5b9961f07f2b935e592b5aac9a51a32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50e80d39fcfd69db193b9e6c98c92a60983dd0c62b061a69f045193110d159c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}
,{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc69c938c136778135bd2a90d1671af107b76c4571f8e52327c98f120665d7b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:14Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:14 crc kubenswrapper[4958]: I1206 05:29:14.851331 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c75c3b8-96d9-442e-b3c4-92d10ad33929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c7cf12b8a7e5da199c6d8c5bdeb41243c911cef6cab0bac81e407502a8ff144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6e7454c626f79f7ef231e33e2917e2a65265fc21ccf0bb078c9b7f5b3cec240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f87617d1f4b9c7ad893a539ddf123f3e213d38e44c084441f565d6535078c7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f6d7aa9cdfe8211f420e0a25851c58e125bd09b9d90ad4724703cc1112e45b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://473d669d19d2507144f95718dc331ea994a36254f7f7b6e2b262a0e9cb46bfc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ac1117aba254b7e048d9dd3feff2587d218f6d7a6f4ccbe455eb2b19e534aa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ffe1b33745a706abdd0f69fb10d2eab95f9526
c0206d20c04b1e4c098295db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52ffe1b33745a706abdd0f69fb10d2eab95f9526c0206d20c04b1e4c098295db\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T05:29:03Z\\\",\\\"message\\\":\\\"t:0,AppProtocol:nil,},ServicePort{Name:metrics,Protocol:TCP,Port:9154,TargetPort:{1 0 metrics},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{dns.operator.openshift.io/daemonset-dns: default,},ClusterIP:10.217.4.10,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.10],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF1206 05:29:02.740208 6561 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": fa\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T05:29:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-f4swt_openshift-ovn-kubernetes(4c75c3b8-96d9-442e-b3c4-92d10ad33929)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90354a506bfb2c066060aaa6c9c48c8fe36acc7b3fb7f76905c7aa2622c8c93e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f16219fb96bfa08d1c4e1fbacac446f07ea75a221d1a6a4eddadd9e587f75ea9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f16219fb96bfa08d1c4e1fbacac446f07ea75a221d1a6a4eddadd9e587f75ea9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-f4swt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:14Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:14 crc kubenswrapper[4958]: I1206 05:29:14.862866 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"326ec8f3-f884-4d81-85c0-0fb98ff16b80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ab8b97e84e2f0ddf99e989d15f84695ae263c722141d62885129f9eb48226a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfa163f9e3eb5b2418860c0fa7fcbf96990229ac20da3d72cf18d3c326f466b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c
97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15f70e4124ff7d2705704dab6bf80d6ee77c2181e9bb400f7af227e9c18f0d20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aabbafda3fdfc2434de99224b35e72d2d536ed8b4af7dbdfa16058ab7f638531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aabbafda3fdfc2434de99224b35e72d2d536ed8b4af7dbdfa16058ab7f638531\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:14Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:14 crc kubenswrapper[4958]: I1206 05:29:14.877669 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22338e19d740d76b4395405aae08f7cc69ca43dcf680a2bac1323c989e74aa02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:14Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:14 crc kubenswrapper[4958]: I1206 05:29:14.889678 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:14Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:14 crc kubenswrapper[4958]: I1206 05:29:14.903808 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85e394f19d7c9a454876e48621768d9f956790321aeea3140db17586f532e28c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7cba9186696fbc52dacdd8c02c28a44f8e4f11e2bd39065212cddf3acdd7f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\
"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:14Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:14 crc kubenswrapper[4958]: I1206 05:29:14.904756 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:14 crc kubenswrapper[4958]: I1206 05:29:14.904798 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:14 crc kubenswrapper[4958]: I1206 05:29:14.904807 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:14 crc kubenswrapper[4958]: I1206 05:29:14.904822 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:14 crc kubenswrapper[4958]: I1206 05:29:14.904830 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:14Z","lastTransitionTime":"2025-12-06T05:29:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:14 crc kubenswrapper[4958]: I1206 05:29:14.916151 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c13528c0-da5d-4d55-9155-2c29c33edfc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5cd9b0b70572f52b50e426a747be79d9f46795a9f5e890f4c6837ba67a34188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd2256c5c47e4b4d5c0d27e0a687c531d3d4e2162c3439a57acb90784599b441\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5ktnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:14Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:14 crc kubenswrapper[4958]: I1206 05:29:14.928013 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cxvtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48b90387-4ea9-47a3-8aa1-b8b19c5ed186\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce9f0a37477909d730255694146829d5d4527e3c3c3928bdd475b58506bbd544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d3ce1b34e648dbc34656fb0430468d6a998992cdb52f7858c3e9032781cabf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d3ce1b34e648dbc34656fb0430468d6a998992cdb52f7858c3e9032781cabf3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45259e88c3c2b9523b20a34f02fb59aa21e487dd207c8cb6e29ac571d89cd75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45259e88c3c2b9523b20a34f02fb59aa21e487dd207c8cb6e29ac571d89cd75c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5907422cf6b36b8d77741d339e507eee8a26c62b51c03458fbab243363441b49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5907422cf6b36b8d77741d339e507eee8a26c62b51c03458fbab243363441b49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdba082661b1dd38d5628fedac7afcea2e2960d502c61b9c21c723c0f6be8dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdba082661b1dd38d5628fedac7afcea2e2960d502c61b9c21c723c0f6be8dab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:33Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afdd138a15c3949f3eb5cdcac4d372fe8db3f1822f9fa33eb7f211ba93764903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afdd138a15c3949f3eb5cdcac4d372fe8db3f1822f9fa33eb7f211ba93764903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09ba0a78226b9b3b3b12202389168f7e17a1f2ba992a22dc9d436d29aac0d327\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09ba0a78226b9b3b3b12202389168f7e17a1f2ba992a22dc9d436d29aac0d327\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cxvtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:14Z is after 
2025-08-24T17:21:41Z" Dec 06 05:29:14 crc kubenswrapper[4958]: I1206 05:29:14.938697 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kb98t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c09fca2-7d91-412a-9814-64370d35b3e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pscv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pscv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kb98t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:14Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:14 crc kubenswrapper[4958]: I1206 05:29:14.954377 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3942544-663d-452b-851f-7ae1951315d0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c605d49963ffec3a03f9166f2a4b945adca772c1e6f9ef0c43e3b43498bb520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fa2e5229064a9479f641c0c6d934e8eda769ee88be13401c43ae136ce8bc24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0fa5b094aed0445fa892dd4d08fa83de565bda82c6ca9b9d536a3275df19b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9356ce9f50a037c7bbb99085c8d3a1cf6648fca
53bed4eb86d1be5cb82386bfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bb5270cbb6f2e29e1393dc0724b38332f5d2b7bd2947352c848a22efbcd979c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4460d6ee5ad193a5fdd65c4ddf2e2837db3e1a0a5f8f32b84ad83cc74cc6c779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4460d6ee5ad193a5fdd65c4ddf2e2837db3e1a0a5f8f32b84ad83cc74cc6c779\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d0adb65980c4e04df9b38b248a2d6bf58ac214ec2fe6ee6f8fbee38dc114904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d0adb65980c4e04df9b38b248a2d6bf58ac214ec2fe6ee6f8fbee38dc114904\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b4a1b86d7480be2b839295c4c7dc60abd79e17d12514d7a7f90453498dd586e5\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4a1b86d7480be2b839295c4c7dc60abd79e17d12514d7a7f90453498dd586e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:14Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:14 crc kubenswrapper[4958]: I1206 05:29:14.969293 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:14Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:14 crc kubenswrapper[4958]: I1206 05:29:14.981870 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bxxwq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"802ce14e-baaf-4d0d-87e3-3457209d8cda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://691435e89fcfa5b2dbd5ee54379f791a3e4f4df43c500a83bc61f7b20aca7ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlz6h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bxxwq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:14Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:14 crc kubenswrapper[4958]: I1206 05:29:14.994103 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-j25xk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f69a7b70-83e2-4775-872c-cc414f3d08ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3077eaf7e49467b9443569425f7666246ccd40207193f35407bd5ca69799be6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qktwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c146f2ee32c0a80c9bb07f7b61d094a36ba1bdd250b578479193dc8e50dc7cad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qktwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:42Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-j25xk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:14Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:15 crc kubenswrapper[4958]: I1206 05:29:15.006762 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:15 crc kubenswrapper[4958]: I1206 05:29:15.006796 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:15 crc kubenswrapper[4958]: I1206 05:29:15.006805 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:15 crc kubenswrapper[4958]: I1206 05:29:15.006819 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:15 crc kubenswrapper[4958]: I1206 05:29:15.006830 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:15Z","lastTransitionTime":"2025-12-06T05:29:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:15 crc kubenswrapper[4958]: I1206 05:29:15.109654 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:15 crc kubenswrapper[4958]: I1206 05:29:15.109721 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:15 crc kubenswrapper[4958]: I1206 05:29:15.109739 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:15 crc kubenswrapper[4958]: I1206 05:29:15.109769 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:15 crc kubenswrapper[4958]: I1206 05:29:15.109787 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:15Z","lastTransitionTime":"2025-12-06T05:29:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:15 crc kubenswrapper[4958]: I1206 05:29:15.178314 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2c09fca2-7d91-412a-9814-64370d35b3e9-metrics-certs\") pod \"network-metrics-daemon-kb98t\" (UID: \"2c09fca2-7d91-412a-9814-64370d35b3e9\") " pod="openshift-multus/network-metrics-daemon-kb98t" Dec 06 05:29:15 crc kubenswrapper[4958]: E1206 05:29:15.178447 4958 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 06 05:29:15 crc kubenswrapper[4958]: E1206 05:29:15.178804 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2c09fca2-7d91-412a-9814-64370d35b3e9-metrics-certs podName:2c09fca2-7d91-412a-9814-64370d35b3e9 nodeName:}" failed. No retries permitted until 2025-12-06 05:29:47.178789788 +0000 UTC m=+97.712560551 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2c09fca2-7d91-412a-9814-64370d35b3e9-metrics-certs") pod "network-metrics-daemon-kb98t" (UID: "2c09fca2-7d91-412a-9814-64370d35b3e9") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 06 05:29:15 crc kubenswrapper[4958]: I1206 05:29:15.213012 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:15 crc kubenswrapper[4958]: I1206 05:29:15.213192 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:15 crc kubenswrapper[4958]: I1206 05:29:15.213259 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:15 crc kubenswrapper[4958]: I1206 05:29:15.213329 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:15 crc kubenswrapper[4958]: I1206 05:29:15.213419 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:15Z","lastTransitionTime":"2025-12-06T05:29:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:15 crc kubenswrapper[4958]: I1206 05:29:15.316411 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:15 crc kubenswrapper[4958]: I1206 05:29:15.316455 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:15 crc kubenswrapper[4958]: I1206 05:29:15.316486 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:15 crc kubenswrapper[4958]: I1206 05:29:15.316505 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:15 crc kubenswrapper[4958]: I1206 05:29:15.316517 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:15Z","lastTransitionTime":"2025-12-06T05:29:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:15 crc kubenswrapper[4958]: I1206 05:29:15.419160 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:15 crc kubenswrapper[4958]: I1206 05:29:15.419204 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:15 crc kubenswrapper[4958]: I1206 05:29:15.419216 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:15 crc kubenswrapper[4958]: I1206 05:29:15.419233 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:15 crc kubenswrapper[4958]: I1206 05:29:15.419247 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:15Z","lastTransitionTime":"2025-12-06T05:29:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:15 crc kubenswrapper[4958]: I1206 05:29:15.522515 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:15 crc kubenswrapper[4958]: I1206 05:29:15.522539 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:15 crc kubenswrapper[4958]: I1206 05:29:15.522548 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:15 crc kubenswrapper[4958]: I1206 05:29:15.522558 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:15 crc kubenswrapper[4958]: I1206 05:29:15.522566 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:15Z","lastTransitionTime":"2025-12-06T05:29:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:15 crc kubenswrapper[4958]: I1206 05:29:15.625655 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:15 crc kubenswrapper[4958]: I1206 05:29:15.626128 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:15 crc kubenswrapper[4958]: I1206 05:29:15.626341 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:15 crc kubenswrapper[4958]: I1206 05:29:15.626565 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:15 crc kubenswrapper[4958]: I1206 05:29:15.626751 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:15Z","lastTransitionTime":"2025-12-06T05:29:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:15 crc kubenswrapper[4958]: I1206 05:29:15.729882 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:15 crc kubenswrapper[4958]: I1206 05:29:15.729920 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:15 crc kubenswrapper[4958]: I1206 05:29:15.729931 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:15 crc kubenswrapper[4958]: I1206 05:29:15.729949 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:15 crc kubenswrapper[4958]: I1206 05:29:15.729962 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:15Z","lastTransitionTime":"2025-12-06T05:29:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:15 crc kubenswrapper[4958]: I1206 05:29:15.761648 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 05:29:15 crc kubenswrapper[4958]: E1206 05:29:15.761851 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 05:29:15 crc kubenswrapper[4958]: I1206 05:29:15.762160 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 05:29:15 crc kubenswrapper[4958]: E1206 05:29:15.762261 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 05:29:15 crc kubenswrapper[4958]: I1206 05:29:15.762504 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 05:29:15 crc kubenswrapper[4958]: E1206 05:29:15.762604 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 05:29:15 crc kubenswrapper[4958]: I1206 05:29:15.762888 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kb98t" Dec 06 05:29:15 crc kubenswrapper[4958]: E1206 05:29:15.763008 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kb98t" podUID="2c09fca2-7d91-412a-9814-64370d35b3e9" Dec 06 05:29:15 crc kubenswrapper[4958]: I1206 05:29:15.833990 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:15 crc kubenswrapper[4958]: I1206 05:29:15.834024 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:15 crc kubenswrapper[4958]: I1206 05:29:15.834032 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:15 crc kubenswrapper[4958]: I1206 05:29:15.834046 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:15 crc kubenswrapper[4958]: I1206 05:29:15.834054 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:15Z","lastTransitionTime":"2025-12-06T05:29:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:15 crc kubenswrapper[4958]: I1206 05:29:15.936676 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:15 crc kubenswrapper[4958]: I1206 05:29:15.936736 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:15 crc kubenswrapper[4958]: I1206 05:29:15.936748 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:15 crc kubenswrapper[4958]: I1206 05:29:15.936767 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:15 crc kubenswrapper[4958]: I1206 05:29:15.936780 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:15Z","lastTransitionTime":"2025-12-06T05:29:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:16 crc kubenswrapper[4958]: I1206 05:29:16.038882 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:16 crc kubenswrapper[4958]: I1206 05:29:16.039243 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:16 crc kubenswrapper[4958]: I1206 05:29:16.039252 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:16 crc kubenswrapper[4958]: I1206 05:29:16.039266 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:16 crc kubenswrapper[4958]: I1206 05:29:16.039276 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:16Z","lastTransitionTime":"2025-12-06T05:29:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:16 crc kubenswrapper[4958]: I1206 05:29:16.142034 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:16 crc kubenswrapper[4958]: I1206 05:29:16.142085 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:16 crc kubenswrapper[4958]: I1206 05:29:16.142102 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:16 crc kubenswrapper[4958]: I1206 05:29:16.142124 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:16 crc kubenswrapper[4958]: I1206 05:29:16.142141 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:16Z","lastTransitionTime":"2025-12-06T05:29:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:16 crc kubenswrapper[4958]: I1206 05:29:16.183758 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-wr7h5_fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7/kube-multus/0.log" Dec 06 05:29:16 crc kubenswrapper[4958]: I1206 05:29:16.183828 4958 generic.go:334] "Generic (PLEG): container finished" podID="fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7" containerID="24bb15c5aabe30ec2b8cea2cfd92c4a99c8e2170aecfe6cf33f745ecccf32c42" exitCode=1 Dec 06 05:29:16 crc kubenswrapper[4958]: I1206 05:29:16.183867 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-wr7h5" event={"ID":"fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7","Type":"ContainerDied","Data":"24bb15c5aabe30ec2b8cea2cfd92c4a99c8e2170aecfe6cf33f745ecccf32c42"} Dec 06 05:29:16 crc kubenswrapper[4958]: I1206 05:29:16.184714 4958 scope.go:117] "RemoveContainer" containerID="24bb15c5aabe30ec2b8cea2cfd92c4a99c8e2170aecfe6cf33f745ecccf32c42" Dec 06 05:29:16 crc kubenswrapper[4958]: I1206 05:29:16.201645 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85e394f19d7c9a454876e48621768d9f956790321aeea3140db17586f532e28c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7cba9186696fbc52dacdd8c02c28a44f8e4f11e2bd39065212cddf3acdd7f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/en
v\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:16Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:16 crc kubenswrapper[4958]: I1206 05:29:16.212380 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c13528c0-da5d-4d55-9155-2c29c33edfc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5cd9b0b70572f52b50e426a747be79d9f46795a9f5e890f4c6837ba67a34188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd2256c5c47e4b4d5c0d27e0a687c531d3d4e2162c3439a57acb90784599b441\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{
\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5ktnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:16Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:16 crc kubenswrapper[4958]: I1206 05:29:16.236750 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cxvtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48b90387-4ea9-47a3-8aa1-b8b19c5ed186\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce9f0a37477909d730255694146829d5d4527e3c3c3928bdd475b58506bbd544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d3ce1b34e648dbc34656fb0430468d6a998992cdb52f7858c3e9032781cabf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d3ce1b34e648d
bc34656fb0430468d6a998992cdb52f7858c3e9032781cabf3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45259e88c3c2b9523b20a34f02fb59aa21e487dd207c8cb6e29ac571d89cd75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45259e88c3c2b9523b20a34f02fb59aa21e487dd207c8cb6e29ac571d89cd75c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5907422cf6b36b8d77741d339e507eee8a26c62b51c03458fbab243363441b49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5907422cf6b36b8d77741d339e507eee8a26c62b51c03458fbab243363441b49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdba082661b1dd38d5628fedac7afcea2e2960d502c61b9c21c723c0f6be8
dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdba082661b1dd38d5628fedac7afcea2e2960d502c61b9c21c723c0f6be8dab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afdd138a15c3949f3eb5cdcac4d372fe8db3f1822f9fa33eb7f211ba93764903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afdd138a15c3949f3eb5cdcac4d372fe8db3f1822f9fa33eb7f211ba93764903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09ba0a78226b9b3b3b12202389168f7e17a1f2ba992a22dc9d436d29aac0d327\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09ba0a78226b9b3b3b12202389168f7e17a1f2ba992a22dc9d436d29aac0d327\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cxvtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:16Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:16 crc kubenswrapper[4958]: I1206 05:29:16.246003 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:16 crc kubenswrapper[4958]: I1206 05:29:16.246065 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:16 crc kubenswrapper[4958]: I1206 05:29:16.246085 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:16 crc kubenswrapper[4958]: I1206 05:29:16.246109 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:16 crc kubenswrapper[4958]: I1206 05:29:16.246126 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:16Z","lastTransitionTime":"2025-12-06T05:29:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:16 crc kubenswrapper[4958]: I1206 05:29:16.268444 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c75c3b8-96d9-442e-b3c4-92d10ad33929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c7cf12b8a7e5da199c6d8c5bdeb41243c911cef6cab0bac81e407502a8ff144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6e7454c626f79f7ef231e33e2917e2a65265fc21ccf0bb078c9b7f5b3cec240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://f87617d1f4b9c7ad893a539ddf123f3e213d38e44c084441f565d6535078c7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f6d7aa9cdfe8211f420e0a25851c58e125bd09b9d90ad4724703cc1112e45b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://473d669d19d2507144f95718dc331ea994a36254f7f7b6e2b262a0e9cb46bfc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ac1117aba254b7e048d9dd3feff2587d218f6d7a6f4ccbe455eb2b19e534aa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ffe1b33745a706abdd0f69fb10d2eab95f9526c0206d20c04b1e4c098295db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52ffe1b33745a706abdd0f69fb10d2eab95f9526c0206d20c04b1e4c098295db\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T05:29:03Z\\\",\\\"message\\\":\\\"t:0,AppProtocol:nil,},ServicePort{Name:metrics,Protocol:TCP,Port:9154,TargetPort:{1 0 metrics},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{dns.operator.openshift.io/daemonset-dns: default,},ClusterIP:10.217.4.10,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.10],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF1206 05:29:02.740208 6561 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": fa\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T05:29:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-f4swt_openshift-ovn-kubernetes(4c75c3b8-96d9-442e-b3c4-92d10ad33929)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90354a506bfb2c066060aaa6c9c48c8fe36acc7b3fb7f76905c7aa2622c8c93e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f16219fb96bfa08d1c4e1fbacac446f07ea75a221d1a6a4eddadd9e587f75ea9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f16219fb96bfa08d1c4e1fbacac446f07ea75a221d1a6a4eddadd9e587f75ea9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-f4swt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:16Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:16 crc kubenswrapper[4958]: I1206 05:29:16.280177 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"326ec8f3-f884-4d81-85c0-0fb98ff16b80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ab8b97e84e2f0ddf99e989d15f84695ae263c722141d62885129f9eb48226a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfa163f9e3eb5b2418860c0fa7fcbf96990229ac20da3d72cf18d3c326f466b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c
97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15f70e4124ff7d2705704dab6bf80d6ee77c2181e9bb400f7af227e9c18f0d20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aabbafda3fdfc2434de99224b35e72d2d536ed8b4af7dbdfa16058ab7f638531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aabbafda3fdfc2434de99224b35e72d2d536ed8b4af7dbdfa16058ab7f638531\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:16Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:16 crc kubenswrapper[4958]: I1206 05:29:16.297407 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22338e19d740d76b4395405aae08f7cc69ca43dcf680a2bac1323c989e74aa02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:16Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:16 crc kubenswrapper[4958]: I1206 05:29:16.309138 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:16Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:16 crc kubenswrapper[4958]: I1206 05:29:16.321649 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kb98t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c09fca2-7d91-412a-9814-64370d35b3e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pscv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pscv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kb98t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:16Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:16 crc kubenswrapper[4958]: I1206 05:29:16.339016 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3942544-663d-452b-851f-7ae1951315d0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c605d49963ffec3a03f9166f2a4b945adca772c1e6f9ef0c43e3b43498bb520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fa2e5229064a9479f641c0c6d934e8eda769ee88be13401c43ae136ce8bc24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0fa5b094aed0445fa892dd4d08fa83de565bda82c6ca9b9d536a3275df19b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9356ce9f50a037c7bbb99085c8d3a1cf6648fca
53bed4eb86d1be5cb82386bfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bb5270cbb6f2e29e1393dc0724b38332f5d2b7bd2947352c848a22efbcd979c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4460d6ee5ad193a5fdd65c4ddf2e2837db3e1a0a5f8f32b84ad83cc74cc6c779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4460d6ee5ad193a5fdd65c4ddf2e2837db3e1a0a5f8f32b84ad83cc74cc6c779\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d0adb65980c4e04df9b38b248a2d6bf58ac214ec2fe6ee6f8fbee38dc114904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d0adb65980c4e04df9b38b248a2d6bf58ac214ec2fe6ee6f8fbee38dc114904\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b4a1b86d7480be2b839295c4c7dc60abd79e17d12514d7a7f90453498dd586e5\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4a1b86d7480be2b839295c4c7dc60abd79e17d12514d7a7f90453498dd586e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:16Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:16 crc kubenswrapper[4958]: I1206 05:29:16.347880 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:16 crc kubenswrapper[4958]: I1206 05:29:16.347917 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:16 crc kubenswrapper[4958]: I1206 05:29:16.347927 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:16 crc kubenswrapper[4958]: I1206 05:29:16.347942 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:16 crc kubenswrapper[4958]: I1206 05:29:16.347951 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:16Z","lastTransitionTime":"2025-12-06T05:29:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:16 crc kubenswrapper[4958]: I1206 05:29:16.353269 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:16Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:16 crc kubenswrapper[4958]: I1206 05:29:16.362411 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bxxwq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"802ce14e-baaf-4d0d-87e3-3457209d8cda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://691435e89fcfa5b2dbd5ee54379f791a3e4f4df43c500a83bc61f7b20aca7ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlz6h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bxxwq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:16Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:16 crc kubenswrapper[4958]: I1206 05:29:16.375389 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-j25xk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f69a7b70-83e2-4775-872c-cc414f3d08ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3077eaf7e49467b9443569425f7666246ccd40207193f35407bd5ca69799be6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qktwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c146f2ee32c0a80c9bb07f7b61d094a36ba1bdd250b578479193dc8e50dc7cad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qktwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-j25xk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:16Z is after 2025-08-24T17:21:41Z" Dec 06 
05:29:16 crc kubenswrapper[4958]: I1206 05:29:16.389701 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:16Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:16 crc kubenswrapper[4958]: I1206 05:29:16.403250 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12ef2ec44908dbfbd456d6a97cb50054adb79c14bc062daacdd1be9ec59404b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:16Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:16 crc kubenswrapper[4958]: I1206 05:29:16.413090 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5mx5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ab56cd4-7270-4252-b6e6-cbc102b84d97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16ff54e7650f6ee3df33d0394fb87b8021695f1697bdf30239e59ab458bea64c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-448jg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5mx5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:16Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:16 crc kubenswrapper[4958]: I1206 05:29:16.429436 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wr7h5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:16Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24bb15c5aabe30ec2b8cea2cfd92c4a99c8e2170aecfe6cf33f745ecccf32c42\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24bb15c5aabe30ec2b8cea2cfd92c4a99c8e2170aecfe6cf33f745ecccf32c42\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T05:29:15Z\\\",\\\"message\\\":\\\"2025-12-06T05:28:30+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_38a7603c-7295-4bd3-af02-27bf794eed27\\\\n2025-12-06T05:28:30+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_38a7603c-7295-4bd3-af02-27bf794eed27 to /host/opt/cni/bin/\\\\n2025-12-06T05:28:30Z [verbose] multus-daemon started\\\\n2025-12-06T05:28:30Z [verbose] Readiness Indicator file check\\\\n2025-12-06T05:29:15Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the 
condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-778ks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wr7h5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:16Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:16 crc kubenswrapper[4958]: I1206 05:29:16.442285 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bbf3a5d-dfb7-4f26-a3a5-5a1198cffc78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ac8c174d2f2cc2b18fa67b1969e4ec3130a7a3ea2e1de302b6607334c8295a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a49f9fd54866385a2dd4c0218a2b5a828817807c2688301ea29004ba0578d556\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://799383b7dc23318bd8bdb994e83afadc90864baf5e694f5a7ef2208c6771660c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c9829c79fbd1ef57f374e8bf829162c2ae48074f4d02cc609d5f9b8791c61a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b2ffa013378c2ca81671e521b37d30af422fd6a418d17e7c2ca14b2da5557d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4001890a8c107a9365e884bae7b1d10f11ea1208ada74b61dcd4cf3db767f3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4001890a8c107a9365e884bae7b1d10f11ea1208ada74b61dcd4cf3db767f3fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:16Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:16 crc kubenswrapper[4958]: I1206 05:29:16.449909 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:16 crc kubenswrapper[4958]: I1206 05:29:16.449958 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:16 crc kubenswrapper[4958]: I1206 05:29:16.449967 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:16 crc kubenswrapper[4958]: I1206 05:29:16.449983 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 
05:29:16 crc kubenswrapper[4958]: I1206 05:29:16.449993 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:16Z","lastTransitionTime":"2025-12-06T05:29:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:16 crc kubenswrapper[4958]: I1206 05:29:16.453864 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e7d0d99-10e2-4254-82b5-5490222f80fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f512fa9922310920769bab096c2bb8b2305a17dded545923c16b5e417db50b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dddeac64993fef3f326c165eeeb95bb4b5b9961f07f2b935e592b5aac9a51a32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50e80d39fcfd69db193b9e6c98c92a60983dd0c62b061a69f045193110d159c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-c
ontroller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc69c938c136778135bd2a90d1671af107b76c4571f8e52327c98f120665d7b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:16Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:16 crc kubenswrapper[4958]: I1206 05:29:16.553270 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:16 crc kubenswrapper[4958]: I1206 05:29:16.553317 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:16 crc kubenswrapper[4958]: I1206 05:29:16.553333 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:16 crc kubenswrapper[4958]: I1206 05:29:16.553351 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:16 crc kubenswrapper[4958]: I1206 05:29:16.553365 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:16Z","lastTransitionTime":"2025-12-06T05:29:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:16 crc kubenswrapper[4958]: I1206 05:29:16.656660 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:16 crc kubenswrapper[4958]: I1206 05:29:16.656714 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:16 crc kubenswrapper[4958]: I1206 05:29:16.656735 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:16 crc kubenswrapper[4958]: I1206 05:29:16.656799 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:16 crc kubenswrapper[4958]: I1206 05:29:16.656821 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:16Z","lastTransitionTime":"2025-12-06T05:29:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:16 crc kubenswrapper[4958]: I1206 05:29:16.759497 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:16 crc kubenswrapper[4958]: I1206 05:29:16.759546 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:16 crc kubenswrapper[4958]: I1206 05:29:16.759559 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:16 crc kubenswrapper[4958]: I1206 05:29:16.759577 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:16 crc kubenswrapper[4958]: I1206 05:29:16.759593 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:16Z","lastTransitionTime":"2025-12-06T05:29:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:16 crc kubenswrapper[4958]: I1206 05:29:16.861782 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:16 crc kubenswrapper[4958]: I1206 05:29:16.861823 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:16 crc kubenswrapper[4958]: I1206 05:29:16.861833 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:16 crc kubenswrapper[4958]: I1206 05:29:16.861849 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:16 crc kubenswrapper[4958]: I1206 05:29:16.861860 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:16Z","lastTransitionTime":"2025-12-06T05:29:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:16 crc kubenswrapper[4958]: I1206 05:29:16.964517 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:16 crc kubenswrapper[4958]: I1206 05:29:16.964565 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:16 crc kubenswrapper[4958]: I1206 05:29:16.964577 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:16 crc kubenswrapper[4958]: I1206 05:29:16.964592 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:16 crc kubenswrapper[4958]: I1206 05:29:16.964619 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:16Z","lastTransitionTime":"2025-12-06T05:29:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:17 crc kubenswrapper[4958]: I1206 05:29:17.067159 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:17 crc kubenswrapper[4958]: I1206 05:29:17.067199 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:17 crc kubenswrapper[4958]: I1206 05:29:17.067209 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:17 crc kubenswrapper[4958]: I1206 05:29:17.067226 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:17 crc kubenswrapper[4958]: I1206 05:29:17.067235 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:17Z","lastTransitionTime":"2025-12-06T05:29:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:17 crc kubenswrapper[4958]: I1206 05:29:17.169763 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:17 crc kubenswrapper[4958]: I1206 05:29:17.169798 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:17 crc kubenswrapper[4958]: I1206 05:29:17.169808 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:17 crc kubenswrapper[4958]: I1206 05:29:17.169821 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:17 crc kubenswrapper[4958]: I1206 05:29:17.169830 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:17Z","lastTransitionTime":"2025-12-06T05:29:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:17 crc kubenswrapper[4958]: I1206 05:29:17.188669 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-wr7h5_fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7/kube-multus/0.log" Dec 06 05:29:17 crc kubenswrapper[4958]: I1206 05:29:17.188743 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-wr7h5" event={"ID":"fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7","Type":"ContainerStarted","Data":"5ecb41e730cf6bf34bee3ac4577e76690a314f9466e2bd69d896f99091c3657c"} Dec 06 05:29:17 crc kubenswrapper[4958]: I1206 05:29:17.208166 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bbf3a5d-dfb7-4f26-a3a5-5a1198cffc78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ac8c174d2f2cc2b18fa67b1969e4ec3130a7a3ea2e1de302b6607334c8295a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a49f9fd54866385a2dd4c0218a2b5a828817807c2688301ea29004ba0578d556\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://799383b7dc23318bd8bdb994e83afadc90864baf5e694f5a7ef2208c6771660c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\
\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c9829c79fbd1ef57f374e8bf829162c2ae48074f4d02cc609d5f9b8791c61a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b2ffa013378c2ca81671e521b37d30af422fd6a418d17e7c2ca14b2da5557d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4001890a8c107a9365e884bae7b1d10f11ea1208ada74b61dcd4cf3db767f3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4001890a8c107a9365e884bae7b1d10f11ea1208ada74b61dcd4cf3db767f3fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2025-12-06T05:29:17Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:17 crc kubenswrapper[4958]: I1206 05:29:17.226609 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e7d0d99-10e2-4254-82b5-5490222f80fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f512fa9922310920769bab096c2bb8b2305a17dded545923c16b5e417db50b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dddeac64993fef3f326c165eeeb95bb4b5b9961f07f2b935e592b5aac9a51a32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50e80d39fcfd69db193b9e6c98c92a60983dd0c62b061a69f045193110d159c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\
\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc69c938c136778135bd2a90d1671af107b76c4571f8e52327c98f120665d7b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:17Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:17 crc kubenswrapper[4958]: I1206 05:29:17.239913 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c13528c0-da5d-4d55-9155-2c29c33edfc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5cd9b0b70572f52b50e426a747be79d9f46795a9f5e890f4c6837ba67a34188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd2256c5c47e4b4d5c0d27e0a687c531d3d4e2162c3439a57acb90784599b441\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5ktnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:17Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:17 crc kubenswrapper[4958]: I1206 05:29:17.260621 4958 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cxvtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48b90387-4ea9-47a3-8aa1-b8b19c5ed186\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce9f0a37477909d730255694146829d5d4527e3c3c3928bdd475b58506bbd544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d3ce1b34e648dbc34656fb0430468d6a998992cdb52f7858c3e9032781cabf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d3ce1b34e648dbc34656fb0430468d6a998992cdb52f7858c3e9032781cabf3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45259e88c3c2b9523b20a34f02fb59aa21e487dd207c8cb6e29ac571d89cd75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45259e88c3c2b9523b20a34f02fb59aa21e487dd207c8cb6e29ac571d89cd75c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5907422cf6b36b8d77741d339e507eee8a26c62b51c03458fbab243363441b49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5907422cf6b36b8d77741d339e507eee8a26c62b51c03458fbab243363441b49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdba082661b1dd38d5628fedac7afcea2e2960d502c61b9c21c723c0f6be8dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdba082661b1dd38d5628fedac7afcea2e2960d502c61b9c21c723c0f6be8dab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afdd138a15c3949f3eb5cdcac4d372fe8db3f1822f9fa33eb7f211ba93764903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afdd138a15c3949f3eb5cdcac4d372fe8db3f1822f9fa33eb7f211ba93764903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09ba0a78226b9b3b3b12202389168f7e17a1f2ba992a22dc9d436d29aac0d327\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09ba0a78226b9b3b3b12202389168f7e17a1f2ba992a22dc9d436d29aac0d327\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cxvtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:17Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:17 crc kubenswrapper[4958]: I1206 05:29:17.272288 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:17 crc kubenswrapper[4958]: I1206 05:29:17.272564 4958 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:17 crc kubenswrapper[4958]: I1206 05:29:17.272683 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:17 crc kubenswrapper[4958]: I1206 05:29:17.272785 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:17 crc kubenswrapper[4958]: I1206 05:29:17.272878 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:17Z","lastTransitionTime":"2025-12-06T05:29:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:17 crc kubenswrapper[4958]: I1206 05:29:17.289890 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c75c3b8-96d9-442e-b3c4-92d10ad33929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c7cf12b8a7e5da199c6d8c5bdeb41243c911cef6cab0bac81e407502a8ff144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6e7454c626f79f7ef231e33e2917e2a65265fc21ccf0bb078c9b7f5b3cec240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f87617d1f4b9c7ad893a539ddf123f3e213d38e44c084441f565d6535078c7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f6d7aa9cdfe8211f420e0a25851c58e125bd09b9d90ad4724703cc1112e45b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://473d669d19d2507144f95718dc331ea994a36254f7f7b6e2b262a0e9cb46bfc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ac1117aba254b7e048d9dd3feff2587d218f6d7a6f4ccbe455eb2b19e534aa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ffe1b33745a706abdd0f69fb10d2eab95f9526
c0206d20c04b1e4c098295db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52ffe1b33745a706abdd0f69fb10d2eab95f9526c0206d20c04b1e4c098295db\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T05:29:03Z\\\",\\\"message\\\":\\\"t:0,AppProtocol:nil,},ServicePort{Name:metrics,Protocol:TCP,Port:9154,TargetPort:{1 0 metrics},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{dns.operator.openshift.io/daemonset-dns: default,},ClusterIP:10.217.4.10,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.10],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF1206 05:29:02.740208 6561 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": fa\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T05:29:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-f4swt_openshift-ovn-kubernetes(4c75c3b8-96d9-442e-b3c4-92d10ad33929)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90354a506bfb2c066060aaa6c9c48c8fe36acc7b3fb7f76905c7aa2622c8c93e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f16219fb96bfa08d1c4e1fbacac446f07ea75a221d1a6a4eddadd9e587f75ea9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f16219fb96bfa08d1c4e1fbacac446f07ea75a221d1a6a4eddadd9e587f75ea9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-f4swt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:17Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:17 crc kubenswrapper[4958]: I1206 05:29:17.304996 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"326ec8f3-f884-4d81-85c0-0fb98ff16b80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ab8b97e84e2f0ddf99e989d15f84695ae263c722141d62885129f9eb48226a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfa163f9e3eb5b2418860c0fa7fcbf96990229ac20da3d72cf18d3c326f466b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c
97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15f70e4124ff7d2705704dab6bf80d6ee77c2181e9bb400f7af227e9c18f0d20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aabbafda3fdfc2434de99224b35e72d2d536ed8b4af7dbdfa16058ab7f638531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aabbafda3fdfc2434de99224b35e72d2d536ed8b4af7dbdfa16058ab7f638531\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:17Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:17 crc kubenswrapper[4958]: I1206 05:29:17.323590 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22338e19d740d76b4395405aae08f7cc69ca43dcf680a2bac1323c989e74aa02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:17Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:17 crc kubenswrapper[4958]: I1206 05:29:17.341121 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:17Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:17 crc kubenswrapper[4958]: I1206 05:29:17.356972 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85e394f19d7c9a454876e48621768d9f956790321aeea3140db17586f532e28c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7cba9186696fbc52dacdd8c02c28a44f8e4f11e2bd39065212cddf3acdd7f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\
"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:17Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:17 crc kubenswrapper[4958]: I1206 05:29:17.370025 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kb98t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c09fca2-7d91-412a-9814-64370d35b3e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pscv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pscv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kb98t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:17Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:17 crc kubenswrapper[4958]: I1206 05:29:17.375260 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:17 crc kubenswrapper[4958]: I1206 05:29:17.375290 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:17 crc kubenswrapper[4958]: I1206 05:29:17.375299 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:17 crc kubenswrapper[4958]: I1206 05:29:17.375316 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:17 crc kubenswrapper[4958]: I1206 05:29:17.375326 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:17Z","lastTransitionTime":"2025-12-06T05:29:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:17 crc kubenswrapper[4958]: I1206 05:29:17.399393 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3942544-663d-452b-851f-7ae1951315d0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c605d49963ffec3a03f9166f2a4b945adca772c1e6f9ef0c43e3b43498bb520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fa2e5229064a9479f641c0c6d934e8eda769ee88be13401c43ae136ce8bc24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0fa5b094aed0445fa892dd4d08fa83de565bda82c6ca9b9d536a3275df19b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9356ce9f50a037c7bbb99085c8d3a1cf6648fca53bed4eb86d1be5cb82386bfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bb5270cbb6f2e29e1393dc0724b38332f5d2b7bd2947352c848a22efbcd979c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4460d6ee5ad193a5fdd65c4ddf2e2837db3e1a0a5f8f32b84ad83cc74cc6c779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4460d6ee5ad193a5fdd65c4ddf2e2837db3e1a0a5f8f32b84ad83cc74cc6c779\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d0adb65980c4e04df9b38b248a2d6bf58ac214ec2fe6ee6f8fbee38dc114904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d0adb65980c4e04df9b38b248a2d6bf58ac214ec2fe6ee6f8fbee38dc114904\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2025-12-06T05:28:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b4a1b86d7480be2b839295c4c7dc60abd79e17d12514d7a7f90453498dd586e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4a1b86d7480be2b839295c4c7dc60abd79e17d12514d7a7f90453498dd586e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:17Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:17 crc kubenswrapper[4958]: I1206 05:29:17.420376 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:17Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:17 crc kubenswrapper[4958]: I1206 05:29:17.436989 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bxxwq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"802ce14e-baaf-4d0d-87e3-3457209d8cda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://691435e89fcfa5b2dbd5ee54379f791a3e4f4df43c500a83bc61f7b20aca7ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlz6h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bxxwq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:17Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:17 crc kubenswrapper[4958]: I1206 05:29:17.454119 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-j25xk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f69a7b70-83e2-4775-872c-cc414f3d08ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3077eaf7e49467b9443569425f7666246ccd40207193f35407bd5ca69799be6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qktwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c146f2ee32c0a80c9bb07f7b61d094a36ba1bdd250b578479193dc8e50dc7cad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qktwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:42Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-j25xk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:17Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:17 crc kubenswrapper[4958]: I1206 05:29:17.468868 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:17Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:17 crc kubenswrapper[4958]: I1206 05:29:17.477370 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:17 crc kubenswrapper[4958]: I1206 05:29:17.477611 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:17 crc kubenswrapper[4958]: I1206 05:29:17.477736 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:17 crc kubenswrapper[4958]: I1206 05:29:17.477875 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:17 crc kubenswrapper[4958]: I1206 05:29:17.477978 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:17Z","lastTransitionTime":"2025-12-06T05:29:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:17 crc kubenswrapper[4958]: I1206 05:29:17.484733 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12ef2ec44908dbfbd456d6a97cb50054adb79c14bc062daacdd1be9ec59404b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:17Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:17 crc kubenswrapper[4958]: I1206 05:29:17.502640 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5mx5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ab56cd4-7270-4252-b6e6-cbc102b84d97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16ff54e7650f6ee3df33d0394fb87b8021695f1697bdf30239e59ab458bea64c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-448jg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5mx5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:17Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:17 crc kubenswrapper[4958]: I1206 05:29:17.519916 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wr7h5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ecb41e730cf6bf34bee3ac4577e76690a314f9466e2bd69d896f99091c3657c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24bb15c5aabe30ec2b8cea2cfd92c4a99c8e2170aecfe6cf33f745ecccf32c42\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T05:29:15Z\\\",\\\"message\\\":\\\"2025-12-06T05:28:30+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_38a7603c-7295-4bd3-af02-27bf794eed27\\\\n2025-12-06T05:28:30+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_38a7603c-7295-4bd3-af02-27bf794eed27 to /host/opt/cni/bin/\\\\n2025-12-06T05:28:30Z [verbose] multus-daemon started\\\\n2025-12-06T05:28:30Z [verbose] Readiness Indicator file check\\\\n2025-12-06T05:29:15Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-778ks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wr7h5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:17Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:17 crc kubenswrapper[4958]: I1206 05:29:17.580795 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:17 crc kubenswrapper[4958]: I1206 05:29:17.580853 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:17 crc kubenswrapper[4958]: I1206 05:29:17.580861 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:17 crc kubenswrapper[4958]: I1206 05:29:17.580877 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:17 crc kubenswrapper[4958]: I1206 05:29:17.580888 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:17Z","lastTransitionTime":"2025-12-06T05:29:17Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:17 crc kubenswrapper[4958]: I1206 05:29:17.684125 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:17 crc kubenswrapper[4958]: I1206 05:29:17.684191 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:17 crc kubenswrapper[4958]: I1206 05:29:17.684214 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:17 crc kubenswrapper[4958]: I1206 05:29:17.684239 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:17 crc kubenswrapper[4958]: I1206 05:29:17.684258 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:17Z","lastTransitionTime":"2025-12-06T05:29:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:17 crc kubenswrapper[4958]: I1206 05:29:17.761099 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 05:29:17 crc kubenswrapper[4958]: I1206 05:29:17.761227 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 05:29:17 crc kubenswrapper[4958]: E1206 05:29:17.761313 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 05:29:17 crc kubenswrapper[4958]: I1206 05:29:17.761268 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 05:29:17 crc kubenswrapper[4958]: I1206 05:29:17.761125 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kb98t" Dec 06 05:29:17 crc kubenswrapper[4958]: E1206 05:29:17.761543 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 05:29:17 crc kubenswrapper[4958]: E1206 05:29:17.761584 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 05:29:17 crc kubenswrapper[4958]: E1206 05:29:17.761662 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kb98t" podUID="2c09fca2-7d91-412a-9814-64370d35b3e9" Dec 06 05:29:17 crc kubenswrapper[4958]: I1206 05:29:17.787307 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:17 crc kubenswrapper[4958]: I1206 05:29:17.787360 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:17 crc kubenswrapper[4958]: I1206 05:29:17.787375 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:17 crc kubenswrapper[4958]: I1206 05:29:17.787394 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:17 crc kubenswrapper[4958]: I1206 05:29:17.787408 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:17Z","lastTransitionTime":"2025-12-06T05:29:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:17 crc kubenswrapper[4958]: I1206 05:29:17.890729 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:17 crc kubenswrapper[4958]: I1206 05:29:17.890795 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:17 crc kubenswrapper[4958]: I1206 05:29:17.890813 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:17 crc kubenswrapper[4958]: I1206 05:29:17.890838 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:17 crc kubenswrapper[4958]: I1206 05:29:17.890857 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:17Z","lastTransitionTime":"2025-12-06T05:29:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:17 crc kubenswrapper[4958]: I1206 05:29:17.993917 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:17 crc kubenswrapper[4958]: I1206 05:29:17.993979 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:17 crc kubenswrapper[4958]: I1206 05:29:17.993997 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:17 crc kubenswrapper[4958]: I1206 05:29:17.994024 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:17 crc kubenswrapper[4958]: I1206 05:29:17.994046 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:17Z","lastTransitionTime":"2025-12-06T05:29:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:18 crc kubenswrapper[4958]: I1206 05:29:18.097484 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:18 crc kubenswrapper[4958]: I1206 05:29:18.097532 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:18 crc kubenswrapper[4958]: I1206 05:29:18.097544 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:18 crc kubenswrapper[4958]: I1206 05:29:18.097562 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:18 crc kubenswrapper[4958]: I1206 05:29:18.097575 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:18Z","lastTransitionTime":"2025-12-06T05:29:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:18 crc kubenswrapper[4958]: I1206 05:29:18.199992 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:18 crc kubenswrapper[4958]: I1206 05:29:18.200073 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:18 crc kubenswrapper[4958]: I1206 05:29:18.200093 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:18 crc kubenswrapper[4958]: I1206 05:29:18.200126 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:18 crc kubenswrapper[4958]: I1206 05:29:18.200147 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:18Z","lastTransitionTime":"2025-12-06T05:29:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:18 crc kubenswrapper[4958]: I1206 05:29:18.302514 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:18 crc kubenswrapper[4958]: I1206 05:29:18.302551 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:18 crc kubenswrapper[4958]: I1206 05:29:18.302561 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:18 crc kubenswrapper[4958]: I1206 05:29:18.302575 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:18 crc kubenswrapper[4958]: I1206 05:29:18.302586 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:18Z","lastTransitionTime":"2025-12-06T05:29:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:18 crc kubenswrapper[4958]: I1206 05:29:18.406184 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:18 crc kubenswrapper[4958]: I1206 05:29:18.406231 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:18 crc kubenswrapper[4958]: I1206 05:29:18.406246 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:18 crc kubenswrapper[4958]: I1206 05:29:18.406266 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:18 crc kubenswrapper[4958]: I1206 05:29:18.406280 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:18Z","lastTransitionTime":"2025-12-06T05:29:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:18 crc kubenswrapper[4958]: I1206 05:29:18.509643 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:18 crc kubenswrapper[4958]: I1206 05:29:18.509698 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:18 crc kubenswrapper[4958]: I1206 05:29:18.509717 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:18 crc kubenswrapper[4958]: I1206 05:29:18.509740 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:18 crc kubenswrapper[4958]: I1206 05:29:18.509758 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:18Z","lastTransitionTime":"2025-12-06T05:29:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:18 crc kubenswrapper[4958]: I1206 05:29:18.613137 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:18 crc kubenswrapper[4958]: I1206 05:29:18.613201 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:18 crc kubenswrapper[4958]: I1206 05:29:18.613226 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:18 crc kubenswrapper[4958]: I1206 05:29:18.613258 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:18 crc kubenswrapper[4958]: I1206 05:29:18.613281 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:18Z","lastTransitionTime":"2025-12-06T05:29:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:18 crc kubenswrapper[4958]: I1206 05:29:18.715973 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:18 crc kubenswrapper[4958]: I1206 05:29:18.716016 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:18 crc kubenswrapper[4958]: I1206 05:29:18.716027 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:18 crc kubenswrapper[4958]: I1206 05:29:18.716044 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:18 crc kubenswrapper[4958]: I1206 05:29:18.716055 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:18Z","lastTransitionTime":"2025-12-06T05:29:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:18 crc kubenswrapper[4958]: I1206 05:29:18.819203 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:18 crc kubenswrapper[4958]: I1206 05:29:18.819263 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:18 crc kubenswrapper[4958]: I1206 05:29:18.819284 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:18 crc kubenswrapper[4958]: I1206 05:29:18.819310 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:18 crc kubenswrapper[4958]: I1206 05:29:18.819327 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:18Z","lastTransitionTime":"2025-12-06T05:29:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:18 crc kubenswrapper[4958]: I1206 05:29:18.921681 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:18 crc kubenswrapper[4958]: I1206 05:29:18.921772 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:18 crc kubenswrapper[4958]: I1206 05:29:18.921800 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:18 crc kubenswrapper[4958]: I1206 05:29:18.921834 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:18 crc kubenswrapper[4958]: I1206 05:29:18.921857 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:18Z","lastTransitionTime":"2025-12-06T05:29:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:19 crc kubenswrapper[4958]: I1206 05:29:19.028975 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:19 crc kubenswrapper[4958]: I1206 05:29:19.029064 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:19 crc kubenswrapper[4958]: I1206 05:29:19.029110 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:19 crc kubenswrapper[4958]: I1206 05:29:19.029145 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:19 crc kubenswrapper[4958]: I1206 05:29:19.029177 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:19Z","lastTransitionTime":"2025-12-06T05:29:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:19 crc kubenswrapper[4958]: I1206 05:29:19.132847 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:19 crc kubenswrapper[4958]: I1206 05:29:19.132922 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:19 crc kubenswrapper[4958]: I1206 05:29:19.132947 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:19 crc kubenswrapper[4958]: I1206 05:29:19.132975 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:19 crc kubenswrapper[4958]: I1206 05:29:19.132992 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:19Z","lastTransitionTime":"2025-12-06T05:29:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:19 crc kubenswrapper[4958]: I1206 05:29:19.236197 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:19 crc kubenswrapper[4958]: I1206 05:29:19.236267 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:19 crc kubenswrapper[4958]: I1206 05:29:19.236286 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:19 crc kubenswrapper[4958]: I1206 05:29:19.236313 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:19 crc kubenswrapper[4958]: I1206 05:29:19.236334 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:19Z","lastTransitionTime":"2025-12-06T05:29:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:19 crc kubenswrapper[4958]: I1206 05:29:19.339628 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:19 crc kubenswrapper[4958]: I1206 05:29:19.339691 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:19 crc kubenswrapper[4958]: I1206 05:29:19.339721 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:19 crc kubenswrapper[4958]: I1206 05:29:19.339763 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:19 crc kubenswrapper[4958]: I1206 05:29:19.339789 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:19Z","lastTransitionTime":"2025-12-06T05:29:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:19 crc kubenswrapper[4958]: I1206 05:29:19.443556 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:19 crc kubenswrapper[4958]: I1206 05:29:19.443638 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:19 crc kubenswrapper[4958]: I1206 05:29:19.443666 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:19 crc kubenswrapper[4958]: I1206 05:29:19.443695 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:19 crc kubenswrapper[4958]: I1206 05:29:19.443717 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:19Z","lastTransitionTime":"2025-12-06T05:29:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:19 crc kubenswrapper[4958]: I1206 05:29:19.546241 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:19 crc kubenswrapper[4958]: I1206 05:29:19.546292 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:19 crc kubenswrapper[4958]: I1206 05:29:19.546310 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:19 crc kubenswrapper[4958]: I1206 05:29:19.546334 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:19 crc kubenswrapper[4958]: I1206 05:29:19.546351 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:19Z","lastTransitionTime":"2025-12-06T05:29:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:19 crc kubenswrapper[4958]: I1206 05:29:19.649367 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:19 crc kubenswrapper[4958]: I1206 05:29:19.649425 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:19 crc kubenswrapper[4958]: I1206 05:29:19.649443 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:19 crc kubenswrapper[4958]: I1206 05:29:19.649495 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:19 crc kubenswrapper[4958]: I1206 05:29:19.649514 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:19Z","lastTransitionTime":"2025-12-06T05:29:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:19 crc kubenswrapper[4958]: I1206 05:29:19.752700 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:19 crc kubenswrapper[4958]: I1206 05:29:19.752790 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:19 crc kubenswrapper[4958]: I1206 05:29:19.752817 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:19 crc kubenswrapper[4958]: I1206 05:29:19.752852 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:19 crc kubenswrapper[4958]: I1206 05:29:19.752882 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:19Z","lastTransitionTime":"2025-12-06T05:29:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:19 crc kubenswrapper[4958]: I1206 05:29:19.761127 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 05:29:19 crc kubenswrapper[4958]: I1206 05:29:19.761199 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 05:29:19 crc kubenswrapper[4958]: I1206 05:29:19.761241 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kb98t" Dec 06 05:29:19 crc kubenswrapper[4958]: E1206 05:29:19.761271 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 05:29:19 crc kubenswrapper[4958]: I1206 05:29:19.761354 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 05:29:19 crc kubenswrapper[4958]: E1206 05:29:19.761553 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 05:29:19 crc kubenswrapper[4958]: E1206 05:29:19.761854 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kb98t" podUID="2c09fca2-7d91-412a-9814-64370d35b3e9" Dec 06 05:29:19 crc kubenswrapper[4958]: E1206 05:29:19.761996 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 05:29:19 crc kubenswrapper[4958]: I1206 05:29:19.780882 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wr7h5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ecb41e730cf6bf34bee3ac4577e76690a314f9466e2bd69d896f99091c3657c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24bb15c5aabe30ec2b8cea2cfd92c4a99c8e2170aecfe6cf33f745ecccf32c42\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T05:29:15Z\\\",\\\"message\\\":\\\"2025-12-06T05:28:30+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_38a7603c-7295-4bd3-af02-27bf794eed27\\\\n2025-12-06T05:28:30+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_38a7603c-7295-4bd3-af02-27bf794eed27 to /host/opt/cni/bin/\\\\n2025-12-06T05:28:30Z [verbose] multus-daemon started\\\\n2025-12-06T05:28:30Z [verbose] Readiness Indicator file check\\\\n2025-12-06T05:29:15Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-778ks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wr7h5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:19Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:19 crc kubenswrapper[4958]: I1206 05:29:19.794467 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:19Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:19 crc kubenswrapper[4958]: I1206 05:29:19.810061 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12ef2ec44908dbfbd456d6a97cb50054adb79c14bc062daacdd1be9ec59404b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:19Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:19 crc kubenswrapper[4958]: I1206 05:29:19.821493 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5mx5v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ab56cd4-7270-4252-b6e6-cbc102b84d97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16ff54e7650f6ee3df33d0394fb87b8021695f1697bdf30239e59ab458bea64c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-448jg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5mx5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:19Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:19 crc kubenswrapper[4958]: I1206 05:29:19.841133 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bbf3a5d-dfb7-4f26-a3a5-5a1198cffc78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ac8c174d2f2cc2b18fa67b1969e4ec3130a7a3ea2e1de302b6607334c8295a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a49f9fd54866385a2dd4c0218a2b5a828817807c2688301ea29004ba0578d556\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://799383b7dc23318bd8bdb994e83afadc90864baf5e694f5a7ef2208c6771660c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c9829c79fbd1ef57f374e8bf829162c2ae48074f4d02cc609d5f9b8791c61a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b2ffa013378c2ca81671e521b37d30af422fd6a418d17e7c2ca14b2da5557d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4001890a8c107a9365e884bae7b1d10f11ea1208ada74b61dcd4cf3db767f3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4001890a8c107a9365e884bae7b1d10f11ea1208ada74b61dcd4cf3db767f3fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:19Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:19 crc kubenswrapper[4958]: I1206 05:29:19.855513 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:19 crc kubenswrapper[4958]: I1206 05:29:19.855564 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:19 crc kubenswrapper[4958]: I1206 05:29:19.855579 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:19 crc kubenswrapper[4958]: I1206 05:29:19.855599 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 
05:29:19 crc kubenswrapper[4958]: I1206 05:29:19.855614 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:19Z","lastTransitionTime":"2025-12-06T05:29:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:19 crc kubenswrapper[4958]: I1206 05:29:19.858040 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e7d0d99-10e2-4254-82b5-5490222f80fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f512fa9922310920769bab096c2bb8b2305a17dded545923c16b5e417db50b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dddeac64993fef3f326c165eeeb95bb4b5b9961f07f2b935e592b5aac9a51a32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50e80d39fcfd69db193b9e6c98c92a60983dd0c62b061a69f045193110d159c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-c
ontroller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc69c938c136778135bd2a90d1671af107b76c4571f8e52327c98f120665d7b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:19Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:19 crc kubenswrapper[4958]: I1206 05:29:19.870877 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:19Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:19 crc kubenswrapper[4958]: I1206 05:29:19.890543 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85e394f19d7c9a454876e48621768d9f956790321aeea3140db17586f532e28c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7cba9186696fbc52dacdd8c02c28a44f8e4f11e2bd39065212cddf3acdd7f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:19Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:19 crc kubenswrapper[4958]: I1206 05:29:19.905841 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c13528c0-da5d-4d55-9155-2c29c33edfc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5cd9b0b70572f52b50e426a747be79d9f46795a9f5e890f4c6837ba67a34188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd2256c5c47e4b4d5c0d27e0a687c531d3d4e2162c3439a57acb90784599b441\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernete
s.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5ktnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:19Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:19 crc kubenswrapper[4958]: I1206 05:29:19.920373 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cxvtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48b90387-4ea9-47a3-8aa1-b8b19c5ed186\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce9f0a37477909d730255694146829d5d4527e3c3c3928bdd475b58506bbd544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d3ce1b34e648dbc34656fb0430468d6a998992cdb52f7858c3e9032781cabf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d3ce1b34e648dbc34656fb0430468d6a998992cdb52f7858c3e9032781cab
f3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45259e88c3c2b9523b20a34f02fb59aa21e487dd207c8cb6e29ac571d89cd75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45259e88c3c2b9523b20a34f02fb59aa21e487dd207c8cb6e29ac571d89cd75c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5907422cf6b36b8d77741d339e507eee8a26c62b51c03458fbab243363441b49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5907422cf6b36b8d77741d339e507eee8a26c62b51c03458fbab243363441b49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdba082661b1dd38d5628fedac7afcea2e2960d502c61b9c21c723c0f6be8dab\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdba082661b1dd38d5628fedac7afcea2e2960d502c61b9c21c723c0f6be8dab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afdd138a15c3949f3eb5cdcac4d372fe8db3f1822f9fa33eb7f211ba93764903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afdd138a15c3949f3eb5cdcac4d372fe8db3f1822f9fa33eb7f211ba93764903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09ba0a78226b9b3b3b12202389168f7e17a1f2ba992a22dc9d436d29aac0d327\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09ba0a78226b9b3b3b12202389168f7e17a1f2ba992a22dc9d436d29aac0d327\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cxvtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:19Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:19 crc kubenswrapper[4958]: I1206 05:29:19.940159 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c75c3b8-96d9-442e-b3c4-92d10ad33929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c7cf12b8a7e5da199c6d8c5bdeb41243c911cef6cab0bac81e407502a8ff144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6e7454c626f79f7ef231e33e2917e2a65265fc21ccf0bb078c9b7f5b3cec240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f87617d1f4b9c7ad893a539ddf123f3e213d38e44c084441f565d6535078c7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f6d7aa9cdfe8211f420e0a25851c58e125bd09b9d90ad4724703cc1112e45b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://473d669d19d2507144f95718dc331ea994a36254f7f7b6e2b262a0e9cb46bfc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ac1117aba254b7e048d9dd3feff2587d218f6d7a6f4ccbe455eb2b19e534aa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ffe1b33745a706abdd0f69fb10d2eab95f9526
c0206d20c04b1e4c098295db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52ffe1b33745a706abdd0f69fb10d2eab95f9526c0206d20c04b1e4c098295db\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T05:29:03Z\\\",\\\"message\\\":\\\"t:0,AppProtocol:nil,},ServicePort{Name:metrics,Protocol:TCP,Port:9154,TargetPort:{1 0 metrics},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{dns.operator.openshift.io/daemonset-dns: default,},ClusterIP:10.217.4.10,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.10],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF1206 05:29:02.740208 6561 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": fa\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T05:29:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-f4swt_openshift-ovn-kubernetes(4c75c3b8-96d9-442e-b3c4-92d10ad33929)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90354a506bfb2c066060aaa6c9c48c8fe36acc7b3fb7f76905c7aa2622c8c93e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f16219fb96bfa08d1c4e1fbacac446f07ea75a221d1a6a4eddadd9e587f75ea9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f16219fb96bfa08d1c4e1fbacac446f07ea75a221d1a6a4eddadd9e587f75ea9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-f4swt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:19Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:19 crc kubenswrapper[4958]: I1206 05:29:19.951801 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"326ec8f3-f884-4d81-85c0-0fb98ff16b80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ab8b97e84e2f0ddf99e989d15f84695ae263c722141d62885129f9eb48226a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfa163f9e3eb5b2418860c0fa7fcbf96990229ac20da3d72cf18d3c326f466b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c
97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15f70e4124ff7d2705704dab6bf80d6ee77c2181e9bb400f7af227e9c18f0d20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aabbafda3fdfc2434de99224b35e72d2d536ed8b4af7dbdfa16058ab7f638531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aabbafda3fdfc2434de99224b35e72d2d536ed8b4af7dbdfa16058ab7f638531\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:19Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:19 crc kubenswrapper[4958]: I1206 05:29:19.958035 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:19 crc kubenswrapper[4958]: I1206 05:29:19.958072 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:19 crc kubenswrapper[4958]: I1206 05:29:19.958085 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:19 crc kubenswrapper[4958]: I1206 05:29:19.958103 4958 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:19 crc kubenswrapper[4958]: I1206 05:29:19.958114 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:19Z","lastTransitionTime":"2025-12-06T05:29:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:19 crc kubenswrapper[4958]: I1206 05:29:19.963332 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22338e19d740d76b4395405aae08f7cc69ca43dcf680a2bac1323c989e74aa02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:19Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:19 crc kubenswrapper[4958]: I1206 05:29:19.974762 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kb98t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c09fca2-7d91-412a-9814-64370d35b3e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pscv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pscv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kb98t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:19Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:19 crc kubenswrapper[4958]: I1206 05:29:19.995440 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-j25xk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f69a7b70-83e2-4775-872c-cc414f3d08ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3077eaf7e49467b9443569425f7666246ccd40207193f35407bd5ca69799be6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qktwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c146f2ee32c0a80c9bb07f7b61d094a36ba1bdd250b578479193dc8e50dc7cad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qktwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-j25xk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:19Z is after 2025-08-24T17:21:41Z" Dec 06 
05:29:20 crc kubenswrapper[4958]: I1206 05:29:20.019977 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3942544-663d-452b-851f-7ae1951315d0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c605d49963ffec3a03f9166f2a4b945adca772c1e6f9ef0c43e3b43498bb520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fa2e5229064a9479f641c0c6d934e8eda769ee88be13401c43ae136ce8bc24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0fa5b094aed0445fa892dd4d08fa83de565bda82c6ca9b9d536a3275df19b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"lo
g-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9356ce9f50a037c7bbb99085c8d3a1cf6648fca53bed4eb86d1be5cb82386bfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bb5270cbb6f2e29e1393dc0724b38332f5d2b7bd2947352c848a22efbcd979c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4460d6ee5ad193a5fdd65c4ddf2e2837db3e1a0a5f8f32b84ad83cc74cc6c779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4460d6ee5ad193a5fdd65c4ddf2e2837db3e1a0a5f8f32b84ad83cc74cc6c779\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d0adb65980c4e04df9b38b248a2d6bf58ac214ec2fe6ee6f8fbee38dc114904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d0adb65980c4e04df9b38b248a2d6bf58ac214ec2fe6ee6f8fbee38dc114904\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:12Z\\\",\\\"reas
on\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b4a1b86d7480be2b839295c4c7dc60abd79e17d12514d7a7f90453498dd586e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4a1b86d7480be2b839295c4c7dc60abd79e17d12514d7a7f90453498dd586e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:20Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:20 crc kubenswrapper[4958]: I1206 05:29:20.032606 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:20Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:20 crc kubenswrapper[4958]: I1206 05:29:20.042999 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bxxwq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"802ce14e-baaf-4d0d-87e3-3457209d8cda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://691435e89fcfa5b2dbd5ee54379f791a3e4f4df43c500a83bc61f7b20aca7ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlz6h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bxxwq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:20Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:20 crc kubenswrapper[4958]: I1206 05:29:20.063291 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:20 crc kubenswrapper[4958]: I1206 05:29:20.063338 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:20 crc kubenswrapper[4958]: I1206 05:29:20.063350 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:20 crc kubenswrapper[4958]: I1206 05:29:20.063369 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:20 crc kubenswrapper[4958]: I1206 05:29:20.063380 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:20Z","lastTransitionTime":"2025-12-06T05:29:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:20 crc kubenswrapper[4958]: I1206 05:29:20.166531 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:20 crc kubenswrapper[4958]: I1206 05:29:20.166602 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:20 crc kubenswrapper[4958]: I1206 05:29:20.166620 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:20 crc kubenswrapper[4958]: I1206 05:29:20.167045 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:20 crc kubenswrapper[4958]: I1206 05:29:20.167107 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:20Z","lastTransitionTime":"2025-12-06T05:29:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:20 crc kubenswrapper[4958]: I1206 05:29:20.269929 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:20 crc kubenswrapper[4958]: I1206 05:29:20.270550 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:20 crc kubenswrapper[4958]: I1206 05:29:20.270677 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:20 crc kubenswrapper[4958]: I1206 05:29:20.270782 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:20 crc kubenswrapper[4958]: I1206 05:29:20.270863 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:20Z","lastTransitionTime":"2025-12-06T05:29:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:20 crc kubenswrapper[4958]: I1206 05:29:20.374075 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:20 crc kubenswrapper[4958]: I1206 05:29:20.374366 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:20 crc kubenswrapper[4958]: I1206 05:29:20.374541 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:20 crc kubenswrapper[4958]: I1206 05:29:20.374658 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:20 crc kubenswrapper[4958]: I1206 05:29:20.374770 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:20Z","lastTransitionTime":"2025-12-06T05:29:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:20 crc kubenswrapper[4958]: I1206 05:29:20.476998 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:20 crc kubenswrapper[4958]: I1206 05:29:20.477054 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:20 crc kubenswrapper[4958]: I1206 05:29:20.477074 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:20 crc kubenswrapper[4958]: I1206 05:29:20.477098 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:20 crc kubenswrapper[4958]: I1206 05:29:20.477114 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:20Z","lastTransitionTime":"2025-12-06T05:29:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:20 crc kubenswrapper[4958]: I1206 05:29:20.580427 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:20 crc kubenswrapper[4958]: I1206 05:29:20.580732 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:20 crc kubenswrapper[4958]: I1206 05:29:20.580822 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:20 crc kubenswrapper[4958]: I1206 05:29:20.580910 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:20 crc kubenswrapper[4958]: I1206 05:29:20.580996 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:20Z","lastTransitionTime":"2025-12-06T05:29:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:20 crc kubenswrapper[4958]: I1206 05:29:20.683778 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:20 crc kubenswrapper[4958]: I1206 05:29:20.684165 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:20 crc kubenswrapper[4958]: I1206 05:29:20.684462 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:20 crc kubenswrapper[4958]: I1206 05:29:20.684682 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:20 crc kubenswrapper[4958]: I1206 05:29:20.684844 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:20Z","lastTransitionTime":"2025-12-06T05:29:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:20 crc kubenswrapper[4958]: I1206 05:29:20.775512 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Dec 06 05:29:20 crc kubenswrapper[4958]: I1206 05:29:20.787813 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:20 crc kubenswrapper[4958]: I1206 05:29:20.787970 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:20 crc kubenswrapper[4958]: I1206 05:29:20.788109 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:20 crc kubenswrapper[4958]: I1206 05:29:20.788246 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:20 crc kubenswrapper[4958]: I1206 05:29:20.788378 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:20Z","lastTransitionTime":"2025-12-06T05:29:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:20 crc kubenswrapper[4958]: I1206 05:29:20.891089 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:20 crc kubenswrapper[4958]: I1206 05:29:20.891358 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:20 crc kubenswrapper[4958]: I1206 05:29:20.891605 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:20 crc kubenswrapper[4958]: I1206 05:29:20.891788 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:20 crc kubenswrapper[4958]: I1206 05:29:20.891965 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:20Z","lastTransitionTime":"2025-12-06T05:29:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:20 crc kubenswrapper[4958]: I1206 05:29:20.994784 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:20 crc kubenswrapper[4958]: I1206 05:29:20.995028 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:20 crc kubenswrapper[4958]: I1206 05:29:20.995130 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:20 crc kubenswrapper[4958]: I1206 05:29:20.995230 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:20 crc kubenswrapper[4958]: I1206 05:29:20.995316 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:20Z","lastTransitionTime":"2025-12-06T05:29:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:21 crc kubenswrapper[4958]: I1206 05:29:21.098280 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:21 crc kubenswrapper[4958]: I1206 05:29:21.098588 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:21 crc kubenswrapper[4958]: I1206 05:29:21.098725 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:21 crc kubenswrapper[4958]: I1206 05:29:21.098854 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:21 crc kubenswrapper[4958]: I1206 05:29:21.098976 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:21Z","lastTransitionTime":"2025-12-06T05:29:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:21 crc kubenswrapper[4958]: I1206 05:29:21.201027 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:21 crc kubenswrapper[4958]: I1206 05:29:21.201264 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:21 crc kubenswrapper[4958]: I1206 05:29:21.201365 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:21 crc kubenswrapper[4958]: I1206 05:29:21.201540 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:21 crc kubenswrapper[4958]: I1206 05:29:21.201680 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:21Z","lastTransitionTime":"2025-12-06T05:29:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:21 crc kubenswrapper[4958]: I1206 05:29:21.305064 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:21 crc kubenswrapper[4958]: I1206 05:29:21.305705 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:21 crc kubenswrapper[4958]: I1206 05:29:21.305828 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:21 crc kubenswrapper[4958]: I1206 05:29:21.305963 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:21 crc kubenswrapper[4958]: I1206 05:29:21.306145 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:21Z","lastTransitionTime":"2025-12-06T05:29:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:21 crc kubenswrapper[4958]: I1206 05:29:21.408815 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:21 crc kubenswrapper[4958]: I1206 05:29:21.409058 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:21 crc kubenswrapper[4958]: I1206 05:29:21.409152 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:21 crc kubenswrapper[4958]: I1206 05:29:21.409242 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:21 crc kubenswrapper[4958]: I1206 05:29:21.409325 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:21Z","lastTransitionTime":"2025-12-06T05:29:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:21 crc kubenswrapper[4958]: I1206 05:29:21.511756 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:21 crc kubenswrapper[4958]: I1206 05:29:21.511916 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:21 crc kubenswrapper[4958]: I1206 05:29:21.512012 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:21 crc kubenswrapper[4958]: I1206 05:29:21.512101 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:21 crc kubenswrapper[4958]: I1206 05:29:21.512176 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:21Z","lastTransitionTime":"2025-12-06T05:29:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:21 crc kubenswrapper[4958]: I1206 05:29:21.615445 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:21 crc kubenswrapper[4958]: I1206 05:29:21.615556 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:21 crc kubenswrapper[4958]: I1206 05:29:21.615572 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:21 crc kubenswrapper[4958]: I1206 05:29:21.615600 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:21 crc kubenswrapper[4958]: I1206 05:29:21.615616 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:21Z","lastTransitionTime":"2025-12-06T05:29:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:21 crc kubenswrapper[4958]: I1206 05:29:21.718121 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:21 crc kubenswrapper[4958]: I1206 05:29:21.718222 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:21 crc kubenswrapper[4958]: I1206 05:29:21.718247 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:21 crc kubenswrapper[4958]: I1206 05:29:21.718281 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:21 crc kubenswrapper[4958]: I1206 05:29:21.718311 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:21Z","lastTransitionTime":"2025-12-06T05:29:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:21 crc kubenswrapper[4958]: I1206 05:29:21.761430 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 05:29:21 crc kubenswrapper[4958]: I1206 05:29:21.761537 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 05:29:21 crc kubenswrapper[4958]: E1206 05:29:21.761688 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 05:29:21 crc kubenswrapper[4958]: I1206 05:29:21.761771 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kb98t" Dec 06 05:29:21 crc kubenswrapper[4958]: I1206 05:29:21.761786 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 05:29:21 crc kubenswrapper[4958]: E1206 05:29:21.761935 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 05:29:21 crc kubenswrapper[4958]: E1206 05:29:21.762138 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kb98t" podUID="2c09fca2-7d91-412a-9814-64370d35b3e9" Dec 06 05:29:21 crc kubenswrapper[4958]: E1206 05:29:21.762297 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 05:29:21 crc kubenswrapper[4958]: I1206 05:29:21.821344 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:21 crc kubenswrapper[4958]: I1206 05:29:21.821421 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:21 crc kubenswrapper[4958]: I1206 05:29:21.821437 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:21 crc kubenswrapper[4958]: I1206 05:29:21.821457 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:21 crc kubenswrapper[4958]: I1206 05:29:21.821485 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:21Z","lastTransitionTime":"2025-12-06T05:29:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:21 crc kubenswrapper[4958]: I1206 05:29:21.924489 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:21 crc kubenswrapper[4958]: I1206 05:29:21.924540 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:21 crc kubenswrapper[4958]: I1206 05:29:21.924645 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:21 crc kubenswrapper[4958]: I1206 05:29:21.924689 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:21 crc kubenswrapper[4958]: I1206 05:29:21.924702 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:21Z","lastTransitionTime":"2025-12-06T05:29:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:22 crc kubenswrapper[4958]: I1206 05:29:22.027399 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:22 crc kubenswrapper[4958]: I1206 05:29:22.027461 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:22 crc kubenswrapper[4958]: I1206 05:29:22.027506 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:22 crc kubenswrapper[4958]: I1206 05:29:22.027534 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:22 crc kubenswrapper[4958]: I1206 05:29:22.027553 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:22Z","lastTransitionTime":"2025-12-06T05:29:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:22 crc kubenswrapper[4958]: I1206 05:29:22.132454 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:22 crc kubenswrapper[4958]: I1206 05:29:22.132531 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:22 crc kubenswrapper[4958]: I1206 05:29:22.132542 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:22 crc kubenswrapper[4958]: I1206 05:29:22.132558 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:22 crc kubenswrapper[4958]: I1206 05:29:22.132568 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:22Z","lastTransitionTime":"2025-12-06T05:29:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:22 crc kubenswrapper[4958]: I1206 05:29:22.235643 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:22 crc kubenswrapper[4958]: I1206 05:29:22.235743 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:22 crc kubenswrapper[4958]: I1206 05:29:22.235795 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:22 crc kubenswrapper[4958]: I1206 05:29:22.235832 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:22 crc kubenswrapper[4958]: I1206 05:29:22.235884 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:22Z","lastTransitionTime":"2025-12-06T05:29:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:22 crc kubenswrapper[4958]: I1206 05:29:22.339805 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:22 crc kubenswrapper[4958]: I1206 05:29:22.339891 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:22 crc kubenswrapper[4958]: I1206 05:29:22.339921 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:22 crc kubenswrapper[4958]: I1206 05:29:22.339958 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:22 crc kubenswrapper[4958]: I1206 05:29:22.339984 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:22Z","lastTransitionTime":"2025-12-06T05:29:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:22 crc kubenswrapper[4958]: I1206 05:29:22.450992 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:22 crc kubenswrapper[4958]: I1206 05:29:22.451038 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:22 crc kubenswrapper[4958]: I1206 05:29:22.451048 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:22 crc kubenswrapper[4958]: I1206 05:29:22.451066 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:22 crc kubenswrapper[4958]: I1206 05:29:22.451079 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:22Z","lastTransitionTime":"2025-12-06T05:29:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:22 crc kubenswrapper[4958]: I1206 05:29:22.554435 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:22 crc kubenswrapper[4958]: I1206 05:29:22.554493 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:22 crc kubenswrapper[4958]: I1206 05:29:22.554507 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:22 crc kubenswrapper[4958]: I1206 05:29:22.554526 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:22 crc kubenswrapper[4958]: I1206 05:29:22.554538 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:22Z","lastTransitionTime":"2025-12-06T05:29:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:22 crc kubenswrapper[4958]: I1206 05:29:22.657846 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:22 crc kubenswrapper[4958]: I1206 05:29:22.658409 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:22 crc kubenswrapper[4958]: I1206 05:29:22.658799 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:22 crc kubenswrapper[4958]: I1206 05:29:22.659128 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:22 crc kubenswrapper[4958]: I1206 05:29:22.659325 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:22Z","lastTransitionTime":"2025-12-06T05:29:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:22 crc kubenswrapper[4958]: I1206 05:29:22.765144 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:22 crc kubenswrapper[4958]: I1206 05:29:22.765219 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:22 crc kubenswrapper[4958]: I1206 05:29:22.765234 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:22 crc kubenswrapper[4958]: I1206 05:29:22.765257 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:22 crc kubenswrapper[4958]: I1206 05:29:22.765271 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:22Z","lastTransitionTime":"2025-12-06T05:29:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:22 crc kubenswrapper[4958]: I1206 05:29:22.868003 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:22 crc kubenswrapper[4958]: I1206 05:29:22.868064 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:22 crc kubenswrapper[4958]: I1206 05:29:22.868082 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:22 crc kubenswrapper[4958]: I1206 05:29:22.868107 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:22 crc kubenswrapper[4958]: I1206 05:29:22.868125 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:22Z","lastTransitionTime":"2025-12-06T05:29:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:22 crc kubenswrapper[4958]: I1206 05:29:22.971406 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:22 crc kubenswrapper[4958]: I1206 05:29:22.971495 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:22 crc kubenswrapper[4958]: I1206 05:29:22.971517 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:22 crc kubenswrapper[4958]: I1206 05:29:22.971540 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:22 crc kubenswrapper[4958]: I1206 05:29:22.971557 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:22Z","lastTransitionTime":"2025-12-06T05:29:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:23 crc kubenswrapper[4958]: I1206 05:29:23.074216 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:23 crc kubenswrapper[4958]: I1206 05:29:23.074255 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:23 crc kubenswrapper[4958]: I1206 05:29:23.074265 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:23 crc kubenswrapper[4958]: I1206 05:29:23.074281 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:23 crc kubenswrapper[4958]: I1206 05:29:23.074290 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:23Z","lastTransitionTime":"2025-12-06T05:29:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:23 crc kubenswrapper[4958]: I1206 05:29:23.176861 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:23 crc kubenswrapper[4958]: I1206 05:29:23.176926 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:23 crc kubenswrapper[4958]: I1206 05:29:23.176943 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:23 crc kubenswrapper[4958]: I1206 05:29:23.176967 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:23 crc kubenswrapper[4958]: I1206 05:29:23.176989 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:23Z","lastTransitionTime":"2025-12-06T05:29:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:23 crc kubenswrapper[4958]: I1206 05:29:23.279652 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:23 crc kubenswrapper[4958]: I1206 05:29:23.279761 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:23 crc kubenswrapper[4958]: I1206 05:29:23.279783 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:23 crc kubenswrapper[4958]: I1206 05:29:23.279810 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:23 crc kubenswrapper[4958]: I1206 05:29:23.279828 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:23Z","lastTransitionTime":"2025-12-06T05:29:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:23 crc kubenswrapper[4958]: I1206 05:29:23.383140 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:23 crc kubenswrapper[4958]: I1206 05:29:23.383240 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:23 crc kubenswrapper[4958]: I1206 05:29:23.383259 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:23 crc kubenswrapper[4958]: I1206 05:29:23.383284 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:23 crc kubenswrapper[4958]: I1206 05:29:23.383306 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:23Z","lastTransitionTime":"2025-12-06T05:29:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:23 crc kubenswrapper[4958]: I1206 05:29:23.486862 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:23 crc kubenswrapper[4958]: I1206 05:29:23.486933 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:23 crc kubenswrapper[4958]: I1206 05:29:23.486951 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:23 crc kubenswrapper[4958]: I1206 05:29:23.486977 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:23 crc kubenswrapper[4958]: I1206 05:29:23.486996 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:23Z","lastTransitionTime":"2025-12-06T05:29:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:23 crc kubenswrapper[4958]: I1206 05:29:23.589824 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:23 crc kubenswrapper[4958]: I1206 05:29:23.589894 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:23 crc kubenswrapper[4958]: I1206 05:29:23.589912 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:23 crc kubenswrapper[4958]: I1206 05:29:23.589940 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:23 crc kubenswrapper[4958]: I1206 05:29:23.589967 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:23Z","lastTransitionTime":"2025-12-06T05:29:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:23 crc kubenswrapper[4958]: I1206 05:29:23.693067 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:23 crc kubenswrapper[4958]: I1206 05:29:23.693132 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:23 crc kubenswrapper[4958]: I1206 05:29:23.693155 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:23 crc kubenswrapper[4958]: I1206 05:29:23.693187 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:23 crc kubenswrapper[4958]: I1206 05:29:23.693209 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:23Z","lastTransitionTime":"2025-12-06T05:29:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:23 crc kubenswrapper[4958]: I1206 05:29:23.761071 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 05:29:23 crc kubenswrapper[4958]: I1206 05:29:23.761117 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 05:29:23 crc kubenswrapper[4958]: I1206 05:29:23.761232 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kb98t" Dec 06 05:29:23 crc kubenswrapper[4958]: E1206 05:29:23.761382 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 05:29:23 crc kubenswrapper[4958]: I1206 05:29:23.761403 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 05:29:23 crc kubenswrapper[4958]: E1206 05:29:23.761536 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 05:29:23 crc kubenswrapper[4958]: E1206 05:29:23.761644 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 05:29:23 crc kubenswrapper[4958]: E1206 05:29:23.762002 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kb98t" podUID="2c09fca2-7d91-412a-9814-64370d35b3e9" Dec 06 05:29:23 crc kubenswrapper[4958]: I1206 05:29:23.795818 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:23 crc kubenswrapper[4958]: I1206 05:29:23.795864 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:23 crc kubenswrapper[4958]: I1206 05:29:23.795881 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:23 crc kubenswrapper[4958]: I1206 05:29:23.795903 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:23 crc kubenswrapper[4958]: I1206 05:29:23.795919 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:23Z","lastTransitionTime":"2025-12-06T05:29:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:23 crc kubenswrapper[4958]: I1206 05:29:23.898147 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:23 crc kubenswrapper[4958]: I1206 05:29:23.898236 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:23 crc kubenswrapper[4958]: I1206 05:29:23.898261 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:23 crc kubenswrapper[4958]: I1206 05:29:23.898300 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:23 crc kubenswrapper[4958]: I1206 05:29:23.898325 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:23Z","lastTransitionTime":"2025-12-06T05:29:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:24 crc kubenswrapper[4958]: I1206 05:29:24.001588 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:24 crc kubenswrapper[4958]: I1206 05:29:24.001658 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:24 crc kubenswrapper[4958]: I1206 05:29:24.001676 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:24 crc kubenswrapper[4958]: I1206 05:29:24.001704 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:24 crc kubenswrapper[4958]: I1206 05:29:24.001732 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:24Z","lastTransitionTime":"2025-12-06T05:29:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:24 crc kubenswrapper[4958]: I1206 05:29:24.033806 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:24 crc kubenswrapper[4958]: I1206 05:29:24.033850 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:24 crc kubenswrapper[4958]: I1206 05:29:24.033867 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:24 crc kubenswrapper[4958]: I1206 05:29:24.033884 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:24 crc kubenswrapper[4958]: I1206 05:29:24.033896 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:24Z","lastTransitionTime":"2025-12-06T05:29:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:24 crc kubenswrapper[4958]: E1206 05:29:24.053481 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ae601d9-1e3a-4939-b6d5-fbef7be2f380\\\",\\\"systemUUID\\\":\\\"d8c60597-c05a-4627-8199-844ddb77ec1c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:24Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:24 crc kubenswrapper[4958]: I1206 05:29:24.057033 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:24 crc kubenswrapper[4958]: I1206 05:29:24.057070 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 06 05:29:24 crc kubenswrapper[4958]: I1206 05:29:24.057081 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:24 crc kubenswrapper[4958]: I1206 05:29:24.057095 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:24 crc kubenswrapper[4958]: I1206 05:29:24.057106 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:24Z","lastTransitionTime":"2025-12-06T05:29:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:24 crc kubenswrapper[4958]: E1206 05:29:24.072779 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ae601d9-1e3a-4939-b6d5-fbef7be2f380\\\",\\\"systemUUID\\\":\\\"d8c60597-c05a-4627-8199-844ddb77ec1c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:24Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:24 crc kubenswrapper[4958]: I1206 05:29:24.076938 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:24 crc kubenswrapper[4958]: I1206 05:29:24.077080 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 06 05:29:24 crc kubenswrapper[4958]: I1206 05:29:24.077125 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:24 crc kubenswrapper[4958]: I1206 05:29:24.077160 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:24 crc kubenswrapper[4958]: I1206 05:29:24.077190 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:24Z","lastTransitionTime":"2025-12-06T05:29:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:24 crc kubenswrapper[4958]: E1206 05:29:24.091979 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ae601d9-1e3a-4939-b6d5-fbef7be2f380\\\",\\\"systemUUID\\\":\\\"d8c60597-c05a-4627-8199-844ddb77ec1c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:24Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:24 crc kubenswrapper[4958]: I1206 05:29:24.096596 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:24 crc kubenswrapper[4958]: I1206 05:29:24.096683 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 06 05:29:24 crc kubenswrapper[4958]: I1206 05:29:24.096773 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:24 crc kubenswrapper[4958]: I1206 05:29:24.096848 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:24 crc kubenswrapper[4958]: I1206 05:29:24.096871 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:24Z","lastTransitionTime":"2025-12-06T05:29:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:24 crc kubenswrapper[4958]: E1206 05:29:24.120858 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ae601d9-1e3a-4939-b6d5-fbef7be2f380\\\",\\\"systemUUID\\\":\\\"d8c60597-c05a-4627-8199-844ddb77ec1c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:24Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:24 crc kubenswrapper[4958]: I1206 05:29:24.125992 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:24 crc kubenswrapper[4958]: I1206 05:29:24.126049 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 06 05:29:24 crc kubenswrapper[4958]: I1206 05:29:24.126059 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:24 crc kubenswrapper[4958]: I1206 05:29:24.126075 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:24 crc kubenswrapper[4958]: I1206 05:29:24.126101 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:24Z","lastTransitionTime":"2025-12-06T05:29:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:24 crc kubenswrapper[4958]: E1206 05:29:24.146216 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:24Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ae601d9-1e3a-4939-b6d5-fbef7be2f380\\\",\\\"systemUUID\\\":\\\"d8c60597-c05a-4627-8199-844ddb77ec1c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:24Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:24 crc kubenswrapper[4958]: E1206 05:29:24.146367 4958 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 06 05:29:24 crc kubenswrapper[4958]: I1206 05:29:24.148478 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
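Every status-update retry above dies at the same point: the node.network-node-identity.openshift.io webhook on 127.0.0.1:9743 serves a certificate whose notAfter (2025-08-24T17:21:41Z) is long past the node clock (2025-12-06), so the kubelet exhausts its retry budget without the patch ever landing. A minimal diagnostic sketch, assuming Python 3 on the node with the third-party cryptography package available (the address and port are the ones quoted in the error):

    import socket
    import ssl

    from cryptography import x509  # third-party package; assumed installed

    # Fetch the leaf certificate served on 127.0.0.1:9743 without trusting it,
    # then print its validity window; verification is exactly what already fails.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE

    with socket.create_connection(("127.0.0.1", 9743), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname="127.0.0.1") as tls:
            der = tls.getpeercert(binary_form=True)  # DER-encoded leaf cert

    cert = x509.load_der_x509_certificate(der)
    print("subject:  ", cert.subject.rfc4514_string())
    print("notBefore:", cert.not_valid_before)
    print("notAfter: ", cert.not_valid_after)  # the log says 2025-08-24T17:21:41Z

If notAfter matches the date in the error, the certificate really is expired and the fix lies in rotating the webhook's serving certificate, not in the kubelet.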
event="NodeHasSufficientMemory" Dec 06 05:29:24 crc kubenswrapper[4958]: I1206 05:29:24.148511 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:24 crc kubenswrapper[4958]: I1206 05:29:24.148618 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:24 crc kubenswrapper[4958]: I1206 05:29:24.148642 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:24 crc kubenswrapper[4958]: I1206 05:29:24.148653 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:24Z","lastTransitionTime":"2025-12-06T05:29:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:24 crc kubenswrapper[4958]: I1206 05:29:24.251431 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:24 crc kubenswrapper[4958]: I1206 05:29:24.251674 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:24 crc kubenswrapper[4958]: I1206 05:29:24.251691 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:24 crc kubenswrapper[4958]: I1206 05:29:24.251729 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:24 crc kubenswrapper[4958]: I1206 05:29:24.251760 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:24Z","lastTransitionTime":"2025-12-06T05:29:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:24 crc kubenswrapper[4958]: I1206 05:29:24.354809 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:24 crc kubenswrapper[4958]: I1206 05:29:24.354880 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:24 crc kubenswrapper[4958]: I1206 05:29:24.354899 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:24 crc kubenswrapper[4958]: I1206 05:29:24.354924 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:24 crc kubenswrapper[4958]: I1206 05:29:24.354940 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:24Z","lastTransitionTime":"2025-12-06T05:29:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:24 crc kubenswrapper[4958]: I1206 05:29:24.458555 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:24 crc kubenswrapper[4958]: I1206 05:29:24.458622 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:24 crc kubenswrapper[4958]: I1206 05:29:24.458643 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:24 crc kubenswrapper[4958]: I1206 05:29:24.458673 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:24 crc kubenswrapper[4958]: I1206 05:29:24.458694 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:24Z","lastTransitionTime":"2025-12-06T05:29:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:24 crc kubenswrapper[4958]: I1206 05:29:24.561594 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:24 crc kubenswrapper[4958]: I1206 05:29:24.561680 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:24 crc kubenswrapper[4958]: I1206 05:29:24.561701 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:24 crc kubenswrapper[4958]: I1206 05:29:24.561732 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:24 crc kubenswrapper[4958]: I1206 05:29:24.561758 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:24Z","lastTransitionTime":"2025-12-06T05:29:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:24 crc kubenswrapper[4958]: I1206 05:29:24.665279 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:24 crc kubenswrapper[4958]: I1206 05:29:24.665391 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:24 crc kubenswrapper[4958]: I1206 05:29:24.665424 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:24 crc kubenswrapper[4958]: I1206 05:29:24.665505 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:24 crc kubenswrapper[4958]: I1206 05:29:24.665529 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:24Z","lastTransitionTime":"2025-12-06T05:29:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:24 crc kubenswrapper[4958]: I1206 05:29:24.768905 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:24 crc kubenswrapper[4958]: I1206 05:29:24.769012 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:24 crc kubenswrapper[4958]: I1206 05:29:24.769036 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:24 crc kubenswrapper[4958]: I1206 05:29:24.769059 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:24 crc kubenswrapper[4958]: I1206 05:29:24.769075 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:24Z","lastTransitionTime":"2025-12-06T05:29:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:24 crc kubenswrapper[4958]: I1206 05:29:24.872234 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:24 crc kubenswrapper[4958]: I1206 05:29:24.872280 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:24 crc kubenswrapper[4958]: I1206 05:29:24.872292 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:24 crc kubenswrapper[4958]: I1206 05:29:24.872312 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:24 crc kubenswrapper[4958]: I1206 05:29:24.872325 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:24Z","lastTransitionTime":"2025-12-06T05:29:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:24 crc kubenswrapper[4958]: I1206 05:29:24.975625 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:24 crc kubenswrapper[4958]: I1206 05:29:24.975688 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:24 crc kubenswrapper[4958]: I1206 05:29:24.975701 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:24 crc kubenswrapper[4958]: I1206 05:29:24.975722 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:24 crc kubenswrapper[4958]: I1206 05:29:24.975736 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:24Z","lastTransitionTime":"2025-12-06T05:29:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
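Each ~100 ms block above is the kubelet re-recording the same four node events plus the same Ready=False condition while it waits for the network plugin. When an excerpt like this runs long, a tally reads better than the raw stream; a small triage sketch (kubelet.log is a placeholder for a saved dump such as one from journalctl -u kubelet):

    import re
    import sys
    from collections import Counter

    # Count the event="..." values per line of a saved kubelet journal dump,
    # collapsing the repeated NodeHasSufficient*/NodeNotReady cycles into totals.
    EVENT_RE = re.compile(r'event="([A-Za-z]+)"')

    counts = Counter()
    path = sys.argv[1] if len(sys.argv) > 1 else "kubelet.log"
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            counts.update(EVENT_RE.findall(line))

    for event, n in counts.most_common():
        print(f"{n:6d}  {event}")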
Dec 06 05:29:25 crc kubenswrapper[4958]: I1206 05:29:25.078710 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 05:29:25 crc kubenswrapper[4958]: I1206 05:29:25.078778 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 05:29:25 crc kubenswrapper[4958]: I1206 05:29:25.078797 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 05:29:25 crc kubenswrapper[4958]: I1206 05:29:25.078817 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 05:29:25 crc kubenswrapper[4958]: I1206 05:29:25.078830 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:25Z","lastTransitionTime":"2025-12-06T05:29:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 05:29:25 crc kubenswrapper[4958]: I1206 05:29:25.181843 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 05:29:25 crc kubenswrapper[4958]: I1206 05:29:25.181909 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 05:29:25 crc kubenswrapper[4958]: I1206 05:29:25.181932 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 05:29:25 crc kubenswrapper[4958]: I1206 05:29:25.181957 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 05:29:25 crc kubenswrapper[4958]: I1206 05:29:25.181975 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:25Z","lastTransitionTime":"2025-12-06T05:29:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 05:29:25 crc kubenswrapper[4958]: I1206 05:29:25.284924 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 05:29:25 crc kubenswrapper[4958]: I1206 05:29:25.284988 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 05:29:25 crc kubenswrapper[4958]: I1206 05:29:25.285007 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 05:29:25 crc kubenswrapper[4958]: I1206 05:29:25.285036 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 05:29:25 crc kubenswrapper[4958]: I1206 05:29:25.285055 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:25Z","lastTransitionTime":"2025-12-06T05:29:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 05:29:25 crc kubenswrapper[4958]: I1206 05:29:25.388694 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 05:29:25 crc kubenswrapper[4958]: I1206 05:29:25.388785 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 05:29:25 crc kubenswrapper[4958]: I1206 05:29:25.388810 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 05:29:25 crc kubenswrapper[4958]: I1206 05:29:25.388840 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 05:29:25 crc kubenswrapper[4958]: I1206 05:29:25.388862 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:25Z","lastTransitionTime":"2025-12-06T05:29:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 05:29:25 crc kubenswrapper[4958]: I1206 05:29:25.491394 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 05:29:25 crc kubenswrapper[4958]: I1206 05:29:25.491447 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 05:29:25 crc kubenswrapper[4958]: I1206 05:29:25.491462 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 05:29:25 crc kubenswrapper[4958]: I1206 05:29:25.491509 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 05:29:25 crc kubenswrapper[4958]: I1206 05:29:25.491524 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:25Z","lastTransitionTime":"2025-12-06T05:29:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 05:29:25 crc kubenswrapper[4958]: I1206 05:29:25.595461 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 05:29:25 crc kubenswrapper[4958]: I1206 05:29:25.595576 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 05:29:25 crc kubenswrapper[4958]: I1206 05:29:25.595601 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 05:29:25 crc kubenswrapper[4958]: I1206 05:29:25.595632 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 05:29:25 crc kubenswrapper[4958]: I1206 05:29:25.595656 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:25Z","lastTransitionTime":"2025-12-06T05:29:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 05:29:25 crc kubenswrapper[4958]: I1206 05:29:25.698981 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 05:29:25 crc kubenswrapper[4958]: I1206 05:29:25.699034 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 05:29:25 crc kubenswrapper[4958]: I1206 05:29:25.699045 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 05:29:25 crc kubenswrapper[4958]: I1206 05:29:25.699066 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 05:29:25 crc kubenswrapper[4958]: I1206 05:29:25.699079 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:25Z","lastTransitionTime":"2025-12-06T05:29:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 05:29:25 crc kubenswrapper[4958]: I1206 05:29:25.762075 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 06 05:29:25 crc kubenswrapper[4958]: I1206 05:29:25.762124 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kb98t"
Dec 06 05:29:25 crc kubenswrapper[4958]: I1206 05:29:25.762153 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 06 05:29:25 crc kubenswrapper[4958]: I1206 05:29:25.762247 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 06 05:29:25 crc kubenswrapper[4958]: E1206 05:29:25.762357 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 06 05:29:25 crc kubenswrapper[4958]: E1206 05:29:25.762454 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 06 05:29:25 crc kubenswrapper[4958]: E1206 05:29:25.762638 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 05:29:25 crc kubenswrapper[4958]: E1206 05:29:25.762797 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kb98t" podUID="2c09fca2-7d91-412a-9814-64370d35b3e9" Dec 06 05:29:25 crc kubenswrapper[4958]: I1206 05:29:25.801838 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:25 crc kubenswrapper[4958]: I1206 05:29:25.801916 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:25 crc kubenswrapper[4958]: I1206 05:29:25.801935 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:25 crc kubenswrapper[4958]: I1206 05:29:25.801959 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:25 crc kubenswrapper[4958]: I1206 05:29:25.801977 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:25Z","lastTransitionTime":"2025-12-06T05:29:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:25 crc kubenswrapper[4958]: I1206 05:29:25.905645 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:25 crc kubenswrapper[4958]: I1206 05:29:25.905750 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:25 crc kubenswrapper[4958]: I1206 05:29:25.905772 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:25 crc kubenswrapper[4958]: I1206 05:29:25.905798 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:25 crc kubenswrapper[4958]: I1206 05:29:25.905816 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:25Z","lastTransitionTime":"2025-12-06T05:29:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
[the node-status batch keeps repeating at ~100 ms intervals: 05:29:26.009, 05:29:26.112, 05:29:26.216, 05:29:26.319, 05:29:26.423, 05:29:26.526, 05:29:26.630, 05:29:26.734, 05:29:26.837, 05:29:26.941, 05:29:27.043, 05:29:27.147, 05:29:27.250, 05:29:27.353 and 05:29:27.457]
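Each batch elided above ends with setters.go writing the node's Ready condition (status False, reason KubeletNotReady) onto the Node object, which is exactly what a client can read back from the API. A minimal client-go sketch, illustrative only, assuming a reachable kubeconfig at the default location:

  package main

  import (
  	"context"
  	"fmt"

  	corev1 "k8s.io/api/core/v1"
  	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
  	"k8s.io/client-go/kubernetes"
  	"k8s.io/client-go/tools/clientcmd"
  )

  func main() {
  	// Assumption: a kubeconfig exists at the default path (~/.kube/config).
  	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
  	if err != nil {
  		panic(err)
  	}
  	cs, err := kubernetes.NewForConfig(cfg)
  	if err != nil {
  		panic(err)
  	}
  	// Fetch the node named "crc", as in the log entries.
  	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "crc", metav1.GetOptions{})
  	if err != nil {
  		panic(err)
  	}
  	// Walk status.conditions and print the same Ready condition
  	// that the setters.go entries are recording.
  	for _, c := range node.Status.Conditions {
  		if c.Type == corev1.NodeReady {
  			fmt.Printf("Ready=%s reason=%s message=%q\n", c.Status, c.Reason, c.Message)
  		}
  	}
  }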
[two more node-status batches at 05:29:27.560 and 05:29:27.664]
Dec 06 05:29:27 crc kubenswrapper[4958]: I1206 05:29:27.761078 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 06 05:29:27 crc kubenswrapper[4958]: I1206 05:29:27.761078 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 06 05:29:27 crc kubenswrapper[4958]: I1206 05:29:27.761095 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kb98t"
Dec 06 05:29:27 crc kubenswrapper[4958]: I1206 05:29:27.761291 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 06 05:29:27 crc kubenswrapper[4958]: E1206 05:29:27.761580 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 06 05:29:27 crc kubenswrapper[4958]: E1206 05:29:27.761890 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kb98t" podUID="2c09fca2-7d91-412a-9814-64370d35b3e9"
Dec 06 05:29:27 crc kubenswrapper[4958]: E1206 05:29:27.762034 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 06 05:29:27 crc kubenswrapper[4958]: E1206 05:29:27.762184 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
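This sandbox-and-sync-error group is the second pass over the same four pods: the pod workers retry on a fixed ~2 s cadence (05:29:25.76 above, 05:29:27.76 here, and again at 05:29:29.76 below), and every attempt fails at the same step, because starting a new pod sandbox requires a pod network and no CNI configuration has appeared yet.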
[the node-status batch continues at ~100 ms intervals: 05:29:27.766, 05:29:27.869, 05:29:27.972, 05:29:28.075, 05:29:28.179, 05:29:28.282, 05:29:28.385, 05:29:28.488, 05:29:28.591, 05:29:28.693, 05:29:28.796, 05:29:28.899, 05:29:29.003, 05:29:29.106, 05:29:29.210, 05:29:29.313, 05:29:29.416, 05:29:29.520 and 05:29:29.626]
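Past the repeating readiness batches, the capture closes after the next sandbox group with two new kinds of entries: a "RemoveContainer" for a dead container, and status_manager.go failing to patch a pod's status (that patch body is cut off mid-stream in this capture and is left as-is below). For orientation only, a minimal client-go sketch of the same kind of call the status manager makes, a strategic-merge patch against the pod's status subresource; the patch body here is a stand-in, and the kubeconfig path is an assumption:

  package main

  import (
  	"context"

  	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
  	"k8s.io/apimachinery/pkg/types"
  	"k8s.io/client-go/kubernetes"
  	"k8s.io/client-go/tools/clientcmd"
  )

  func main() {
  	// Assumption: a kubeconfig exists at the default path (~/.kube/config).
  	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
  	if err != nil {
  		panic(err)
  	}
  	cs, err := kubernetes.NewForConfig(cfg)
  	if err != nil {
  		panic(err)
  	}
  	// Illustrative patch body only. The real patch in the log carries full
  	// conditions and containerStatuses; like it, this one pins the pod UID in
  	// metadata so the patch fails if the pod object was recreated meanwhile.
  	patch := []byte(`{"metadata":{"uid":"ef543e1b-8068-4ea3-b32a-61027b32e95d"},"status":{"phase":"Running"}}`)
  	_, err = cs.CoreV1().Pods("openshift-network-node-identity").Patch(
  		context.TODO(),
  		"network-node-identity-vrzqb",
  		types.StrategicMergePatchType,
  		patch,
  		metav1.PatchOptions{},
  		"status", // target the status subresource, as status_manager.go does
  	)
  	if err != nil {
  		panic(err)
  	}
  }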
[a final node-status batch at 05:29:29.730]
Dec 06 05:29:29 crc kubenswrapper[4958]: I1206 05:29:29.761302 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 06 05:29:29 crc kubenswrapper[4958]: I1206 05:29:29.761341 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 06 05:29:29 crc kubenswrapper[4958]: I1206 05:29:29.761526 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 06 05:29:29 crc kubenswrapper[4958]: E1206 05:29:29.761774 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 06 05:29:29 crc kubenswrapper[4958]: I1206 05:29:29.762060 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kb98t"
Dec 06 05:29:29 crc kubenswrapper[4958]: E1206 05:29:29.762355 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 06 05:29:29 crc kubenswrapper[4958]: E1206 05:29:29.763003 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kb98t" podUID="2c09fca2-7d91-412a-9814-64370d35b3e9"
Dec 06 05:29:29 crc kubenswrapper[4958]: E1206 05:29:29.763366 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 06 05:29:29 crc kubenswrapper[4958]: I1206 05:29:29.763662 4958 scope.go:117] "RemoveContainer" containerID="52ffe1b33745a706abdd0f69fb10d2eab95f9526c0206d20c04b1e4c098295db"
Dec 06 05:29:29 crc kubenswrapper[4958]: I1206 05:29:29.783745 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85e394f19d7c9a454876e48621768d9f956790321aeea3140db17586f532e28c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7cba9186696fbc52dacdd8c02c28a44f8e4f11e2bd39065212cddf3acdd7f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\
":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:29Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:29 crc kubenswrapper[4958]: I1206 05:29:29.799228 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c13528c0-da5d-4d55-9155-2c29c33edfc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5cd9b0b70572f52b50e426a747be79d9f46795a9f5e890f4c6837ba67a34188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd2256c5c47e4b4d5c0d27e0a687c531d3d4e2162c3439a57acb90784599b441\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"ho
stIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5ktnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:29Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:29 crc kubenswrapper[4958]: I1206 05:29:29.822728 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cxvtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48b90387-4ea9-47a3-8aa1-b8b19c5ed186\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce9f0a37477909d730255694146829d5d4527e3c3c3928bdd475b58506bbd544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d3ce1b34e648dbc34656fb0430468d6a998992cdb52f7858c3e9032781cabf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d3ce1b34e648dbc34656fb0430468d6a998992cdb52f7858c3e9032781cabf3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"volumeMounts\\\":[{\
\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45259e88c3c2b9523b20a34f02fb59aa21e487dd207c8cb6e29ac571d89cd75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45259e88c3c2b9523b20a34f02fb59aa21e487dd207c8cb6e29ac571d89cd75c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5907422cf6b36b8d77741d339e507eee8a26c62b51c03458fbab243363441b49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5907422cf6b36b8d77741d339e507eee8a26c62b51c03458fbab243363441b49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdba082661b1dd38d5628fedac7afcea2e2960d502c61b9c21c723c0f6be8dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567a
cb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdba082661b1dd38d5628fedac7afcea2e2960d502c61b9c21c723c0f6be8dab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afdd138a15c3949f3eb5cdcac4d372fe8db3f1822f9fa33eb7f211ba93764903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afdd138a15c3949f3eb5cdcac4d372fe8db3f1822f9fa33eb7f211ba93764903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09ba0a78226b9b3b3b12202389168f7e17a1f2ba992a22dc9d436d29aac0d327\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09ba0a78226b9b3b3b12202389168f7e17a1f2ba992a22dc9d436d29aac0d327\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cxvtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:29Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:29 crc kubenswrapper[4958]: I1206 05:29:29.834989 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:29 crc kubenswrapper[4958]: I1206 05:29:29.835634 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:29 crc kubenswrapper[4958]: I1206 05:29:29.835912 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:29 crc kubenswrapper[4958]: I1206 05:29:29.836089 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:29 crc kubenswrapper[4958]: I1206 05:29:29.836584 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:29Z","lastTransitionTime":"2025-12-06T05:29:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:29 crc kubenswrapper[4958]: I1206 05:29:29.859607 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c75c3b8-96d9-442e-b3c4-92d10ad33929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c7cf12b8a7e5da199c6d8c5bdeb41243c911cef6cab0bac81e407502a8ff144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6e7454c626f79f7ef231e33e2917e2a65265fc21ccf0bb078c9b7f5b3cec240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f87617d1f4b9c7ad893a539ddf123f3e213d38e44c084441f565d6535078c7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f6d7aa9cdfe8211f420e0a25851c58e125bd09b9d90ad4724703cc1112e45b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://473d669d19d2507144f95718dc331ea994a36254f7f7b6e2b262a0e9cb46bfc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ac1117aba254b7e048d9dd3feff2587d218f6d7a6f4ccbe455eb2b19e534aa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ffe1b33745a706abdd0f69fb10d2eab95f9526
c0206d20c04b1e4c098295db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52ffe1b33745a706abdd0f69fb10d2eab95f9526c0206d20c04b1e4c098295db\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T05:29:03Z\\\",\\\"message\\\":\\\"t:0,AppProtocol:nil,},ServicePort{Name:metrics,Protocol:TCP,Port:9154,TargetPort:{1 0 metrics},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{dns.operator.openshift.io/daemonset-dns: default,},ClusterIP:10.217.4.10,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.10],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF1206 05:29:02.740208 6561 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": fa\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T05:29:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-f4swt_openshift-ovn-kubernetes(4c75c3b8-96d9-442e-b3c4-92d10ad33929)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90354a506bfb2c066060aaa6c9c48c8fe36acc7b3fb7f76905c7aa2622c8c93e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f16219fb96bfa08d1c4e1fbacac446f07ea75a221d1a6a4eddadd9e587f75ea9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f16219fb96bfa08d1c4e1fbacac446f07ea75a221d1a6a4eddadd9e587f75ea9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-f4swt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:29Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:29 crc kubenswrapper[4958]: I1206 05:29:29.875733 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ba128f4-2e42-45be-bda0-0253dd5502d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74f69d05943422b54bfdaf337dabe66d5a16adff7bcd0aec89231dfaedb42ade\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7af93ebbf44bd284a93e374862eb9b1f0ca1e2426eb884
72157c5dcabf8c63b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7af93ebbf44bd284a93e374862eb9b1f0ca1e2426eb88472157c5dcabf8c63b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:29Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:29 crc kubenswrapper[4958]: I1206 05:29:29.894192 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"326ec8f3-f884-4d81-85c0-0fb98ff16b80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ab8b97e84e2f0ddf99e989d15f84695ae263c722141d62885129f9eb48226a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfa163f9e3eb5b2418860c0fa7fcbf96990229ac20da3d72cf18d3c326f466b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15f70e4124ff7d2705704dab6bf80d6ee77c2181e9bb400f7af227e9c18f0d20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aabbafda3fdfc2434de99224b35e72d2d536ed8b4af7dbdfa16058ab7f638531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aabbafda3fdfc2434de99224b35e72d2d536ed8b4af7dbdfa16058ab7f638531\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:29Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:29 crc kubenswrapper[4958]: I1206 05:29:29.915735 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22338e19d740d76b4395405aae08f7cc69ca43dcf680a2bac1323c989e74aa02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:29Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:29 crc kubenswrapper[4958]: I1206 05:29:29.935678 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:29Z is after 2025-08-24T17:21:41Z"
Dec 06 05:29:29 crc kubenswrapper[4958]: I1206 05:29:29.939942 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 05:29:29 crc kubenswrapper[4958]: I1206 05:29:29.940010 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 05:29:29 crc kubenswrapper[4958]: I1206 05:29:29.940036 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 05:29:29 crc kubenswrapper[4958]: I1206 05:29:29.940067 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 05:29:29 crc kubenswrapper[4958]: I1206 05:29:29.940091 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:29Z","lastTransitionTime":"2025-12-06T05:29:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:29 crc kubenswrapper[4958]: I1206 05:29:29.954649 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kb98t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c09fca2-7d91-412a-9814-64370d35b3e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pscv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pscv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kb98t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:29Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:29 crc kubenswrapper[4958]: I1206 05:29:29.988247 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3942544-663d-452b-851f-7ae1951315d0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c605d49963ffec3a03f9166f2a4b945adca772c1e6f9ef0c43e3b43498bb520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fa2e5229064a9479f641c0c6d934e8eda769ee88be13401c43ae136ce8bc24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0fa5b094aed0445fa892dd4d08fa83de565bda82c6ca9b9d536a3275df19b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9356ce9f50a037c7bbb99085c8d3a1cf6648fca
53bed4eb86d1be5cb82386bfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bb5270cbb6f2e29e1393dc0724b38332f5d2b7bd2947352c848a22efbcd979c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4460d6ee5ad193a5fdd65c4ddf2e2837db3e1a0a5f8f32b84ad83cc74cc6c779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4460d6ee5ad193a5fdd65c4ddf2e2837db3e1a0a5f8f32b84ad83cc74cc6c779\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d0adb65980c4e04df9b38b248a2d6bf58ac214ec2fe6ee6f8fbee38dc114904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d0adb65980c4e04df9b38b248a2d6bf58ac214ec2fe6ee6f8fbee38dc114904\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b4a1b86d7480be2b839295c4c7dc60abd79e17d12514d7a7f90453498dd586e5\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4a1b86d7480be2b839295c4c7dc60abd79e17d12514d7a7f90453498dd586e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:29Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:30 crc kubenswrapper[4958]: I1206 05:29:30.005219 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:30Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:30 crc kubenswrapper[4958]: I1206 05:29:30.017734 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bxxwq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"802ce14e-baaf-4d0d-87e3-3457209d8cda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://691435e89fcfa5b2dbd5ee54379f791a3e4f4df43c500a83bc61f7b20aca7ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlz6h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bxxwq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:30Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:30 crc kubenswrapper[4958]: I1206 05:29:30.034700 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-j25xk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f69a7b70-83e2-4775-872c-cc414f3d08ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3077eaf7e49467b9443569425f7666246ccd40207193f35407bd5ca69799be6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qktwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c146f2ee32c0a80c9bb07f7b61d094a36ba1bdd250b578479193dc8e50dc7cad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qktwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:42Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-j25xk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:30Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:30 crc kubenswrapper[4958]: I1206 05:29:30.042187 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:30 crc kubenswrapper[4958]: I1206 05:29:30.042266 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:30 crc kubenswrapper[4958]: I1206 05:29:30.042293 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:30 crc kubenswrapper[4958]: I1206 05:29:30.042327 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:30 crc kubenswrapper[4958]: I1206 05:29:30.042352 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:30Z","lastTransitionTime":"2025-12-06T05:29:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:30 crc kubenswrapper[4958]: I1206 05:29:30.053758 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:30Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:30 crc kubenswrapper[4958]: I1206 05:29:30.072527 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12ef2ec44908dbfbd456d6a97cb50054adb79c14bc062daacdd1be9ec59404b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:30Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:30 crc kubenswrapper[4958]: I1206 05:29:30.084153 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5mx5v" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ab56cd4-7270-4252-b6e6-cbc102b84d97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16ff54e7650f6ee3df33d0394fb87b8021695f1697bdf30239e59ab458bea64c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-448jg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5mx5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:30Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:30 crc kubenswrapper[4958]: I1206 05:29:30.103728 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wr7h5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ecb41e730cf6bf34bee3ac4577e76690a314f9466e2bd69d896f99091c3657c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24bb15c5aabe30ec2b8cea2cfd92c4a99c8e2170aecfe6cf33f745ecccf32c42\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T05:29:15Z\\\",\\\"message\\\":\\\"2025-12-06T05:28:30+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_38a7603c-7295-4bd3-af02-27bf794eed27\\\\n2025-12-06T05:28:30+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_38a7603c-7295-4bd3-af02-27bf794eed27 to /host/opt/cni/bin/\\\\n2025-12-06T05:28:30Z [verbose] multus-daemon started\\\\n2025-12-06T05:28:30Z [verbose] Readiness Indicator file check\\\\n2025-12-06T05:29:15Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-778ks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wr7h5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:30Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:30 crc kubenswrapper[4958]: I1206 05:29:30.126003 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bbf3a5d-dfb7-4f26-a3a5-5a1198cffc78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ac8c174d2f2cc2b18fa67b1969e4ec3130a7a3ea2e1de302b6607334c8295a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a49f9fd54866385a2dd4c0218a2b5a828817807c2688301ea29004ba0578d556\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://799383b7dc23318bd8bdb994e83afadc90864baf5e694f5a7ef2208c6771660c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c9829c79fbd1ef57f374e8bf829162c2ae48074f4d02cc609d5f9b8791c61a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b2ffa013378c2ca81671e521b37d30af422fd6a418d17e7c2ca14b2da5557d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4001890a8c107a9365e884bae7b1d10f11ea1208ada74b61dcd4cf3db767f3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4001890a8c107a9365e884bae7b1d10f11ea1208ada74b61dcd4cf3db767f3fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:30Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:30 crc kubenswrapper[4958]: I1206 05:29:30.146139 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:30 crc kubenswrapper[4958]: I1206 05:29:30.146218 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:30 crc kubenswrapper[4958]: I1206 05:29:30.146237 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:30 crc kubenswrapper[4958]: I1206 05:29:30.146267 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 
05:29:30 crc kubenswrapper[4958]: I1206 05:29:30.146285 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:30Z","lastTransitionTime":"2025-12-06T05:29:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:30 crc kubenswrapper[4958]: I1206 05:29:30.146842 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e7d0d99-10e2-4254-82b5-5490222f80fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f512fa9922310920769bab096c2bb8b2305a17dded545923c16b5e417db50b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dddeac64993fef3f326c165eeeb95bb4b5b9961f07f2b935e592b5aac9a51a32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50e80d39fcfd69db193b9e6c98c92a60983dd0c62b061a69f045193110d159c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-c
ontroller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc69c938c136778135bd2a90d1671af107b76c4571f8e52327c98f120665d7b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:30Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:30 crc kubenswrapper[4958]: I1206 05:29:30.246875 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-f4swt_4c75c3b8-96d9-442e-b3c4-92d10ad33929/ovnkube-controller/2.log" Dec 06 05:29:30 crc kubenswrapper[4958]: I1206 05:29:30.248755 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:30 crc kubenswrapper[4958]: I1206 05:29:30.249027 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:30 crc kubenswrapper[4958]: I1206 05:29:30.249106 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:30 crc kubenswrapper[4958]: I1206 05:29:30.249191 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:30 crc kubenswrapper[4958]: I1206 05:29:30.249276 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:30Z","lastTransitionTime":"2025-12-06T05:29:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:30 crc kubenswrapper[4958]: I1206 05:29:30.251174 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" event={"ID":"4c75c3b8-96d9-442e-b3c4-92d10ad33929","Type":"ContainerStarted","Data":"3511a58e1529f892844574e8b2faceca960108d44afb02aa4f882d19471c5519"} Dec 06 05:29:30 crc kubenswrapper[4958]: I1206 05:29:30.251917 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" Dec 06 05:29:30 crc kubenswrapper[4958]: I1206 05:29:30.268332 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-j25xk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f69a7b70-83e2-4775-872c-cc414f3d08ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3077eaf7e49467b9443569425f7666246ccd40207193f35407bd5ca69799be6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qktwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c146f2ee32c0a80c9bb07f7b61d094a36ba1bdd250b578479193dc8e50dc7cad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"ku
be-api-access-qktwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-j25xk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:30Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:30 crc kubenswrapper[4958]: I1206 05:29:30.290826 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3942544-663d-452b-851f-7ae1951315d0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c605d49963ffec3a03f9166f2a4b945adca772c1e6f9ef0c43e3b43498bb520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fa2e5229064a9479f641c0c6d934e8eda769ee88be13401c43ae136ce8bc24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0fa5b094aed0445fa892dd4d08fa83de565bda82c6ca9b9d536a3275df19b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9356ce9f50a037c7bbb99085c8d3a1cf6648fca53bed4eb86d1be5cb82386bfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bb5270cbb6f2e29e1393dc0724b38332f5d2b7bd2947352c848a22efbcd979c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4460d6ee5ad193a5fdd65c4ddf2e2837db3e1a0a5f8f32b84ad83cc74cc6c779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4460d6ee5ad193a5fdd65c4ddf2e2837db3e1a0a5f8f32b84ad83cc74cc6c779\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\
\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d0adb65980c4e04df9b38b248a2d6bf58ac214ec2fe6ee6f8fbee38dc114904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d0adb65980c4e04df9b38b248a2d6bf58ac214ec2fe6ee6f8fbee38dc114904\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b4a1b86d7480be2b839295c4c7dc60abd79e17d12514d7a7f90453498dd586e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4a1b86d7480be2b839295c4c7dc60abd79e17d12514d7a7f90453498dd586e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:30Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:30 crc kubenswrapper[4958]: I1206 05:29:30.305400 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:30Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:30 crc kubenswrapper[4958]: I1206 05:29:30.314440 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bxxwq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"802ce14e-baaf-4d0d-87e3-3457209d8cda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://691435e89fcfa5b2dbd5ee54379f791a3e4f4df43c500a83bc61f7b20aca7ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlz6h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126
.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bxxwq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:30Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:30 crc kubenswrapper[4958]: I1206 05:29:30.331335 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wr7h5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ecb41e730cf6bf34bee3ac4577e76690a314f9466e2bd69d896f99091c3657c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24bb15c5aabe30ec2b8cea2cfd92c4a99c8e2170aecfe6cf33f745ecccf32c42\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T05:29:15Z\\\",\\\"message\\\":\\\"2025-12-06T05:28:30+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_38a7603c-7295-4bd3-af02-27bf794eed27\\\\n2025-12-06T05:28:30+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_38a7603c-7295-4bd3-af02-27bf794eed27 to /host/opt/cni/bin/\\\\n2025-12-06T05:28:30Z [verbose] multus-daemon started\\\\n2025-12-06T05:28:30Z [verbose] Readiness Indicator file check\\\\n2025-12-06T05:29:15Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-778ks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wr7h5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:30Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:30 crc kubenswrapper[4958]: I1206 05:29:30.344450 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:30Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:30 crc kubenswrapper[4958]: I1206 05:29:30.352465 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:30 crc kubenswrapper[4958]: I1206 05:29:30.352517 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:30 crc kubenswrapper[4958]: I1206 05:29:30.352529 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:30 crc kubenswrapper[4958]: I1206 05:29:30.352545 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:30 crc kubenswrapper[4958]: I1206 05:29:30.352557 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:30Z","lastTransitionTime":"2025-12-06T05:29:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:30 crc kubenswrapper[4958]: I1206 05:29:30.363088 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12ef2ec44908dbfbd456d6a97cb50054adb79c14bc062daacdd1be9ec59404b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:30Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:30 crc kubenswrapper[4958]: I1206 05:29:30.378258 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5mx5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ab56cd4-7270-4252-b6e6-cbc102b84d97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16ff54e7650f6ee3df33d0394fb87b8021695f1697bdf30239e59ab458bea64c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-448jg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5mx5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:30Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:30 crc kubenswrapper[4958]: I1206 05:29:30.400823 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bbf3a5d-dfb7-4f26-a3a5-5a1198cffc78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ac8c174d2f2cc2b18fa67b1969e4ec3130a7a3ea2e1de302b6607334c8295a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a49f9fd54866385a2dd4c0218a2b5a828817807c2688301ea29004ba0578d556\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://799383b7dc23318bd8bdb994e83afadc90864baf5e694f5a7ef2208c6771660c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c9829c79fbd1ef57f374e8bf829162c2ae48074f4d02cc609d5f9b8791c61a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b2ffa013378c2ca81671e521b37d30af422fd6a418d17e7c2ca14b2da5557d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4001890a8c107a9365e884bae7b1d10f11ea1208ada74b61dcd4cf3db767f3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4001890a8c107a9365e884bae7b1d10f11ea1208ada74b61dcd4cf3db767f3fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:30Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:30 crc kubenswrapper[4958]: I1206 05:29:30.413794 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e7d0d99-10e2-4254-82b5-5490222f80fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f512fa9922310920769bab096c2bb8b2305a17dded545923c16b5e417db50b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dddeac64993fef3f326c165eeeb95bb4b5b9961f07f2b935e592b5aac9a51a32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50e80d39fcfd69db193b9e6c98c92a60983dd0c62b061a69f045193110d159c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc69c938c136778135bd2a90d1671af107b76c4571f8e52327c98f120665d7b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:30Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:30 crc kubenswrapper[4958]: I1206 05:29:30.426234 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:30Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:30 crc kubenswrapper[4958]: I1206 05:29:30.441364 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85e394f19d7c9a454876e48621768d9f956790321aeea3140db17586f532e28c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7cba9186696fbc52dacdd8c02c28a44f8e4f11e2bd39065212cddf3acdd7f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"m
ountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:30Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:30 crc kubenswrapper[4958]: I1206 05:29:30.455252 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:30 crc kubenswrapper[4958]: I1206 05:29:30.455297 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:30 crc kubenswrapper[4958]: I1206 05:29:30.455309 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:30 crc kubenswrapper[4958]: I1206 05:29:30.455330 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:30 crc kubenswrapper[4958]: I1206 05:29:30.455342 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:30Z","lastTransitionTime":"2025-12-06T05:29:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:30 crc kubenswrapper[4958]: I1206 05:29:30.457517 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c13528c0-da5d-4d55-9155-2c29c33edfc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5cd9b0b70572f52b50e426a747be79d9f46795a9f5e890f4c6837ba67a34188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd2256c5c47e4b4d5c0d27e0a687c531d3d4e2162c3439a57acb90784599b441\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5ktnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:30Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:30 crc kubenswrapper[4958]: I1206 05:29:30.474231 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cxvtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48b90387-4ea9-47a3-8aa1-b8b19c5ed186\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce9f0a37477909d730255694146829d5d4527e3c3c3928bdd475b58506bbd544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d3ce1b34e648dbc34656fb0430468d6a998992cdb52f7858c3e9032781cabf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d3ce1b34e648dbc34656fb0430468d6a998992cdb52f7858c3e9032781cabf3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45259e88c3c2b9523b20a34f02fb59aa21e487dd207c8cb6e29ac571d89cd75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45259e88c3c2b9523b20a34f02fb59aa21e487dd207c8cb6e29ac571d89cd75c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5907422cf6b36b8d77741d339e507eee8a26c62b51c03458fbab243363441b49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5907422cf6b36b8d77741d339e507eee8a26c62b51c03458fbab243363441b49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdba082661b1dd38d5628fedac7afcea2e2960d502c61b9c21c723c0f6be8dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdba082661b1dd38d5628fedac7afcea2e2960d502c61b9c21c723c0f6be8dab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:33Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afdd138a15c3949f3eb5cdcac4d372fe8db3f1822f9fa33eb7f211ba93764903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afdd138a15c3949f3eb5cdcac4d372fe8db3f1822f9fa33eb7f211ba93764903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09ba0a78226b9b3b3b12202389168f7e17a1f2ba992a22dc9d436d29aac0d327\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09ba0a78226b9b3b3b12202389168f7e17a1f2ba992a22dc9d436d29aac0d327\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cxvtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:30Z is after 
2025-08-24T17:21:41Z" Dec 06 05:29:30 crc kubenswrapper[4958]: I1206 05:29:30.505808 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c75c3b8-96d9-442e-b3c4-92d10ad33929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c7cf12b8a7e5da199c6d8c5bdeb41243c911cef6cab0bac81e407502a8ff144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6e7454c626f79f7ef231e33e2917e2a65265fc21ccf0bb078c9b7f5b3cec240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f
87617d1f4b9c7ad893a539ddf123f3e213d38e44c084441f565d6535078c7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f6d7aa9cdfe8211f420e0a25851c58e125bd09b9d90ad4724703cc1112e45b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://473d669d19d2507144f95718dc331ea994a36254f7f7b6e2b262a0e9cb46bfc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ac1117aba254b7e048d9dd3feff2587d218f6d7a6f4ccbe455eb2b19e534aa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3511a58e1529f892844574e8b2faceca960108d44afb02aa4f882d19471c5519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52ffe1b33745a706abdd0f69fb10d2eab95f9526c0206d20c04b1e4c098295db\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T05:29:03Z\\\",\\\"message\\\":\\\"t:0,AppProtocol:nil,},ServicePort{Name:metrics,Protocol:TCP,Port:9154,TargetPort:{1 0 metrics},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{dns.operator.openshift.io/daemonset-dns: default,},ClusterIP:10.217.4.10,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.10],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF1206 05:29:02.740208 6561 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": 
fa\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T05:29:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90354a506bfb2c066060aaa6c9c48c8fe36acc7b3fb7f76905c7aa2622c8c93e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://f16219fb96bfa08d1c4e1fbacac446f07ea75a221d1a6a4eddadd9e587f75ea9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f16219fb96bfa08d1c4e1fbacac446f07ea75a221d1a6a4eddadd9e587f75ea9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-f4swt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:30Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:30 crc kubenswrapper[4958]: I1206 05:29:30.524773 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ba128f4-2e42-45be-bda0-0253dd5502d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74f69d05943422b54bfdaf337dabe66d5a16adff7bcd0aec89231dfaedb42ade\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7af93ebbf44bd284a93e374862eb9b1f0ca1e2426eb88472157c5dcabf8c63b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7af93ebbf44bd284a93e374862eb9b1f0ca1e2426eb88472157c5dcabf8c63b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:30Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:30 crc kubenswrapper[4958]: I1206 05:29:30.537088 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"326ec8f3-f884-4d81-85c0-0fb98ff16b80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ab8b97e84e2f0ddf99e989d15f84695ae263c722141d62885129f9eb48226a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfa163f9e3eb5b2418860c0fa7fcbf96990229ac20da3d72cf18d3c326f466b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15f70e4124ff7d2705704dab6bf80d6ee77c2181e9bb400f7af227e9c18f0d20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aabbafda3fdfc2434de99224b35e72d2d536ed8b4af7dbdfa16058ab7f638531\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aabbafda3fdfc2434de99224b35e72d2d536ed8b4af7dbdfa16058ab7f638531\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:30Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:30 crc kubenswrapper[4958]: I1206 05:29:30.549216 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22338e19d740d76b4395405aae08f7cc69ca43dcf680a2bac1323c989e74aa02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:30Z is after 
2025-08-24T17:21:41Z" Dec 06 05:29:30 crc kubenswrapper[4958]: I1206 05:29:30.556891 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:30 crc kubenswrapper[4958]: I1206 05:29:30.556935 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:30 crc kubenswrapper[4958]: I1206 05:29:30.556947 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:30 crc kubenswrapper[4958]: I1206 05:29:30.556963 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:30 crc kubenswrapper[4958]: I1206 05:29:30.556977 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:30Z","lastTransitionTime":"2025-12-06T05:29:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:30 crc kubenswrapper[4958]: I1206 05:29:30.559859 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kb98t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c09fca2-7d91-412a-9814-64370d35b3e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pscv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pscv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kb98t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:30Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:30 crc kubenswrapper[4958]: I1206 05:29:30.659210 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:30 crc kubenswrapper[4958]: I1206 05:29:30.659278 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:30 crc kubenswrapper[4958]: I1206 05:29:30.659301 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:30 crc kubenswrapper[4958]: I1206 05:29:30.659331 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:30 crc kubenswrapper[4958]: I1206 05:29:30.659350 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:30Z","lastTransitionTime":"2025-12-06T05:29:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:30 crc kubenswrapper[4958]: I1206 05:29:30.761141 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:30 crc kubenswrapper[4958]: I1206 05:29:30.761182 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:30 crc kubenswrapper[4958]: I1206 05:29:30.761195 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:30 crc kubenswrapper[4958]: I1206 05:29:30.761213 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:30 crc kubenswrapper[4958]: I1206 05:29:30.761225 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:30Z","lastTransitionTime":"2025-12-06T05:29:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:30 crc kubenswrapper[4958]: I1206 05:29:30.863622 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:30 crc kubenswrapper[4958]: I1206 05:29:30.863967 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:30 crc kubenswrapper[4958]: I1206 05:29:30.864010 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:30 crc kubenswrapper[4958]: I1206 05:29:30.864037 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:30 crc kubenswrapper[4958]: I1206 05:29:30.864056 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:30Z","lastTransitionTime":"2025-12-06T05:29:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:30 crc kubenswrapper[4958]: I1206 05:29:30.966872 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:30 crc kubenswrapper[4958]: I1206 05:29:30.967010 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:30 crc kubenswrapper[4958]: I1206 05:29:30.967031 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:30 crc kubenswrapper[4958]: I1206 05:29:30.967056 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:30 crc kubenswrapper[4958]: I1206 05:29:30.967074 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:30Z","lastTransitionTime":"2025-12-06T05:29:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:31 crc kubenswrapper[4958]: I1206 05:29:31.070395 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:31 crc kubenswrapper[4958]: I1206 05:29:31.070458 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:31 crc kubenswrapper[4958]: I1206 05:29:31.070515 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:31 crc kubenswrapper[4958]: I1206 05:29:31.070549 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:31 crc kubenswrapper[4958]: I1206 05:29:31.070574 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:31Z","lastTransitionTime":"2025-12-06T05:29:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:31 crc kubenswrapper[4958]: I1206 05:29:31.173520 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:31 crc kubenswrapper[4958]: I1206 05:29:31.173577 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:31 crc kubenswrapper[4958]: I1206 05:29:31.173597 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:31 crc kubenswrapper[4958]: I1206 05:29:31.173623 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:31 crc kubenswrapper[4958]: I1206 05:29:31.173642 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:31Z","lastTransitionTime":"2025-12-06T05:29:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:31 crc kubenswrapper[4958]: I1206 05:29:31.259251 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-f4swt_4c75c3b8-96d9-442e-b3c4-92d10ad33929/ovnkube-controller/3.log" Dec 06 05:29:31 crc kubenswrapper[4958]: I1206 05:29:31.260955 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-f4swt_4c75c3b8-96d9-442e-b3c4-92d10ad33929/ovnkube-controller/2.log" Dec 06 05:29:31 crc kubenswrapper[4958]: I1206 05:29:31.266559 4958 generic.go:334] "Generic (PLEG): container finished" podID="4c75c3b8-96d9-442e-b3c4-92d10ad33929" containerID="3511a58e1529f892844574e8b2faceca960108d44afb02aa4f882d19471c5519" exitCode=1 Dec 06 05:29:31 crc kubenswrapper[4958]: I1206 05:29:31.266577 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" event={"ID":"4c75c3b8-96d9-442e-b3c4-92d10ad33929","Type":"ContainerDied","Data":"3511a58e1529f892844574e8b2faceca960108d44afb02aa4f882d19471c5519"} Dec 06 05:29:31 crc kubenswrapper[4958]: I1206 05:29:31.266640 4958 scope.go:117] "RemoveContainer" containerID="52ffe1b33745a706abdd0f69fb10d2eab95f9526c0206d20c04b1e4c098295db" Dec 06 05:29:31 crc kubenswrapper[4958]: I1206 05:29:31.268920 4958 scope.go:117] "RemoveContainer" containerID="3511a58e1529f892844574e8b2faceca960108d44afb02aa4f882d19471c5519" Dec 06 05:29:31 crc kubenswrapper[4958]: E1206 05:29:31.270161 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-f4swt_openshift-ovn-kubernetes(4c75c3b8-96d9-442e-b3c4-92d10ad33929)\"" pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" podUID="4c75c3b8-96d9-442e-b3c4-92d10ad33929" Dec 06 05:29:31 crc kubenswrapper[4958]: I1206 05:29:31.276422 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:31 crc kubenswrapper[4958]: I1206 05:29:31.276463 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:31 crc kubenswrapper[4958]: I1206 05:29:31.276519 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:31 crc kubenswrapper[4958]: I1206 05:29:31.276547 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:31 crc kubenswrapper[4958]: I1206 05:29:31.276565 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:31Z","lastTransitionTime":"2025-12-06T05:29:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:31 crc kubenswrapper[4958]: I1206 05:29:31.289984 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:31Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:31 crc kubenswrapper[4958]: I1206 05:29:31.305027 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bxxwq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"802ce14e-baaf-4d0d-87e3-3457209d8cda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://691435e89fcfa5b2dbd5ee54379f791a3e4f4df43c500a83bc61f7b20aca7ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlz6h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bxxwq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:31Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:31 crc kubenswrapper[4958]: I1206 05:29:31.321307 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-j25xk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f69a7b70-83e2-4775-872c-cc414f3d08ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3077eaf7e49467b9443569425f7666246ccd40207193f35407bd5ca69799be6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qktwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c146f2ee32c0a80c9bb07f7b61d094a36ba1bdd250b578479193dc8e50dc7cad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qktwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-j25xk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:31Z is after 2025-08-24T17:21:41Z" Dec 06 
05:29:31 crc kubenswrapper[4958]: I1206 05:29:31.357458 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3942544-663d-452b-851f-7ae1951315d0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c605d49963ffec3a03f9166f2a4b945adca772c1e6f9ef0c43e3b43498bb520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fa2e5229064a9479f641c0c6d934e8eda769ee88be13401c43ae136ce8bc24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0fa5b094aed0445fa892dd4d08fa83de565bda82c6ca9b9d536a3275df19b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"lo
g-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9356ce9f50a037c7bbb99085c8d3a1cf6648fca53bed4eb86d1be5cb82386bfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bb5270cbb6f2e29e1393dc0724b38332f5d2b7bd2947352c848a22efbcd979c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4460d6ee5ad193a5fdd65c4ddf2e2837db3e1a0a5f8f32b84ad83cc74cc6c779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4460d6ee5ad193a5fdd65c4ddf2e2837db3e1a0a5f8f32b84ad83cc74cc6c779\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d0adb65980c4e04df9b38b248a2d6bf58ac214ec2fe6ee6f8fbee38dc114904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d0adb65980c4e04df9b38b248a2d6bf58ac214ec2fe6ee6f8fbee38dc114904\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:12Z\\\",\\\"reas
on\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b4a1b86d7480be2b839295c4c7dc60abd79e17d12514d7a7f90453498dd586e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4a1b86d7480be2b839295c4c7dc60abd79e17d12514d7a7f90453498dd586e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:31Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:31 crc kubenswrapper[4958]: I1206 05:29:31.374515 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12ef2ec44908dbfbd456d6a97cb50054adb79c14bc062daacdd1be9ec59404b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:31Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:31 crc kubenswrapper[4958]: I1206 05:29:31.379898 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:31 crc kubenswrapper[4958]: I1206 05:29:31.379958 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:31 crc kubenswrapper[4958]: I1206 05:29:31.379977 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:31 crc kubenswrapper[4958]: I1206 05:29:31.380002 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:31 crc kubenswrapper[4958]: I1206 05:29:31.380034 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:31Z","lastTransitionTime":"2025-12-06T05:29:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:31 crc kubenswrapper[4958]: I1206 05:29:31.388395 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5mx5v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ab56cd4-7270-4252-b6e6-cbc102b84d97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16ff54e7650f6ee3df33d0394fb87b8021695f1697bdf30239e59ab458bea64c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-448jg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5mx5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:31Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:31 crc kubenswrapper[4958]: I1206 05:29:31.405533 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wr7h5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ecb41e730cf6bf34bee3ac4577e76690a314f9466e2bd69d896f99091c3657c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24bb15c5aabe30ec2b8cea2cfd92c4a99c8e2170aecfe6cf33f745ecccf32c42\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T05:29:15Z\\\",\\\"message\\\":\\\"2025-12-06T05:28:30+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_38a7603c-7295-4bd3-af02-27bf794eed27\\\\n2025-12-06T05:28:30+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_38a7603c-7295-4bd3-af02-27bf794eed27 to /host/opt/cni/bin/\\\\n2025-12-06T05:28:30Z [verbose] multus-daemon started\\\\n2025-12-06T05:28:30Z [verbose] Readiness Indicator file check\\\\n2025-12-06T05:29:15Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-778ks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wr7h5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:31Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:31 crc kubenswrapper[4958]: I1206 05:29:31.422257 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:31Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:31 crc kubenswrapper[4958]: I1206 05:29:31.444451 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e7d0d99-10e2-4254-82b5-5490222f80fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f512fa9922310920769bab096c2bb8b2305a17dded545923c16b5e417db50b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dddeac64993fef3f326c165eeeb95bb4b5b9961f07f2b935e592b5aac9a51a32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50e80d39fcfd69db193b9e6c98c92a60983dd0c62b061a69f045193110d159c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc69c938c136778135bd2a90d1671af107b76c4571f8e52327c98f120665d7b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:31Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:31 crc kubenswrapper[4958]: I1206 05:29:31.467568 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bbf3a5d-dfb7-4f26-a3a5-5a1198cffc78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ac8c174d2f2cc2b18fa67b1969e4ec3130a7a3ea2e1de302b6607334c8295a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a49f9fd54866385a2dd4c0218a2b5a828817807c2688301ea29004ba0578d556\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c
987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://799383b7dc23318bd8bdb994e83afadc90864baf5e694f5a7ef2208c6771660c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c9829c79fbd1ef57f374e8bf829162c2ae48074f4d02cc609d5f9b8791c61a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b2ffa013378c2ca81671e521b37d30af422fd6a418d17e7c2ca14b2da5557d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4001890a8c107a9365e884bae7b1d10f11ea1208ada74b61dcd4cf3db767f3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4001890
a8c107a9365e884bae7b1d10f11ea1208ada74b61dcd4cf3db767f3fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:31Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:31 crc kubenswrapper[4958]: I1206 05:29:31.483252 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:31 crc kubenswrapper[4958]: I1206 05:29:31.483325 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:31 crc kubenswrapper[4958]: I1206 05:29:31.483350 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:31 crc kubenswrapper[4958]: I1206 05:29:31.483379 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:31 crc kubenswrapper[4958]: I1206 05:29:31.483399 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:31Z","lastTransitionTime":"2025-12-06T05:29:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:31 crc kubenswrapper[4958]: I1206 05:29:31.487188 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"326ec8f3-f884-4d81-85c0-0fb98ff16b80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ab8b97e84e2f0ddf99e989d15f84695ae263c722141d62885129f9eb48226a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfa163f9e3eb5b2418860c0fa7fcbf96990229ac20da3d72cf18d3c326f466b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15f70e4124ff7d2705704dab6bf80d6ee77c2181e9bb400f7af227e9c18f0d20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aabbafda3fdfc2434de99224b35e72d2d536ed8b4af7dbdfa16058ab7f638531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aabbafda3fdfc2434de99224b35e72d2d536ed8b4af7dbdfa16058ab7f638531\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:31Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:31 crc kubenswrapper[4958]: I1206 05:29:31.503110 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22338e19d740d76b4395405aae08f7cc69ca43dcf680a2bac1323c989e74aa02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:31Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:31 crc kubenswrapper[4958]: I1206 05:29:31.521087 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:31Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:31 crc kubenswrapper[4958]: I1206 05:29:31.540178 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85e394f19d7c9a454876e48621768d9f956790321aeea3140db17586f532e28c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7cba9186696fbc52dacdd8c02c28a44f8e4f11e2bd39065212cddf3acdd7f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:31Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:31 crc kubenswrapper[4958]: I1206 05:29:31.558187 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c13528c0-da5d-4d55-9155-2c29c33edfc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5cd9b0b70572f52b50e426a747be79d9f46795a9f5e890f4c6837ba67a34188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd2256c5c47e4b4d5c0d27e0a687c531d3d4e2162c3439a57acb90784599b441\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5ktnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:31Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:31 crc kubenswrapper[4958]: I1206 05:29:31.582037 4958 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cxvtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48b90387-4ea9-47a3-8aa1-b8b19c5ed186\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce9f0a37477909d730255694146829d5d4527e3c3c3928bdd475b58506bbd544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d3ce1b34e648dbc34656fb0430468d6a998992cdb52f7858c3e9032781cabf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d3ce1b34e648dbc34656fb0430468d6a998992cdb52f7858c3e9032781cabf3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45259e88c3c2b9523b20a34f02fb59aa21e487dd207c8cb6e29ac571d89cd75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45259e88c3c2b9523b20a34f02fb59aa21e487dd207c8cb6e29ac571d89cd75c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5907422cf6b36b8d77741d339e507eee8a26c62b51c03458fbab243363441b49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5907422cf6b36b8d77741d339e507eee8a26c62b51c03458fbab243363441b49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdba082661b1dd38d5628fedac7afcea2e2960d502c61b9c21c723c0f6be8dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdba082661b1dd38d5628fedac7afcea2e2960d502c61b9c21c723c0f6be8dab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afdd138a15c3949f3eb5cdcac4d372fe8db3f1822f9fa33eb7f211ba93764903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afdd138a15c3949f3eb5cdcac4d372fe8db3f1822f9fa33eb7f211ba93764903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09ba0a78226b9b3b3b12202389168f7e17a1f2ba992a22dc9d436d29aac0d327\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09ba0a78226b9b3b3b12202389168f7e17a1f2ba992a22dc9d436d29aac0d327\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cxvtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:31Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:31 crc kubenswrapper[4958]: I1206 05:29:31.586459 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:31 crc kubenswrapper[4958]: I1206 05:29:31.586559 4958 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:31 crc kubenswrapper[4958]: I1206 05:29:31.586582 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:31 crc kubenswrapper[4958]: I1206 05:29:31.586606 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:31 crc kubenswrapper[4958]: I1206 05:29:31.586623 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:31Z","lastTransitionTime":"2025-12-06T05:29:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:31 crc kubenswrapper[4958]: I1206 05:29:31.616002 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c75c3b8-96d9-442e-b3c4-92d10ad33929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c7cf12b8a7e5da199c6d8c5bdeb41243c911cef6cab0bac81e407502a8ff144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6e7454c626f79f7ef231e33e2917e2a65265fc21ccf0bb078c9b7f5b3cec240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f87617d1f4b9c7ad893a539ddf123f3e213d38e44c084441f565d6535078c7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f6d7aa9cdfe8211f420e0a25851c58e125bd09b9d90ad4724703cc1112e45b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://473d669d19d2507144f95718dc331ea994a36254f7f7b6e2b262a0e9cb46bfc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ac1117aba254b7e048d9dd3feff2587d218f6d7a6f4ccbe455eb2b19e534aa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3511a58e1529f892844574e8b2faceca960108d4
4afb02aa4f882d19471c5519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://52ffe1b33745a706abdd0f69fb10d2eab95f9526c0206d20c04b1e4c098295db\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T05:29:03Z\\\",\\\"message\\\":\\\"t:0,AppProtocol:nil,},ServicePort{Name:metrics,Protocol:TCP,Port:9154,TargetPort:{1 0 metrics},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{dns.operator.openshift.io/daemonset-dns: default,},ClusterIP:10.217.4.10,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.217.4.10],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}\\\\nF1206 05:29:02.740208 6561 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": fa\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T05:29:01Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3511a58e1529f892844574e8b2faceca960108d44afb02aa4f882d19471c5519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T05:29:31Z\\\",\\\"message\\\":\\\"ces.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1206 05:29:30.743372 6921 services_controller.go:452] Built service openshift-apiserver/api per-node LB for network=default: []services.LB{}\\\\nI1206 05:29:30.743368 6921 services_controller.go:443] Built service openshift-console-operator/metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.88\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI1206 05:29:30.743388 6921 services_controller.go:453] Built service openshift-apiserver/api template LB for network=default: []services.LB{}\\\\nI1206 05:29:30.743398 6921 services_controller.go:444] Built service openshift-console-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nF1206 05:29:30.742791 6921 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: 
unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T05:29:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90354a506bfb2c066060aaa6c9c48c8fe36acc7b3fb7f76905c7aa2622c8c93e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"c
ontainerID\\\":\\\"cri-o://f16219fb96bfa08d1c4e1fbacac446f07ea75a221d1a6a4eddadd9e587f75ea9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f16219fb96bfa08d1c4e1fbacac446f07ea75a221d1a6a4eddadd9e587f75ea9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-f4swt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:31Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:31 crc kubenswrapper[4958]: I1206 05:29:31.636837 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ba128f4-2e42-45be-bda0-0253dd5502d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74f69d05943422b54bfdaf337dabe66d5a16adff7bcd0aec89231dfaedb42ade\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7af93ebbf44bd284a93e374862eb9b1f0ca1e2426eb88472157c5dcabf8c63b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7af93ebbf44bd284a93e374862eb9b1f0ca1e2426eb88472157c5dcabf8c63b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:31Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:31 crc kubenswrapper[4958]: I1206 05:29:31.651855 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kb98t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c09fca2-7d91-412a-9814-64370d35b3e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pscv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pscv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kb98t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:31Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:31 crc kubenswrapper[4958]: I1206 05:29:31.689283 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:31 crc kubenswrapper[4958]: I1206 05:29:31.689333 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:31 crc kubenswrapper[4958]: I1206 05:29:31.689352 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Dec 06 05:29:31 crc kubenswrapper[4958]: I1206 05:29:31.689374 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:31 crc kubenswrapper[4958]: I1206 05:29:31.689391 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:31Z","lastTransitionTime":"2025-12-06T05:29:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:31 crc kubenswrapper[4958]: I1206 05:29:31.761505 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 05:29:31 crc kubenswrapper[4958]: I1206 05:29:31.761536 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kb98t" Dec 06 05:29:31 crc kubenswrapper[4958]: I1206 05:29:31.761529 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 05:29:31 crc kubenswrapper[4958]: E1206 05:29:31.761707 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 05:29:31 crc kubenswrapper[4958]: I1206 05:29:31.761747 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 05:29:31 crc kubenswrapper[4958]: E1206 05:29:31.761922 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kb98t" podUID="2c09fca2-7d91-412a-9814-64370d35b3e9" Dec 06 05:29:31 crc kubenswrapper[4958]: E1206 05:29:31.762027 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 05:29:31 crc kubenswrapper[4958]: E1206 05:29:31.762190 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 05:29:31 crc kubenswrapper[4958]: I1206 05:29:31.792206 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:31 crc kubenswrapper[4958]: I1206 05:29:31.792253 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:31 crc kubenswrapper[4958]: I1206 05:29:31.792270 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:31 crc kubenswrapper[4958]: I1206 05:29:31.792291 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:31 crc kubenswrapper[4958]: I1206 05:29:31.792308 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:31Z","lastTransitionTime":"2025-12-06T05:29:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:31 crc kubenswrapper[4958]: I1206 05:29:31.894989 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:31 crc kubenswrapper[4958]: I1206 05:29:31.895049 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:31 crc kubenswrapper[4958]: I1206 05:29:31.895068 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:31 crc kubenswrapper[4958]: I1206 05:29:31.895094 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:31 crc kubenswrapper[4958]: I1206 05:29:31.895113 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:31Z","lastTransitionTime":"2025-12-06T05:29:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:31 crc kubenswrapper[4958]: I1206 05:29:31.998030 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:31 crc kubenswrapper[4958]: I1206 05:29:31.998414 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:31 crc kubenswrapper[4958]: I1206 05:29:31.998604 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:31 crc kubenswrapper[4958]: I1206 05:29:31.998775 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:31 crc kubenswrapper[4958]: I1206 05:29:31.998915 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:31Z","lastTransitionTime":"2025-12-06T05:29:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:32 crc kubenswrapper[4958]: I1206 05:29:32.102670 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:32 crc kubenswrapper[4958]: I1206 05:29:32.102743 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:32 crc kubenswrapper[4958]: I1206 05:29:32.102761 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:32 crc kubenswrapper[4958]: I1206 05:29:32.102786 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:32 crc kubenswrapper[4958]: I1206 05:29:32.102805 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:32Z","lastTransitionTime":"2025-12-06T05:29:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:32 crc kubenswrapper[4958]: I1206 05:29:32.206362 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:32 crc kubenswrapper[4958]: I1206 05:29:32.206421 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:32 crc kubenswrapper[4958]: I1206 05:29:32.206438 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:32 crc kubenswrapper[4958]: I1206 05:29:32.206463 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:32 crc kubenswrapper[4958]: I1206 05:29:32.206513 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:32Z","lastTransitionTime":"2025-12-06T05:29:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:32 crc kubenswrapper[4958]: I1206 05:29:32.273214 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-f4swt_4c75c3b8-96d9-442e-b3c4-92d10ad33929/ovnkube-controller/3.log" Dec 06 05:29:32 crc kubenswrapper[4958]: I1206 05:29:32.278774 4958 scope.go:117] "RemoveContainer" containerID="3511a58e1529f892844574e8b2faceca960108d44afb02aa4f882d19471c5519" Dec 06 05:29:32 crc kubenswrapper[4958]: E1206 05:29:32.279020 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-f4swt_openshift-ovn-kubernetes(4c75c3b8-96d9-442e-b3c4-92d10ad33929)\"" pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" podUID="4c75c3b8-96d9-442e-b3c4-92d10ad33929" Dec 06 05:29:32 crc kubenswrapper[4958]: I1206 05:29:32.299830 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:32Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:32 crc kubenswrapper[4958]: I1206 05:29:32.309811 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:32 crc kubenswrapper[4958]: I1206 05:29:32.309870 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:32 crc kubenswrapper[4958]: I1206 05:29:32.309887 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:32 crc kubenswrapper[4958]: I1206 05:29:32.309913 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:32 crc kubenswrapper[4958]: I1206 05:29:32.309932 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:32Z","lastTransitionTime":"2025-12-06T05:29:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:32 crc kubenswrapper[4958]: I1206 05:29:32.325417 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85e394f19d7c9a454876e48621768d9f956790321aeea3140db17586f532e28c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7cba9186696fbc52dacdd8c02c28a44f8e4f11e2bd39065212cddf3acdd7f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:32Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:32 crc kubenswrapper[4958]: I1206 05:29:32.344594 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c13528c0-da5d-4d55-9155-2c29c33edfc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5cd9b0b70572f52b50e426a747be79d9f46795a9f5e890f4c6837ba67a34188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd2256c5c47e4b4d5c0d27e0a687c531d3d4e2162c3439a57acb90784599b441\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5ktnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:32Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:32 crc kubenswrapper[4958]: I1206 05:29:32.369032 4958 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cxvtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48b90387-4ea9-47a3-8aa1-b8b19c5ed186\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce9f0a37477909d730255694146829d5d4527e3c3c3928bdd475b58506bbd544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d3ce1b34e648dbc34656fb0430468d6a998992cdb52f7858c3e9032781cabf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d3ce1b34e648dbc34656fb0430468d6a998992cdb52f7858c3e9032781cabf3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45259e88c3c2b9523b20a34f02fb59aa21e487dd207c8cb6e29ac571d89cd75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45259e88c3c2b9523b20a34f02fb59aa21e487dd207c8cb6e29ac571d89cd75c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5907422cf6b36b8d77741d339e507eee8a26c62b51c03458fbab243363441b49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5907422cf6b36b8d77741d339e507eee8a26c62b51c03458fbab243363441b49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdba082661b1dd38d5628fedac7afcea2e2960d502c61b9c21c723c0f6be8dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdba082661b1dd38d5628fedac7afcea2e2960d502c61b9c21c723c0f6be8dab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afdd138a15c3949f3eb5cdcac4d372fe8db3f1822f9fa33eb7f211ba93764903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afdd138a15c3949f3eb5cdcac4d372fe8db3f1822f9fa33eb7f211ba93764903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09ba0a78226b9b3b3b12202389168f7e17a1f2ba992a22dc9d436d29aac0d327\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09ba0a78226b9b3b3b12202389168f7e17a1f2ba992a22dc9d436d29aac0d327\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cxvtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:32Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:32 crc kubenswrapper[4958]: I1206 05:29:32.402800 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c75c3b8-96d9-442e-b3c4-92d10ad33929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c7cf12b8a7e5da199c6d8c5bdeb41243c911cef6cab0bac81e407502a8ff144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6e7454c626f79f7ef231e33e2917e2a65265fc21ccf0bb078c9b7f5b3cec240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f87617d1f4b9c7ad893a539ddf123f3e213d38e44c084441f565d6535078c7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f6d7aa9cdfe8211f420e0a25851c58e125bd09b9d90ad4724703cc1112e45b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://473d669d19d2507144f95718dc331ea994a36254f7f7b6e2b262a0e9cb46bfc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ac1117aba254b7e048d9dd3feff2587d218f6d7a6f4ccbe455eb2b19e534aa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3511a58e1529f892844574e8b2faceca960108d44afb02aa4f882d19471c5519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3511a58e1529f892844574e8b2faceca960108d44afb02aa4f882d19471c5519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T05:29:31Z\\\",\\\"message\\\":\\\"ces.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1206 05:29:30.743372 6921 services_controller.go:452] Built service openshift-apiserver/api per-node LB for network=default: []services.LB{}\\\\nI1206 05:29:30.743368 6921 services_controller.go:443] Built service openshift-console-operator/metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.88\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI1206 05:29:30.743388 6921 services_controller.go:453] Built service openshift-apiserver/api template LB for network=default: []services.LB{}\\\\nI1206 05:29:30.743398 6921 services_controller.go:444] Built service openshift-console-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nF1206 05:29:30.742791 6921 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T05:29:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-f4swt_openshift-ovn-kubernetes(4c75c3b8-96d9-442e-b3c4-92d10ad33929)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90354a506bfb2c066060aaa6c9c48c8fe36acc7b3fb7f76905c7aa2622c8c93e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f16219fb96bfa08d1c4e1fbacac446f07ea75a221d1a6a4eddadd9e587f75ea9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f16219fb96bfa08d1c4e1fbacac446f07ea75a221d1a6a4eddadd9e587f75ea9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-f4swt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:32Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:32 crc kubenswrapper[4958]: I1206 05:29:32.413543 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:32 crc kubenswrapper[4958]: I1206 05:29:32.413609 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:32 crc kubenswrapper[4958]: I1206 05:29:32.413629 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:32 crc kubenswrapper[4958]: I1206 05:29:32.413658 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:32 crc kubenswrapper[4958]: I1206 05:29:32.413676 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:32Z","lastTransitionTime":"2025-12-06T05:29:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:32 crc kubenswrapper[4958]: I1206 05:29:32.420577 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ba128f4-2e42-45be-bda0-0253dd5502d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74f69d05943422b54bfdaf337dabe66d5a16adff7bcd0aec89231dfaedb42ade\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7af93ebbf44bd284a93e374862eb9b1f0ca1e2426eb88472157c5dcabf8c63b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7af93ebbf44bd284a93e374862eb9b1f0ca1e2426eb88472157c5dcabf8c63b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:32Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:32 crc kubenswrapper[4958]: I1206 05:29:32.441208 4958 
status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"326ec8f3-f884-4d81-85c0-0fb98ff16b80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ab8b97e84e2f0ddf99e989d15f84695ae263c722141d62885129f9eb48226a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfa163f9e3eb5b2418860c0fa7fcbf96990229ac20da3d72cf18d3c326f466b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15f70e4124ff7d2705704dab6bf80d6ee77c2181e9bb400f7af227e9c18f0d20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\
"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aabbafda3fdfc2434de99224b35e72d2d536ed8b4af7dbdfa16058ab7f638531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aabbafda3fdfc2434de99224b35e72d2d536ed8b4af7dbdfa16058ab7f638531\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:32Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:32 crc kubenswrapper[4958]: I1206 05:29:32.463863 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22338e19d740d76b4395405aae08f7cc69ca43dcf680a2bac1323c989e74aa02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:32Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:32 crc kubenswrapper[4958]: I1206 05:29:32.481944 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kb98t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c09fca2-7d91-412a-9814-64370d35b3e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pscv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pscv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kb98t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:32Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:32 crc kubenswrapper[4958]: I1206 05:29:32.499989 4958 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-j25xk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f69a7b70-83e2-4775-872c-cc414f3d08ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3077eaf7e49467b9443569425f7666246ccd40207193f35407bd5ca69799be6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qktwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c146f2ee32c0a80c9bb07f7b61d094a36ba1bdd250b578479193dc8e50dc7cad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qktwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-j25xk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:32Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:32 crc kubenswrapper[4958]: I1206 05:29:32.516770 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:32 crc kubenswrapper[4958]: I1206 05:29:32.516825 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:32 crc kubenswrapper[4958]: I1206 05:29:32.516844 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:32 crc kubenswrapper[4958]: I1206 05:29:32.516871 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:32 crc kubenswrapper[4958]: I1206 05:29:32.516888 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:32Z","lastTransitionTime":"2025-12-06T05:29:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:32 crc kubenswrapper[4958]: I1206 05:29:32.539257 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3942544-663d-452b-851f-7ae1951315d0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c605d49963ffec3a03f9166f2a4b945adca772c1e6f9ef0c43e3b43498bb520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fa2e5229064a9479f641c0c6d934e8eda769ee88be13401c43ae136ce8bc24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49
117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0fa5b094aed0445fa892dd4d08fa83de565bda82c6ca9b9d536a3275df19b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9356ce9f50a037c7bbb99085c8d3a1cf6648fca53bed4eb86d1be5cb82386bfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bb5270cbb6f2e29e1393dc0724b38332f5d2b7bd2947352c848a22efbcd979c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4460d6ee5ad193a5fdd65c4ddf2e2837db3e1a0a5f8f32b84ad83cc74cc6c779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4460d6ee5ad193a5fdd65c4ddf2e2837db3e1a0a5f8f32b84ad83cc74cc6c779\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d0adb65980c4e04df9b38b248a2d6bf58ac214ec2fe6ee6f8fbee38dc114904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d0adb65980c4e04df9b38b248a2d6bf58ac214ec2fe6ee6f8fbee38dc114904\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b4a1b86d7480be2b839295c4c7dc60abd79e17d12514d7a7f90453498dd586e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4a1b86d7480be2b839295c4c7dc60abd79e17d12514d7a7f90453498dd586e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:32Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:32 crc kubenswrapper[4958]: I1206 05:29:32.560821 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:32Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:32 crc kubenswrapper[4958]: I1206 05:29:32.578072 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bxxwq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"802ce14e-baaf-4d0d-87e3-3457209d8cda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://691435e89fcfa5b2dbd5ee54379f791a3e4f4df43c500a83bc61f7b20aca7ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlz6h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bxxwq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:32Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:32 crc kubenswrapper[4958]: I1206 05:29:32.601192 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wr7h5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ecb41e730cf6bf34bee3ac4577e76690a314f9466e2bd69d896f99091c3657c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24bb15c5aabe30ec2b8cea2cfd92c4a99c8e2170aecfe6cf33f745ecccf32c42\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T05:29:15Z\\\",\\\"message\\\":\\\"2025-12-06T05:28:30+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_38a7603c-7295-4bd3-af02-27bf794eed27\\\\n2025-12-06T05:28:30+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_38a7603c-7295-4bd3-af02-27bf794eed27 to /host/opt/cni/bin/\\\\n2025-12-06T05:28:30Z [verbose] multus-daemon started\\\\n2025-12-06T05:28:30Z [verbose] Readiness Indicator file check\\\\n2025-12-06T05:29:15Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-778ks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wr7h5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:32Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:32 crc kubenswrapper[4958]: I1206 05:29:32.621168 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:32 crc kubenswrapper[4958]: I1206 05:29:32.621231 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:32 crc kubenswrapper[4958]: I1206 05:29:32.621256 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:32 crc kubenswrapper[4958]: I1206 05:29:32.621289 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:32 crc kubenswrapper[4958]: I1206 05:29:32.621310 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:32Z","lastTransitionTime":"2025-12-06T05:29:32Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:32 crc kubenswrapper[4958]: I1206 05:29:32.621969 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:32Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:32 crc kubenswrapper[4958]: I1206 05:29:32.642346 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12ef2ec44908dbfbd456d6a97cb50054adb79c14bc062daacdd1be9ec59404b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:32Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:32 crc kubenswrapper[4958]: I1206 05:29:32.658781 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5mx5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ab56cd4-7270-4252-b6e6-cbc102b84d97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16ff54e7650f6ee3df33d0394fb87b8021695f1697bdf30239e59ab458bea64c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-448jg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5mx5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:32Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:32 crc kubenswrapper[4958]: I1206 05:29:32.682454 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bbf3a5d-dfb7-4f26-a3a5-5a1198cffc78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ac8c174d2f2cc2b18fa67b1969e4ec3130a7a3ea2e1de302b6607334c8295a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a49f9fd54866385a2dd4c0218a2b5a828817807c2688301ea29004ba0578d556\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://799383b7dc23318bd8bdb994e83afadc90864baf5e694f5a7ef2208c6771660c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c9829c79fbd1ef57f374e8bf829162c2ae48074f4d02cc609d5f9b8791c61a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b2ffa013378c2ca81671e521b37d30af422fd6a418d17e7c2ca14b2da5557d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4001890a8c107a9365e884bae7b1d10f11ea1208ada74b61dcd4cf3db767f3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4001890a8c107a9365e884bae7b1d10f11ea1208ada74b61dcd4cf3db767f3fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:32Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:32 crc kubenswrapper[4958]: I1206 05:29:32.704987 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e7d0d99-10e2-4254-82b5-5490222f80fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f512fa9922310920769bab096c2bb8b2305a17dded545923c16b5e417db50b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dddeac64993fef3f326c165eeeb95bb4b5b9961f07f2b935e592b5aac9a51a32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50e80d39fcfd69db193b9e6c98c92a60983dd0c62b061a69f045193110d159c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc69c938c136778135bd2a90d1671af107b76c4571f8e52327c98f120665d7b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:32Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:32 crc kubenswrapper[4958]: I1206 05:29:32.724417 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:32 crc kubenswrapper[4958]: I1206 05:29:32.724729 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:32 crc kubenswrapper[4958]: I1206 05:29:32.724869 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:32 crc kubenswrapper[4958]: I1206 05:29:32.725009 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:32 crc kubenswrapper[4958]: I1206 05:29:32.725150 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:32Z","lastTransitionTime":"2025-12-06T05:29:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:32 crc kubenswrapper[4958]: I1206 05:29:32.828888 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:32 crc kubenswrapper[4958]: I1206 05:29:32.828966 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:32 crc kubenswrapper[4958]: I1206 05:29:32.828985 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:32 crc kubenswrapper[4958]: I1206 05:29:32.829012 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:32 crc kubenswrapper[4958]: I1206 05:29:32.829033 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:32Z","lastTransitionTime":"2025-12-06T05:29:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:32 crc kubenswrapper[4958]: I1206 05:29:32.933008 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:32 crc kubenswrapper[4958]: I1206 05:29:32.933074 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:32 crc kubenswrapper[4958]: I1206 05:29:32.933097 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:32 crc kubenswrapper[4958]: I1206 05:29:32.933126 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:32 crc kubenswrapper[4958]: I1206 05:29:32.933148 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:32Z","lastTransitionTime":"2025-12-06T05:29:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:33 crc kubenswrapper[4958]: I1206 05:29:33.036708 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:33 crc kubenswrapper[4958]: I1206 05:29:33.036753 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:33 crc kubenswrapper[4958]: I1206 05:29:33.036765 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:33 crc kubenswrapper[4958]: I1206 05:29:33.036782 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:33 crc kubenswrapper[4958]: I1206 05:29:33.036796 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:33Z","lastTransitionTime":"2025-12-06T05:29:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:33 crc kubenswrapper[4958]: I1206 05:29:33.140066 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:33 crc kubenswrapper[4958]: I1206 05:29:33.140162 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:33 crc kubenswrapper[4958]: I1206 05:29:33.140180 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:33 crc kubenswrapper[4958]: I1206 05:29:33.140204 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:33 crc kubenswrapper[4958]: I1206 05:29:33.140225 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:33Z","lastTransitionTime":"2025-12-06T05:29:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:33 crc kubenswrapper[4958]: I1206 05:29:33.186190 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 05:29:33 crc kubenswrapper[4958]: E1206 05:29:33.186372 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 05:30:37.18632809 +0000 UTC m=+147.720098893 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 05:29:33 crc kubenswrapper[4958]: I1206 05:29:33.186444 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 05:29:33 crc kubenswrapper[4958]: I1206 05:29:33.186561 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 05:29:33 crc kubenswrapper[4958]: I1206 05:29:33.186641 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 05:29:33 crc kubenswrapper[4958]: I1206 05:29:33.186725 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 05:29:33 crc kubenswrapper[4958]: E1206 05:29:33.186768 4958 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 06 05:29:33 crc kubenswrapper[4958]: E1206 05:29:33.186813 4958 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 06 05:29:33 crc kubenswrapper[4958]: E1206 05:29:33.186834 4958 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 06 05:29:33 crc kubenswrapper[4958]: E1206 05:29:33.186834 4958 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 06 05:29:33 crc kubenswrapper[4958]: E1206 05:29:33.186881 4958 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 06 05:29:33 crc 
kubenswrapper[4958]: E1206 05:29:33.186912 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-06 05:30:37.186885794 +0000 UTC m=+147.720656587 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 06 05:29:33 crc kubenswrapper[4958]: E1206 05:29:33.186829 4958 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 06 05:29:33 crc kubenswrapper[4958]: E1206 05:29:33.186940 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-06 05:30:37.186927995 +0000 UTC m=+147.720698788 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 06 05:29:33 crc kubenswrapper[4958]: E1206 05:29:33.186974 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-06 05:30:37.186952066 +0000 UTC m=+147.720722939 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 06 05:29:33 crc kubenswrapper[4958]: E1206 05:29:33.186986 4958 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 06 05:29:33 crc kubenswrapper[4958]: E1206 05:29:33.187016 4958 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 06 05:29:33 crc kubenswrapper[4958]: E1206 05:29:33.187110 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-06 05:30:37.1870821 +0000 UTC m=+147.720852953 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 06 05:29:33 crc kubenswrapper[4958]: I1206 05:29:33.243109 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:33 crc kubenswrapper[4958]: I1206 05:29:33.243254 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:33 crc kubenswrapper[4958]: I1206 05:29:33.243287 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:33 crc kubenswrapper[4958]: I1206 05:29:33.243320 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:33 crc kubenswrapper[4958]: I1206 05:29:33.243343 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:33Z","lastTransitionTime":"2025-12-06T05:29:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:33 crc kubenswrapper[4958]: I1206 05:29:33.346025 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:33 crc kubenswrapper[4958]: I1206 05:29:33.346090 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:33 crc kubenswrapper[4958]: I1206 05:29:33.346107 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:33 crc kubenswrapper[4958]: I1206 05:29:33.346133 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:33 crc kubenswrapper[4958]: I1206 05:29:33.346150 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:33Z","lastTransitionTime":"2025-12-06T05:29:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:33 crc kubenswrapper[4958]: I1206 05:29:33.449360 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:33 crc kubenswrapper[4958]: I1206 05:29:33.449423 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:33 crc kubenswrapper[4958]: I1206 05:29:33.449441 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:33 crc kubenswrapper[4958]: I1206 05:29:33.449466 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:33 crc kubenswrapper[4958]: I1206 05:29:33.449512 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:33Z","lastTransitionTime":"2025-12-06T05:29:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:33 crc kubenswrapper[4958]: I1206 05:29:33.552656 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:33 crc kubenswrapper[4958]: I1206 05:29:33.552740 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:33 crc kubenswrapper[4958]: I1206 05:29:33.552789 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:33 crc kubenswrapper[4958]: I1206 05:29:33.552814 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:33 crc kubenswrapper[4958]: I1206 05:29:33.552842 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:33Z","lastTransitionTime":"2025-12-06T05:29:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:33 crc kubenswrapper[4958]: I1206 05:29:33.655631 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:33 crc kubenswrapper[4958]: I1206 05:29:33.655726 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:33 crc kubenswrapper[4958]: I1206 05:29:33.655777 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:33 crc kubenswrapper[4958]: I1206 05:29:33.655797 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:33 crc kubenswrapper[4958]: I1206 05:29:33.655813 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:33Z","lastTransitionTime":"2025-12-06T05:29:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:33 crc kubenswrapper[4958]: I1206 05:29:33.758410 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:33 crc kubenswrapper[4958]: I1206 05:29:33.758498 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:33 crc kubenswrapper[4958]: I1206 05:29:33.758517 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:33 crc kubenswrapper[4958]: I1206 05:29:33.758542 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:33 crc kubenswrapper[4958]: I1206 05:29:33.758560 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:33Z","lastTransitionTime":"2025-12-06T05:29:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:33 crc kubenswrapper[4958]: I1206 05:29:33.761317 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 05:29:33 crc kubenswrapper[4958]: I1206 05:29:33.761386 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 05:29:33 crc kubenswrapper[4958]: E1206 05:29:33.761595 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 05:29:33 crc kubenswrapper[4958]: I1206 05:29:33.761329 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kb98t" Dec 06 05:29:33 crc kubenswrapper[4958]: I1206 05:29:33.761662 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 05:29:33 crc kubenswrapper[4958]: E1206 05:29:33.761764 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 05:29:33 crc kubenswrapper[4958]: E1206 05:29:33.761956 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kb98t" podUID="2c09fca2-7d91-412a-9814-64370d35b3e9" Dec 06 05:29:33 crc kubenswrapper[4958]: E1206 05:29:33.762132 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 05:29:33 crc kubenswrapper[4958]: I1206 05:29:33.862058 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:33 crc kubenswrapper[4958]: I1206 05:29:33.862118 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:33 crc kubenswrapper[4958]: I1206 05:29:33.862137 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:33 crc kubenswrapper[4958]: I1206 05:29:33.862163 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:33 crc kubenswrapper[4958]: I1206 05:29:33.862182 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:33Z","lastTransitionTime":"2025-12-06T05:29:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:33 crc kubenswrapper[4958]: I1206 05:29:33.965685 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:33 crc kubenswrapper[4958]: I1206 05:29:33.965756 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:33 crc kubenswrapper[4958]: I1206 05:29:33.965780 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:33 crc kubenswrapper[4958]: I1206 05:29:33.965809 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:33 crc kubenswrapper[4958]: I1206 05:29:33.965834 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:33Z","lastTransitionTime":"2025-12-06T05:29:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:34 crc kubenswrapper[4958]: I1206 05:29:34.068905 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:34 crc kubenswrapper[4958]: I1206 05:29:34.068946 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:34 crc kubenswrapper[4958]: I1206 05:29:34.068960 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:34 crc kubenswrapper[4958]: I1206 05:29:34.068977 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:34 crc kubenswrapper[4958]: I1206 05:29:34.068988 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:34Z","lastTransitionTime":"2025-12-06T05:29:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:34 crc kubenswrapper[4958]: I1206 05:29:34.171840 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:34 crc kubenswrapper[4958]: I1206 05:29:34.171909 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:34 crc kubenswrapper[4958]: I1206 05:29:34.171921 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:34 crc kubenswrapper[4958]: I1206 05:29:34.171938 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:34 crc kubenswrapper[4958]: I1206 05:29:34.171950 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:34Z","lastTransitionTime":"2025-12-06T05:29:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:34 crc kubenswrapper[4958]: I1206 05:29:34.274660 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:34 crc kubenswrapper[4958]: I1206 05:29:34.274733 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:34 crc kubenswrapper[4958]: I1206 05:29:34.274747 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:34 crc kubenswrapper[4958]: I1206 05:29:34.274773 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:34 crc kubenswrapper[4958]: I1206 05:29:34.274799 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:34Z","lastTransitionTime":"2025-12-06T05:29:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:34 crc kubenswrapper[4958]: I1206 05:29:34.280854 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:34 crc kubenswrapper[4958]: I1206 05:29:34.280878 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:34 crc kubenswrapper[4958]: I1206 05:29:34.280890 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:34 crc kubenswrapper[4958]: I1206 05:29:34.280906 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:34 crc kubenswrapper[4958]: I1206 05:29:34.280917 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:34Z","lastTransitionTime":"2025-12-06T05:29:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:34 crc kubenswrapper[4958]: E1206 05:29:34.300317 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ae601d9-1e3a-4939-b6d5-fbef7be2f380\\\",\\\"systemUUID\\\":\\\"d8c60597-c05a-4627-8199-844ddb77ec1c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:34Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:34 crc kubenswrapper[4958]: I1206 05:29:34.305221 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:34 crc kubenswrapper[4958]: I1206 05:29:34.305294 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 06 05:29:34 crc kubenswrapper[4958]: I1206 05:29:34.305313 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:34 crc kubenswrapper[4958]: I1206 05:29:34.305340 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:34 crc kubenswrapper[4958]: I1206 05:29:34.305359 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:34Z","lastTransitionTime":"2025-12-06T05:29:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:34 crc kubenswrapper[4958]: E1206 05:29:34.324956 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ae601d9-1e3a-4939-b6d5-fbef7be2f380\\\",\\\"systemUUID\\\":\\\"d8c60597-c05a-4627-8199-844ddb77ec1c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:34Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:34 crc kubenswrapper[4958]: I1206 05:29:34.329646 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:34 crc kubenswrapper[4958]: I1206 05:29:34.329730 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 06 05:29:34 crc kubenswrapper[4958]: I1206 05:29:34.329755 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:34 crc kubenswrapper[4958]: I1206 05:29:34.329787 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:34 crc kubenswrapper[4958]: I1206 05:29:34.329809 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:34Z","lastTransitionTime":"2025-12-06T05:29:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:34 crc kubenswrapper[4958]: E1206 05:29:34.346671 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ae601d9-1e3a-4939-b6d5-fbef7be2f380\\\",\\\"systemUUID\\\":\\\"d8c60597-c05a-4627-8199-844ddb77ec1c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:34Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:34 crc kubenswrapper[4958]: I1206 05:29:34.354354 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:34 crc kubenswrapper[4958]: I1206 05:29:34.354410 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 06 05:29:34 crc kubenswrapper[4958]: I1206 05:29:34.354456 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:34 crc kubenswrapper[4958]: I1206 05:29:34.354646 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:34 crc kubenswrapper[4958]: I1206 05:29:34.354710 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:34Z","lastTransitionTime":"2025-12-06T05:29:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:34 crc kubenswrapper[4958]: E1206 05:29:34.371054 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[ ... image list byte-identical to the previous status-patch attempt, elided ... ],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ae601d9-1e3a-4939-b6d5-fbef7be2f380\\\",\\\"systemUUID\\\":\\\"d8c60597-c05a-4627-8199-844ddb77ec1c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:34Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:34 crc kubenswrapper[4958]: I1206 05:29:34.375065 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:34 crc kubenswrapper[4958]: I1206 05:29:34.375136 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 06 05:29:34 crc kubenswrapper[4958]: I1206 05:29:34.375152 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:34 crc kubenswrapper[4958]: I1206 05:29:34.375193 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:34 crc kubenswrapper[4958]: I1206 05:29:34.375206 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:34Z","lastTransitionTime":"2025-12-06T05:29:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:34 crc kubenswrapper[4958]: E1206 05:29:34.391442 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:34Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[ ... image list byte-identical to the first status-patch attempt above, elided ... ],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ae601d9-1e3a-4939-b6d5-fbef7be2f380\\\",\\\"systemUUID\\\":\\\"d8c60597-c05a-4627-8199-844ddb77ec1c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:34Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:34 crc kubenswrapper[4958]: E1206 05:29:34.391698 4958 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 06 05:29:34 crc kubenswrapper[4958]: I1206 05:29:34.393648 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 06 05:29:34 crc kubenswrapper[4958]: I1206 05:29:34.393699 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:34 crc kubenswrapper[4958]: I1206 05:29:34.393712 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:34 crc kubenswrapper[4958]: I1206 05:29:34.393731 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:34 crc kubenswrapper[4958]: I1206 05:29:34.393744 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:34Z","lastTransitionTime":"2025-12-06T05:29:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:34 crc kubenswrapper[4958]: I1206 05:29:34.496673 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:34 crc kubenswrapper[4958]: I1206 05:29:34.496974 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:34 crc kubenswrapper[4958]: I1206 05:29:34.497143 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:34 crc kubenswrapper[4958]: I1206 05:29:34.497272 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:34 crc kubenswrapper[4958]: I1206 05:29:34.497387 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:34Z","lastTransitionTime":"2025-12-06T05:29:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:34 crc kubenswrapper[4958]: I1206 05:29:34.601292 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:34 crc kubenswrapper[4958]: I1206 05:29:34.601389 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:34 crc kubenswrapper[4958]: I1206 05:29:34.601414 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:34 crc kubenswrapper[4958]: I1206 05:29:34.601450 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:34 crc kubenswrapper[4958]: I1206 05:29:34.601511 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:34Z","lastTransitionTime":"2025-12-06T05:29:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:34 crc kubenswrapper[4958]: I1206 05:29:34.704980 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:34 crc kubenswrapper[4958]: I1206 05:29:34.705030 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:34 crc kubenswrapper[4958]: I1206 05:29:34.705043 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:34 crc kubenswrapper[4958]: I1206 05:29:34.705060 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:34 crc kubenswrapper[4958]: I1206 05:29:34.705069 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:34Z","lastTransitionTime":"2025-12-06T05:29:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:34 crc kubenswrapper[4958]: I1206 05:29:34.808187 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:34 crc kubenswrapper[4958]: I1206 05:29:34.808270 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:34 crc kubenswrapper[4958]: I1206 05:29:34.808297 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:34 crc kubenswrapper[4958]: I1206 05:29:34.808329 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:34 crc kubenswrapper[4958]: I1206 05:29:34.808351 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:34Z","lastTransitionTime":"2025-12-06T05:29:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:34 crc kubenswrapper[4958]: I1206 05:29:34.910671 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:34 crc kubenswrapper[4958]: I1206 05:29:34.911070 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:34 crc kubenswrapper[4958]: I1206 05:29:34.911207 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:34 crc kubenswrapper[4958]: I1206 05:29:34.911342 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:34 crc kubenswrapper[4958]: I1206 05:29:34.911551 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:34Z","lastTransitionTime":"2025-12-06T05:29:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:35 crc kubenswrapper[4958]: I1206 05:29:35.014168 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:35 crc kubenswrapper[4958]: I1206 05:29:35.014203 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:35 crc kubenswrapper[4958]: I1206 05:29:35.014214 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:35 crc kubenswrapper[4958]: I1206 05:29:35.014229 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:35 crc kubenswrapper[4958]: I1206 05:29:35.014238 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:35Z","lastTransitionTime":"2025-12-06T05:29:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:35 crc kubenswrapper[4958]: I1206 05:29:35.116745 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:35 crc kubenswrapper[4958]: I1206 05:29:35.116814 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:35 crc kubenswrapper[4958]: I1206 05:29:35.116829 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:35 crc kubenswrapper[4958]: I1206 05:29:35.116851 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:35 crc kubenswrapper[4958]: I1206 05:29:35.116865 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:35Z","lastTransitionTime":"2025-12-06T05:29:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:35 crc kubenswrapper[4958]: I1206 05:29:35.219669 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:35 crc kubenswrapper[4958]: I1206 05:29:35.219738 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:35 crc kubenswrapper[4958]: I1206 05:29:35.219757 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:35 crc kubenswrapper[4958]: I1206 05:29:35.219784 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:35 crc kubenswrapper[4958]: I1206 05:29:35.219806 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:35Z","lastTransitionTime":"2025-12-06T05:29:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:35 crc kubenswrapper[4958]: I1206 05:29:35.322467 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:35 crc kubenswrapper[4958]: I1206 05:29:35.322560 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:35 crc kubenswrapper[4958]: I1206 05:29:35.322586 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:35 crc kubenswrapper[4958]: I1206 05:29:35.322620 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:35 crc kubenswrapper[4958]: I1206 05:29:35.322646 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:35Z","lastTransitionTime":"2025-12-06T05:29:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:35 crc kubenswrapper[4958]: I1206 05:29:35.425189 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:35 crc kubenswrapper[4958]: I1206 05:29:35.425241 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:35 crc kubenswrapper[4958]: I1206 05:29:35.425259 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:35 crc kubenswrapper[4958]: I1206 05:29:35.425284 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:35 crc kubenswrapper[4958]: I1206 05:29:35.425305 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:35Z","lastTransitionTime":"2025-12-06T05:29:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:35 crc kubenswrapper[4958]: I1206 05:29:35.529324 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:35 crc kubenswrapper[4958]: I1206 05:29:35.529395 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:35 crc kubenswrapper[4958]: I1206 05:29:35.529421 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:35 crc kubenswrapper[4958]: I1206 05:29:35.529449 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:35 crc kubenswrapper[4958]: I1206 05:29:35.529504 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:35Z","lastTransitionTime":"2025-12-06T05:29:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:35 crc kubenswrapper[4958]: I1206 05:29:35.632713 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:35 crc kubenswrapper[4958]: I1206 05:29:35.632805 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:35 crc kubenswrapper[4958]: I1206 05:29:35.632824 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:35 crc kubenswrapper[4958]: I1206 05:29:35.632851 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:35 crc kubenswrapper[4958]: I1206 05:29:35.632867 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:35Z","lastTransitionTime":"2025-12-06T05:29:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:35 crc kubenswrapper[4958]: I1206 05:29:35.736448 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:35 crc kubenswrapper[4958]: I1206 05:29:35.736588 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:35 crc kubenswrapper[4958]: I1206 05:29:35.736658 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:35 crc kubenswrapper[4958]: I1206 05:29:35.736693 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:35 crc kubenswrapper[4958]: I1206 05:29:35.736716 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:35Z","lastTransitionTime":"2025-12-06T05:29:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:35 crc kubenswrapper[4958]: I1206 05:29:35.761455 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kb98t" Dec 06 05:29:35 crc kubenswrapper[4958]: I1206 05:29:35.761578 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 05:29:35 crc kubenswrapper[4958]: I1206 05:29:35.761740 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 05:29:35 crc kubenswrapper[4958]: I1206 05:29:35.761795 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 05:29:35 crc kubenswrapper[4958]: E1206 05:29:35.761929 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kb98t" podUID="2c09fca2-7d91-412a-9814-64370d35b3e9" Dec 06 05:29:35 crc kubenswrapper[4958]: E1206 05:29:35.762106 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 05:29:35 crc kubenswrapper[4958]: E1206 05:29:35.762321 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 05:29:35 crc kubenswrapper[4958]: E1206 05:29:35.762426 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 05:29:35 crc kubenswrapper[4958]: I1206 05:29:35.840371 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:35 crc kubenswrapper[4958]: I1206 05:29:35.840434 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:35 crc kubenswrapper[4958]: I1206 05:29:35.840452 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:35 crc kubenswrapper[4958]: I1206 05:29:35.840504 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:35 crc kubenswrapper[4958]: I1206 05:29:35.840521 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:35Z","lastTransitionTime":"2025-12-06T05:29:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:35 crc kubenswrapper[4958]: I1206 05:29:35.943777 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:35 crc kubenswrapper[4958]: I1206 05:29:35.943834 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:35 crc kubenswrapper[4958]: I1206 05:29:35.943852 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:35 crc kubenswrapper[4958]: I1206 05:29:35.943876 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:35 crc kubenswrapper[4958]: I1206 05:29:35.943892 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:35Z","lastTransitionTime":"2025-12-06T05:29:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:36 crc kubenswrapper[4958]: I1206 05:29:36.046486 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:36 crc kubenswrapper[4958]: I1206 05:29:36.046539 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:36 crc kubenswrapper[4958]: I1206 05:29:36.046556 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:36 crc kubenswrapper[4958]: I1206 05:29:36.046576 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:36 crc kubenswrapper[4958]: I1206 05:29:36.046592 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:36Z","lastTransitionTime":"2025-12-06T05:29:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:36 crc kubenswrapper[4958]: I1206 05:29:36.149588 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:36 crc kubenswrapper[4958]: I1206 05:29:36.149636 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:36 crc kubenswrapper[4958]: I1206 05:29:36.149645 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:36 crc kubenswrapper[4958]: I1206 05:29:36.149659 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:36 crc kubenswrapper[4958]: I1206 05:29:36.149669 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:36Z","lastTransitionTime":"2025-12-06T05:29:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:36 crc kubenswrapper[4958]: I1206 05:29:36.252663 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:36 crc kubenswrapper[4958]: I1206 05:29:36.253083 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:36 crc kubenswrapper[4958]: I1206 05:29:36.253106 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:36 crc kubenswrapper[4958]: I1206 05:29:36.253133 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:36 crc kubenswrapper[4958]: I1206 05:29:36.253151 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:36Z","lastTransitionTime":"2025-12-06T05:29:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:36 crc kubenswrapper[4958]: I1206 05:29:36.356685 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:36 crc kubenswrapper[4958]: I1206 05:29:36.356753 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:36 crc kubenswrapper[4958]: I1206 05:29:36.356773 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:36 crc kubenswrapper[4958]: I1206 05:29:36.356807 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:36 crc kubenswrapper[4958]: I1206 05:29:36.356824 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:36Z","lastTransitionTime":"2025-12-06T05:29:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:36 crc kubenswrapper[4958]: I1206 05:29:36.460194 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:36 crc kubenswrapper[4958]: I1206 05:29:36.460260 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:36 crc kubenswrapper[4958]: I1206 05:29:36.460279 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:36 crc kubenswrapper[4958]: I1206 05:29:36.460306 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:36 crc kubenswrapper[4958]: I1206 05:29:36.460323 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:36Z","lastTransitionTime":"2025-12-06T05:29:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:36 crc kubenswrapper[4958]: I1206 05:29:36.563762 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:36 crc kubenswrapper[4958]: I1206 05:29:36.563877 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:36 crc kubenswrapper[4958]: I1206 05:29:36.563895 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:36 crc kubenswrapper[4958]: I1206 05:29:36.563921 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:36 crc kubenswrapper[4958]: I1206 05:29:36.563940 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:36Z","lastTransitionTime":"2025-12-06T05:29:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:36 crc kubenswrapper[4958]: I1206 05:29:36.667208 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:36 crc kubenswrapper[4958]: I1206 05:29:36.667307 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:36 crc kubenswrapper[4958]: I1206 05:29:36.667326 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:36 crc kubenswrapper[4958]: I1206 05:29:36.667351 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:36 crc kubenswrapper[4958]: I1206 05:29:36.667372 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:36Z","lastTransitionTime":"2025-12-06T05:29:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:36 crc kubenswrapper[4958]: I1206 05:29:36.770511 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:36 crc kubenswrapper[4958]: I1206 05:29:36.770596 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:36 crc kubenswrapper[4958]: I1206 05:29:36.770623 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:36 crc kubenswrapper[4958]: I1206 05:29:36.770652 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:36 crc kubenswrapper[4958]: I1206 05:29:36.770675 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:36Z","lastTransitionTime":"2025-12-06T05:29:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:36 crc kubenswrapper[4958]: I1206 05:29:36.875805 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:36 crc kubenswrapper[4958]: I1206 05:29:36.875856 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:36 crc kubenswrapper[4958]: I1206 05:29:36.875869 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:36 crc kubenswrapper[4958]: I1206 05:29:36.875886 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:36 crc kubenswrapper[4958]: I1206 05:29:36.875904 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:36Z","lastTransitionTime":"2025-12-06T05:29:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:36 crc kubenswrapper[4958]: I1206 05:29:36.979721 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:36 crc kubenswrapper[4958]: I1206 05:29:36.979789 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:36 crc kubenswrapper[4958]: I1206 05:29:36.979813 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:36 crc kubenswrapper[4958]: I1206 05:29:36.979845 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:36 crc kubenswrapper[4958]: I1206 05:29:36.979883 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:36Z","lastTransitionTime":"2025-12-06T05:29:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:37 crc kubenswrapper[4958]: I1206 05:29:37.082465 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:37 crc kubenswrapper[4958]: I1206 05:29:37.082584 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:37 crc kubenswrapper[4958]: I1206 05:29:37.082603 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:37 crc kubenswrapper[4958]: I1206 05:29:37.082629 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:37 crc kubenswrapper[4958]: I1206 05:29:37.082647 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:37Z","lastTransitionTime":"2025-12-06T05:29:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:37 crc kubenswrapper[4958]: I1206 05:29:37.185955 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:37 crc kubenswrapper[4958]: I1206 05:29:37.186020 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:37 crc kubenswrapper[4958]: I1206 05:29:37.186043 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:37 crc kubenswrapper[4958]: I1206 05:29:37.186068 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:37 crc kubenswrapper[4958]: I1206 05:29:37.186089 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:37Z","lastTransitionTime":"2025-12-06T05:29:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:37 crc kubenswrapper[4958]: I1206 05:29:37.289589 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:37 crc kubenswrapper[4958]: I1206 05:29:37.289651 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:37 crc kubenswrapper[4958]: I1206 05:29:37.289667 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:37 crc kubenswrapper[4958]: I1206 05:29:37.289692 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:37 crc kubenswrapper[4958]: I1206 05:29:37.289711 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:37Z","lastTransitionTime":"2025-12-06T05:29:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:37 crc kubenswrapper[4958]: I1206 05:29:37.393627 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:37 crc kubenswrapper[4958]: I1206 05:29:37.393707 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:37 crc kubenswrapper[4958]: I1206 05:29:37.393732 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:37 crc kubenswrapper[4958]: I1206 05:29:37.393757 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:37 crc kubenswrapper[4958]: I1206 05:29:37.393775 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:37Z","lastTransitionTime":"2025-12-06T05:29:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:37 crc kubenswrapper[4958]: I1206 05:29:37.497280 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:37 crc kubenswrapper[4958]: I1206 05:29:37.497610 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:37 crc kubenswrapper[4958]: I1206 05:29:37.497832 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:37 crc kubenswrapper[4958]: I1206 05:29:37.498009 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:37 crc kubenswrapper[4958]: I1206 05:29:37.498199 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:37Z","lastTransitionTime":"2025-12-06T05:29:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:37 crc kubenswrapper[4958]: I1206 05:29:37.601544 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:37 crc kubenswrapper[4958]: I1206 05:29:37.601623 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:37 crc kubenswrapper[4958]: I1206 05:29:37.601652 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:37 crc kubenswrapper[4958]: I1206 05:29:37.601679 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:37 crc kubenswrapper[4958]: I1206 05:29:37.601701 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:37Z","lastTransitionTime":"2025-12-06T05:29:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:37 crc kubenswrapper[4958]: I1206 05:29:37.705395 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:37 crc kubenswrapper[4958]: I1206 05:29:37.705453 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:37 crc kubenswrapper[4958]: I1206 05:29:37.705527 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:37 crc kubenswrapper[4958]: I1206 05:29:37.705576 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:37 crc kubenswrapper[4958]: I1206 05:29:37.705596 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:37Z","lastTransitionTime":"2025-12-06T05:29:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:37 crc kubenswrapper[4958]: I1206 05:29:37.762027 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 05:29:37 crc kubenswrapper[4958]: E1206 05:29:37.762246 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 05:29:37 crc kubenswrapper[4958]: I1206 05:29:37.762447 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 05:29:37 crc kubenswrapper[4958]: I1206 05:29:37.762447 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 05:29:37 crc kubenswrapper[4958]: E1206 05:29:37.762674 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 05:29:37 crc kubenswrapper[4958]: I1206 05:29:37.762697 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kb98t" Dec 06 05:29:37 crc kubenswrapper[4958]: E1206 05:29:37.762799 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 05:29:37 crc kubenswrapper[4958]: E1206 05:29:37.762904 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kb98t" podUID="2c09fca2-7d91-412a-9814-64370d35b3e9" Dec 06 05:29:37 crc kubenswrapper[4958]: I1206 05:29:37.808398 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:37 crc kubenswrapper[4958]: I1206 05:29:37.808452 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:37 crc kubenswrapper[4958]: I1206 05:29:37.808507 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:37 crc kubenswrapper[4958]: I1206 05:29:37.808538 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:37 crc kubenswrapper[4958]: I1206 05:29:37.808561 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:37Z","lastTransitionTime":"2025-12-06T05:29:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:37 crc kubenswrapper[4958]: I1206 05:29:37.913453 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:37 crc kubenswrapper[4958]: I1206 05:29:37.913561 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:37 crc kubenswrapper[4958]: I1206 05:29:37.913579 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:37 crc kubenswrapper[4958]: I1206 05:29:37.913605 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:37 crc kubenswrapper[4958]: I1206 05:29:37.913619 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:37Z","lastTransitionTime":"2025-12-06T05:29:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:38 crc kubenswrapper[4958]: I1206 05:29:38.016182 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:38 crc kubenswrapper[4958]: I1206 05:29:38.016412 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:38 crc kubenswrapper[4958]: I1206 05:29:38.016722 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:38 crc kubenswrapper[4958]: I1206 05:29:38.016893 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:38 crc kubenswrapper[4958]: I1206 05:29:38.017034 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:38Z","lastTransitionTime":"2025-12-06T05:29:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:38 crc kubenswrapper[4958]: I1206 05:29:38.120721 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:38 crc kubenswrapper[4958]: I1206 05:29:38.120844 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:38 crc kubenswrapper[4958]: I1206 05:29:38.120867 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:38 crc kubenswrapper[4958]: I1206 05:29:38.120895 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:38 crc kubenswrapper[4958]: I1206 05:29:38.120917 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:38Z","lastTransitionTime":"2025-12-06T05:29:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:38 crc kubenswrapper[4958]: I1206 05:29:38.224685 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:38 crc kubenswrapper[4958]: I1206 05:29:38.224744 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:38 crc kubenswrapper[4958]: I1206 05:29:38.224761 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:38 crc kubenswrapper[4958]: I1206 05:29:38.224785 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:38 crc kubenswrapper[4958]: I1206 05:29:38.224802 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:38Z","lastTransitionTime":"2025-12-06T05:29:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:38 crc kubenswrapper[4958]: I1206 05:29:38.328185 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:38 crc kubenswrapper[4958]: I1206 05:29:38.328252 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:38 crc kubenswrapper[4958]: I1206 05:29:38.328275 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:38 crc kubenswrapper[4958]: I1206 05:29:38.328303 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:38 crc kubenswrapper[4958]: I1206 05:29:38.328324 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:38Z","lastTransitionTime":"2025-12-06T05:29:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:38 crc kubenswrapper[4958]: I1206 05:29:38.431509 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:38 crc kubenswrapper[4958]: I1206 05:29:38.431576 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:38 crc kubenswrapper[4958]: I1206 05:29:38.431595 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:38 crc kubenswrapper[4958]: I1206 05:29:38.431618 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:38 crc kubenswrapper[4958]: I1206 05:29:38.431637 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:38Z","lastTransitionTime":"2025-12-06T05:29:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:38 crc kubenswrapper[4958]: I1206 05:29:38.534691 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:38 crc kubenswrapper[4958]: I1206 05:29:38.534740 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:38 crc kubenswrapper[4958]: I1206 05:29:38.534750 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:38 crc kubenswrapper[4958]: I1206 05:29:38.534766 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:38 crc kubenswrapper[4958]: I1206 05:29:38.534778 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:38Z","lastTransitionTime":"2025-12-06T05:29:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:38 crc kubenswrapper[4958]: I1206 05:29:38.637313 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:38 crc kubenswrapper[4958]: I1206 05:29:38.637361 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:38 crc kubenswrapper[4958]: I1206 05:29:38.637376 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:38 crc kubenswrapper[4958]: I1206 05:29:38.637399 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:38 crc kubenswrapper[4958]: I1206 05:29:38.637413 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:38Z","lastTransitionTime":"2025-12-06T05:29:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:38 crc kubenswrapper[4958]: I1206 05:29:38.740597 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:38 crc kubenswrapper[4958]: I1206 05:29:38.740655 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:38 crc kubenswrapper[4958]: I1206 05:29:38.740672 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:38 crc kubenswrapper[4958]: I1206 05:29:38.740696 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:38 crc kubenswrapper[4958]: I1206 05:29:38.740713 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:38Z","lastTransitionTime":"2025-12-06T05:29:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:38 crc kubenswrapper[4958]: I1206 05:29:38.845005 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:38 crc kubenswrapper[4958]: I1206 05:29:38.845094 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:38 crc kubenswrapper[4958]: I1206 05:29:38.845115 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:38 crc kubenswrapper[4958]: I1206 05:29:38.845141 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:38 crc kubenswrapper[4958]: I1206 05:29:38.845159 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:38Z","lastTransitionTime":"2025-12-06T05:29:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:38 crc kubenswrapper[4958]: I1206 05:29:38.949077 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:38 crc kubenswrapper[4958]: I1206 05:29:38.949148 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:38 crc kubenswrapper[4958]: I1206 05:29:38.949171 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:38 crc kubenswrapper[4958]: I1206 05:29:38.949205 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:38 crc kubenswrapper[4958]: I1206 05:29:38.949227 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:38Z","lastTransitionTime":"2025-12-06T05:29:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:39 crc kubenswrapper[4958]: I1206 05:29:39.052600 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:39 crc kubenswrapper[4958]: I1206 05:29:39.052771 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:39 crc kubenswrapper[4958]: I1206 05:29:39.052803 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:39 crc kubenswrapper[4958]: I1206 05:29:39.052835 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:39 crc kubenswrapper[4958]: I1206 05:29:39.052865 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:39Z","lastTransitionTime":"2025-12-06T05:29:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:39 crc kubenswrapper[4958]: I1206 05:29:39.156305 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:39 crc kubenswrapper[4958]: I1206 05:29:39.156370 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:39 crc kubenswrapper[4958]: I1206 05:29:39.156388 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:39 crc kubenswrapper[4958]: I1206 05:29:39.156413 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:39 crc kubenswrapper[4958]: I1206 05:29:39.156432 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:39Z","lastTransitionTime":"2025-12-06T05:29:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:39 crc kubenswrapper[4958]: I1206 05:29:39.259625 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:39 crc kubenswrapper[4958]: I1206 05:29:39.259695 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:39 crc kubenswrapper[4958]: I1206 05:29:39.259735 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:39 crc kubenswrapper[4958]: I1206 05:29:39.259765 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:39 crc kubenswrapper[4958]: I1206 05:29:39.259787 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:39Z","lastTransitionTime":"2025-12-06T05:29:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:39 crc kubenswrapper[4958]: I1206 05:29:39.363794 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:39 crc kubenswrapper[4958]: I1206 05:29:39.363860 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:39 crc kubenswrapper[4958]: I1206 05:29:39.363882 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:39 crc kubenswrapper[4958]: I1206 05:29:39.363912 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:39 crc kubenswrapper[4958]: I1206 05:29:39.363932 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:39Z","lastTransitionTime":"2025-12-06T05:29:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:39 crc kubenswrapper[4958]: I1206 05:29:39.466533 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:39 crc kubenswrapper[4958]: I1206 05:29:39.466601 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:39 crc kubenswrapper[4958]: I1206 05:29:39.466622 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:39 crc kubenswrapper[4958]: I1206 05:29:39.466650 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:39 crc kubenswrapper[4958]: I1206 05:29:39.466672 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:39Z","lastTransitionTime":"2025-12-06T05:29:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:39 crc kubenswrapper[4958]: I1206 05:29:39.569788 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:39 crc kubenswrapper[4958]: I1206 05:29:39.569849 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:39 crc kubenswrapper[4958]: I1206 05:29:39.569871 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:39 crc kubenswrapper[4958]: I1206 05:29:39.569895 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:39 crc kubenswrapper[4958]: I1206 05:29:39.569913 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:39Z","lastTransitionTime":"2025-12-06T05:29:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:39 crc kubenswrapper[4958]: I1206 05:29:39.672829 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:39 crc kubenswrapper[4958]: I1206 05:29:39.672899 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:39 crc kubenswrapper[4958]: I1206 05:29:39.672919 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:39 crc kubenswrapper[4958]: I1206 05:29:39.672945 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:39 crc kubenswrapper[4958]: I1206 05:29:39.672963 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:39Z","lastTransitionTime":"2025-12-06T05:29:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:39 crc kubenswrapper[4958]: I1206 05:29:39.761873 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 05:29:39 crc kubenswrapper[4958]: I1206 05:29:39.761982 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kb98t" Dec 06 05:29:39 crc kubenswrapper[4958]: E1206 05:29:39.762029 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 05:29:39 crc kubenswrapper[4958]: E1206 05:29:39.762170 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kb98t" podUID="2c09fca2-7d91-412a-9814-64370d35b3e9" Dec 06 05:29:39 crc kubenswrapper[4958]: I1206 05:29:39.762227 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 05:29:39 crc kubenswrapper[4958]: E1206 05:29:39.762795 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 05:29:39 crc kubenswrapper[4958]: I1206 05:29:39.762888 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 05:29:39 crc kubenswrapper[4958]: E1206 05:29:39.763062 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 05:29:39 crc kubenswrapper[4958]: I1206 05:29:39.774902 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:39 crc kubenswrapper[4958]: I1206 05:29:39.774966 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:39 crc kubenswrapper[4958]: I1206 05:29:39.774988 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:39 crc kubenswrapper[4958]: I1206 05:29:39.775013 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:39 crc kubenswrapper[4958]: I1206 05:29:39.775032 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:39Z","lastTransitionTime":"2025-12-06T05:29:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:39 crc kubenswrapper[4958]: I1206 05:29:39.788845 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:39Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:39 crc kubenswrapper[4958]: I1206 05:29:39.810921 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85e394f19d7c9a454876e48621768d9f956790321aeea3140db17586f532e28c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7cba9186696fbc52dacdd8c02c28a44f8e4f11e2bd39065212cddf3acdd7f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:39Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:39 crc kubenswrapper[4958]: I1206 05:29:39.830050 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c13528c0-da5d-4d55-9155-2c29c33edfc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5cd9b0b70572f52b50e426a747be79d9f46795a9f5e890f4c6837ba67a34188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd2256c5c47e4b4d5c0d27e0a687c531d3d4e2162c3439a57acb90784599b441\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5ktnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:39Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:39 crc kubenswrapper[4958]: I1206 05:29:39.856264 4958 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cxvtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48b90387-4ea9-47a3-8aa1-b8b19c5ed186\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce9f0a37477909d730255694146829d5d4527e3c3c3928bdd475b58506bbd544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d3ce1b34e648dbc34656fb0430468d6a998992cdb52f7858c3e9032781cabf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d3ce1b34e648dbc34656fb0430468d6a998992cdb52f7858c3e9032781cabf3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45259e88c3c2b9523b20a34f02fb59aa21e487dd207c8cb6e29ac571d89cd75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45259e88c3c2b9523b20a34f02fb59aa21e487dd207c8cb6e29ac571d89cd75c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5907422cf6b36b8d77741d339e507eee8a26c62b51c03458fbab243363441b49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5907422cf6b36b8d77741d339e507eee8a26c62b51c03458fbab243363441b49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdba082661b1dd38d5628fedac7afcea2e2960d502c61b9c21c723c0f6be8dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdba082661b1dd38d5628fedac7afcea2e2960d502c61b9c21c723c0f6be8dab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afdd138a15c3949f3eb5cdcac4d372fe8db3f1822f9fa33eb7f211ba93764903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afdd138a15c3949f3eb5cdcac4d372fe8db3f1822f9fa33eb7f211ba93764903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09ba0a78226b9b3b3b12202389168f7e17a1f2ba992a22dc9d436d29aac0d327\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09ba0a78226b9b3b3b12202389168f7e17a1f2ba992a22dc9d436d29aac0d327\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cxvtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:39Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:39 crc kubenswrapper[4958]: I1206 05:29:39.877922 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:39 crc kubenswrapper[4958]: I1206 05:29:39.877989 4958 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:39 crc kubenswrapper[4958]: I1206 05:29:39.878010 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:39 crc kubenswrapper[4958]: I1206 05:29:39.878034 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:39 crc kubenswrapper[4958]: I1206 05:29:39.878053 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:39Z","lastTransitionTime":"2025-12-06T05:29:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:39 crc kubenswrapper[4958]: I1206 05:29:39.886598 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c75c3b8-96d9-442e-b3c4-92d10ad33929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c7cf12b8a7e5da199c6d8c5bdeb41243c911cef6cab0bac81e407502a8ff144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6e7454c626f79f7ef231e33e2917e2a65265fc21ccf0bb078c9b7f5b3cec240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f87617d1f4b9c7ad893a539ddf123f3e213d38e44c084441f565d6535078c7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f6d7aa9cdfe8211f420e0a25851c58e125bd09b9d90ad4724703cc1112e45b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://473d669d19d2507144f95718dc331ea994a36254f7f7b6e2b262a0e9cb46bfc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ac1117aba254b7e048d9dd3feff2587d218f6d7a6f4ccbe455eb2b19e534aa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3511a58e1529f892844574e8b2faceca960108d4
4afb02aa4f882d19471c5519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3511a58e1529f892844574e8b2faceca960108d44afb02aa4f882d19471c5519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T05:29:31Z\\\",\\\"message\\\":\\\"ces.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1206 05:29:30.743372 6921 services_controller.go:452] Built service openshift-apiserver/api per-node LB for network=default: []services.LB{}\\\\nI1206 05:29:30.743368 6921 services_controller.go:443] Built service openshift-console-operator/metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.88\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI1206 05:29:30.743388 6921 services_controller.go:453] Built service openshift-apiserver/api template LB for network=default: []services.LB{}\\\\nI1206 05:29:30.743398 6921 services_controller.go:444] Built service openshift-console-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nF1206 05:29:30.742791 6921 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T05:29:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-f4swt_openshift-ovn-kubernetes(4c75c3b8-96d9-442e-b3c4-92d10ad33929)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90354a506bfb2c066060aaa6c9c48c8fe36acc7b3fb7f76905c7aa2622c8c93e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f16219fb96bfa08d1c4e1fbacac446f07ea75a221d1a6a4eddadd9e587f75ea9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f16219fb96bfa08d1c4e1fbacac446f07ea75a221d1a6a4eddadd9e587f75ea9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-f4swt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:39Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:39 crc kubenswrapper[4958]: I1206 05:29:39.900241 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ba128f4-2e42-45be-bda0-0253dd5502d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74f69d05943422b54bfdaf337dabe66d5a16adff7bcd0aec89231dfaedb42ade\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7af93ebbf44bd284a93e374862eb9b1f0ca1e2426eb884
72157c5dcabf8c63b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7af93ebbf44bd284a93e374862eb9b1f0ca1e2426eb88472157c5dcabf8c63b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:39Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:39 crc kubenswrapper[4958]: I1206 05:29:39.918822 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"326ec8f3-f884-4d81-85c0-0fb98ff16b80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ab8b97e84e2f0ddf99e989d15f84695ae263c722141d62885129f9eb48226a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfa163f9e3eb5b2418860c0fa7fcbf96990229ac20da3d72cf18d3c326f466b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15f70e4124ff7d2705704dab6bf80d6ee77c2181e9bb400f7af227e9c18f0d20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aabbafda3fdfc2434de99224b35e72d2d536ed8b4af7dbdfa16058ab7f638531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aabbafda3fdfc2434de99224b35e72d2d536ed8b4af7dbdfa16058ab7f638531\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:39Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:39 crc kubenswrapper[4958]: I1206 05:29:39.941942 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22338e19d740d76b4395405aae08f7cc69ca43dcf680a2bac1323c989e74aa02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:39Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:39 crc kubenswrapper[4958]: I1206 05:29:39.958386 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kb98t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c09fca2-7d91-412a-9814-64370d35b3e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pscv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pscv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kb98t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:39Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:39 crc kubenswrapper[4958]: I1206 05:29:39.973359 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-j25xk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f69a7b70-83e2-4775-872c-cc414f3d08ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3077eaf7e49467b9443569425f7666246ccd40207193f35407bd5ca69799be6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qktwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c146f2ee32c0a80c9bb07f7b61d094a36ba1bdd250b578479193dc8e50dc7cad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qktwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-j25xk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:39Z is after 2025-08-24T17:21:41Z" Dec 06 
05:29:39 crc kubenswrapper[4958]: I1206 05:29:39.980139 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:39 crc kubenswrapper[4958]: I1206 05:29:39.980414 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:39 crc kubenswrapper[4958]: I1206 05:29:39.980654 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:39 crc kubenswrapper[4958]: I1206 05:29:39.980960 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:39 crc kubenswrapper[4958]: I1206 05:29:39.981165 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:39Z","lastTransitionTime":"2025-12-06T05:29:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:40 crc kubenswrapper[4958]: I1206 05:29:40.004336 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3942544-663d-452b-851f-7ae1951315d0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c605d49963ffec3a03f9166f2a4b945adca772c1e6f9ef0c43e3b43498bb520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fa2e5229064a9479f641c0c6d934e8eda769ee88be13401c43ae136ce8bc24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b9009
2272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0fa5b094aed0445fa892dd4d08fa83de565bda82c6ca9b9d536a3275df19b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9356ce9f50a037c7bbb99085c8d3a1cf6648fca53bed4eb86d1be5cb82386bfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bb5270cbb6f2e29e1393dc0724b38332f5d2b7bd2947352c848a22efbcd979c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4460d6ee5ad193a5fdd65c4ddf2e2837db3e1a0a5f8f32b84ad83cc74cc6c779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\
":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4460d6ee5ad193a5fdd65c4ddf2e2837db3e1a0a5f8f32b84ad83cc74cc6c779\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d0adb65980c4e04df9b38b248a2d6bf58ac214ec2fe6ee6f8fbee38dc114904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d0adb65980c4e04df9b38b248a2d6bf58ac214ec2fe6ee6f8fbee38dc114904\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b4a1b86d7480be2b839295c4c7dc60abd79e17d12514d7a7f90453498dd586e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4a1b86d7480be2b839295c4c7dc60abd79e17d12514d7a7f90453498dd586e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:40Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:40 crc kubenswrapper[4958]: I1206 05:29:40.023525 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:40Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:40 crc kubenswrapper[4958]: I1206 05:29:40.039147 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bxxwq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"802ce14e-baaf-4d0d-87e3-3457209d8cda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://691435e89fcfa5b2dbd5ee54379f791a3e4f4df43c500a83bc61f7b20aca7ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlz6h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bxxwq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:40Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:40 crc kubenswrapper[4958]: I1206 05:29:40.058833 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wr7h5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ecb41e730cf6bf34bee3ac4577e76690a314f9466e2bd69d896f99091c3657c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24bb15c5aabe30ec2b8cea2cfd92c4a99c8e2170aecfe6cf33f745ecccf32c42\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T05:29:15Z\\\",\\\"message\\\":\\\"2025-12-06T05:28:30+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_38a7603c-7295-4bd3-af02-27bf794eed27\\\\n2025-12-06T05:28:30+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_38a7603c-7295-4bd3-af02-27bf794eed27 to /host/opt/cni/bin/\\\\n2025-12-06T05:28:30Z [verbose] multus-daemon started\\\\n2025-12-06T05:28:30Z [verbose] Readiness Indicator file check\\\\n2025-12-06T05:29:15Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-778ks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wr7h5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:40Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:40 crc kubenswrapper[4958]: I1206 05:29:40.077668 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:40Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:40 crc kubenswrapper[4958]: I1206 05:29:40.083442 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:40 crc kubenswrapper[4958]: I1206 05:29:40.083549 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:40 crc kubenswrapper[4958]: I1206 05:29:40.083575 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:40 crc kubenswrapper[4958]: I1206 05:29:40.083605 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:40 crc kubenswrapper[4958]: I1206 05:29:40.083627 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:40Z","lastTransitionTime":"2025-12-06T05:29:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:40 crc kubenswrapper[4958]: I1206 05:29:40.093900 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12ef2ec44908dbfbd456d6a97cb50054adb79c14bc062daacdd1be9ec59404b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:40Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:40 crc kubenswrapper[4958]: I1206 05:29:40.110263 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5mx5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ab56cd4-7270-4252-b6e6-cbc102b84d97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16ff54e7650f6ee3df33d0394fb87b8021695f1697bdf30239e59ab458bea64c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-448jg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5mx5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:40Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:40 crc kubenswrapper[4958]: I1206 05:29:40.130433 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bbf3a5d-dfb7-4f26-a3a5-5a1198cffc78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ac8c174d2f2cc2b18fa67b1969e4ec3130a7a3ea2e1de302b6607334c8295a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a49f9fd54866385a2dd4c0218a2b5a828817807c2688301ea29004ba0578d556\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://799383b7dc23318bd8bdb994e83afadc90864baf5e694f5a7ef2208c6771660c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c9829c79fbd1ef57f374e8bf829162c2ae48074f4d02cc609d5f9b8791c61a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b2ffa013378c2ca81671e521b37d30af422fd6a418d17e7c2ca14b2da5557d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4001890a8c107a9365e884bae7b1d10f11ea1208ada74b61dcd4cf3db767f3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4001890a8c107a9365e884bae7b1d10f11ea1208ada74b61dcd4cf3db767f3fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:40Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:40 crc kubenswrapper[4958]: I1206 05:29:40.147951 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e7d0d99-10e2-4254-82b5-5490222f80fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f512fa9922310920769bab096c2bb8b2305a17dded545923c16b5e417db50b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dddeac64993fef3f326c165eeeb95bb4b5b9961f07f2b935e592b5aac9a51a32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50e80d39fcfd69db193b9e6c98c92a60983dd0c62b061a69f045193110d159c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc69c938c136778135bd2a90d1671af107b76c4571f8e52327c98f120665d7b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:40Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:40 crc kubenswrapper[4958]: I1206 05:29:40.185565 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:40 crc kubenswrapper[4958]: I1206 05:29:40.185621 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:40 crc kubenswrapper[4958]: I1206 05:29:40.185641 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:40 crc kubenswrapper[4958]: I1206 05:29:40.185734 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:40 crc kubenswrapper[4958]: I1206 05:29:40.185754 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:40Z","lastTransitionTime":"2025-12-06T05:29:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:40 crc kubenswrapper[4958]: I1206 05:29:40.288366 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:40 crc kubenswrapper[4958]: I1206 05:29:40.288437 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:40 crc kubenswrapper[4958]: I1206 05:29:40.288462 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:40 crc kubenswrapper[4958]: I1206 05:29:40.288520 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:40 crc kubenswrapper[4958]: I1206 05:29:40.288541 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:40Z","lastTransitionTime":"2025-12-06T05:29:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:40 crc kubenswrapper[4958]: I1206 05:29:40.390802 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:40 crc kubenswrapper[4958]: I1206 05:29:40.390865 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:40 crc kubenswrapper[4958]: I1206 05:29:40.390889 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:40 crc kubenswrapper[4958]: I1206 05:29:40.390919 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:40 crc kubenswrapper[4958]: I1206 05:29:40.390942 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:40Z","lastTransitionTime":"2025-12-06T05:29:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:40 crc kubenswrapper[4958]: I1206 05:29:40.493651 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:40 crc kubenswrapper[4958]: I1206 05:29:40.493723 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:40 crc kubenswrapper[4958]: I1206 05:29:40.493740 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:40 crc kubenswrapper[4958]: I1206 05:29:40.494120 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:40 crc kubenswrapper[4958]: I1206 05:29:40.494164 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:40Z","lastTransitionTime":"2025-12-06T05:29:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:40 crc kubenswrapper[4958]: I1206 05:29:40.597224 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:40 crc kubenswrapper[4958]: I1206 05:29:40.597274 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:40 crc kubenswrapper[4958]: I1206 05:29:40.597287 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:40 crc kubenswrapper[4958]: I1206 05:29:40.597304 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:40 crc kubenswrapper[4958]: I1206 05:29:40.597316 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:40Z","lastTransitionTime":"2025-12-06T05:29:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:40 crc kubenswrapper[4958]: I1206 05:29:40.699548 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:40 crc kubenswrapper[4958]: I1206 05:29:40.699600 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:40 crc kubenswrapper[4958]: I1206 05:29:40.699616 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:40 crc kubenswrapper[4958]: I1206 05:29:40.699639 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:40 crc kubenswrapper[4958]: I1206 05:29:40.699656 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:40Z","lastTransitionTime":"2025-12-06T05:29:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:40 crc kubenswrapper[4958]: I1206 05:29:40.802351 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:40 crc kubenswrapper[4958]: I1206 05:29:40.802400 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:40 crc kubenswrapper[4958]: I1206 05:29:40.802419 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:40 crc kubenswrapper[4958]: I1206 05:29:40.802444 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:40 crc kubenswrapper[4958]: I1206 05:29:40.802464 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:40Z","lastTransitionTime":"2025-12-06T05:29:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:40 crc kubenswrapper[4958]: I1206 05:29:40.905835 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:40 crc kubenswrapper[4958]: I1206 05:29:40.905924 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:40 crc kubenswrapper[4958]: I1206 05:29:40.905950 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:40 crc kubenswrapper[4958]: I1206 05:29:40.905984 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:40 crc kubenswrapper[4958]: I1206 05:29:40.906009 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:40Z","lastTransitionTime":"2025-12-06T05:29:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:41 crc kubenswrapper[4958]: I1206 05:29:41.009300 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:41 crc kubenswrapper[4958]: I1206 05:29:41.009370 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:41 crc kubenswrapper[4958]: I1206 05:29:41.009391 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:41 crc kubenswrapper[4958]: I1206 05:29:41.009417 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:41 crc kubenswrapper[4958]: I1206 05:29:41.009434 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:41Z","lastTransitionTime":"2025-12-06T05:29:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:41 crc kubenswrapper[4958]: I1206 05:29:41.112955 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:41 crc kubenswrapper[4958]: I1206 05:29:41.113036 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:41 crc kubenswrapper[4958]: I1206 05:29:41.113061 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:41 crc kubenswrapper[4958]: I1206 05:29:41.113098 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:41 crc kubenswrapper[4958]: I1206 05:29:41.113119 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:41Z","lastTransitionTime":"2025-12-06T05:29:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:41 crc kubenswrapper[4958]: I1206 05:29:41.216859 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:41 crc kubenswrapper[4958]: I1206 05:29:41.216941 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:41 crc kubenswrapper[4958]: I1206 05:29:41.216967 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:41 crc kubenswrapper[4958]: I1206 05:29:41.217000 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:41 crc kubenswrapper[4958]: I1206 05:29:41.217026 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:41Z","lastTransitionTime":"2025-12-06T05:29:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:41 crc kubenswrapper[4958]: I1206 05:29:41.319541 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:41 crc kubenswrapper[4958]: I1206 05:29:41.319620 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:41 crc kubenswrapper[4958]: I1206 05:29:41.319642 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:41 crc kubenswrapper[4958]: I1206 05:29:41.319677 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:41 crc kubenswrapper[4958]: I1206 05:29:41.319700 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:41Z","lastTransitionTime":"2025-12-06T05:29:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:41 crc kubenswrapper[4958]: I1206 05:29:41.423204 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:41 crc kubenswrapper[4958]: I1206 05:29:41.423278 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:41 crc kubenswrapper[4958]: I1206 05:29:41.423296 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:41 crc kubenswrapper[4958]: I1206 05:29:41.423321 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:41 crc kubenswrapper[4958]: I1206 05:29:41.423340 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:41Z","lastTransitionTime":"2025-12-06T05:29:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:41 crc kubenswrapper[4958]: I1206 05:29:41.526977 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:41 crc kubenswrapper[4958]: I1206 05:29:41.527081 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:41 crc kubenswrapper[4958]: I1206 05:29:41.527109 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:41 crc kubenswrapper[4958]: I1206 05:29:41.527179 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:41 crc kubenswrapper[4958]: I1206 05:29:41.527206 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:41Z","lastTransitionTime":"2025-12-06T05:29:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:41 crc kubenswrapper[4958]: I1206 05:29:41.630831 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:41 crc kubenswrapper[4958]: I1206 05:29:41.630912 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:41 crc kubenswrapper[4958]: I1206 05:29:41.630934 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:41 crc kubenswrapper[4958]: I1206 05:29:41.630961 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:41 crc kubenswrapper[4958]: I1206 05:29:41.630980 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:41Z","lastTransitionTime":"2025-12-06T05:29:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:41 crc kubenswrapper[4958]: I1206 05:29:41.734262 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:41 crc kubenswrapper[4958]: I1206 05:29:41.734319 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:41 crc kubenswrapper[4958]: I1206 05:29:41.734335 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:41 crc kubenswrapper[4958]: I1206 05:29:41.734355 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:41 crc kubenswrapper[4958]: I1206 05:29:41.734371 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:41Z","lastTransitionTime":"2025-12-06T05:29:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:41 crc kubenswrapper[4958]: I1206 05:29:41.761340 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 05:29:41 crc kubenswrapper[4958]: I1206 05:29:41.761358 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 05:29:41 crc kubenswrapper[4958]: I1206 05:29:41.761519 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kb98t" Dec 06 05:29:41 crc kubenswrapper[4958]: I1206 05:29:41.761554 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 05:29:41 crc kubenswrapper[4958]: E1206 05:29:41.761671 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 05:29:41 crc kubenswrapper[4958]: E1206 05:29:41.761737 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 05:29:41 crc kubenswrapper[4958]: E1206 05:29:41.761882 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kb98t" podUID="2c09fca2-7d91-412a-9814-64370d35b3e9" Dec 06 05:29:41 crc kubenswrapper[4958]: E1206 05:29:41.762035 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 05:29:41 crc kubenswrapper[4958]: I1206 05:29:41.837828 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:41 crc kubenswrapper[4958]: I1206 05:29:41.837878 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:41 crc kubenswrapper[4958]: I1206 05:29:41.837895 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:41 crc kubenswrapper[4958]: I1206 05:29:41.837917 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:41 crc kubenswrapper[4958]: I1206 05:29:41.837934 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:41Z","lastTransitionTime":"2025-12-06T05:29:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:41 crc kubenswrapper[4958]: I1206 05:29:41.941543 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:41 crc kubenswrapper[4958]: I1206 05:29:41.941589 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:41 crc kubenswrapper[4958]: I1206 05:29:41.941598 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:41 crc kubenswrapper[4958]: I1206 05:29:41.941615 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:41 crc kubenswrapper[4958]: I1206 05:29:41.941626 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:41Z","lastTransitionTime":"2025-12-06T05:29:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:42 crc kubenswrapper[4958]: I1206 05:29:42.050012 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:42 crc kubenswrapper[4958]: I1206 05:29:42.050077 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:42 crc kubenswrapper[4958]: I1206 05:29:42.050096 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:42 crc kubenswrapper[4958]: I1206 05:29:42.050120 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:42 crc kubenswrapper[4958]: I1206 05:29:42.050137 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:42Z","lastTransitionTime":"2025-12-06T05:29:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:42 crc kubenswrapper[4958]: I1206 05:29:42.153039 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:42 crc kubenswrapper[4958]: I1206 05:29:42.153113 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:42 crc kubenswrapper[4958]: I1206 05:29:42.153169 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:42 crc kubenswrapper[4958]: I1206 05:29:42.153198 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:42 crc kubenswrapper[4958]: I1206 05:29:42.153217 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:42Z","lastTransitionTime":"2025-12-06T05:29:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:42 crc kubenswrapper[4958]: I1206 05:29:42.255861 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:42 crc kubenswrapper[4958]: I1206 05:29:42.255937 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:42 crc kubenswrapper[4958]: I1206 05:29:42.255954 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:42 crc kubenswrapper[4958]: I1206 05:29:42.255982 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:42 crc kubenswrapper[4958]: I1206 05:29:42.256000 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:42Z","lastTransitionTime":"2025-12-06T05:29:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:42 crc kubenswrapper[4958]: I1206 05:29:42.358917 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:42 crc kubenswrapper[4958]: I1206 05:29:42.358991 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:42 crc kubenswrapper[4958]: I1206 05:29:42.359012 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:42 crc kubenswrapper[4958]: I1206 05:29:42.359037 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:42 crc kubenswrapper[4958]: I1206 05:29:42.359056 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:42Z","lastTransitionTime":"2025-12-06T05:29:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:42 crc kubenswrapper[4958]: I1206 05:29:42.462113 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:42 crc kubenswrapper[4958]: I1206 05:29:42.462180 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:42 crc kubenswrapper[4958]: I1206 05:29:42.462198 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:42 crc kubenswrapper[4958]: I1206 05:29:42.462228 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:42 crc kubenswrapper[4958]: I1206 05:29:42.462245 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:42Z","lastTransitionTime":"2025-12-06T05:29:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:42 crc kubenswrapper[4958]: I1206 05:29:42.565733 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:42 crc kubenswrapper[4958]: I1206 05:29:42.565780 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:42 crc kubenswrapper[4958]: I1206 05:29:42.565793 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:42 crc kubenswrapper[4958]: I1206 05:29:42.565810 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:42 crc kubenswrapper[4958]: I1206 05:29:42.565822 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:42Z","lastTransitionTime":"2025-12-06T05:29:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:42 crc kubenswrapper[4958]: I1206 05:29:42.669213 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:42 crc kubenswrapper[4958]: I1206 05:29:42.669279 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:42 crc kubenswrapper[4958]: I1206 05:29:42.669302 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:42 crc kubenswrapper[4958]: I1206 05:29:42.669329 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:42 crc kubenswrapper[4958]: I1206 05:29:42.669352 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:42Z","lastTransitionTime":"2025-12-06T05:29:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:42 crc kubenswrapper[4958]: I1206 05:29:42.771758 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:42 crc kubenswrapper[4958]: I1206 05:29:42.771790 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:42 crc kubenswrapper[4958]: I1206 05:29:42.771799 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:42 crc kubenswrapper[4958]: I1206 05:29:42.771811 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:42 crc kubenswrapper[4958]: I1206 05:29:42.771821 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:42Z","lastTransitionTime":"2025-12-06T05:29:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:42 crc kubenswrapper[4958]: I1206 05:29:42.875115 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:42 crc kubenswrapper[4958]: I1206 05:29:42.875188 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:42 crc kubenswrapper[4958]: I1206 05:29:42.875206 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:42 crc kubenswrapper[4958]: I1206 05:29:42.875236 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:42 crc kubenswrapper[4958]: I1206 05:29:42.875254 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:42Z","lastTransitionTime":"2025-12-06T05:29:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:42 crc kubenswrapper[4958]: I1206 05:29:42.978091 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:42 crc kubenswrapper[4958]: I1206 05:29:42.978131 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:42 crc kubenswrapper[4958]: I1206 05:29:42.978140 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:42 crc kubenswrapper[4958]: I1206 05:29:42.978153 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:42 crc kubenswrapper[4958]: I1206 05:29:42.978163 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:42Z","lastTransitionTime":"2025-12-06T05:29:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:43 crc kubenswrapper[4958]: I1206 05:29:43.081727 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:43 crc kubenswrapper[4958]: I1206 05:29:43.081769 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:43 crc kubenswrapper[4958]: I1206 05:29:43.081780 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:43 crc kubenswrapper[4958]: I1206 05:29:43.081816 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:43 crc kubenswrapper[4958]: I1206 05:29:43.081828 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:43Z","lastTransitionTime":"2025-12-06T05:29:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:43 crc kubenswrapper[4958]: I1206 05:29:43.185389 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:43 crc kubenswrapper[4958]: I1206 05:29:43.185447 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:43 crc kubenswrapper[4958]: I1206 05:29:43.185461 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:43 crc kubenswrapper[4958]: I1206 05:29:43.185512 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:43 crc kubenswrapper[4958]: I1206 05:29:43.185528 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:43Z","lastTransitionTime":"2025-12-06T05:29:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:43 crc kubenswrapper[4958]: I1206 05:29:43.288691 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:43 crc kubenswrapper[4958]: I1206 05:29:43.288756 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:43 crc kubenswrapper[4958]: I1206 05:29:43.288780 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:43 crc kubenswrapper[4958]: I1206 05:29:43.288808 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:43 crc kubenswrapper[4958]: I1206 05:29:43.288857 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:43Z","lastTransitionTime":"2025-12-06T05:29:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:43 crc kubenswrapper[4958]: I1206 05:29:43.391325 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:43 crc kubenswrapper[4958]: I1206 05:29:43.391410 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:43 crc kubenswrapper[4958]: I1206 05:29:43.391421 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:43 crc kubenswrapper[4958]: I1206 05:29:43.391440 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:43 crc kubenswrapper[4958]: I1206 05:29:43.391451 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:43Z","lastTransitionTime":"2025-12-06T05:29:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:43 crc kubenswrapper[4958]: I1206 05:29:43.494157 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:43 crc kubenswrapper[4958]: I1206 05:29:43.494203 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:43 crc kubenswrapper[4958]: I1206 05:29:43.494444 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:43 crc kubenswrapper[4958]: I1206 05:29:43.494617 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:43 crc kubenswrapper[4958]: I1206 05:29:43.494628 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:43Z","lastTransitionTime":"2025-12-06T05:29:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:43 crc kubenswrapper[4958]: I1206 05:29:43.599191 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:43 crc kubenswrapper[4958]: I1206 05:29:43.599253 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:43 crc kubenswrapper[4958]: I1206 05:29:43.599267 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:43 crc kubenswrapper[4958]: I1206 05:29:43.599288 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:43 crc kubenswrapper[4958]: I1206 05:29:43.599302 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:43Z","lastTransitionTime":"2025-12-06T05:29:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:43 crc kubenswrapper[4958]: I1206 05:29:43.701883 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:43 crc kubenswrapper[4958]: I1206 05:29:43.701934 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:43 crc kubenswrapper[4958]: I1206 05:29:43.701944 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:43 crc kubenswrapper[4958]: I1206 05:29:43.701959 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:43 crc kubenswrapper[4958]: I1206 05:29:43.701969 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:43Z","lastTransitionTime":"2025-12-06T05:29:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:43 crc kubenswrapper[4958]: I1206 05:29:43.761207 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 05:29:43 crc kubenswrapper[4958]: I1206 05:29:43.761252 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kb98t" Dec 06 05:29:43 crc kubenswrapper[4958]: I1206 05:29:43.761297 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 05:29:43 crc kubenswrapper[4958]: E1206 05:29:43.761324 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 05:29:43 crc kubenswrapper[4958]: I1206 05:29:43.761313 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 05:29:43 crc kubenswrapper[4958]: E1206 05:29:43.761461 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kb98t" podUID="2c09fca2-7d91-412a-9814-64370d35b3e9" Dec 06 05:29:43 crc kubenswrapper[4958]: E1206 05:29:43.761519 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 05:29:43 crc kubenswrapper[4958]: E1206 05:29:43.761573 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 05:29:43 crc kubenswrapper[4958]: I1206 05:29:43.805193 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:43 crc kubenswrapper[4958]: I1206 05:29:43.805230 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:43 crc kubenswrapper[4958]: I1206 05:29:43.805240 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:43 crc kubenswrapper[4958]: I1206 05:29:43.805255 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:43 crc kubenswrapper[4958]: I1206 05:29:43.805264 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:43Z","lastTransitionTime":"2025-12-06T05:29:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:43 crc kubenswrapper[4958]: I1206 05:29:43.908610 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:43 crc kubenswrapper[4958]: I1206 05:29:43.908662 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:43 crc kubenswrapper[4958]: I1206 05:29:43.908675 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:43 crc kubenswrapper[4958]: I1206 05:29:43.908695 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:43 crc kubenswrapper[4958]: I1206 05:29:43.908710 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:43Z","lastTransitionTime":"2025-12-06T05:29:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:44 crc kubenswrapper[4958]: I1206 05:29:44.010316 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:44 crc kubenswrapper[4958]: I1206 05:29:44.010368 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:44 crc kubenswrapper[4958]: I1206 05:29:44.010386 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:44 crc kubenswrapper[4958]: I1206 05:29:44.010419 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:44 crc kubenswrapper[4958]: I1206 05:29:44.010437 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:44Z","lastTransitionTime":"2025-12-06T05:29:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:44 crc kubenswrapper[4958]: I1206 05:29:44.114442 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:44 crc kubenswrapper[4958]: I1206 05:29:44.114518 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:44 crc kubenswrapper[4958]: I1206 05:29:44.114535 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:44 crc kubenswrapper[4958]: I1206 05:29:44.114559 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:44 crc kubenswrapper[4958]: I1206 05:29:44.114576 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:44Z","lastTransitionTime":"2025-12-06T05:29:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:44 crc kubenswrapper[4958]: I1206 05:29:44.217732 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:44 crc kubenswrapper[4958]: I1206 05:29:44.217788 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:44 crc kubenswrapper[4958]: I1206 05:29:44.217805 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:44 crc kubenswrapper[4958]: I1206 05:29:44.217826 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:44 crc kubenswrapper[4958]: I1206 05:29:44.217843 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:44Z","lastTransitionTime":"2025-12-06T05:29:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:44 crc kubenswrapper[4958]: I1206 05:29:44.320511 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:44 crc kubenswrapper[4958]: I1206 05:29:44.320564 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:44 crc kubenswrapper[4958]: I1206 05:29:44.320579 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:44 crc kubenswrapper[4958]: I1206 05:29:44.320599 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:44 crc kubenswrapper[4958]: I1206 05:29:44.320615 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:44Z","lastTransitionTime":"2025-12-06T05:29:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:44 crc kubenswrapper[4958]: I1206 05:29:44.423698 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:44 crc kubenswrapper[4958]: I1206 05:29:44.423756 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:44 crc kubenswrapper[4958]: I1206 05:29:44.423768 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:44 crc kubenswrapper[4958]: I1206 05:29:44.423787 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:44 crc kubenswrapper[4958]: I1206 05:29:44.423800 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:44Z","lastTransitionTime":"2025-12-06T05:29:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:44 crc kubenswrapper[4958]: I1206 05:29:44.526042 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:44 crc kubenswrapper[4958]: I1206 05:29:44.526083 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:44 crc kubenswrapper[4958]: I1206 05:29:44.526093 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:44 crc kubenswrapper[4958]: I1206 05:29:44.526106 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:44 crc kubenswrapper[4958]: I1206 05:29:44.526116 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:44Z","lastTransitionTime":"2025-12-06T05:29:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:44 crc kubenswrapper[4958]: I1206 05:29:44.608293 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:44 crc kubenswrapper[4958]: I1206 05:29:44.608684 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:44 crc kubenswrapper[4958]: I1206 05:29:44.608820 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:44 crc kubenswrapper[4958]: I1206 05:29:44.608926 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:44 crc kubenswrapper[4958]: I1206 05:29:44.609016 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:44Z","lastTransitionTime":"2025-12-06T05:29:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:44 crc kubenswrapper[4958]: E1206 05:29:44.622398 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ae601d9-1e3a-4939-b6d5-fbef7be2f380\\\",\\\"systemUUID\\\":\\\"d8c60597-c05a-4627-8199-844ddb77ec1c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:44Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:44 crc kubenswrapper[4958]: I1206 05:29:44.626048 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:44 crc kubenswrapper[4958]: I1206 05:29:44.626088 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 06 05:29:44 crc kubenswrapper[4958]: I1206 05:29:44.626100 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:44 crc kubenswrapper[4958]: I1206 05:29:44.626115 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:44 crc kubenswrapper[4958]: I1206 05:29:44.626124 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:44Z","lastTransitionTime":"2025-12-06T05:29:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:44 crc kubenswrapper[4958]: E1206 05:29:44.639423 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ae601d9-1e3a-4939-b6d5-fbef7be2f380\\\",\\\"systemUUID\\\":\\\"d8c60597-c05a-4627-8199-844ddb77ec1c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:44Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:44 crc kubenswrapper[4958]: I1206 05:29:44.643525 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:44 crc kubenswrapper[4958]: I1206 05:29:44.643594 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 06 05:29:44 crc kubenswrapper[4958]: I1206 05:29:44.643607 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:44 crc kubenswrapper[4958]: I1206 05:29:44.643626 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:44 crc kubenswrapper[4958]: I1206 05:29:44.643639 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:44Z","lastTransitionTime":"2025-12-06T05:29:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:44 crc kubenswrapper[4958]: E1206 05:29:44.657283 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ae601d9-1e3a-4939-b6d5-fbef7be2f380\\\",\\\"systemUUID\\\":\\\"d8c60597-c05a-4627-8199-844ddb77ec1c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:44Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:44 crc kubenswrapper[4958]: I1206 05:29:44.660676 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:44 crc kubenswrapper[4958]: I1206 05:29:44.660731 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 06 05:29:44 crc kubenswrapper[4958]: I1206 05:29:44.660742 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:44 crc kubenswrapper[4958]: I1206 05:29:44.660755 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:44 crc kubenswrapper[4958]: I1206 05:29:44.660781 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:44Z","lastTransitionTime":"2025-12-06T05:29:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:44 crc kubenswrapper[4958]: E1206 05:29:44.671863 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ae601d9-1e3a-4939-b6d5-fbef7be2f380\\\",\\\"systemUUID\\\":\\\"d8c60597-c05a-4627-8199-844ddb77ec1c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:44Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:44 crc kubenswrapper[4958]: I1206 05:29:44.675413 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:44 crc kubenswrapper[4958]: I1206 05:29:44.675459 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 06 05:29:44 crc kubenswrapper[4958]: I1206 05:29:44.675496 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:44 crc kubenswrapper[4958]: I1206 05:29:44.675511 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:44 crc kubenswrapper[4958]: I1206 05:29:44.675522 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:44Z","lastTransitionTime":"2025-12-06T05:29:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:44 crc kubenswrapper[4958]: E1206 05:29:44.687341 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:44Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ae601d9-1e3a-4939-b6d5-fbef7be2f380\\\",\\\"systemUUID\\\":\\\"d8c60597-c05a-4627-8199-844ddb77ec1c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:44Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:44 crc kubenswrapper[4958]: E1206 05:29:44.687590 4958 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 06 05:29:44 crc kubenswrapper[4958]: I1206 05:29:44.689198 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 06 05:29:44 crc kubenswrapper[4958]: I1206 05:29:44.689249 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:44 crc kubenswrapper[4958]: I1206 05:29:44.689263 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:44 crc kubenswrapper[4958]: I1206 05:29:44.689278 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:44 crc kubenswrapper[4958]: I1206 05:29:44.689289 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:44Z","lastTransitionTime":"2025-12-06T05:29:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:44 crc kubenswrapper[4958]: I1206 05:29:44.791280 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:44 crc kubenswrapper[4958]: I1206 05:29:44.791437 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:44 crc kubenswrapper[4958]: I1206 05:29:44.791551 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:44 crc kubenswrapper[4958]: I1206 05:29:44.791643 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:44 crc kubenswrapper[4958]: I1206 05:29:44.791736 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:44Z","lastTransitionTime":"2025-12-06T05:29:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:44 crc kubenswrapper[4958]: I1206 05:29:44.893644 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:44 crc kubenswrapper[4958]: I1206 05:29:44.893676 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:44 crc kubenswrapper[4958]: I1206 05:29:44.893708 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:44 crc kubenswrapper[4958]: I1206 05:29:44.893722 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:44 crc kubenswrapper[4958]: I1206 05:29:44.893733 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:44Z","lastTransitionTime":"2025-12-06T05:29:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:44 crc kubenswrapper[4958]: I1206 05:29:44.996324 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:44 crc kubenswrapper[4958]: I1206 05:29:44.996363 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:44 crc kubenswrapper[4958]: I1206 05:29:44.996373 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:44 crc kubenswrapper[4958]: I1206 05:29:44.996389 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:44 crc kubenswrapper[4958]: I1206 05:29:44.996397 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:44Z","lastTransitionTime":"2025-12-06T05:29:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:45 crc kubenswrapper[4958]: I1206 05:29:45.098546 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:45 crc kubenswrapper[4958]: I1206 05:29:45.098578 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:45 crc kubenswrapper[4958]: I1206 05:29:45.098586 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:45 crc kubenswrapper[4958]: I1206 05:29:45.098600 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:45 crc kubenswrapper[4958]: I1206 05:29:45.098611 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:45Z","lastTransitionTime":"2025-12-06T05:29:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:45 crc kubenswrapper[4958]: I1206 05:29:45.200455 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:45 crc kubenswrapper[4958]: I1206 05:29:45.200508 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:45 crc kubenswrapper[4958]: I1206 05:29:45.200518 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:45 crc kubenswrapper[4958]: I1206 05:29:45.200534 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:45 crc kubenswrapper[4958]: I1206 05:29:45.200543 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:45Z","lastTransitionTime":"2025-12-06T05:29:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:45 crc kubenswrapper[4958]: I1206 05:29:45.303791 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:45 crc kubenswrapper[4958]: I1206 05:29:45.303841 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:45 crc kubenswrapper[4958]: I1206 05:29:45.303850 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:45 crc kubenswrapper[4958]: I1206 05:29:45.303867 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:45 crc kubenswrapper[4958]: I1206 05:29:45.303877 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:45Z","lastTransitionTime":"2025-12-06T05:29:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:45 crc kubenswrapper[4958]: I1206 05:29:45.407024 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:45 crc kubenswrapper[4958]: I1206 05:29:45.407093 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:45 crc kubenswrapper[4958]: I1206 05:29:45.407112 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:45 crc kubenswrapper[4958]: I1206 05:29:45.407136 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:45 crc kubenswrapper[4958]: I1206 05:29:45.407153 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:45Z","lastTransitionTime":"2025-12-06T05:29:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:45 crc kubenswrapper[4958]: I1206 05:29:45.510191 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:45 crc kubenswrapper[4958]: I1206 05:29:45.510244 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:45 crc kubenswrapper[4958]: I1206 05:29:45.510257 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:45 crc kubenswrapper[4958]: I1206 05:29:45.510280 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:45 crc kubenswrapper[4958]: I1206 05:29:45.510295 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:45Z","lastTransitionTime":"2025-12-06T05:29:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:45 crc kubenswrapper[4958]: I1206 05:29:45.612531 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:45 crc kubenswrapper[4958]: I1206 05:29:45.612599 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:45 crc kubenswrapper[4958]: I1206 05:29:45.612621 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:45 crc kubenswrapper[4958]: I1206 05:29:45.612649 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:45 crc kubenswrapper[4958]: I1206 05:29:45.612669 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:45Z","lastTransitionTime":"2025-12-06T05:29:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:45 crc kubenswrapper[4958]: I1206 05:29:45.715048 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:45 crc kubenswrapper[4958]: I1206 05:29:45.715087 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:45 crc kubenswrapper[4958]: I1206 05:29:45.715098 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:45 crc kubenswrapper[4958]: I1206 05:29:45.715113 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:45 crc kubenswrapper[4958]: I1206 05:29:45.715123 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:45Z","lastTransitionTime":"2025-12-06T05:29:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:45 crc kubenswrapper[4958]: I1206 05:29:45.761091 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 05:29:45 crc kubenswrapper[4958]: E1206 05:29:45.761452 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 05:29:45 crc kubenswrapper[4958]: I1206 05:29:45.761154 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kb98t" Dec 06 05:29:45 crc kubenswrapper[4958]: E1206 05:29:45.761865 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kb98t" podUID="2c09fca2-7d91-412a-9814-64370d35b3e9" Dec 06 05:29:45 crc kubenswrapper[4958]: I1206 05:29:45.761096 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 05:29:45 crc kubenswrapper[4958]: E1206 05:29:45.762160 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 05:29:45 crc kubenswrapper[4958]: I1206 05:29:45.761294 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 05:29:45 crc kubenswrapper[4958]: E1206 05:29:45.762429 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 05:29:45 crc kubenswrapper[4958]: I1206 05:29:45.818962 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:45 crc kubenswrapper[4958]: I1206 05:29:45.819021 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:45 crc kubenswrapper[4958]: I1206 05:29:45.819034 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:45 crc kubenswrapper[4958]: I1206 05:29:45.819054 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:45 crc kubenswrapper[4958]: I1206 05:29:45.819068 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:45Z","lastTransitionTime":"2025-12-06T05:29:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:45 crc kubenswrapper[4958]: I1206 05:29:45.922152 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:45 crc kubenswrapper[4958]: I1206 05:29:45.922670 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:45 crc kubenswrapper[4958]: I1206 05:29:45.922777 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:45 crc kubenswrapper[4958]: I1206 05:29:45.922866 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:45 crc kubenswrapper[4958]: I1206 05:29:45.922945 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:45Z","lastTransitionTime":"2025-12-06T05:29:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:46 crc kubenswrapper[4958]: I1206 05:29:46.025214 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:46 crc kubenswrapper[4958]: I1206 05:29:46.025550 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:46 crc kubenswrapper[4958]: I1206 05:29:46.025683 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:46 crc kubenswrapper[4958]: I1206 05:29:46.025772 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:46 crc kubenswrapper[4958]: I1206 05:29:46.025842 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:46Z","lastTransitionTime":"2025-12-06T05:29:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:46 crc kubenswrapper[4958]: I1206 05:29:46.129445 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:46 crc kubenswrapper[4958]: I1206 05:29:46.129776 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:46 crc kubenswrapper[4958]: I1206 05:29:46.129908 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:46 crc kubenswrapper[4958]: I1206 05:29:46.129988 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:46 crc kubenswrapper[4958]: I1206 05:29:46.130070 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:46Z","lastTransitionTime":"2025-12-06T05:29:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 06 05:29:46 crc kubenswrapper[4958]: I1206 05:29:46.232518 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 05:29:46 crc kubenswrapper[4958]: I1206 05:29:46.232559 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 05:29:46 crc kubenswrapper[4958]: I1206 05:29:46.232571 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 05:29:46 crc kubenswrapper[4958]: I1206 05:29:46.232590 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 05:29:46 crc kubenswrapper[4958]: I1206 05:29:46.232603 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:46Z","lastTransitionTime":"2025-12-06T05:29:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
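The five kubelet_node_status/setters entries above are re-emitted at roughly 100 ms intervals for as long as the node stays NotReady, each carrying an identical condition payload apart from its heartbeat timestamps. The condition={...} value is ordinary JSON; the sketch below reproduces its shape with a hand-rolled struct standing in for the real k8s.io/api/core/v1.NodeCondition (an assumption made only to keep the example self-contained).

package main

import (
	"encoding/json"
	"fmt"
)

// nodeCondition mirrors the JSON keys visible in the setters.go:603 entry
// above; the field set is assumed from that output, not taken from the
// Kubernetes API package.
type nodeCondition struct {
	Type               string `json:"type"`
	Status             string `json:"status"`
	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
	LastTransitionTime string `json:"lastTransitionTime"`
	Reason             string `json:"reason"`
	Message            string `json:"message"`
}

func main() {
	c := nodeCondition{
		Type:               "Ready",
		Status:             "False",
		LastHeartbeatTime:  "2025-12-06T05:29:46Z",
		LastTransitionTime: "2025-12-06T05:29:46Z",
		Reason:             "KubeletNotReady",
		Message: "container runtime network not ready: NetworkReady=false " +
			"reason:NetworkPluginNotReady message:Network plugin returns error: " +
			"no CNI configuration file in /etc/kubernetes/cni/net.d/. " +
			"Has your network provider started?",
	}
	out, err := json.Marshal(c)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out)) // same shape as the condition={...} payload logged above
}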
Dec 06 05:29:47 crc kubenswrapper[4958]: I1206 05:29:47.251273 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2c09fca2-7d91-412a-9814-64370d35b3e9-metrics-certs\") pod \"network-metrics-daemon-kb98t\" (UID: \"2c09fca2-7d91-412a-9814-64370d35b3e9\") " pod="openshift-multus/network-metrics-daemon-kb98t"
Dec 06 05:29:47 crc kubenswrapper[4958]: E1206 05:29:47.251580 4958 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 06 05:29:47 crc kubenswrapper[4958]: E1206 05:29:47.252550 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2c09fca2-7d91-412a-9814-64370d35b3e9-metrics-certs podName:2c09fca2-7d91-412a-9814-64370d35b3e9 nodeName:}" failed. No retries permitted until 2025-12-06 05:30:51.252518778 +0000 UTC m=+161.786289581 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/2c09fca2-7d91-412a-9814-64370d35b3e9-metrics-certs") pod "network-metrics-daemon-kb98t" (UID: "2c09fca2-7d91-412a-9814-64370d35b3e9") : object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 06 05:29:47 crc kubenswrapper[4958]: I1206 05:29:47.267178 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 05:29:47 crc kubenswrapper[4958]: I1206 05:29:47.267239 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 05:29:47 crc kubenswrapper[4958]: I1206 05:29:47.267261 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 05:29:47 crc kubenswrapper[4958]: I1206 05:29:47.267285 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 05:29:47 crc kubenswrapper[4958]: I1206 05:29:47.267304 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:47Z","lastTransitionTime":"2025-12-06T05:29:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
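The durationBeforeRetry of 1m4s in the nestedpendingoperations entry above is the doubling per-operation backoff the kubelet's volume manager applies to a MountVolume.SetUp that keeps failing; the failure itself ("not registered") is typically seen right after a kubelet restart, before the secret manager has re-registered the pod's secrets. A sketch of the delay schedule, assuming a 500 ms initial delay, a factor of 2, and a 2m2s cap (parameter values recalled from the kubelet's exponential-backoff helper, not stated in this log):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Assumed retry parameters: start at 500ms, double on every failure,
	// cap at 2m2s.
	const maxDelay = 2*time.Minute + 2*time.Second
	delay := 500 * time.Millisecond
	for failure := 1; failure <= 9; failure++ {
		fmt.Printf("failure %d: no retries permitted for %v\n", failure, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
	// Under these assumptions, failure 8 prints 1m4s, matching
	// durationBeforeRetry in the entry above.
}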
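Every KubeletNotReady condition in this capture names the same root cause: nothing has populated /etc/kubernetes/cni/net.d/ yet, because the component that writes the CNI config is itself crash-looping (see the ovnkube-controller entries below). The readiness probe implied by the message amounts to scanning that directory for a network config; a minimal sketch, with the accepted extensions (.conf, .conflist, .json) assumed to match what libcni accepts:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// hasCNIConfig reports whether dir contains at least one CNI network
// configuration file. Treating an empty result as "network plugin not
// ready" is the behaviour implied by the log message, not copied from
// the real source.
func hasCNIConfig(dir string) (bool, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := hasCNIConfig("/etc/kubernetes/cni/net.d")
	if err != nil || !ok {
		fmt.Println("no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?")
		return
	}
	fmt.Println("NetworkReady=true")
}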
Dec 06 05:29:47 crc kubenswrapper[4958]: I1206 05:29:47.761285 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kb98t"
Dec 06 05:29:47 crc kubenswrapper[4958]: E1206 05:29:47.761529 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kb98t" podUID="2c09fca2-7d91-412a-9814-64370d35b3e9"
Dec 06 05:29:47 crc kubenswrapper[4958]: I1206 05:29:47.761705 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 06 05:29:47 crc kubenswrapper[4958]: I1206 05:29:47.761734 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 06 05:29:47 crc kubenswrapper[4958]: I1206 05:29:47.762284 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 06 05:29:47 crc kubenswrapper[4958]: E1206 05:29:47.762572 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 06 05:29:47 crc kubenswrapper[4958]: E1206 05:29:47.762698 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 06 05:29:47 crc kubenswrapper[4958]: I1206 05:29:47.762960 4958 scope.go:117] "RemoveContainer" containerID="3511a58e1529f892844574e8b2faceca960108d44afb02aa4f882d19471c5519"
Dec 06 05:29:47 crc kubenswrapper[4958]: E1206 05:29:47.763738 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 06 05:29:47 crc kubenswrapper[4958]: E1206 05:29:47.764115 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-f4swt_openshift-ovn-kubernetes(4c75c3b8-96d9-442e-b3c4-92d10ad33929)\"" pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" podUID="4c75c3b8-96d9-442e-b3c4-92d10ad33929"
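The last entry above is the one failure that is not a direct consequence of the missing CNI config: ovnkube-controller, the container that would eventually write that config, is itself in CrashLoopBackOff. The "back-off 40s" places it at the third consecutive restart, assuming the usual kubelet container restart backoff of a 10 s base, doubling per crash, and a 5 m cap (values recalled from kubelet defaults, not visible in this log):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Assumed container-restart backoff: 10s base, doubled after each
	// crash, capped at 5 minutes.
	const maxBackoff = 5 * time.Minute
	backoff := 10 * time.Second
	for crash := 1; crash <= 6; crash++ {
		fmt.Printf("crash %d: back-off %v restarting failed container\n", crash, backoff)
		backoff *= 2
		if backoff > maxBackoff {
			backoff = maxBackoff
		}
	}
	// Under these assumptions, crash 3 prints "back-off 40s", matching the
	// ovnkube-controller entry above.
}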
Dec 06 05:29:48 crc kubenswrapper[4958]: I1206 05:29:48.100842 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 05:29:48 crc kubenswrapper[4958]: I1206 05:29:48.100897 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 05:29:48 crc kubenswrapper[4958]: I1206 05:29:48.100910 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 05:29:48 crc kubenswrapper[4958]: I1206 05:29:48.100961 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 05:29:48 crc kubenswrapper[4958]: I1206 05:29:48.100976 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:48Z","lastTransitionTime":"2025-12-06T05:29:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 05:29:49 crc kubenswrapper[4958]: I1206 05:29:49.035404 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 05:29:49 crc kubenswrapper[4958]: I1206 05:29:49.035578 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 05:29:49 crc kubenswrapper[4958]: I1206 05:29:49.035600 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 05:29:49 crc kubenswrapper[4958]: I1206 05:29:49.035624 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 05:29:49 crc kubenswrapper[4958]: I1206 05:29:49.035681 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:49Z","lastTransitionTime":"2025-12-06T05:29:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
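The "Failed to update status for pod" entries further below share a single cause that is unrelated to CNI: every status patch must pass the pod.network-node-identity.openshift.io admission webhook at https://127.0.0.1:9743, and that webhook's serving certificate expired on 2025-08-24, months before the node's current clock of 2025-12-06. This is consistent with resuming a CRC VM whose certificates rotated out while it was powered off. The TLS client rejects the handshake by comparing the clock against the certificate's validity window; a minimal sketch of that check, assuming a PEM-encoded certificate at a hypothetical path:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Hypothetical path; on a real node the webhook's serving certificate
	// would come from its secret, or be scraped off the listener with
	// crypto/tls instead.
	data, err := os.ReadFile("/tmp/webhook-serving.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	now := time.Now()
	if now.After(cert.NotAfter) || now.Before(cert.NotBefore) {
		// Same condition the handshake reports as
		// "x509: certificate has expired or is not yet valid".
		fmt.Printf("certificate invalid: current time %s is after %s\n",
			now.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
		return
	}
	fmt.Println("certificate within validity window")
}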
Dec 06 05:29:49 crc kubenswrapper[4958]: I1206 05:29:49.761455 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kb98t"
Dec 06 05:29:49 crc kubenswrapper[4958]: I1206 05:29:49.761466 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 06 05:29:49 crc kubenswrapper[4958]: I1206 05:29:49.761533 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 06 05:29:49 crc kubenswrapper[4958]: E1206 05:29:49.761619 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kb98t" podUID="2c09fca2-7d91-412a-9814-64370d35b3e9"
Dec 06 05:29:49 crc kubenswrapper[4958]: I1206 05:29:49.761651 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 06 05:29:49 crc kubenswrapper[4958]: E1206 05:29:49.761764 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 06 05:29:49 crc kubenswrapper[4958]: E1206 05:29:49.762131 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 06 05:29:49 crc kubenswrapper[4958]: E1206 05:29:49.762257 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 05:29:49 crc kubenswrapper[4958]: I1206 05:29:49.781536 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c13528c0-da5d-4d55-9155-2c29c33edfc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5cd9b0b70572f52b50e426a747be79d9f46795a9f5e890f4c6837ba67a34188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd2256c5c47e4b4d5c0d27e0a687c531d3d4e2162c3439a57acb90784599b441\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5ktnh\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:49Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:49 crc kubenswrapper[4958]: I1206 05:29:49.804946 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cxvtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48b90387-4ea9-47a3-8aa1-b8b19c5ed186\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce9f0a37477909d730255694146829d5d4527e3c3c3928bdd475b58506bbd544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d3ce1b34e648dbc34656fb0430468d6a998992cdb52f7858c3e9032781cabf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d3ce1b34e648dbc34656fb0430468d6a998992cdb52f7858c3e9032781cabf3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45259e88c3c2b9523b20a34f02fb59aa21e487dd207c8cb6e29ac571d89cd75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45259e88c3c2b9523b20a34f02fb59aa21e487dd207c8cb6e29ac571d89cd75c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5907422cf6b36b8d77741d339e507eee8a26c62b51c03458fbab243363441b49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5907422cf6b36b8d77741d339e507eee8a26c62b51c03458fbab243363441b49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdba082661b1dd38d5628fedac7afcea2e2960d502c61b9c21c723c0f6be8dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdba082661b1dd38d5628fedac7afcea2e2960d502c61b9c21c723c0f6be8dab\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2025-12-06T05:28:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afdd138a15c3949f3eb5cdcac4d372fe8db3f1822f9fa33eb7f211ba93764903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afdd138a15c3949f3eb5cdcac4d372fe8db3f1822f9fa33eb7f211ba93764903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09ba0a78226b9b3b3b12202389168f7e17a1f2ba992a22dc9d436d29aac0d327\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09ba0a78226b9b3b3b12202389168f7e17a1f2ba992a22dc9d436d29aac0d327\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cxvtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-06T05:29:49Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:49 crc kubenswrapper[4958]: I1206 05:29:49.838723 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c75c3b8-96d9-442e-b3c4-92d10ad33929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c7cf12b8a7e5da199c6d8c5bdeb41243c911cef6cab0bac81e407502a8ff144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6e7454c626f79f7ef231e33e2917e2a65265fc21ccf0bb078c9b7f5b3cec240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f87617d1f4b9c7ad893a539ddf123f3e213d38e44c084441f565d6535078c7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f6d7aa9cdfe8211f420e0a25851c58e125bd09b9d90ad4724703cc1112e45b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://473d669d19d2507144f95718dc331ea994a36254f7f7b6e2b262a0e9cb46bfc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ac1117aba254b7e048d9dd3feff2587d218f6d7a6f4ccbe455eb2b19e534aa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3511a58e1529f892844574e8b2faceca960108d44afb02aa4f882d19471c5519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3511a58e1529f892844574e8b2faceca960108d44afb02aa4f882d19471c5519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T05:29:31Z\\\",\\\"message\\\":\\\"ces.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1206 05:29:30.743372 6921 services_controller.go:452] Built service openshift-apiserver/api per-node LB for network=default: []services.LB{}\\\\nI1206 05:29:30.743368 6921 services_controller.go:443] Built service openshift-console-operator/metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.88\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI1206 05:29:30.743388 6921 services_controller.go:453] Built service openshift-apiserver/api template LB for network=default: []services.LB{}\\\\nI1206 05:29:30.743398 6921 services_controller.go:444] Built service openshift-console-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nF1206 05:29:30.742791 6921 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T05:29:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting 
failed container=ovnkube-controller pod=ovnkube-node-f4swt_openshift-ovn-kubernetes(4c75c3b8-96d9-442e-b3c4-92d10ad33929)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90354a506bfb2c066060aaa6c9c48c8fe36acc7b3fb7f76905c7aa2622c8c93e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f16219fb96bfa08d1c4e1fbacac446f07ea75a221d1a6a4eddadd9e587f75ea9\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f16219fb96bfa08d1c4e1fbacac446f07ea75a221d1a6a4eddadd9e587f75ea9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-f4swt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:49Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:49 crc kubenswrapper[4958]: I1206 05:29:49.862640 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ba128f4-2e42-45be-bda0-0253dd5502d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74f69d05943422b54bfdaf337dabe66d5a16adff7bcd0aec89231dfaedb42ade\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7af93ebbf4
4bd284a93e374862eb9b1f0ca1e2426eb88472157c5dcabf8c63b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7af93ebbf44bd284a93e374862eb9b1f0ca1e2426eb88472157c5dcabf8c63b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:49Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:49 crc kubenswrapper[4958]: I1206 05:29:49.864598 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:49 crc kubenswrapper[4958]: I1206 05:29:49.864648 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:49 crc kubenswrapper[4958]: I1206 05:29:49.864663 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:49 crc kubenswrapper[4958]: I1206 05:29:49.864695 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:49 crc kubenswrapper[4958]: I1206 05:29:49.864712 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:49Z","lastTransitionTime":"2025-12-06T05:29:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:49 crc kubenswrapper[4958]: I1206 05:29:49.877787 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"326ec8f3-f884-4d81-85c0-0fb98ff16b80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ab8b97e84e2f0ddf99e989d15f84695ae263c722141d62885129f9eb48226a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfa163f9e3eb5b2418860c0fa7fcbf96990229ac20da3d72cf18d3c326f466b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15f70e4124ff7d2705704dab6bf80d6ee77c2181e9bb400f7af227e9c18f0d20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aabbafda3fdfc2434de99224b35e72d2d536ed8b4af7dbdfa16058ab7f638531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aabbafda3fdfc2434de99224b35e72d2d536ed8b4af7dbdfa16058ab7f638531\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:49Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:49 crc kubenswrapper[4958]: I1206 05:29:49.894416 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22338e19d740d76b4395405aae08f7cc69ca43dcf680a2bac1323c989e74aa02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:49Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:49 crc kubenswrapper[4958]: I1206 05:29:49.911198 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:49Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:49 crc kubenswrapper[4958]: I1206 05:29:49.924216 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85e394f19d7c9a454876e48621768d9f956790321aeea3140db17586f532e28c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7cba9186696fbc52dacdd8c02c28a44f8e4f11e2bd39065212cddf3acdd7f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:49Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:49 crc kubenswrapper[4958]: I1206 05:29:49.936087 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kb98t" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c09fca2-7d91-412a-9814-64370d35b3e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pscv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pscv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kb98t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:49Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:49 crc kubenswrapper[4958]: I1206 05:29:49.955635 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3942544-663d-452b-851f-7ae1951315d0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c605d49963ffec3a03f9166f2a4b945adca772c1e6f9ef0c43e3b43498bb520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fa2e5229064a9479f641c0c6d934e8eda769ee88be13401c43ae136ce8bc24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0fa5b094aed0445fa892dd4d08fa83de565bda82c6ca9b9d536a3275df19b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9356ce9f50a037c7bbb99085c8d3a1cf6648fca
53bed4eb86d1be5cb82386bfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bb5270cbb6f2e29e1393dc0724b38332f5d2b7bd2947352c848a22efbcd979c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4460d6ee5ad193a5fdd65c4ddf2e2837db3e1a0a5f8f32b84ad83cc74cc6c779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4460d6ee5ad193a5fdd65c4ddf2e2837db3e1a0a5f8f32b84ad83cc74cc6c779\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d0adb65980c4e04df9b38b248a2d6bf58ac214ec2fe6ee6f8fbee38dc114904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d0adb65980c4e04df9b38b248a2d6bf58ac214ec2fe6ee6f8fbee38dc114904\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b4a1b86d7480be2b839295c4c7dc60abd79e17d12514d7a7f90453498dd586e5\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4a1b86d7480be2b839295c4c7dc60abd79e17d12514d7a7f90453498dd586e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:49Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:49 crc kubenswrapper[4958]: I1206 05:29:49.966584 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:49Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:49 crc kubenswrapper[4958]: I1206 05:29:49.967215 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:49 crc kubenswrapper[4958]: I1206 05:29:49.967241 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:49 crc kubenswrapper[4958]: I1206 05:29:49.967255 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:49 crc kubenswrapper[4958]: I1206 05:29:49.967273 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:49 crc kubenswrapper[4958]: I1206 05:29:49.967285 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:49Z","lastTransitionTime":"2025-12-06T05:29:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:49 crc kubenswrapper[4958]: I1206 05:29:49.976904 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bxxwq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"802ce14e-baaf-4d0d-87e3-3457209d8cda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://691435e89fcfa5b2dbd5ee54379f791a3e4f4df43c500a83bc61f7b20aca7ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlz6h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bxxwq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:49Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:49 crc kubenswrapper[4958]: I1206 05:29:49.987905 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-j25xk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f69a7b70-83e2-4775-872c-cc414f3d08ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3077eaf7e49467b9443569425f7666246ccd40207193f35407bd5ca69799be6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qktwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c146f2ee32c0a80c9bb07f7b61d094a36ba1bdd250b578479193dc8e50dc7cad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qktwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-j25xk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:49Z is after 2025-08-24T17:21:41Z" Dec 06 
05:29:50 crc kubenswrapper[4958]: I1206 05:29:50.000397 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:49Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:50 crc kubenswrapper[4958]: I1206 05:29:50.010928 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12ef2ec44908dbfbd456d6a97cb50054adb79c14bc062daacdd1be9ec59404b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:50Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:50 crc kubenswrapper[4958]: I1206 05:29:50.019412 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5mx5v" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ab56cd4-7270-4252-b6e6-cbc102b84d97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16ff54e7650f6ee3df33d0394fb87b8021695f1697bdf30239e59ab458bea64c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-448jg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5mx5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:50Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:50 crc kubenswrapper[4958]: I1206 05:29:50.035909 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wr7h5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ecb41e730cf6bf34bee3ac4577e76690a314f9466e2bd69d896f99091c3657c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24bb15c5aabe30ec2b8cea2cfd92c4a99c8e2170aecfe6cf33f745ecccf32c42\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T05:29:15Z\\\",\\\"message\\\":\\\"2025-12-06T05:28:30+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_38a7603c-7295-4bd3-af02-27bf794eed27\\\\n2025-12-06T05:28:30+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_38a7603c-7295-4bd3-af02-27bf794eed27 to /host/opt/cni/bin/\\\\n2025-12-06T05:28:30Z [verbose] multus-daemon started\\\\n2025-12-06T05:28:30Z [verbose] Readiness Indicator file check\\\\n2025-12-06T05:29:15Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-778ks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wr7h5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:50Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:50 crc kubenswrapper[4958]: I1206 05:29:50.047200 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bbf3a5d-dfb7-4f26-a3a5-5a1198cffc78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ac8c174d2f2cc2b18fa67b1969e4ec3130a7a3ea2e1de302b6607334c8295a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a49f9fd54866385a2dd4c0218a2b5a828817807c2688301ea29004ba0578d556\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://799383b7dc23318bd8bdb994e83afadc90864baf5e694f5a7ef2208c6771660c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c9829c79fbd1ef57f374e8bf829162c2ae48074f4d02cc609d5f9b8791c61a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b2ffa013378c2ca81671e521b37d30af422fd6a418d17e7c2ca14b2da5557d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4001890a8c107a9365e884bae7b1d10f11ea1208ada74b61dcd4cf3db767f3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4001890a8c107a9365e884bae7b1d10f11ea1208ada74b61dcd4cf3db767f3fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:50Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:50 crc kubenswrapper[4958]: I1206 05:29:50.057299 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e7d0d99-10e2-4254-82b5-5490222f80fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f512fa9922310920769bab096c2bb8b2305a17dded545923c16b5e417db50b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dddeac64993fef3f326c165eeeb95bb4b5b9961f07f2b935e592b5aac9a51a32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50e80d39fcfd69db193b9e6c98c92a60983dd0c62b061a69f045193110d159c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc69c938c136778135bd2a90d1671af107b76c4571f8e52327c98f120665d7b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:50Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:50 crc kubenswrapper[4958]: I1206 05:29:50.068958 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:50 crc kubenswrapper[4958]: I1206 05:29:50.069010 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:50 crc kubenswrapper[4958]: I1206 05:29:50.069021 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:50 crc kubenswrapper[4958]: I1206 05:29:50.069050 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:50 crc kubenswrapper[4958]: I1206 05:29:50.069059 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:50Z","lastTransitionTime":"2025-12-06T05:29:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:50 crc kubenswrapper[4958]: I1206 05:29:50.171783 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:50 crc kubenswrapper[4958]: I1206 05:29:50.171842 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:50 crc kubenswrapper[4958]: I1206 05:29:50.171860 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:50 crc kubenswrapper[4958]: I1206 05:29:50.171884 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:50 crc kubenswrapper[4958]: I1206 05:29:50.171902 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:50Z","lastTransitionTime":"2025-12-06T05:29:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:50 crc kubenswrapper[4958]: I1206 05:29:50.273651 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:50 crc kubenswrapper[4958]: I1206 05:29:50.273685 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:50 crc kubenswrapper[4958]: I1206 05:29:50.273695 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:50 crc kubenswrapper[4958]: I1206 05:29:50.273710 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:50 crc kubenswrapper[4958]: I1206 05:29:50.273722 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:50Z","lastTransitionTime":"2025-12-06T05:29:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:50 crc kubenswrapper[4958]: I1206 05:29:50.376370 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:50 crc kubenswrapper[4958]: I1206 05:29:50.376426 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:50 crc kubenswrapper[4958]: I1206 05:29:50.376444 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:50 crc kubenswrapper[4958]: I1206 05:29:50.376509 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:50 crc kubenswrapper[4958]: I1206 05:29:50.376534 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:50Z","lastTransitionTime":"2025-12-06T05:29:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:50 crc kubenswrapper[4958]: I1206 05:29:50.479293 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:50 crc kubenswrapper[4958]: I1206 05:29:50.479355 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:50 crc kubenswrapper[4958]: I1206 05:29:50.479374 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:50 crc kubenswrapper[4958]: I1206 05:29:50.479398 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:50 crc kubenswrapper[4958]: I1206 05:29:50.479416 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:50Z","lastTransitionTime":"2025-12-06T05:29:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:50 crc kubenswrapper[4958]: I1206 05:29:50.582640 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:50 crc kubenswrapper[4958]: I1206 05:29:50.582693 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:50 crc kubenswrapper[4958]: I1206 05:29:50.582711 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:50 crc kubenswrapper[4958]: I1206 05:29:50.582737 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:50 crc kubenswrapper[4958]: I1206 05:29:50.582753 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:50Z","lastTransitionTime":"2025-12-06T05:29:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:50 crc kubenswrapper[4958]: I1206 05:29:50.685983 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:50 crc kubenswrapper[4958]: I1206 05:29:50.686051 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:50 crc kubenswrapper[4958]: I1206 05:29:50.686069 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:50 crc kubenswrapper[4958]: I1206 05:29:50.686094 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:50 crc kubenswrapper[4958]: I1206 05:29:50.686114 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:50Z","lastTransitionTime":"2025-12-06T05:29:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:50 crc kubenswrapper[4958]: I1206 05:29:50.788833 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:50 crc kubenswrapper[4958]: I1206 05:29:50.788910 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:50 crc kubenswrapper[4958]: I1206 05:29:50.788947 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:50 crc kubenswrapper[4958]: I1206 05:29:50.788978 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:50 crc kubenswrapper[4958]: I1206 05:29:50.789000 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:50Z","lastTransitionTime":"2025-12-06T05:29:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:50 crc kubenswrapper[4958]: I1206 05:29:50.891455 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:50 crc kubenswrapper[4958]: I1206 05:29:50.891564 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:50 crc kubenswrapper[4958]: I1206 05:29:50.891587 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:50 crc kubenswrapper[4958]: I1206 05:29:50.891621 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:50 crc kubenswrapper[4958]: I1206 05:29:50.891655 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:50Z","lastTransitionTime":"2025-12-06T05:29:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:50 crc kubenswrapper[4958]: I1206 05:29:50.994103 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:50 crc kubenswrapper[4958]: I1206 05:29:50.994186 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:50 crc kubenswrapper[4958]: I1206 05:29:50.994197 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:50 crc kubenswrapper[4958]: I1206 05:29:50.994213 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:50 crc kubenswrapper[4958]: I1206 05:29:50.994227 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:50Z","lastTransitionTime":"2025-12-06T05:29:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:51 crc kubenswrapper[4958]: I1206 05:29:51.097573 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:51 crc kubenswrapper[4958]: I1206 05:29:51.097684 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:51 crc kubenswrapper[4958]: I1206 05:29:51.097703 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:51 crc kubenswrapper[4958]: I1206 05:29:51.098150 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:51 crc kubenswrapper[4958]: I1206 05:29:51.098191 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:51Z","lastTransitionTime":"2025-12-06T05:29:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:51 crc kubenswrapper[4958]: I1206 05:29:51.201826 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:51 crc kubenswrapper[4958]: I1206 05:29:51.201918 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:51 crc kubenswrapper[4958]: I1206 05:29:51.201937 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:51 crc kubenswrapper[4958]: I1206 05:29:51.201959 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:51 crc kubenswrapper[4958]: I1206 05:29:51.201976 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:51Z","lastTransitionTime":"2025-12-06T05:29:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:51 crc kubenswrapper[4958]: I1206 05:29:51.305192 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:51 crc kubenswrapper[4958]: I1206 05:29:51.305609 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:51 crc kubenswrapper[4958]: I1206 05:29:51.305826 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:51 crc kubenswrapper[4958]: I1206 05:29:51.305987 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:51 crc kubenswrapper[4958]: I1206 05:29:51.306147 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:51Z","lastTransitionTime":"2025-12-06T05:29:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:51 crc kubenswrapper[4958]: I1206 05:29:51.410182 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:51 crc kubenswrapper[4958]: I1206 05:29:51.410239 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:51 crc kubenswrapper[4958]: I1206 05:29:51.410258 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:51 crc kubenswrapper[4958]: I1206 05:29:51.410280 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:51 crc kubenswrapper[4958]: I1206 05:29:51.410298 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:51Z","lastTransitionTime":"2025-12-06T05:29:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:51 crc kubenswrapper[4958]: I1206 05:29:51.513401 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:51 crc kubenswrapper[4958]: I1206 05:29:51.513513 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:51 crc kubenswrapper[4958]: I1206 05:29:51.513538 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:51 crc kubenswrapper[4958]: I1206 05:29:51.513568 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:51 crc kubenswrapper[4958]: I1206 05:29:51.513589 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:51Z","lastTransitionTime":"2025-12-06T05:29:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:51 crc kubenswrapper[4958]: I1206 05:29:51.617152 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:51 crc kubenswrapper[4958]: I1206 05:29:51.617282 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:51 crc kubenswrapper[4958]: I1206 05:29:51.617309 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:51 crc kubenswrapper[4958]: I1206 05:29:51.617345 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:51 crc kubenswrapper[4958]: I1206 05:29:51.617370 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:51Z","lastTransitionTime":"2025-12-06T05:29:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:51 crc kubenswrapper[4958]: I1206 05:29:51.720092 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:51 crc kubenswrapper[4958]: I1206 05:29:51.720129 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:51 crc kubenswrapper[4958]: I1206 05:29:51.720140 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:51 crc kubenswrapper[4958]: I1206 05:29:51.720156 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:51 crc kubenswrapper[4958]: I1206 05:29:51.720167 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:51Z","lastTransitionTime":"2025-12-06T05:29:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:51 crc kubenswrapper[4958]: I1206 05:29:51.762062 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kb98t" Dec 06 05:29:51 crc kubenswrapper[4958]: I1206 05:29:51.762121 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 05:29:51 crc kubenswrapper[4958]: E1206 05:29:51.762246 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kb98t" podUID="2c09fca2-7d91-412a-9814-64370d35b3e9" Dec 06 05:29:51 crc kubenswrapper[4958]: I1206 05:29:51.762062 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 05:29:51 crc kubenswrapper[4958]: E1206 05:29:51.762392 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 05:29:51 crc kubenswrapper[4958]: I1206 05:29:51.762448 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 05:29:51 crc kubenswrapper[4958]: E1206 05:29:51.762595 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 05:29:51 crc kubenswrapper[4958]: E1206 05:29:51.762694 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 05:29:51 crc kubenswrapper[4958]: I1206 05:29:51.822559 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:51 crc kubenswrapper[4958]: I1206 05:29:51.822605 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:51 crc kubenswrapper[4958]: I1206 05:29:51.822616 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:51 crc kubenswrapper[4958]: I1206 05:29:51.822639 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:51 crc kubenswrapper[4958]: I1206 05:29:51.822656 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:51Z","lastTransitionTime":"2025-12-06T05:29:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:51 crc kubenswrapper[4958]: I1206 05:29:51.925431 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:51 crc kubenswrapper[4958]: I1206 05:29:51.925506 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:51 crc kubenswrapper[4958]: I1206 05:29:51.925528 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:51 crc kubenswrapper[4958]: I1206 05:29:51.925546 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:51 crc kubenswrapper[4958]: I1206 05:29:51.925557 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:51Z","lastTransitionTime":"2025-12-06T05:29:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:52 crc kubenswrapper[4958]: I1206 05:29:52.028203 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:52 crc kubenswrapper[4958]: I1206 05:29:52.028280 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:52 crc kubenswrapper[4958]: I1206 05:29:52.028299 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:52 crc kubenswrapper[4958]: I1206 05:29:52.028322 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:52 crc kubenswrapper[4958]: I1206 05:29:52.028340 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:52Z","lastTransitionTime":"2025-12-06T05:29:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:52 crc kubenswrapper[4958]: I1206 05:29:52.131407 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:52 crc kubenswrapper[4958]: I1206 05:29:52.131466 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:52 crc kubenswrapper[4958]: I1206 05:29:52.131517 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:52 crc kubenswrapper[4958]: I1206 05:29:52.131544 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:52 crc kubenswrapper[4958]: I1206 05:29:52.131560 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:52Z","lastTransitionTime":"2025-12-06T05:29:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:52 crc kubenswrapper[4958]: I1206 05:29:52.234805 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:52 crc kubenswrapper[4958]: I1206 05:29:52.234875 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:52 crc kubenswrapper[4958]: I1206 05:29:52.234899 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:52 crc kubenswrapper[4958]: I1206 05:29:52.234928 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:52 crc kubenswrapper[4958]: I1206 05:29:52.234949 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:52Z","lastTransitionTime":"2025-12-06T05:29:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:52 crc kubenswrapper[4958]: I1206 05:29:52.338678 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:52 crc kubenswrapper[4958]: I1206 05:29:52.338736 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:52 crc kubenswrapper[4958]: I1206 05:29:52.338752 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:52 crc kubenswrapper[4958]: I1206 05:29:52.338777 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:52 crc kubenswrapper[4958]: I1206 05:29:52.338796 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:52Z","lastTransitionTime":"2025-12-06T05:29:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:52 crc kubenswrapper[4958]: I1206 05:29:52.442565 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:52 crc kubenswrapper[4958]: I1206 05:29:52.442642 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:52 crc kubenswrapper[4958]: I1206 05:29:52.442666 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:52 crc kubenswrapper[4958]: I1206 05:29:52.442700 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:52 crc kubenswrapper[4958]: I1206 05:29:52.442721 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:52Z","lastTransitionTime":"2025-12-06T05:29:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:52 crc kubenswrapper[4958]: I1206 05:29:52.546457 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:52 crc kubenswrapper[4958]: I1206 05:29:52.546556 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:52 crc kubenswrapper[4958]: I1206 05:29:52.546600 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:52 crc kubenswrapper[4958]: I1206 05:29:52.546645 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:52 crc kubenswrapper[4958]: I1206 05:29:52.546673 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:52Z","lastTransitionTime":"2025-12-06T05:29:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:52 crc kubenswrapper[4958]: I1206 05:29:52.649974 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:52 crc kubenswrapper[4958]: I1206 05:29:52.650040 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:52 crc kubenswrapper[4958]: I1206 05:29:52.650059 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:52 crc kubenswrapper[4958]: I1206 05:29:52.650087 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:52 crc kubenswrapper[4958]: I1206 05:29:52.650107 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:52Z","lastTransitionTime":"2025-12-06T05:29:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:52 crc kubenswrapper[4958]: I1206 05:29:52.753696 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:52 crc kubenswrapper[4958]: I1206 05:29:52.753761 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:52 crc kubenswrapper[4958]: I1206 05:29:52.753777 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:52 crc kubenswrapper[4958]: I1206 05:29:52.753803 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:52 crc kubenswrapper[4958]: I1206 05:29:52.753823 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:52Z","lastTransitionTime":"2025-12-06T05:29:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:52 crc kubenswrapper[4958]: I1206 05:29:52.857706 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:52 crc kubenswrapper[4958]: I1206 05:29:52.857759 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:52 crc kubenswrapper[4958]: I1206 05:29:52.857778 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:52 crc kubenswrapper[4958]: I1206 05:29:52.857802 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:52 crc kubenswrapper[4958]: I1206 05:29:52.857819 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:52Z","lastTransitionTime":"2025-12-06T05:29:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:52 crc kubenswrapper[4958]: I1206 05:29:52.960669 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:52 crc kubenswrapper[4958]: I1206 05:29:52.960750 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:52 crc kubenswrapper[4958]: I1206 05:29:52.960774 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:52 crc kubenswrapper[4958]: I1206 05:29:52.960805 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:52 crc kubenswrapper[4958]: I1206 05:29:52.960826 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:52Z","lastTransitionTime":"2025-12-06T05:29:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:53 crc kubenswrapper[4958]: I1206 05:29:53.063596 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:53 crc kubenswrapper[4958]: I1206 05:29:53.063642 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:53 crc kubenswrapper[4958]: I1206 05:29:53.063656 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:53 crc kubenswrapper[4958]: I1206 05:29:53.063672 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:53 crc kubenswrapper[4958]: I1206 05:29:53.063684 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:53Z","lastTransitionTime":"2025-12-06T05:29:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:53 crc kubenswrapper[4958]: I1206 05:29:53.166978 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:53 crc kubenswrapper[4958]: I1206 05:29:53.167056 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:53 crc kubenswrapper[4958]: I1206 05:29:53.167076 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:53 crc kubenswrapper[4958]: I1206 05:29:53.167102 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:53 crc kubenswrapper[4958]: I1206 05:29:53.167121 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:53Z","lastTransitionTime":"2025-12-06T05:29:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:53 crc kubenswrapper[4958]: I1206 05:29:53.270664 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:53 crc kubenswrapper[4958]: I1206 05:29:53.270761 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:53 crc kubenswrapper[4958]: I1206 05:29:53.270781 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:53 crc kubenswrapper[4958]: I1206 05:29:53.270814 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:53 crc kubenswrapper[4958]: I1206 05:29:53.270833 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:53Z","lastTransitionTime":"2025-12-06T05:29:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:53 crc kubenswrapper[4958]: I1206 05:29:53.374173 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:53 crc kubenswrapper[4958]: I1206 05:29:53.374238 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:53 crc kubenswrapper[4958]: I1206 05:29:53.374257 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:53 crc kubenswrapper[4958]: I1206 05:29:53.374283 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:53 crc kubenswrapper[4958]: I1206 05:29:53.374303 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:53Z","lastTransitionTime":"2025-12-06T05:29:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:53 crc kubenswrapper[4958]: I1206 05:29:53.476983 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:53 crc kubenswrapper[4958]: I1206 05:29:53.477029 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:53 crc kubenswrapper[4958]: I1206 05:29:53.477044 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:53 crc kubenswrapper[4958]: I1206 05:29:53.477059 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:53 crc kubenswrapper[4958]: I1206 05:29:53.477068 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:53Z","lastTransitionTime":"2025-12-06T05:29:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:53 crc kubenswrapper[4958]: I1206 05:29:53.580639 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:53 crc kubenswrapper[4958]: I1206 05:29:53.580713 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:53 crc kubenswrapper[4958]: I1206 05:29:53.580733 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:53 crc kubenswrapper[4958]: I1206 05:29:53.580759 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:53 crc kubenswrapper[4958]: I1206 05:29:53.580797 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:53Z","lastTransitionTime":"2025-12-06T05:29:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:53 crc kubenswrapper[4958]: I1206 05:29:53.684086 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:53 crc kubenswrapper[4958]: I1206 05:29:53.684140 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:53 crc kubenswrapper[4958]: I1206 05:29:53.684155 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:53 crc kubenswrapper[4958]: I1206 05:29:53.684176 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:53 crc kubenswrapper[4958]: I1206 05:29:53.684190 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:53Z","lastTransitionTime":"2025-12-06T05:29:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:53 crc kubenswrapper[4958]: I1206 05:29:53.762840 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 05:29:53 crc kubenswrapper[4958]: I1206 05:29:53.762894 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 05:29:53 crc kubenswrapper[4958]: I1206 05:29:53.762856 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 05:29:53 crc kubenswrapper[4958]: I1206 05:29:53.762856 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kb98t" Dec 06 05:29:53 crc kubenswrapper[4958]: E1206 05:29:53.763072 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 05:29:53 crc kubenswrapper[4958]: E1206 05:29:53.763232 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kb98t" podUID="2c09fca2-7d91-412a-9814-64370d35b3e9" Dec 06 05:29:53 crc kubenswrapper[4958]: E1206 05:29:53.763358 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 05:29:53 crc kubenswrapper[4958]: E1206 05:29:53.763509 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 05:29:53 crc kubenswrapper[4958]: I1206 05:29:53.786617 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:53 crc kubenswrapper[4958]: I1206 05:29:53.786683 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:53 crc kubenswrapper[4958]: I1206 05:29:53.786702 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:53 crc kubenswrapper[4958]: I1206 05:29:53.786728 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:53 crc kubenswrapper[4958]: I1206 05:29:53.786747 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:53Z","lastTransitionTime":"2025-12-06T05:29:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:53 crc kubenswrapper[4958]: I1206 05:29:53.890766 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:53 crc kubenswrapper[4958]: I1206 05:29:53.890846 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:53 crc kubenswrapper[4958]: I1206 05:29:53.890869 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:53 crc kubenswrapper[4958]: I1206 05:29:53.890896 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:53 crc kubenswrapper[4958]: I1206 05:29:53.890914 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:53Z","lastTransitionTime":"2025-12-06T05:29:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:53 crc kubenswrapper[4958]: I1206 05:29:53.993355 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:53 crc kubenswrapper[4958]: I1206 05:29:53.993395 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:53 crc kubenswrapper[4958]: I1206 05:29:53.993405 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:53 crc kubenswrapper[4958]: I1206 05:29:53.993418 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:53 crc kubenswrapper[4958]: I1206 05:29:53.993427 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:53Z","lastTransitionTime":"2025-12-06T05:29:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:54 crc kubenswrapper[4958]: I1206 05:29:54.095616 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:54 crc kubenswrapper[4958]: I1206 05:29:54.095701 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:54 crc kubenswrapper[4958]: I1206 05:29:54.095716 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:54 crc kubenswrapper[4958]: I1206 05:29:54.095737 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:54 crc kubenswrapper[4958]: I1206 05:29:54.095749 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:54Z","lastTransitionTime":"2025-12-06T05:29:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:54 crc kubenswrapper[4958]: I1206 05:29:54.199185 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:54 crc kubenswrapper[4958]: I1206 05:29:54.199256 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:54 crc kubenswrapper[4958]: I1206 05:29:54.199274 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:54 crc kubenswrapper[4958]: I1206 05:29:54.199300 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:54 crc kubenswrapper[4958]: I1206 05:29:54.199319 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:54Z","lastTransitionTime":"2025-12-06T05:29:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:54 crc kubenswrapper[4958]: I1206 05:29:54.302328 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:54 crc kubenswrapper[4958]: I1206 05:29:54.302400 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:54 crc kubenswrapper[4958]: I1206 05:29:54.302419 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:54 crc kubenswrapper[4958]: I1206 05:29:54.302449 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:54 crc kubenswrapper[4958]: I1206 05:29:54.302518 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:54Z","lastTransitionTime":"2025-12-06T05:29:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:54 crc kubenswrapper[4958]: I1206 05:29:54.404381 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:54 crc kubenswrapper[4958]: I1206 05:29:54.404428 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:54 crc kubenswrapper[4958]: I1206 05:29:54.404440 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:54 crc kubenswrapper[4958]: I1206 05:29:54.404460 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:54 crc kubenswrapper[4958]: I1206 05:29:54.404489 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:54Z","lastTransitionTime":"2025-12-06T05:29:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:54 crc kubenswrapper[4958]: I1206 05:29:54.507283 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:54 crc kubenswrapper[4958]: I1206 05:29:54.507383 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:54 crc kubenswrapper[4958]: I1206 05:29:54.507407 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:54 crc kubenswrapper[4958]: I1206 05:29:54.507526 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:54 crc kubenswrapper[4958]: I1206 05:29:54.507553 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:54Z","lastTransitionTime":"2025-12-06T05:29:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:54 crc kubenswrapper[4958]: I1206 05:29:54.610980 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:54 crc kubenswrapper[4958]: I1206 05:29:54.611043 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:54 crc kubenswrapper[4958]: I1206 05:29:54.611064 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:54 crc kubenswrapper[4958]: I1206 05:29:54.611090 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:54 crc kubenswrapper[4958]: I1206 05:29:54.611108 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:54Z","lastTransitionTime":"2025-12-06T05:29:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:54 crc kubenswrapper[4958]: I1206 05:29:54.714320 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:54 crc kubenswrapper[4958]: I1206 05:29:54.714402 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:54 crc kubenswrapper[4958]: I1206 05:29:54.714426 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:54 crc kubenswrapper[4958]: I1206 05:29:54.714461 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:54 crc kubenswrapper[4958]: I1206 05:29:54.714527 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:54Z","lastTransitionTime":"2025-12-06T05:29:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:54 crc kubenswrapper[4958]: I1206 05:29:54.818061 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:54 crc kubenswrapper[4958]: I1206 05:29:54.818106 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:54 crc kubenswrapper[4958]: I1206 05:29:54.818116 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:54 crc kubenswrapper[4958]: I1206 05:29:54.818133 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:54 crc kubenswrapper[4958]: I1206 05:29:54.818142 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:54Z","lastTransitionTime":"2025-12-06T05:29:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:54 crc kubenswrapper[4958]: I1206 05:29:54.876443 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:54 crc kubenswrapper[4958]: I1206 05:29:54.876537 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:54 crc kubenswrapper[4958]: I1206 05:29:54.876558 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:54 crc kubenswrapper[4958]: I1206 05:29:54.876581 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:54 crc kubenswrapper[4958]: I1206 05:29:54.876598 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:54Z","lastTransitionTime":"2025-12-06T05:29:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:54 crc kubenswrapper[4958]: E1206 05:29:54.896720 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ae601d9-1e3a-4939-b6d5-fbef7be2f380\\\",\\\"systemUUID\\\":\\\"d8c60597-c05a-4627-8199-844ddb77ec1c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:54Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:54 crc kubenswrapper[4958]: I1206 05:29:54.900140 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:54 crc kubenswrapper[4958]: I1206 05:29:54.900192 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 06 05:29:54 crc kubenswrapper[4958]: I1206 05:29:54.900204 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:54 crc kubenswrapper[4958]: I1206 05:29:54.900222 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:54 crc kubenswrapper[4958]: I1206 05:29:54.900234 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:54Z","lastTransitionTime":"2025-12-06T05:29:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:54 crc kubenswrapper[4958]: E1206 05:29:54.920184 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ae601d9-1e3a-4939-b6d5-fbef7be2f380\\\",\\\"systemUUID\\\":\\\"d8c60597-c05a-4627-8199-844ddb77ec1c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:54Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:54 crc kubenswrapper[4958]: I1206 05:29:54.925223 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:54 crc kubenswrapper[4958]: I1206 05:29:54.925298 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 06 05:29:54 crc kubenswrapper[4958]: I1206 05:29:54.925320 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:54 crc kubenswrapper[4958]: I1206 05:29:54.925347 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:54 crc kubenswrapper[4958]: I1206 05:29:54.925368 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:54Z","lastTransitionTime":"2025-12-06T05:29:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:54 crc kubenswrapper[4958]: E1206 05:29:54.941263 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ae601d9-1e3a-4939-b6d5-fbef7be2f380\\\",\\\"systemUUID\\\":\\\"d8c60597-c05a-4627-8199-844ddb77ec1c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:54Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:54 crc kubenswrapper[4958]: I1206 05:29:54.946299 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:54 crc kubenswrapper[4958]: I1206 05:29:54.946345 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 06 05:29:54 crc kubenswrapper[4958]: I1206 05:29:54.946359 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:54 crc kubenswrapper[4958]: I1206 05:29:54.946377 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:54 crc kubenswrapper[4958]: I1206 05:29:54.946403 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:54Z","lastTransitionTime":"2025-12-06T05:29:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:54 crc kubenswrapper[4958]: E1206 05:29:54.964722 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ae601d9-1e3a-4939-b6d5-fbef7be2f380\\\",\\\"systemUUID\\\":\\\"d8c60597-c05a-4627-8199-844ddb77ec1c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:54Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:54 crc kubenswrapper[4958]: I1206 05:29:54.968685 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:54 crc kubenswrapper[4958]: I1206 05:29:54.968720 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 06 05:29:54 crc kubenswrapper[4958]: I1206 05:29:54.968729 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:54 crc kubenswrapper[4958]: I1206 05:29:54.968744 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:54 crc kubenswrapper[4958]: I1206 05:29:54.968753 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:54Z","lastTransitionTime":"2025-12-06T05:29:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:54 crc kubenswrapper[4958]: E1206 05:29:54.983908 4958 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-06T05:29:54Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"3ae601d9-1e3a-4939-b6d5-fbef7be2f380\\\",\\\"systemUUID\\\":\\\"d8c60597-c05a-4627-8199-844ddb77ec1c\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:54Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:54 crc kubenswrapper[4958]: E1206 05:29:54.984051 4958 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 06 05:29:54 crc kubenswrapper[4958]: I1206 05:29:54.986144 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
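The status-patch failures above all end in the same root cause: the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 presents a serving certificate whose notAfter (2025-08-24T17:21:41Z) is long past the node's clock (2025-12-06), so the kubelet's TLS handshake rejects it and the patch never reaches the API server. A minimal Go sketch of the same expiry check, standard library only (the address is taken from the log; InsecureSkipVerify is set solely so the expired leaf can be retrieved for inspection rather than trusted):

```go
// Sketch: inspect the certificate served by the webhook endpoint named in
// the log and reproduce the kubelet's expiry verdict. Assumes the webhook
// is still listening on 127.0.0.1:9743 (address taken from the log above).
package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
		InsecureSkipVerify: true, // fetch the cert even though it is expired
	})
	if err != nil {
		fmt.Println("dial:", err)
		return
	}
	defer conn.Close()

	certs := conn.ConnectionState().PeerCertificates
	if len(certs) == 0 {
		fmt.Println("no peer certificate presented")
		return
	}
	leaf := certs[0]
	fmt.Printf("subject:   %s\nnotBefore: %s\nnotAfter:  %s\n",
		leaf.Subject, leaf.NotBefore.UTC(), leaf.NotAfter.UTC())
	if now := time.Now().UTC(); now.After(leaf.NotAfter) {
		// Same condition the log reports: "current time ... is after ...".
		fmt.Printf("expired: current time %s is after %s\n",
			now.Format(time.RFC3339), leaf.NotAfter.Format(time.RFC3339))
	}
}
```

On CRC this situation is expected after the VM has been powered off beyond a certificate's lifetime; the cluster normally rotates the expired certificates itself shortly after startup, at which point these patch retries begin to succeed.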
event="NodeHasSufficientMemory" Dec 06 05:29:54 crc kubenswrapper[4958]: I1206 05:29:54.986180 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:54 crc kubenswrapper[4958]: I1206 05:29:54.986191 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:54 crc kubenswrapper[4958]: I1206 05:29:54.986204 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:54 crc kubenswrapper[4958]: I1206 05:29:54.986213 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:54Z","lastTransitionTime":"2025-12-06T05:29:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:55 crc kubenswrapper[4958]: I1206 05:29:55.089526 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:55 crc kubenswrapper[4958]: I1206 05:29:55.089611 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:55 crc kubenswrapper[4958]: I1206 05:29:55.089632 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:55 crc kubenswrapper[4958]: I1206 05:29:55.089839 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:55 crc kubenswrapper[4958]: I1206 05:29:55.089854 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:55Z","lastTransitionTime":"2025-12-06T05:29:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:55 crc kubenswrapper[4958]: I1206 05:29:55.192859 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:55 crc kubenswrapper[4958]: I1206 05:29:55.192930 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:55 crc kubenswrapper[4958]: I1206 05:29:55.192949 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:55 crc kubenswrapper[4958]: I1206 05:29:55.192977 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:55 crc kubenswrapper[4958]: I1206 05:29:55.192996 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:55Z","lastTransitionTime":"2025-12-06T05:29:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:55 crc kubenswrapper[4958]: I1206 05:29:55.295157 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:55 crc kubenswrapper[4958]: I1206 05:29:55.295221 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:55 crc kubenswrapper[4958]: I1206 05:29:55.295238 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:55 crc kubenswrapper[4958]: I1206 05:29:55.295263 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:55 crc kubenswrapper[4958]: I1206 05:29:55.295280 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:55Z","lastTransitionTime":"2025-12-06T05:29:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:55 crc kubenswrapper[4958]: I1206 05:29:55.398040 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:55 crc kubenswrapper[4958]: I1206 05:29:55.398121 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:55 crc kubenswrapper[4958]: I1206 05:29:55.398131 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:55 crc kubenswrapper[4958]: I1206 05:29:55.398145 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:55 crc kubenswrapper[4958]: I1206 05:29:55.398156 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:55Z","lastTransitionTime":"2025-12-06T05:29:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:55 crc kubenswrapper[4958]: I1206 05:29:55.501217 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:55 crc kubenswrapper[4958]: I1206 05:29:55.501260 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:55 crc kubenswrapper[4958]: I1206 05:29:55.501269 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:55 crc kubenswrapper[4958]: I1206 05:29:55.501283 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:55 crc kubenswrapper[4958]: I1206 05:29:55.501295 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:55Z","lastTransitionTime":"2025-12-06T05:29:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:55 crc kubenswrapper[4958]: I1206 05:29:55.604204 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:55 crc kubenswrapper[4958]: I1206 05:29:55.604249 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:55 crc kubenswrapper[4958]: I1206 05:29:55.604260 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:55 crc kubenswrapper[4958]: I1206 05:29:55.604276 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:55 crc kubenswrapper[4958]: I1206 05:29:55.604286 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:55Z","lastTransitionTime":"2025-12-06T05:29:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:55 crc kubenswrapper[4958]: I1206 05:29:55.706450 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:55 crc kubenswrapper[4958]: I1206 05:29:55.706531 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:55 crc kubenswrapper[4958]: I1206 05:29:55.706542 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:55 crc kubenswrapper[4958]: I1206 05:29:55.706584 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:55 crc kubenswrapper[4958]: I1206 05:29:55.706599 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:55Z","lastTransitionTime":"2025-12-06T05:29:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:55 crc kubenswrapper[4958]: I1206 05:29:55.761894 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 05:29:55 crc kubenswrapper[4958]: E1206 05:29:55.762019 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 05:29:55 crc kubenswrapper[4958]: I1206 05:29:55.762208 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kb98t" Dec 06 05:29:55 crc kubenswrapper[4958]: E1206 05:29:55.762261 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kb98t" podUID="2c09fca2-7d91-412a-9814-64370d35b3e9" Dec 06 05:29:55 crc kubenswrapper[4958]: I1206 05:29:55.762365 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 05:29:55 crc kubenswrapper[4958]: E1206 05:29:55.762407 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 05:29:55 crc kubenswrapper[4958]: I1206 05:29:55.762620 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 05:29:55 crc kubenswrapper[4958]: E1206 05:29:55.762675 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 05:29:55 crc kubenswrapper[4958]: I1206 05:29:55.809921 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:55 crc kubenswrapper[4958]: I1206 05:29:55.809974 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:55 crc kubenswrapper[4958]: I1206 05:29:55.809991 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:55 crc kubenswrapper[4958]: I1206 05:29:55.810014 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:55 crc kubenswrapper[4958]: I1206 05:29:55.810030 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:55Z","lastTransitionTime":"2025-12-06T05:29:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
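Interleaved with the heartbeat above, the kubelet refuses to create sandboxes for four pods, and the NotReady condition spells out the gate: NetworkReady stays false until a CNI configuration file appears in /etc/kubernetes/cni/net.d/. A small Go sketch of that directory probe (the path is the one named in the log; the extension list mirrors what CNI config loaders conventionally accept and is an assumption, not something the log states):

```go
// Sketch: the readiness gate implied by "no CNI configuration file in
// /etc/kubernetes/cni/net.d/" -- report not-ready while the directory has
// no usable CNI config. Extensions are assumed, not taken from the log.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	const confDir = "/etc/kubernetes/cni/net.d" // directory named in the log
	var found []string
	for _, pattern := range []string{"*.conf", "*.conflist", "*.json"} {
		matches, err := filepath.Glob(filepath.Join(confDir, pattern))
		if err != nil {
			fmt.Fprintln(os.Stderr, "bad pattern:", err)
			os.Exit(2)
		}
		found = append(found, matches...)
	}
	if len(found) == 0 {
		// Mirrors the condition message: NetworkReady=false, NetworkPluginNotReady.
		fmt.Println("network not ready: no CNI configuration file in", confDir)
		os.Exit(1)
	}
	fmt.Println("network ready, CNI configs:", found)
}
```

Pods on the host network are exempt from this gate, which is why the static control-plane containers keep running while network-check-source, network-check-target, network-metrics-daemon, and the networking-console-plugin stay stuck waiting for a sandbox.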
Dec 06 05:29:55 crc kubenswrapper[4958]: I1206 05:29:55.912884 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:55 crc kubenswrapper[4958]: I1206 05:29:55.912925 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:55 crc kubenswrapper[4958]: I1206 05:29:55.912938 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:55 crc kubenswrapper[4958]: I1206 05:29:55.912956 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:55 crc kubenswrapper[4958]: I1206 05:29:55.912968 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:55Z","lastTransitionTime":"2025-12-06T05:29:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:56 crc kubenswrapper[4958]: I1206 05:29:56.015296 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:56 crc kubenswrapper[4958]: I1206 05:29:56.015356 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:56 crc kubenswrapper[4958]: I1206 05:29:56.015367 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:56 crc kubenswrapper[4958]: I1206 05:29:56.015382 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:56 crc kubenswrapper[4958]: I1206 05:29:56.015392 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:56Z","lastTransitionTime":"2025-12-06T05:29:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:56 crc kubenswrapper[4958]: I1206 05:29:56.117333 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:56 crc kubenswrapper[4958]: I1206 05:29:56.117386 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:56 crc kubenswrapper[4958]: I1206 05:29:56.117403 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:56 crc kubenswrapper[4958]: I1206 05:29:56.117426 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:56 crc kubenswrapper[4958]: I1206 05:29:56.117444 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:56Z","lastTransitionTime":"2025-12-06T05:29:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:56 crc kubenswrapper[4958]: I1206 05:29:56.220363 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:56 crc kubenswrapper[4958]: I1206 05:29:56.220407 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:56 crc kubenswrapper[4958]: I1206 05:29:56.220420 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:56 crc kubenswrapper[4958]: I1206 05:29:56.220445 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:56 crc kubenswrapper[4958]: I1206 05:29:56.220458 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:56Z","lastTransitionTime":"2025-12-06T05:29:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:56 crc kubenswrapper[4958]: I1206 05:29:56.323341 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:56 crc kubenswrapper[4958]: I1206 05:29:56.323395 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:56 crc kubenswrapper[4958]: I1206 05:29:56.323413 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:56 crc kubenswrapper[4958]: I1206 05:29:56.323438 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:56 crc kubenswrapper[4958]: I1206 05:29:56.323459 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:56Z","lastTransitionTime":"2025-12-06T05:29:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:56 crc kubenswrapper[4958]: I1206 05:29:56.425676 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:56 crc kubenswrapper[4958]: I1206 05:29:56.425763 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:56 crc kubenswrapper[4958]: I1206 05:29:56.425790 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:56 crc kubenswrapper[4958]: I1206 05:29:56.425824 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:56 crc kubenswrapper[4958]: I1206 05:29:56.425850 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:56Z","lastTransitionTime":"2025-12-06T05:29:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 06 05:29:56 crc kubenswrapper[4958]: I1206 05:29:56.529035 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 05:29:56 crc kubenswrapper[4958]: I1206 05:29:56.529081 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 05:29:56 crc kubenswrapper[4958]: I1206 05:29:56.529095 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 05:29:56 crc kubenswrapper[4958]: I1206 05:29:56.529114 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 05:29:56 crc kubenswrapper[4958]: I1206 05:29:56.529130 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:56Z","lastTransitionTime":"2025-12-06T05:29:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[... the same five-record cycle (NodeHasSufficientMemory, NodeHasNoDiskPressure, NodeHasSufficientPID, NodeNotReady, "Node became not ready" with the identical KubeletNotReady condition) repeats 10 more times at roughly 100 ms intervals, from 05:29:56.632 through 05:29:57.558 ...]
Dec 06 05:29:57 crc kubenswrapper[4958]: I1206 05:29:57.661154 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 05:29:57 crc kubenswrapper[4958]: I1206 05:29:57.661204 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 05:29:57 crc kubenswrapper[4958]: I1206 05:29:57.661218 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 05:29:57 crc kubenswrapper[4958]: I1206 05:29:57.661240 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 05:29:57 crc kubenswrapper[4958]: I1206 05:29:57.661256 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:57Z","lastTransitionTime":"2025-12-06T05:29:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
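The "Node became not ready" records above embed the node's Ready condition as inline JSON after condition=. A minimal sketch of pulling that condition out of a captured line and printing why the node is NotReady, assuming plain Python with only the standard library; the sample line below is abridged from the records above:

```python
# Extract the Ready condition that setters.go logs inline and print the
# reason/message. The sample line is abridged from the journal above;
# real captured lines can be fed through the same regex.
import json
import re

line = ('I1206 05:29:57.661256 4958 setters.go:603] "Node became not ready" '
        'node="crc" condition={"type":"Ready","status":"False",'
        '"reason":"KubeletNotReady","message":"container runtime network '
        'not ready: NetworkReady=false reason:NetworkPluginNotReady"}')

match = re.search(r'condition=(\{.*\})', line)
if match:
    cond = json.loads(match.group(1))          # the condition is valid JSON
    print(f'{cond["type"]}={cond["status"]} reason={cond["reason"]}')
    print(cond["message"])
```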
Dec 06 05:29:57 crc kubenswrapper[4958]: I1206 05:29:57.761215 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 06 05:29:57 crc kubenswrapper[4958]: I1206 05:29:57.761269 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 06 05:29:57 crc kubenswrapper[4958]: E1206 05:29:57.761366 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 06 05:29:57 crc kubenswrapper[4958]: I1206 05:29:57.761217 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 06 05:29:57 crc kubenswrapper[4958]: E1206 05:29:57.761525 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 06 05:29:57 crc kubenswrapper[4958]: E1206 05:29:57.761625 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 06 05:29:57 crc kubenswrapper[4958]: I1206 05:29:57.761669 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kb98t"
Dec 06 05:29:57 crc kubenswrapper[4958]: E1206 05:29:57.761800 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kb98t" podUID="2c09fca2-7d91-412a-9814-64370d35b3e9"
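Every sync failure above names the same root cause as the Ready condition: no CNI configuration file under /etc/kubernetes/cni/net.d/. A minimal check of that directory, assuming it is run directly on the node; the path comes from the log message, while the *.conf* glob is an assumption about the usual .conf/.conflist naming:

```python
# Check whether any CNI config exists where the kubelet says it is
# looking. An empty result matches the NetworkPluginNotReady errors above.
from pathlib import Path

cni_dir = Path("/etc/kubernetes/cni/net.d")   # path taken from the log message
confs = sorted(cni_dir.glob("*.conf*")) if cni_dir.is_dir() else []
if confs:
    for p in confs:
        print("found CNI config:", p)
else:
    print(f"no CNI configuration files under {cni_dir}, "
          "matching the NetworkPluginNotReady error above")
```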
pod="openshift-multus/network-metrics-daemon-kb98t" podUID="2c09fca2-7d91-412a-9814-64370d35b3e9" Dec 06 05:29:57 crc kubenswrapper[4958]: I1206 05:29:57.763854 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:57 crc kubenswrapper[4958]: I1206 05:29:57.763900 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:57 crc kubenswrapper[4958]: I1206 05:29:57.763915 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:57 crc kubenswrapper[4958]: I1206 05:29:57.763930 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:57 crc kubenswrapper[4958]: I1206 05:29:57.763940 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:57Z","lastTransitionTime":"2025-12-06T05:29:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:57 crc kubenswrapper[4958]: I1206 05:29:57.866670 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:57 crc kubenswrapper[4958]: I1206 05:29:57.866706 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:57 crc kubenswrapper[4958]: I1206 05:29:57.866717 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:57 crc kubenswrapper[4958]: I1206 05:29:57.866733 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:57 crc kubenswrapper[4958]: I1206 05:29:57.866744 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:57Z","lastTransitionTime":"2025-12-06T05:29:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:57 crc kubenswrapper[4958]: I1206 05:29:57.969155 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:57 crc kubenswrapper[4958]: I1206 05:29:57.969193 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:57 crc kubenswrapper[4958]: I1206 05:29:57.969203 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:57 crc kubenswrapper[4958]: I1206 05:29:57.969219 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:57 crc kubenswrapper[4958]: I1206 05:29:57.969231 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:57Z","lastTransitionTime":"2025-12-06T05:29:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:58 crc kubenswrapper[4958]: I1206 05:29:58.072561 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:58 crc kubenswrapper[4958]: I1206 05:29:58.072637 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:58 crc kubenswrapper[4958]: I1206 05:29:58.072655 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:58 crc kubenswrapper[4958]: I1206 05:29:58.072676 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:58 crc kubenswrapper[4958]: I1206 05:29:58.072691 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:58Z","lastTransitionTime":"2025-12-06T05:29:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:58 crc kubenswrapper[4958]: I1206 05:29:58.175203 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:58 crc kubenswrapper[4958]: I1206 05:29:58.175259 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:58 crc kubenswrapper[4958]: I1206 05:29:58.175287 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:58 crc kubenswrapper[4958]: I1206 05:29:58.175337 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:58 crc kubenswrapper[4958]: I1206 05:29:58.175361 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:58Z","lastTransitionTime":"2025-12-06T05:29:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:58 crc kubenswrapper[4958]: I1206 05:29:58.279193 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:58 crc kubenswrapper[4958]: I1206 05:29:58.279270 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:58 crc kubenswrapper[4958]: I1206 05:29:58.279295 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:58 crc kubenswrapper[4958]: I1206 05:29:58.279327 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:58 crc kubenswrapper[4958]: I1206 05:29:58.279351 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:58Z","lastTransitionTime":"2025-12-06T05:29:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:58 crc kubenswrapper[4958]: I1206 05:29:58.386083 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:58 crc kubenswrapper[4958]: I1206 05:29:58.386140 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:58 crc kubenswrapper[4958]: I1206 05:29:58.386158 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:58 crc kubenswrapper[4958]: I1206 05:29:58.386181 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:58 crc kubenswrapper[4958]: I1206 05:29:58.386199 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:58Z","lastTransitionTime":"2025-12-06T05:29:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:58 crc kubenswrapper[4958]: I1206 05:29:58.488776 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:58 crc kubenswrapper[4958]: I1206 05:29:58.488886 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:58 crc kubenswrapper[4958]: I1206 05:29:58.488906 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:58 crc kubenswrapper[4958]: I1206 05:29:58.488932 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:58 crc kubenswrapper[4958]: I1206 05:29:58.488951 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:58Z","lastTransitionTime":"2025-12-06T05:29:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:58 crc kubenswrapper[4958]: I1206 05:29:58.591216 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:58 crc kubenswrapper[4958]: I1206 05:29:58.591260 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:58 crc kubenswrapper[4958]: I1206 05:29:58.591271 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:58 crc kubenswrapper[4958]: I1206 05:29:58.591289 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:58 crc kubenswrapper[4958]: I1206 05:29:58.591300 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:58Z","lastTransitionTime":"2025-12-06T05:29:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:58 crc kubenswrapper[4958]: I1206 05:29:58.694287 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:58 crc kubenswrapper[4958]: I1206 05:29:58.694333 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:58 crc kubenswrapper[4958]: I1206 05:29:58.694346 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:58 crc kubenswrapper[4958]: I1206 05:29:58.694364 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:58 crc kubenswrapper[4958]: I1206 05:29:58.694375 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:58Z","lastTransitionTime":"2025-12-06T05:29:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:58 crc kubenswrapper[4958]: I1206 05:29:58.797209 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:58 crc kubenswrapper[4958]: I1206 05:29:58.797304 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:58 crc kubenswrapper[4958]: I1206 05:29:58.797368 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:58 crc kubenswrapper[4958]: I1206 05:29:58.797402 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:58 crc kubenswrapper[4958]: I1206 05:29:58.797467 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:58Z","lastTransitionTime":"2025-12-06T05:29:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:58 crc kubenswrapper[4958]: I1206 05:29:58.905019 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:58 crc kubenswrapper[4958]: I1206 05:29:58.905107 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:58 crc kubenswrapper[4958]: I1206 05:29:58.905130 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:58 crc kubenswrapper[4958]: I1206 05:29:58.905162 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:58 crc kubenswrapper[4958]: I1206 05:29:58.905185 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:58Z","lastTransitionTime":"2025-12-06T05:29:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:59 crc kubenswrapper[4958]: I1206 05:29:59.008316 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:59 crc kubenswrapper[4958]: I1206 05:29:59.008387 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:59 crc kubenswrapper[4958]: I1206 05:29:59.008410 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:59 crc kubenswrapper[4958]: I1206 05:29:59.008440 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:59 crc kubenswrapper[4958]: I1206 05:29:59.008498 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:59Z","lastTransitionTime":"2025-12-06T05:29:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:59 crc kubenswrapper[4958]: I1206 05:29:59.111357 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:59 crc kubenswrapper[4958]: I1206 05:29:59.111426 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:59 crc kubenswrapper[4958]: I1206 05:29:59.111449 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:59 crc kubenswrapper[4958]: I1206 05:29:59.111513 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:59 crc kubenswrapper[4958]: I1206 05:29:59.111539 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:59Z","lastTransitionTime":"2025-12-06T05:29:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:59 crc kubenswrapper[4958]: I1206 05:29:59.214550 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:59 crc kubenswrapper[4958]: I1206 05:29:59.214616 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:59 crc kubenswrapper[4958]: I1206 05:29:59.214633 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:59 crc kubenswrapper[4958]: I1206 05:29:59.214655 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:59 crc kubenswrapper[4958]: I1206 05:29:59.214676 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:59Z","lastTransitionTime":"2025-12-06T05:29:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:59 crc kubenswrapper[4958]: I1206 05:29:59.318044 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:59 crc kubenswrapper[4958]: I1206 05:29:59.318093 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:59 crc kubenswrapper[4958]: I1206 05:29:59.318111 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:59 crc kubenswrapper[4958]: I1206 05:29:59.318134 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:59 crc kubenswrapper[4958]: I1206 05:29:59.318151 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:59Z","lastTransitionTime":"2025-12-06T05:29:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:59 crc kubenswrapper[4958]: I1206 05:29:59.421448 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:59 crc kubenswrapper[4958]: I1206 05:29:59.421553 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:59 crc kubenswrapper[4958]: I1206 05:29:59.421572 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:59 crc kubenswrapper[4958]: I1206 05:29:59.421599 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:59 crc kubenswrapper[4958]: I1206 05:29:59.421623 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:59Z","lastTransitionTime":"2025-12-06T05:29:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:29:59 crc kubenswrapper[4958]: I1206 05:29:59.525245 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:59 crc kubenswrapper[4958]: I1206 05:29:59.525291 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:59 crc kubenswrapper[4958]: I1206 05:29:59.525302 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:59 crc kubenswrapper[4958]: I1206 05:29:59.525319 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:59 crc kubenswrapper[4958]: I1206 05:29:59.525331 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:59Z","lastTransitionTime":"2025-12-06T05:29:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:59 crc kubenswrapper[4958]: I1206 05:29:59.628597 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:59 crc kubenswrapper[4958]: I1206 05:29:59.628665 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:59 crc kubenswrapper[4958]: I1206 05:29:59.628685 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:59 crc kubenswrapper[4958]: I1206 05:29:59.628712 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:59 crc kubenswrapper[4958]: I1206 05:29:59.628731 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:59Z","lastTransitionTime":"2025-12-06T05:29:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:59 crc kubenswrapper[4958]: I1206 05:29:59.732189 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:59 crc kubenswrapper[4958]: I1206 05:29:59.732312 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:59 crc kubenswrapper[4958]: I1206 05:29:59.732334 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:59 crc kubenswrapper[4958]: I1206 05:29:59.732403 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:59 crc kubenswrapper[4958]: I1206 05:29:59.732422 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:59Z","lastTransitionTime":"2025-12-06T05:29:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
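Runs like the one above are easier to reason about once summarized. A small sketch, assuming the journal excerpt has been saved to a file named kubelet.log (a hypothetical name), that counts the "Node became not ready" records and reports the first and last klog timestamps; the I1206 prefix in the regex is an assumption matching the December 6 INFO lines seen here:

```python
# Summarize the NotReady cycle cadence in a saved journal excerpt: each
# cycle logs "Node became not ready" exactly once, so counting those lines
# and keeping first/last timestamps shows how often the loop fires.
import re

stamps = []
with open("kubelet.log") as fh:            # hypothetical saved excerpt
    for line in fh:
        if '"Node became not ready"' in line:
            m = re.search(r'I1206 (\d{2}:\d{2}:\d{2}\.\d+)', line)
            if m:
                stamps.append(m.group(1))

if stamps:
    print(f'{len(stamps)} NotReady records, first at {stamps[0]}, '
          f'last at {stamps[-1]}')
else:
    print('no "Node became not ready" records found')
```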
Dec 06 05:29:59 crc kubenswrapper[4958]: I1206 05:29:59.761135 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 06 05:29:59 crc kubenswrapper[4958]: I1206 05:29:59.761198 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kb98t"
Dec 06 05:29:59 crc kubenswrapper[4958]: I1206 05:29:59.761256 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 06 05:29:59 crc kubenswrapper[4958]: I1206 05:29:59.761272 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 06 05:29:59 crc kubenswrapper[4958]: E1206 05:29:59.761448 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 06 05:29:59 crc kubenswrapper[4958]: E1206 05:29:59.761618 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kb98t" podUID="2c09fca2-7d91-412a-9814-64370d35b3e9"
Dec 06 05:29:59 crc kubenswrapper[4958]: E1206 05:29:59.761881 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 06 05:29:59 crc kubenswrapper[4958]: E1206 05:29:59.761965 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 05:29:59 crc kubenswrapper[4958]: I1206 05:29:59.780240 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:59Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:59 crc kubenswrapper[4958]: I1206 05:29:59.801036 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85e394f19d7c9a454876e48621768d9f956790321aeea3140db17586f532e28c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7cba9186696fbc52dacdd8c02c28a44f8e4f11e2bd39065212cddf3acdd7f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:59Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:59 crc kubenswrapper[4958]: I1206 05:29:59.819822 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c13528c0-da5d-4d55-9155-2c29c33edfc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5cd9b0b70572f52b50e426a747be79d9f46795a9f5e890f4c6837ba67a34188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd2256c5c47e4b4d5c0d27e0a687c531d3d4e2162c3439a57acb90784599b441\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5ktnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:59Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:59 crc kubenswrapper[4958]: I1206 05:29:59.834755 4958 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:59 crc kubenswrapper[4958]: I1206 05:29:59.834814 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:59 crc kubenswrapper[4958]: I1206 05:29:59.834826 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:59 crc kubenswrapper[4958]: I1206 05:29:59.834848 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:59 crc kubenswrapper[4958]: I1206 05:29:59.834860 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:59Z","lastTransitionTime":"2025-12-06T05:29:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:59 crc kubenswrapper[4958]: I1206 05:29:59.854164 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cxvtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48b90387-4ea9-47a3-8aa1-b8b19c5ed186\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce9f0a37477909d730255694146829d5d4527e3c3c3928bdd475b58506bbd544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d3ce1b34e648dbc34656fb0430468d6a998992cdb52f7858c3e9032781cabf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2
c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d3ce1b34e648dbc34656fb0430468d6a998992cdb52f7858c3e9032781cabf3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45259e88c3c2b9523b20a34f02fb59aa21e487dd207c8cb6e29ac571d89cd75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45259e88c3c2b9523b20a34f02fb59aa21e487dd207c8cb6e29ac571d89cd75c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5907422cf6b36b8d77741d339e507eee8a26c62b51c03458fbab243363441b49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5907422cf6b36b8d77741d339e507eee8a26c62b51c03458fbab243363441b49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/
secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdba082661b1dd38d5628fedac7afcea2e2960d502c61b9c21c723c0f6be8dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdba082661b1dd38d5628fedac7afcea2e2960d502c61b9c21c723c0f6be8dab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afdd138a15c3949f3eb5cdcac4d372fe8db3f1822f9fa33eb7f211ba93764903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afdd138a15c3949f3eb5cdcac4d372fe8db3f1822f9fa33eb7f211ba93764903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09ba0a78226b9b3b3b12202389168f7e17a1f2ba992a22dc9d436d29aac0d327\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://09ba0a78226b9b3b3b12202389168f7e17a1f2ba992a22dc9d436d29aac0d327\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:35Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cxvtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:59Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:59 crc kubenswrapper[4958]: I1206 05:29:59.874031 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c75c3b8-96d9-442e-b3c4-92d10ad33929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c7cf12b8a7e5da199c6d8c5bdeb41243c911cef6cab0bac81e407502a8ff144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6e7454c626f79f7ef231e33e2917e2a65265fc21ccf0bb078c9b7f5b3cec240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f87617d1f4b9c7ad893a539ddf123f3e213d38e44c084441f565d6535078c7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f6d7aa9cdfe8211f420e0a25851c58e125bd09b9d90ad4724703cc1112e45b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://473d669d19d2507144f95718dc331ea994a36254f7f7b6e2b262a0e9cb46bfc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ac1117aba254b7e048d9dd3feff2587d218f6d7a6f4ccbe455eb2b19e534aa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3511a58e1529f892844574e8b2faceca960108d4
4afb02aa4f882d19471c5519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3511a58e1529f892844574e8b2faceca960108d44afb02aa4f882d19471c5519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T05:29:31Z\\\",\\\"message\\\":\\\"ces.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1206 05:29:30.743372 6921 services_controller.go:452] Built service openshift-apiserver/api per-node LB for network=default: []services.LB{}\\\\nI1206 05:29:30.743368 6921 services_controller.go:443] Built service openshift-console-operator/metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.88\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI1206 05:29:30.743388 6921 services_controller.go:453] Built service openshift-apiserver/api template LB for network=default: []services.LB{}\\\\nI1206 05:29:30.743398 6921 services_controller.go:444] Built service openshift-console-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nF1206 05:29:30.742791 6921 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T05:29:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-f4swt_openshift-ovn-kubernetes(4c75c3b8-96d9-442e-b3c4-92d10ad33929)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90354a506bfb2c066060aaa6c9c48c8fe36acc7b3fb7f76905c7aa2622c8c93e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f16219fb96bfa08d1c4e1fbacac446f07ea75a221d1a6a4eddadd9e587f75ea9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f16219fb96bfa08d1c4e1fbacac446f07ea75a221d1a6a4eddadd9e587f75ea9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-f4swt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:59Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:59 crc kubenswrapper[4958]: I1206 05:29:59.885382 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ba128f4-2e42-45be-bda0-0253dd5502d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74f69d05943422b54bfdaf337dabe66d5a16adff7bcd0aec89231dfaedb42ade\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7af93ebbf44bd284a93e374862eb9b1f0ca1e2426eb884
72157c5dcabf8c63b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7af93ebbf44bd284a93e374862eb9b1f0ca1e2426eb88472157c5dcabf8c63b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:59Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:59 crc kubenswrapper[4958]: I1206 05:29:59.898172 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"326ec8f3-f884-4d81-85c0-0fb98ff16b80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ab8b97e84e2f0ddf99e989d15f84695ae263c722141d62885129f9eb48226a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfa163f9e3eb5b2418860c0fa7fcbf96990229ac20da3d72cf18d3c326f466b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15f70e4124ff7d2705704dab6bf80d6ee77c2181e9bb400f7af227e9c18f0d20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aabbafda3fdfc2434de99224b35e72d2d536ed8b4af7dbdfa16058ab7f638531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aabbafda3fdfc2434de99224b35e72d2d536ed8b4af7dbdfa16058ab7f638531\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:59Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:59 crc kubenswrapper[4958]: I1206 05:29:59.912528 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22338e19d740d76b4395405aae08f7cc69ca43dcf680a2bac1323c989e74aa02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:59Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:59 crc kubenswrapper[4958]: I1206 05:29:59.924453 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kb98t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c09fca2-7d91-412a-9814-64370d35b3e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pscv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pscv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kb98t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:59Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:59 crc kubenswrapper[4958]: I1206 05:29:59.936377 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-j25xk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f69a7b70-83e2-4775-872c-cc414f3d08ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3077eaf7e49467b9443569425f7666246ccd40207193f35407bd5ca69799be6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qktwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c146f2ee32c0a80c9bb07f7b61d094a36ba1bdd250b578479193dc8e50dc7cad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qktwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-j25xk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:59Z is after 2025-08-24T17:21:41Z" Dec 06 
05:29:59 crc kubenswrapper[4958]: I1206 05:29:59.937706 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:29:59 crc kubenswrapper[4958]: I1206 05:29:59.937740 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:29:59 crc kubenswrapper[4958]: I1206 05:29:59.937750 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:29:59 crc kubenswrapper[4958]: I1206 05:29:59.937768 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:29:59 crc kubenswrapper[4958]: I1206 05:29:59.937782 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:29:59Z","lastTransitionTime":"2025-12-06T05:29:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:29:59 crc kubenswrapper[4958]: I1206 05:29:59.954307 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3942544-663d-452b-851f-7ae1951315d0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c605d49963ffec3a03f9166f2a4b945adca772c1e6f9ef0c43e3b43498bb520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fa2e5229064a9479f641c0c6d934e8eda769ee88be13401c43ae136ce8bc24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b9009
2272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0fa5b094aed0445fa892dd4d08fa83de565bda82c6ca9b9d536a3275df19b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9356ce9f50a037c7bbb99085c8d3a1cf6648fca53bed4eb86d1be5cb82386bfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bb5270cbb6f2e29e1393dc0724b38332f5d2b7bd2947352c848a22efbcd979c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4460d6ee5ad193a5fdd65c4ddf2e2837db3e1a0a5f8f32b84ad83cc74cc6c779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\
":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4460d6ee5ad193a5fdd65c4ddf2e2837db3e1a0a5f8f32b84ad83cc74cc6c779\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d0adb65980c4e04df9b38b248a2d6bf58ac214ec2fe6ee6f8fbee38dc114904\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d0adb65980c4e04df9b38b248a2d6bf58ac214ec2fe6ee6f8fbee38dc114904\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b4a1b86d7480be2b839295c4c7dc60abd79e17d12514d7a7f90453498dd586e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4a1b86d7480be2b839295c4c7dc60abd79e17d12514d7a7f90453498dd586e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:59Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:59 crc kubenswrapper[4958]: I1206 05:29:59.968275 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:59Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:59 crc kubenswrapper[4958]: I1206 05:29:59.976909 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bxxwq" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"802ce14e-baaf-4d0d-87e3-3457209d8cda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://691435e89fcfa5b2dbd5ee54379f791a3e4f4df43c500a83bc61f7b20aca7ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlz6h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bxxwq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:59Z is after 2025-08-24T17:21:41Z" Dec 06 05:29:59 crc kubenswrapper[4958]: I1206 05:29:59.991355 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wr7h5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:29:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ecb41e730cf6bf34bee3ac4577e76690a314f9466e2bd69d896f99091c3657c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24bb15c5aabe30ec2b8cea2cfd92c4a99c8e2170aecfe6cf33f745ecccf32c42\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T05:29:15Z\\\",\\\"message\\\":\\\"2025-12-06T05:28:30+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_38a7603c-7295-4bd3-af02-27bf794eed27\\\\n2025-12-06T05:28:30+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_38a7603c-7295-4bd3-af02-27bf794eed27 to /host/opt/cni/bin/\\\\n2025-12-06T05:28:30Z [verbose] multus-daemon started\\\\n2025-12-06T05:28:30Z [verbose] Readiness Indicator file check\\\\n2025-12-06T05:29:15Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-778ks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wr7h5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:29:59Z is after 2025-08-24T17:21:41Z" Dec 06 05:30:00 crc kubenswrapper[4958]: I1206 05:30:00.005662 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:30:00Z is after 2025-08-24T17:21:41Z" Dec 06 05:30:00 crc kubenswrapper[4958]: I1206 05:30:00.019583 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12ef2ec44908dbfbd456d6a97cb50054adb79c14bc062daacdd1be9ec59404b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:30:00Z is after 2025-08-24T17:21:41Z" Dec 06 05:30:00 crc kubenswrapper[4958]: I1206 05:30:00.035577 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5mx5v" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ab56cd4-7270-4252-b6e6-cbc102b84d97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16ff54e7650f6ee3df33d0394fb87b8021695f1697bdf30239e59ab458bea64c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-448jg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5mx5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:30:00Z is after 2025-08-24T17:21:41Z" Dec 06 05:30:00 crc kubenswrapper[4958]: I1206 05:30:00.040622 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:30:00 crc kubenswrapper[4958]: I1206 05:30:00.040706 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:30:00 crc kubenswrapper[4958]: I1206 05:30:00.040726 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:30:00 crc kubenswrapper[4958]: I1206 05:30:00.040787 
4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:30:00 crc kubenswrapper[4958]: I1206 05:30:00.040803 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:30:00Z","lastTransitionTime":"2025-12-06T05:30:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:30:00 crc kubenswrapper[4958]: I1206 05:30:00.053418 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bbf3a5d-dfb7-4f26-a3a5-5a1198cffc78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ac8c174d2f2cc2b18fa67b1969e4ec3130a7a3ea2e1de302b6607334c8295a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a49f9fd54866385a2dd4c0218a2b5a828817807c2688301ea29004ba0578d556\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://799383b7dc23318bd8bdb994e83afadc90864baf5e694f5a7ef2208c6771660c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274
c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c9829c79fbd1ef57f374e8bf829162c2ae48074f4d02cc609d5f9b8791c61a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b2ffa013378c2ca81671e521b37d30af422fd6a418d17e7c2ca14b2da5557d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4001890a8c107a9365e884bae7b1d10f11ea1208ada74b61dcd4cf3db767f3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4001890a8c107a9365e884bae7b1d10f11ea1208ada74b61dcd4cf3db767f3fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-12-06T05:30:00Z is after 2025-08-24T17:21:41Z" Dec 06 05:30:00 crc kubenswrapper[4958]: I1206 05:30:00.072210 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e7d0d99-10e2-4254-82b5-5490222f80fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f512fa9922310920769bab096c2bb8b2305a17dded545923c16b5e417db50b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dddeac64993fef3f326c165eeeb95bb4b5b9961f07f2b935e592b5aac9a51a32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50e80d39fcfd69db193b9e6c98c92a60983dd0c62b061a69f045193110d159c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc69c938c136778135bd2a90d1671af107b76c4571f8e52327c98f120665d7b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:30:00Z is after 2025-08-24T17:21:41Z" Dec 06 05:30:00 crc kubenswrapper[4958]: I1206 05:30:00.143164 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:30:00 crc kubenswrapper[4958]: I1206 05:30:00.143193 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:30:00 crc kubenswrapper[4958]: I1206 05:30:00.143202 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:30:00 crc kubenswrapper[4958]: I1206 05:30:00.143214 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:30:00 crc kubenswrapper[4958]: I1206 05:30:00.143222 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:30:00Z","lastTransitionTime":"2025-12-06T05:30:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:30:00 crc kubenswrapper[4958]: I1206 05:30:00.245559 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:30:00 crc kubenswrapper[4958]: I1206 05:30:00.245587 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:30:00 crc kubenswrapper[4958]: I1206 05:30:00.245595 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:30:00 crc kubenswrapper[4958]: I1206 05:30:00.245606 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:30:00 crc kubenswrapper[4958]: I1206 05:30:00.245614 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:30:00Z","lastTransitionTime":"2025-12-06T05:30:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:30:00 crc kubenswrapper[4958]: I1206 05:30:00.347814 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:30:00 crc kubenswrapper[4958]: I1206 05:30:00.347863 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:30:00 crc kubenswrapper[4958]: I1206 05:30:00.347875 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:30:00 crc kubenswrapper[4958]: I1206 05:30:00.347892 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:30:00 crc kubenswrapper[4958]: I1206 05:30:00.347903 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:30:00Z","lastTransitionTime":"2025-12-06T05:30:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:30:00 crc kubenswrapper[4958]: I1206 05:30:00.450961 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:30:00 crc kubenswrapper[4958]: I1206 05:30:00.451034 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:30:00 crc kubenswrapper[4958]: I1206 05:30:00.451054 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:30:00 crc kubenswrapper[4958]: I1206 05:30:00.451082 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:30:00 crc kubenswrapper[4958]: I1206 05:30:00.451099 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:30:00Z","lastTransitionTime":"2025-12-06T05:30:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:30:00 crc kubenswrapper[4958]: I1206 05:30:00.554181 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:30:00 crc kubenswrapper[4958]: I1206 05:30:00.554256 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:30:00 crc kubenswrapper[4958]: I1206 05:30:00.554275 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:30:00 crc kubenswrapper[4958]: I1206 05:30:00.554718 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:30:00 crc kubenswrapper[4958]: I1206 05:30:00.554768 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:30:00Z","lastTransitionTime":"2025-12-06T05:30:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:30:00 crc kubenswrapper[4958]: I1206 05:30:00.658204 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:30:00 crc kubenswrapper[4958]: I1206 05:30:00.658259 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:30:00 crc kubenswrapper[4958]: I1206 05:30:00.658275 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:30:00 crc kubenswrapper[4958]: I1206 05:30:00.658300 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:30:00 crc kubenswrapper[4958]: I1206 05:30:00.658317 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:30:00Z","lastTransitionTime":"2025-12-06T05:30:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:30:00 crc kubenswrapper[4958]: I1206 05:30:00.761301 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:30:00 crc kubenswrapper[4958]: I1206 05:30:00.761394 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:30:00 crc kubenswrapper[4958]: I1206 05:30:00.761421 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:30:00 crc kubenswrapper[4958]: I1206 05:30:00.761451 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:30:00 crc kubenswrapper[4958]: I1206 05:30:00.761509 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:30:00Z","lastTransitionTime":"2025-12-06T05:30:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:30:00 crc kubenswrapper[4958]: I1206 05:30:00.864112 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:30:00 crc kubenswrapper[4958]: I1206 05:30:00.864179 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:30:00 crc kubenswrapper[4958]: I1206 05:30:00.864196 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:30:00 crc kubenswrapper[4958]: I1206 05:30:00.864221 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:30:00 crc kubenswrapper[4958]: I1206 05:30:00.864240 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:30:00Z","lastTransitionTime":"2025-12-06T05:30:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:30:00 crc kubenswrapper[4958]: I1206 05:30:00.967299 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:30:00 crc kubenswrapper[4958]: I1206 05:30:00.967361 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:30:00 crc kubenswrapper[4958]: I1206 05:30:00.967379 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:30:00 crc kubenswrapper[4958]: I1206 05:30:00.967404 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:30:00 crc kubenswrapper[4958]: I1206 05:30:00.967424 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:30:00Z","lastTransitionTime":"2025-12-06T05:30:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:30:01 crc kubenswrapper[4958]: I1206 05:30:01.069928 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:30:01 crc kubenswrapper[4958]: I1206 05:30:01.070248 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:30:01 crc kubenswrapper[4958]: I1206 05:30:01.070407 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:30:01 crc kubenswrapper[4958]: I1206 05:30:01.070593 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:30:01 crc kubenswrapper[4958]: I1206 05:30:01.070699 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:30:01Z","lastTransitionTime":"2025-12-06T05:30:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:30:01 crc kubenswrapper[4958]: I1206 05:30:01.173543 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:30:01 crc kubenswrapper[4958]: I1206 05:30:01.173835 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:30:01 crc kubenswrapper[4958]: I1206 05:30:01.173923 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:30:01 crc kubenswrapper[4958]: I1206 05:30:01.173993 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:30:01 crc kubenswrapper[4958]: I1206 05:30:01.174059 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:30:01Z","lastTransitionTime":"2025-12-06T05:30:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:30:01 crc kubenswrapper[4958]: I1206 05:30:01.276871 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:30:01 crc kubenswrapper[4958]: I1206 05:30:01.276922 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:30:01 crc kubenswrapper[4958]: I1206 05:30:01.276938 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:30:01 crc kubenswrapper[4958]: I1206 05:30:01.276961 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:30:01 crc kubenswrapper[4958]: I1206 05:30:01.276977 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:30:01Z","lastTransitionTime":"2025-12-06T05:30:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:30:01 crc kubenswrapper[4958]: I1206 05:30:01.379902 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:30:01 crc kubenswrapper[4958]: I1206 05:30:01.379967 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:30:01 crc kubenswrapper[4958]: I1206 05:30:01.379984 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:30:01 crc kubenswrapper[4958]: I1206 05:30:01.380009 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:30:01 crc kubenswrapper[4958]: I1206 05:30:01.380029 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:30:01Z","lastTransitionTime":"2025-12-06T05:30:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:30:01 crc kubenswrapper[4958]: I1206 05:30:01.483261 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:30:01 crc kubenswrapper[4958]: I1206 05:30:01.483337 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:30:01 crc kubenswrapper[4958]: I1206 05:30:01.483368 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:30:01 crc kubenswrapper[4958]: I1206 05:30:01.483401 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:30:01 crc kubenswrapper[4958]: I1206 05:30:01.483426 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:30:01Z","lastTransitionTime":"2025-12-06T05:30:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:30:01 crc kubenswrapper[4958]: I1206 05:30:01.586496 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:30:01 crc kubenswrapper[4958]: I1206 05:30:01.586554 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:30:01 crc kubenswrapper[4958]: I1206 05:30:01.586572 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:30:01 crc kubenswrapper[4958]: I1206 05:30:01.586598 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:30:01 crc kubenswrapper[4958]: I1206 05:30:01.586618 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:30:01Z","lastTransitionTime":"2025-12-06T05:30:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:30:01 crc kubenswrapper[4958]: I1206 05:30:01.689015 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:30:01 crc kubenswrapper[4958]: I1206 05:30:01.689280 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:30:01 crc kubenswrapper[4958]: I1206 05:30:01.689377 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:30:01 crc kubenswrapper[4958]: I1206 05:30:01.689506 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:30:01 crc kubenswrapper[4958]: I1206 05:30:01.689614 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:30:01Z","lastTransitionTime":"2025-12-06T05:30:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:30:01 crc kubenswrapper[4958]: I1206 05:30:01.762084 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 05:30:01 crc kubenswrapper[4958]: I1206 05:30:01.762220 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kb98t" Dec 06 05:30:01 crc kubenswrapper[4958]: I1206 05:30:01.762264 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 05:30:01 crc kubenswrapper[4958]: E1206 05:30:01.762617 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 05:30:01 crc kubenswrapper[4958]: I1206 05:30:01.762926 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 05:30:01 crc kubenswrapper[4958]: E1206 05:30:01.763075 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kb98t" podUID="2c09fca2-7d91-412a-9814-64370d35b3e9" Dec 06 05:30:01 crc kubenswrapper[4958]: E1206 05:30:01.763861 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 05:30:01 crc kubenswrapper[4958]: E1206 05:30:01.764035 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 05:30:01 crc kubenswrapper[4958]: I1206 05:30:01.764724 4958 scope.go:117] "RemoveContainer" containerID="3511a58e1529f892844574e8b2faceca960108d44afb02aa4f882d19471c5519" Dec 06 05:30:01 crc kubenswrapper[4958]: E1206 05:30:01.765002 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-f4swt_openshift-ovn-kubernetes(4c75c3b8-96d9-442e-b3c4-92d10ad33929)\"" pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" podUID="4c75c3b8-96d9-442e-b3c4-92d10ad33929" Dec 06 05:30:01 crc kubenswrapper[4958]: I1206 05:30:01.793795 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:30:01 crc kubenswrapper[4958]: I1206 05:30:01.793888 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:30:01 crc kubenswrapper[4958]: I1206 05:30:01.793917 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:30:01 crc kubenswrapper[4958]: I1206 05:30:01.793950 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:30:01 crc kubenswrapper[4958]: I1206 05:30:01.793969 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:30:01Z","lastTransitionTime":"2025-12-06T05:30:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:30:01 crc kubenswrapper[4958]: I1206 05:30:01.897088 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:30:01 crc kubenswrapper[4958]: I1206 05:30:01.897147 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:30:01 crc kubenswrapper[4958]: I1206 05:30:01.897164 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:30:01 crc kubenswrapper[4958]: I1206 05:30:01.897189 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:30:01 crc kubenswrapper[4958]: I1206 05:30:01.897210 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:30:01Z","lastTransitionTime":"2025-12-06T05:30:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:30:01 crc kubenswrapper[4958]: I1206 05:30:01.999760 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:30:02 crc kubenswrapper[4958]: I1206 05:30:01.999830 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:30:02 crc kubenswrapper[4958]: I1206 05:30:01.999853 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:30:02 crc kubenswrapper[4958]: I1206 05:30:01.999882 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:30:02 crc kubenswrapper[4958]: I1206 05:30:01.999902 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:30:01Z","lastTransitionTime":"2025-12-06T05:30:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:30:02 crc kubenswrapper[4958]: I1206 05:30:02.102384 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:30:02 crc kubenswrapper[4958]: I1206 05:30:02.102446 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:30:02 crc kubenswrapper[4958]: I1206 05:30:02.102464 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:30:02 crc kubenswrapper[4958]: I1206 05:30:02.102532 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:30:02 crc kubenswrapper[4958]: I1206 05:30:02.102549 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:30:02Z","lastTransitionTime":"2025-12-06T05:30:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:30:02 crc kubenswrapper[4958]: I1206 05:30:02.205155 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:30:02 crc kubenswrapper[4958]: I1206 05:30:02.205588 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:30:02 crc kubenswrapper[4958]: I1206 05:30:02.205815 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:30:02 crc kubenswrapper[4958]: I1206 05:30:02.206025 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:30:02 crc kubenswrapper[4958]: I1206 05:30:02.206298 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:30:02Z","lastTransitionTime":"2025-12-06T05:30:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:30:02 crc kubenswrapper[4958]: I1206 05:30:02.309870 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:30:02 crc kubenswrapper[4958]: I1206 05:30:02.309936 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:30:02 crc kubenswrapper[4958]: I1206 05:30:02.309958 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:30:02 crc kubenswrapper[4958]: I1206 05:30:02.309988 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:30:02 crc kubenswrapper[4958]: I1206 05:30:02.310009 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:30:02Z","lastTransitionTime":"2025-12-06T05:30:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:30:02 crc kubenswrapper[4958]: I1206 05:30:02.381439 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-wr7h5_fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7/kube-multus/1.log" Dec 06 05:30:02 crc kubenswrapper[4958]: I1206 05:30:02.382254 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-wr7h5_fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7/kube-multus/0.log" Dec 06 05:30:02 crc kubenswrapper[4958]: I1206 05:30:02.382331 4958 generic.go:334] "Generic (PLEG): container finished" podID="fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7" containerID="5ecb41e730cf6bf34bee3ac4577e76690a314f9466e2bd69d896f99091c3657c" exitCode=1 Dec 06 05:30:02 crc kubenswrapper[4958]: I1206 05:30:02.382394 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-wr7h5" event={"ID":"fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7","Type":"ContainerDied","Data":"5ecb41e730cf6bf34bee3ac4577e76690a314f9466e2bd69d896f99091c3657c"} Dec 06 05:30:02 crc kubenswrapper[4958]: I1206 05:30:02.382509 4958 scope.go:117] "RemoveContainer" containerID="24bb15c5aabe30ec2b8cea2cfd92c4a99c8e2170aecfe6cf33f745ecccf32c42" Dec 06 05:30:02 crc kubenswrapper[4958]: I1206 05:30:02.383141 4958 scope.go:117] "RemoveContainer" containerID="5ecb41e730cf6bf34bee3ac4577e76690a314f9466e2bd69d896f99091c3657c" Dec 06 05:30:02 crc kubenswrapper[4958]: E1206 05:30:02.383414 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-wr7h5_openshift-multus(fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7)\"" pod="openshift-multus/multus-wr7h5" podUID="fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7" Dec 06 05:30:02 crc kubenswrapper[4958]: I1206 05:30:02.413380 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:30:02 crc kubenswrapper[4958]: I1206 05:30:02.413744 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:30:02 crc kubenswrapper[4958]: I1206 05:30:02.413755 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:30:02 crc kubenswrapper[4958]: I1206 05:30:02.413770 
4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:30:02 crc kubenswrapper[4958]: I1206 05:30:02.413779 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:30:02Z","lastTransitionTime":"2025-12-06T05:30:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 06 05:30:02 crc kubenswrapper[4958]: I1206 05:30:02.417105 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c3942544-663d-452b-851f-7ae1951315d0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0c605d49963ffec3a03f9166f2a4b945adca772c1e6f9ef0c43e3b43498bb520\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fa2e5229064a9479f641c0c6d934e8eda769ee88be13401c43ae136ce8bc24\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0fa5b094aed0445fa892dd
4d08fa83de565bda82c6ca9b9d536a3275df19b2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9356ce9f50a037c7bbb99085c8d3a1cf6648fca53bed4eb86d1be5cb82386bfe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bb5270cbb6f2e29e1393dc0724b38332f5d2b7bd2947352c848a22efbcd979c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4460d6ee5ad193a5fdd65c4ddf2e2837db3e1a0a5f8f32b84ad83cc74cc6c779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4460d6ee5ad193a5fdd65c4ddf2e2837db3e1a0a5f8f32b84ad83cc74cc6c779\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2d0adb65980c4e04df9b38b248a2d6bf58ac214ec2fe6ee6f8fbee38dc114904\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2d0adb65980c4e04df9b38b248a2d6bf58ac214ec2fe6ee6f8fbee38dc114904\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://b4a1b86d7480be2b839295c4c7dc60abd79e17d12514d7a7f90453498dd586e5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b4a1b86d7480be2b839295c4c7dc60abd79e17d12514d7a7f90453498dd586e5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:30:02Z is after 2025-08-24T17:21:41Z" Dec 06 05:30:02 crc kubenswrapper[4958]: I1206 05:30:02.430414 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:30:02Z is after 2025-08-24T17:21:41Z" Dec 06 05:30:02 crc kubenswrapper[4958]: I1206 05:30:02.441753 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-bxxwq" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"802ce14e-baaf-4d0d-87e3-3457209d8cda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://691435e89fcfa5b2dbd5ee54379f791a3e4f4df43c500a83bc61f7b20aca7ae1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wlz6h\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126
.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-bxxwq\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:30:02Z is after 2025-08-24T17:21:41Z" Dec 06 05:30:02 crc kubenswrapper[4958]: I1206 05:30:02.454874 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-j25xk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f69a7b70-83e2-4775-872c-cc414f3d08ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3077eaf7e49467b9443569425f7666246ccd40207193f35407bd5ca69799be6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qktwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c146f2ee32c0a80c9bb07f7b61d094a36ba1bdd250b578479193dc8e50dc7cad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/va
r/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qktwz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:42Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-j25xk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:30:02Z is after 2025-08-24T17:21:41Z" Dec 06 05:30:02 crc kubenswrapper[4958]: I1206 05:30:02.468249 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:30:02Z is after 2025-08-24T17:21:41Z" Dec 06 05:30:02 crc kubenswrapper[4958]: I1206 05:30:02.480274 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:49Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://12ef2ec44908dbfbd456d6a97cb50054adb79c14bc062daacdd1be9ec59404b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:30:02Z is after 2025-08-24T17:21:41Z" Dec 06 05:30:02 crc kubenswrapper[4958]: I1206 05:30:02.492114 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-5mx5v" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4ab56cd4-7270-4252-b6e6-cbc102b84d97\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://16ff54e7650f6ee3df33d0394fb87b8021695f1697bdf30239e59ab458bea64c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-448jg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-5mx5v\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:30:02Z is after 2025-08-24T17:21:41Z" Dec 06 05:30:02 crc kubenswrapper[4958]: I1206 05:30:02.511986 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-wr7h5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:30:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5ecb41e730cf6bf34bee3ac4577e76690a314f9466e2bd69d896f99091c3657c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://24bb15c5aabe30ec2b8cea2cfd92c4a99c8e2170aecfe6cf33f745ecccf32c42\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T05:29:15Z\\\",\\\"message\\\":\\\"2025-12-06T05:28:30+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_38a7603c-7295-4bd3-af02-27bf794eed27\\\\n2025-12-06T05:28:30+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_38a7603c-7295-4bd3-af02-27bf794eed27 to /host/opt/cni/bin/\\\\n2025-12-06T05:28:30Z [verbose] multus-daemon started\\\\n2025-12-06T05:28:30Z [verbose] Readiness Indicator file check\\\\n2025-12-06T05:29:15Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5ecb41e730cf6bf34bee3ac4577e76690a314f9466e2bd69d896f99091c3657c\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T05:30:01Z\\\",\\\"message\\\":\\\"2025-12-06T05:29:16+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_97ff0bdc-2ab1-4109-afbc-41da3661d6fb\\\\n2025-12-06T05:29:16+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_97ff0bdc-2ab1-4109-afbc-41da3661d6fb to /host/opt/cni/bin/\\\\n2025-12-06T05:29:16Z [verbose] multus-daemon started\\\\n2025-12-06T05:29:16Z [verbose] Readiness Indicator file check\\\\n2025-12-06T05:30:01Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T05:29:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-778ks\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-wr7h5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:30:02Z is after 2025-08-24T17:21:41Z" Dec 06 05:30:02 crc kubenswrapper[4958]: I1206 05:30:02.516342 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:30:02 crc kubenswrapper[4958]: I1206 05:30:02.516412 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:30:02 crc kubenswrapper[4958]: I1206 05:30:02.516434 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:30:02 crc kubenswrapper[4958]: I1206 05:30:02.516460 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:30:02 crc kubenswrapper[4958]: I1206 05:30:02.516511 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:30:02Z","lastTransitionTime":"2025-12-06T05:30:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:30:02 crc kubenswrapper[4958]: I1206 05:30:02.527190 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bbf3a5d-dfb7-4f26-a3a5-5a1198cffc78\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e1ac8c174d2f2cc2b18fa67b1969e4ec3130a7a3ea2e1de302b6607334c8295a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a49f9fd54866385a2dd4c0218a2b5a828817807c2688301ea29004ba0578d556\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://799383b7dc23318bd8bdb994e83afadc90864baf5e694f5a7ef2208c6771660c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://11c9829c79fbd1ef57f374e8bf829162c2ae48074f4d02cc609d5f9b8791c61a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:13Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9b2ffa013378c2ca81671e521b37d30af422fd6a418d17e7c2ca14b2da5557d9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4001890a8c107a9365e884bae7b1d10f11ea1208ada74b61dcd4cf3db767f3fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4001890a8c107a9365e884bae7b1d10f11ea1208ada74b61dcd4cf3db767f3fb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:30:02Z is after 2025-08-24T17:21:41Z" Dec 06 05:30:02 crc kubenswrapper[4958]: I1206 05:30:02.542566 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4e7d0d99-10e2-4254-82b5-5490222f80fb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f512fa9922310920769bab096c2bb8b2305a17dded545923c16b5e417db50b77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dddeac64993fef3f326c165eeeb95bb4b5b9961f07f2b935e592b5aac9a51a32\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://50e80d39fcfd69db193b9e6c98c92a60983dd0c62b061a69f045193110d159c9\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bc69c938c136778135bd2a90d1671af107b76c4571f8e52327c98f120665d7b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:30:02Z is after 2025-08-24T17:21:41Z" Dec 06 05:30:02 crc kubenswrapper[4958]: I1206 05:30:02.565824 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-cxvtt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"48b90387-4ea9-47a3-8aa1-b8b19c5ed186\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce9f0a37477909d730255694146829d5d4527e3c3c3928bdd475b58506bbd544\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6d3ce1b34e648dbc34656fb0430468d6a998992cdb52f7858c3e
9032781cabf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d3ce1b34e648dbc34656fb0430468d6a998992cdb52f7858c3e9032781cabf3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://45259e88c3c2b9523b20a34f02fb59aa21e487dd207c8cb6e29ac571d89cd75c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://45259e88c3c2b9523b20a34f02fb59aa21e487dd207c8cb6e29ac571d89cd75c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5907422cf6b36b8d77741d339e507eee8a26c62b51c03458fbab243363441b49\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5907422cf6b36b8d77741d339e507eee8a26c62b51c03458fbab243363441b49\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-b
inary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fdba082661b1dd38d5628fedac7afcea2e2960d502c61b9c21c723c0f6be8dab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fdba082661b1dd38d5628fedac7afcea2e2960d502c61b9c21c723c0f6be8dab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://afdd138a15c3949f3eb5cdcac4d372fe8db3f1822f9fa33eb7f211ba93764903\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://afdd138a15c3949f3eb5cdcac4d372fe8db3f1822f9fa33eb7f211ba93764903\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://09ba0a78226b9b3b3b12202389168f7e17a1f2ba992a22dc9d436d29aac0d327\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termin
ated\\\":{\\\"containerID\\\":\\\"cri-o://09ba0a78226b9b3b3b12202389168f7e17a1f2ba992a22dc9d436d29aac0d327\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pzwkj\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-cxvtt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:30:02Z is after 2025-08-24T17:21:41Z" Dec 06 05:30:02 crc kubenswrapper[4958]: I1206 05:30:02.594694 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4c75c3b8-96d9-442e-b3c4-92d10ad33929\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9c7cf12b8a7e5da199c6d8c5bdeb41243c911cef6cab0bac81e407502a8ff144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e6e7454c626f79f7ef231e33e2917e2a65265fc21ccf0bb078c9b7f5b3cec240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f87617d1f4b9c7ad893a539ddf123f3e213d38e44c084441f565d6535078c7f0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08f6d7aa9cdfe8211f420e0a25851c58e125bd09b9d90ad4724703cc1112e45b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://473d669d19d2507144f95718dc331ea994a36254f7f7b6e2b262a0e9cb46bfc2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1ac1117aba254b7e048d9dd3feff2587d218f6d7a6f4ccbe455eb2b19e534aa7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3511a58e1529f892844574e8b2faceca960108d4
4afb02aa4f882d19471c5519\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3511a58e1529f892844574e8b2faceca960108d44afb02aa4f882d19471c5519\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-06T05:29:31Z\\\",\\\"message\\\":\\\"ces.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI1206 05:29:30.743372 6921 services_controller.go:452] Built service openshift-apiserver/api per-node LB for network=default: []services.LB{}\\\\nI1206 05:29:30.743368 6921 services_controller.go:443] Built service openshift-console-operator/metrics LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.4.88\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:443, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI1206 05:29:30.743388 6921 services_controller.go:453] Built service openshift-apiserver/api template LB for network=default: []services.LB{}\\\\nI1206 05:29:30.743398 6921 services_controller.go:444] Built service openshift-console-operator/metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nF1206 05:29:30.742791 6921 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-06T05:29:29Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-f4swt_openshift-ovn-kubernetes(4c75c3b8-96d9-442e-b3c4-92d10ad33929)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://90354a506bfb2c066060aaa6c9c48c8fe36acc7b3fb7f76905c7aa2622c8c93e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f16219fb96bfa08d1c4e1fbacac446f07ea75a221d1a6a4eddadd9e587f75ea9\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f16219fb96bfa08d1c4e1fbacac446f07ea75a221d1a6a4eddadd9e587f75ea9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k5mzl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:30Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-f4swt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:30:02Z is after 2025-08-24T17:21:41Z" Dec 06 05:30:02 crc kubenswrapper[4958]: I1206 05:30:02.609962 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2ba128f4-2e42-45be-bda0-0253dd5502d4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74f69d05943422b54bfdaf337dabe66d5a16adff7bcd0aec89231dfaedb42ade\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7af93ebbf44bd284a93e374862eb9b1f0ca1e2426eb884
72157c5dcabf8c63b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7af93ebbf44bd284a93e374862eb9b1f0ca1e2426eb88472157c5dcabf8c63b3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:30:02Z is after 2025-08-24T17:21:41Z" Dec 06 05:30:02 crc kubenswrapper[4958]: I1206 05:30:02.618939 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:30:02 crc kubenswrapper[4958]: I1206 05:30:02.618978 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:30:02 crc kubenswrapper[4958]: I1206 05:30:02.618990 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 06 05:30:02 crc kubenswrapper[4958]: I1206 05:30:02.619006 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 06 05:30:02 crc kubenswrapper[4958]: I1206 05:30:02.619018 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:30:02Z","lastTransitionTime":"2025-12-06T05:30:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 06 05:30:02 crc kubenswrapper[4958]: I1206 05:30:02.626445 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"326ec8f3-f884-4d81-85c0-0fb98ff16b80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2ab8b97e84e2f0ddf99e989d15f84695ae263c722141d62885129f9eb48226a5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bfa163f9e3eb5b2418860c0fa7fcbf96990229ac20da3d72cf18d3c326f466b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://15f70e4124ff7d2705704dab6bf80d6ee77c2181e9bb400f7af227e9c18f0d20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aabbafda3fdfc2434de99224b35e72d2d536ed8b4af7dbdfa16058ab7f638531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aabbafda3fdfc2434de99224b35e72d2d536ed8b4af7dbdfa16058ab7f638531\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-06T05:28:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-06T05:28:10Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:09Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:30:02Z is after 2025-08-24T17:21:41Z" Dec 06 05:30:02 crc kubenswrapper[4958]: I1206 05:30:02.640389 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:46Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://22338e19d740d76b4395405aae08f7cc69ca43dcf680a2bac1323c989e74aa02\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:30:02Z is after 2025-08-24T17:21:41Z" Dec 06 05:30:02 crc kubenswrapper[4958]: I1206 05:30:02.653153 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:27Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:30:02Z is after 2025-08-24T17:21:41Z" Dec 06 05:30:02 crc kubenswrapper[4958]: I1206 05:30:02.668551 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://85e394f19d7c9a454876e48621768d9f956790321aeea3140db17586f532e28c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7cba9186696fbc52dacdd8c02c28a44f8e4f11e2bd39065212cddf3acdd7f2d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:30:02Z is after 2025-08-24T17:21:41Z" Dec 06 05:30:02 crc kubenswrapper[4958]: I1206 05:30:02.685193 4958 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c13528c0-da5d-4d55-9155-2c29c33edfc4\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:29Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5cd9b0b70572f52b50e426a747be79d9f46795a9f5e890f4c6837ba67a34188\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd2256c5c47e4b4d5c0d27e0a687c531d3d4e2162c3439a57acb90784599b441\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-06T05:28:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-v95sc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:29Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5ktnh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:30:02Z is after 2025-08-24T17:21:41Z" Dec 06 05:30:02 crc kubenswrapper[4958]: I1206 05:30:02.699627 4958 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-kb98t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c09fca2-7d91-412a-9814-64370d35b3e9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-06T05:28:43Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pscv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2pscv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-06T05:28:43Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kb98t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-06T05:30:02Z is after 2025-08-24T17:21:41Z" Dec 06 05:30:02 crc kubenswrapper[4958]: I1206 05:30:02.722442 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 06 05:30:02 crc kubenswrapper[4958]: I1206 05:30:02.722529 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 06 05:30:02 crc kubenswrapper[4958]: I1206 
05:30:02.722552 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 05:30:02 crc kubenswrapper[4958]: I1206 05:30:02.722578 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 05:30:02 crc kubenswrapper[4958]: I1206 05:30:02.722597 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:30:02Z","lastTransitionTime":"2025-12-06T05:30:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 05:30:02 crc kubenswrapper[4958]: I1206 05:30:02.825804 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 05:30:02 crc kubenswrapper[4958]: I1206 05:30:02.825875 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 05:30:02 crc kubenswrapper[4958]: I1206 05:30:02.825900 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 05:30:02 crc kubenswrapper[4958]: I1206 05:30:02.825931 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 05:30:02 crc kubenswrapper[4958]: I1206 05:30:02.825955 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:30:02Z","lastTransitionTime":"2025-12-06T05:30:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 05:30:02 crc kubenswrapper[4958]: I1206 05:30:02.929895 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 05:30:02 crc kubenswrapper[4958]: I1206 05:30:02.929980 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 05:30:02 crc kubenswrapper[4958]: I1206 05:30:02.930009 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 05:30:02 crc kubenswrapper[4958]: I1206 05:30:02.930077 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 05:30:02 crc kubenswrapper[4958]: I1206 05:30:02.930121 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:30:02Z","lastTransitionTime":"2025-12-06T05:30:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 05:30:03 crc kubenswrapper[4958]: I1206 05:30:03.032819 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 05:30:03 crc kubenswrapper[4958]: I1206 05:30:03.032880 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 05:30:03 crc kubenswrapper[4958]: I1206 05:30:03.032907 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 05:30:03 crc kubenswrapper[4958]: I1206 05:30:03.032941 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 05:30:03 crc kubenswrapper[4958]: I1206 05:30:03.032964 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:30:03Z","lastTransitionTime":"2025-12-06T05:30:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 05:30:03 crc kubenswrapper[4958]: I1206 05:30:03.135245 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 05:30:03 crc kubenswrapper[4958]: I1206 05:30:03.135281 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 05:30:03 crc kubenswrapper[4958]: I1206 05:30:03.135293 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 05:30:03 crc kubenswrapper[4958]: I1206 05:30:03.135311 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 05:30:03 crc kubenswrapper[4958]: I1206 05:30:03.135335 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:30:03Z","lastTransitionTime":"2025-12-06T05:30:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 05:30:03 crc kubenswrapper[4958]: I1206 05:30:03.237946 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 05:30:03 crc kubenswrapper[4958]: I1206 05:30:03.237997 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 05:30:03 crc kubenswrapper[4958]: I1206 05:30:03.238012 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 05:30:03 crc kubenswrapper[4958]: I1206 05:30:03.238035 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 05:30:03 crc kubenswrapper[4958]: I1206 05:30:03.238049 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:30:03Z","lastTransitionTime":"2025-12-06T05:30:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 05:30:03 crc kubenswrapper[4958]: I1206 05:30:03.340544 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 05:30:03 crc kubenswrapper[4958]: I1206 05:30:03.340623 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 05:30:03 crc kubenswrapper[4958]: I1206 05:30:03.340647 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 05:30:03 crc kubenswrapper[4958]: I1206 05:30:03.340676 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 05:30:03 crc kubenswrapper[4958]: I1206 05:30:03.340696 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:30:03Z","lastTransitionTime":"2025-12-06T05:30:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 05:30:03 crc kubenswrapper[4958]: I1206 05:30:03.389051 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-wr7h5_fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7/kube-multus/1.log"
Dec 06 05:30:03 crc kubenswrapper[4958]: I1206 05:30:03.444207 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 05:30:03 crc kubenswrapper[4958]: I1206 05:30:03.444305 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 05:30:03 crc kubenswrapper[4958]: I1206 05:30:03.444364 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 05:30:03 crc kubenswrapper[4958]: I1206 05:30:03.444389 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 05:30:03 crc kubenswrapper[4958]: I1206 05:30:03.444406 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:30:03Z","lastTransitionTime":"2025-12-06T05:30:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 05:30:03 crc kubenswrapper[4958]: I1206 05:30:03.547711 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 05:30:03 crc kubenswrapper[4958]: I1206 05:30:03.547778 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 05:30:03 crc kubenswrapper[4958]: I1206 05:30:03.547801 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 05:30:03 crc kubenswrapper[4958]: I1206 05:30:03.547829 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 05:30:03 crc kubenswrapper[4958]: I1206 05:30:03.547855 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:30:03Z","lastTransitionTime":"2025-12-06T05:30:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 05:30:03 crc kubenswrapper[4958]: I1206 05:30:03.650762 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 05:30:03 crc kubenswrapper[4958]: I1206 05:30:03.650821 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 05:30:03 crc kubenswrapper[4958]: I1206 05:30:03.650841 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 05:30:03 crc kubenswrapper[4958]: I1206 05:30:03.650866 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 05:30:03 crc kubenswrapper[4958]: I1206 05:30:03.650884 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:30:03Z","lastTransitionTime":"2025-12-06T05:30:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 05:30:03 crc kubenswrapper[4958]: I1206 05:30:03.754052 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 05:30:03 crc kubenswrapper[4958]: I1206 05:30:03.754127 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 05:30:03 crc kubenswrapper[4958]: I1206 05:30:03.754160 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 05:30:03 crc kubenswrapper[4958]: I1206 05:30:03.754188 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 05:30:03 crc kubenswrapper[4958]: I1206 05:30:03.754210 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:30:03Z","lastTransitionTime":"2025-12-06T05:30:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 05:30:03 crc kubenswrapper[4958]: I1206 05:30:03.762631 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kb98t"
Dec 06 05:30:03 crc kubenswrapper[4958]: I1206 05:30:03.762693 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 06 05:30:03 crc kubenswrapper[4958]: E1206 05:30:03.762793 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kb98t" podUID="2c09fca2-7d91-412a-9814-64370d35b3e9"
Dec 06 05:30:03 crc kubenswrapper[4958]: I1206 05:30:03.762865 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 06 05:30:03 crc kubenswrapper[4958]: I1206 05:30:03.762866 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 06 05:30:03 crc kubenswrapper[4958]: E1206 05:30:03.763004 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 06 05:30:03 crc kubenswrapper[4958]: E1206 05:30:03.763172 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 06 05:30:03 crc kubenswrapper[4958]: E1206 05:30:03.763334 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 06 05:30:03 crc kubenswrapper[4958]: I1206 05:30:03.856916 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 05:30:03 crc kubenswrapper[4958]: I1206 05:30:03.856966 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 05:30:03 crc kubenswrapper[4958]: I1206 05:30:03.856976 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 05:30:03 crc kubenswrapper[4958]: I1206 05:30:03.856991 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 05:30:03 crc kubenswrapper[4958]: I1206 05:30:03.857000 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:30:03Z","lastTransitionTime":"2025-12-06T05:30:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 05:30:03 crc kubenswrapper[4958]: I1206 05:30:03.959634 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 05:30:03 crc kubenswrapper[4958]: I1206 05:30:03.959703 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 05:30:03 crc kubenswrapper[4958]: I1206 05:30:03.959726 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 05:30:03 crc kubenswrapper[4958]: I1206 05:30:03.959755 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 05:30:03 crc kubenswrapper[4958]: I1206 05:30:03.959777 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:30:03Z","lastTransitionTime":"2025-12-06T05:30:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 05:30:04 crc kubenswrapper[4958]: I1206 05:30:04.062500 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 05:30:04 crc kubenswrapper[4958]: I1206 05:30:04.062538 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 05:30:04 crc kubenswrapper[4958]: I1206 05:30:04.062549 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 05:30:04 crc kubenswrapper[4958]: I1206 05:30:04.062564 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 05:30:04 crc kubenswrapper[4958]: I1206 05:30:04.062576 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:30:04Z","lastTransitionTime":"2025-12-06T05:30:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 05:30:04 crc kubenswrapper[4958]: I1206 05:30:04.165865 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 05:30:04 crc kubenswrapper[4958]: I1206 05:30:04.165912 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 05:30:04 crc kubenswrapper[4958]: I1206 05:30:04.165929 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 05:30:04 crc kubenswrapper[4958]: I1206 05:30:04.165951 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 05:30:04 crc kubenswrapper[4958]: I1206 05:30:04.165968 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:30:04Z","lastTransitionTime":"2025-12-06T05:30:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 05:30:04 crc kubenswrapper[4958]: I1206 05:30:04.269209 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 05:30:04 crc kubenswrapper[4958]: I1206 05:30:04.269261 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 05:30:04 crc kubenswrapper[4958]: I1206 05:30:04.269279 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 05:30:04 crc kubenswrapper[4958]: I1206 05:30:04.269305 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 05:30:04 crc kubenswrapper[4958]: I1206 05:30:04.269324 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:30:04Z","lastTransitionTime":"2025-12-06T05:30:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 05:30:04 crc kubenswrapper[4958]: I1206 05:30:04.372616 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 05:30:04 crc kubenswrapper[4958]: I1206 05:30:04.372678 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 05:30:04 crc kubenswrapper[4958]: I1206 05:30:04.372698 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 05:30:04 crc kubenswrapper[4958]: I1206 05:30:04.372723 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 05:30:04 crc kubenswrapper[4958]: I1206 05:30:04.372740 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:30:04Z","lastTransitionTime":"2025-12-06T05:30:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 05:30:04 crc kubenswrapper[4958]: I1206 05:30:04.475461 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 05:30:04 crc kubenswrapper[4958]: I1206 05:30:04.475534 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 05:30:04 crc kubenswrapper[4958]: I1206 05:30:04.475649 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 05:30:04 crc kubenswrapper[4958]: I1206 05:30:04.475683 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 05:30:04 crc kubenswrapper[4958]: I1206 05:30:04.475703 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:30:04Z","lastTransitionTime":"2025-12-06T05:30:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 05:30:04 crc kubenswrapper[4958]: I1206 05:30:04.579834 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 05:30:04 crc kubenswrapper[4958]: I1206 05:30:04.579903 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 05:30:04 crc kubenswrapper[4958]: I1206 05:30:04.579924 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 05:30:04 crc kubenswrapper[4958]: I1206 05:30:04.579949 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 05:30:04 crc kubenswrapper[4958]: I1206 05:30:04.579966 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:30:04Z","lastTransitionTime":"2025-12-06T05:30:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 05:30:04 crc kubenswrapper[4958]: I1206 05:30:04.683965 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 05:30:04 crc kubenswrapper[4958]: I1206 05:30:04.684037 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 05:30:04 crc kubenswrapper[4958]: I1206 05:30:04.684053 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 05:30:04 crc kubenswrapper[4958]: I1206 05:30:04.684078 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 05:30:04 crc kubenswrapper[4958]: I1206 05:30:04.684094 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:30:04Z","lastTransitionTime":"2025-12-06T05:30:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 05:30:04 crc kubenswrapper[4958]: I1206 05:30:04.786584 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 05:30:04 crc kubenswrapper[4958]: I1206 05:30:04.786664 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 05:30:04 crc kubenswrapper[4958]: I1206 05:30:04.786683 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 05:30:04 crc kubenswrapper[4958]: I1206 05:30:04.786707 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 05:30:04 crc kubenswrapper[4958]: I1206 05:30:04.786724 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:30:04Z","lastTransitionTime":"2025-12-06T05:30:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 05:30:04 crc kubenswrapper[4958]: I1206 05:30:04.890094 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 05:30:04 crc kubenswrapper[4958]: I1206 05:30:04.890161 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 05:30:04 crc kubenswrapper[4958]: I1206 05:30:04.890179 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 05:30:04 crc kubenswrapper[4958]: I1206 05:30:04.890205 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 05:30:04 crc kubenswrapper[4958]: I1206 05:30:04.890223 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:30:04Z","lastTransitionTime":"2025-12-06T05:30:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 05:30:04 crc kubenswrapper[4958]: I1206 05:30:04.992824 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 05:30:04 crc kubenswrapper[4958]: I1206 05:30:04.992873 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 05:30:04 crc kubenswrapper[4958]: I1206 05:30:04.992890 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 05:30:04 crc kubenswrapper[4958]: I1206 05:30:04.992912 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 05:30:04 crc kubenswrapper[4958]: I1206 05:30:04.992930 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:30:04Z","lastTransitionTime":"2025-12-06T05:30:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 05:30:05 crc kubenswrapper[4958]: I1206 05:30:05.096077 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 05:30:05 crc kubenswrapper[4958]: I1206 05:30:05.096131 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 05:30:05 crc kubenswrapper[4958]: I1206 05:30:05.096149 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 05:30:05 crc kubenswrapper[4958]: I1206 05:30:05.096171 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 05:30:05 crc kubenswrapper[4958]: I1206 05:30:05.096186 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:30:05Z","lastTransitionTime":"2025-12-06T05:30:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 05:30:05 crc kubenswrapper[4958]: I1206 05:30:05.198229 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 05:30:05 crc kubenswrapper[4958]: I1206 05:30:05.198264 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 05:30:05 crc kubenswrapper[4958]: I1206 05:30:05.198273 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 05:30:05 crc kubenswrapper[4958]: I1206 05:30:05.198284 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 05:30:05 crc kubenswrapper[4958]: I1206 05:30:05.198295 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:30:05Z","lastTransitionTime":"2025-12-06T05:30:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 05:30:05 crc kubenswrapper[4958]: I1206 05:30:05.300657 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 05:30:05 crc kubenswrapper[4958]: I1206 05:30:05.300798 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 05:30:05 crc kubenswrapper[4958]: I1206 05:30:05.300828 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 05:30:05 crc kubenswrapper[4958]: I1206 05:30:05.300889 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 05:30:05 crc kubenswrapper[4958]: I1206 05:30:05.300908 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:30:05Z","lastTransitionTime":"2025-12-06T05:30:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 05:30:05 crc kubenswrapper[4958]: I1206 05:30:05.362816 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 06 05:30:05 crc kubenswrapper[4958]: I1206 05:30:05.362878 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 06 05:30:05 crc kubenswrapper[4958]: I1206 05:30:05.362892 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 06 05:30:05 crc kubenswrapper[4958]: I1206 05:30:05.362909 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 06 05:30:05 crc kubenswrapper[4958]: I1206 05:30:05.362922 4958 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-06T05:30:05Z","lastTransitionTime":"2025-12-06T05:30:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 06 05:30:05 crc kubenswrapper[4958]: I1206 05:30:05.426235 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-ml2lr"]
Dec 06 05:30:05 crc kubenswrapper[4958]: I1206 05:30:05.426809 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ml2lr"
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ml2lr" Dec 06 05:30:05 crc kubenswrapper[4958]: I1206 05:30:05.431152 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Dec 06 05:30:05 crc kubenswrapper[4958]: I1206 05:30:05.431405 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Dec 06 05:30:05 crc kubenswrapper[4958]: I1206 05:30:05.431638 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Dec 06 05:30:05 crc kubenswrapper[4958]: I1206 05:30:05.433666 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Dec 06 05:30:05 crc kubenswrapper[4958]: I1206 05:30:05.466409 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-5mx5v" podStartSLOduration=96.466382211 podStartE2EDuration="1m36.466382211s" podCreationTimestamp="2025-12-06 05:28:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:30:05.466029732 +0000 UTC m=+115.999800505" watchObservedRunningTime="2025-12-06 05:30:05.466382211 +0000 UTC m=+116.000153024" Dec 06 05:30:05 crc kubenswrapper[4958]: I1206 05:30:05.524130 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=95.524109947 podStartE2EDuration="1m35.524109947s" podCreationTimestamp="2025-12-06 05:28:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:30:05.522890705 +0000 UTC m=+116.056661488" watchObservedRunningTime="2025-12-06 05:30:05.524109947 +0000 UTC m=+116.057880720" Dec 06 05:30:05 crc kubenswrapper[4958]: I1206 05:30:05.563692 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6250dc0b-e97b-4b6f-a311-2eecda273c4d-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-ml2lr\" (UID: \"6250dc0b-e97b-4b6f-a311-2eecda273c4d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ml2lr" Dec 06 05:30:05 crc kubenswrapper[4958]: I1206 05:30:05.563748 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6250dc0b-e97b-4b6f-a311-2eecda273c4d-service-ca\") pod \"cluster-version-operator-5c965bbfc6-ml2lr\" (UID: \"6250dc0b-e97b-4b6f-a311-2eecda273c4d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ml2lr" Dec 06 05:30:05 crc kubenswrapper[4958]: I1206 05:30:05.563765 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/6250dc0b-e97b-4b6f-a311-2eecda273c4d-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-ml2lr\" (UID: \"6250dc0b-e97b-4b6f-a311-2eecda273c4d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ml2lr" Dec 06 05:30:05 crc kubenswrapper[4958]: I1206 05:30:05.563784 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" 
(UniqueName: \"kubernetes.io/projected/6250dc0b-e97b-4b6f-a311-2eecda273c4d-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-ml2lr\" (UID: \"6250dc0b-e97b-4b6f-a311-2eecda273c4d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ml2lr" Dec 06 05:30:05 crc kubenswrapper[4958]: I1206 05:30:05.563814 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/6250dc0b-e97b-4b6f-a311-2eecda273c4d-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-ml2lr\" (UID: \"6250dc0b-e97b-4b6f-a311-2eecda273c4d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ml2lr" Dec 06 05:30:05 crc kubenswrapper[4958]: I1206 05:30:05.571105 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=67.571092784 podStartE2EDuration="1m7.571092784s" podCreationTimestamp="2025-12-06 05:28:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:30:05.570086947 +0000 UTC m=+116.103857720" watchObservedRunningTime="2025-12-06 05:30:05.571092784 +0000 UTC m=+116.104863547" Dec 06 05:30:05 crc kubenswrapper[4958]: I1206 05:30:05.571406 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=97.571400922 podStartE2EDuration="1m37.571400922s" podCreationTimestamp="2025-12-06 05:28:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:30:05.557268133 +0000 UTC m=+116.091038886" watchObservedRunningTime="2025-12-06 05:30:05.571400922 +0000 UTC m=+116.105171685" Dec 06 05:30:05 crc kubenswrapper[4958]: I1206 05:30:05.660052 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podStartSLOduration=96.660034275 podStartE2EDuration="1m36.660034275s" podCreationTimestamp="2025-12-06 05:28:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:30:05.636262254 +0000 UTC m=+116.170033017" watchObservedRunningTime="2025-12-06 05:30:05.660034275 +0000 UTC m=+116.193805048" Dec 06 05:30:05 crc kubenswrapper[4958]: I1206 05:30:05.660530 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-cxvtt" podStartSLOduration=96.660520707 podStartE2EDuration="1m36.660520707s" podCreationTimestamp="2025-12-06 05:28:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:30:05.658227638 +0000 UTC m=+116.191998411" watchObservedRunningTime="2025-12-06 05:30:05.660520707 +0000 UTC m=+116.194291520" Dec 06 05:30:05 crc kubenswrapper[4958]: I1206 05:30:05.664392 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/6250dc0b-e97b-4b6f-a311-2eecda273c4d-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-ml2lr\" (UID: \"6250dc0b-e97b-4b6f-a311-2eecda273c4d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ml2lr" Dec 06 05:30:05 crc kubenswrapper[4958]: I1206 
05:30:05.664505 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6250dc0b-e97b-4b6f-a311-2eecda273c4d-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-ml2lr\" (UID: \"6250dc0b-e97b-4b6f-a311-2eecda273c4d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ml2lr" Dec 06 05:30:05 crc kubenswrapper[4958]: I1206 05:30:05.664512 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/6250dc0b-e97b-4b6f-a311-2eecda273c4d-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-ml2lr\" (UID: \"6250dc0b-e97b-4b6f-a311-2eecda273c4d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ml2lr" Dec 06 05:30:05 crc kubenswrapper[4958]: I1206 05:30:05.664593 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/6250dc0b-e97b-4b6f-a311-2eecda273c4d-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-ml2lr\" (UID: \"6250dc0b-e97b-4b6f-a311-2eecda273c4d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ml2lr" Dec 06 05:30:05 crc kubenswrapper[4958]: I1206 05:30:05.664657 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6250dc0b-e97b-4b6f-a311-2eecda273c4d-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-ml2lr\" (UID: \"6250dc0b-e97b-4b6f-a311-2eecda273c4d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ml2lr" Dec 06 05:30:05 crc kubenswrapper[4958]: I1206 05:30:05.664727 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6250dc0b-e97b-4b6f-a311-2eecda273c4d-service-ca\") pod \"cluster-version-operator-5c965bbfc6-ml2lr\" (UID: \"6250dc0b-e97b-4b6f-a311-2eecda273c4d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ml2lr" Dec 06 05:30:05 crc kubenswrapper[4958]: I1206 05:30:05.664731 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/6250dc0b-e97b-4b6f-a311-2eecda273c4d-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-ml2lr\" (UID: \"6250dc0b-e97b-4b6f-a311-2eecda273c4d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ml2lr" Dec 06 05:30:05 crc kubenswrapper[4958]: I1206 05:30:05.666226 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6250dc0b-e97b-4b6f-a311-2eecda273c4d-service-ca\") pod \"cluster-version-operator-5c965bbfc6-ml2lr\" (UID: \"6250dc0b-e97b-4b6f-a311-2eecda273c4d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ml2lr" Dec 06 05:30:05 crc kubenswrapper[4958]: I1206 05:30:05.673556 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6250dc0b-e97b-4b6f-a311-2eecda273c4d-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-ml2lr\" (UID: \"6250dc0b-e97b-4b6f-a311-2eecda273c4d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ml2lr" Dec 06 05:30:05 crc kubenswrapper[4958]: I1206 05:30:05.680697 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/6250dc0b-e97b-4b6f-a311-2eecda273c4d-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-ml2lr\" (UID: \"6250dc0b-e97b-4b6f-a311-2eecda273c4d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ml2lr" Dec 06 05:30:05 crc kubenswrapper[4958]: I1206 05:30:05.702082 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=45.702063592 podStartE2EDuration="45.702063592s" podCreationTimestamp="2025-12-06 05:29:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:30:05.701419975 +0000 UTC m=+116.235190758" watchObservedRunningTime="2025-12-06 05:30:05.702063592 +0000 UTC m=+116.235834355" Dec 06 05:30:05 crc kubenswrapper[4958]: I1206 05:30:05.740908 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-bxxwq" podStartSLOduration=96.740893025 podStartE2EDuration="1m36.740893025s" podCreationTimestamp="2025-12-06 05:28:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:30:05.740259398 +0000 UTC m=+116.274030171" watchObservedRunningTime="2025-12-06 05:30:05.740893025 +0000 UTC m=+116.274663779" Dec 06 05:30:05 crc kubenswrapper[4958]: I1206 05:30:05.745224 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ml2lr" Dec 06 05:30:05 crc kubenswrapper[4958]: I1206 05:30:05.756702 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-j25xk" podStartSLOduration=95.756681887 podStartE2EDuration="1m35.756681887s" podCreationTimestamp="2025-12-06 05:28:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:30:05.753242618 +0000 UTC m=+116.287013401" watchObservedRunningTime="2025-12-06 05:30:05.756681887 +0000 UTC m=+116.290452650" Dec 06 05:30:05 crc kubenswrapper[4958]: I1206 05:30:05.761776 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kb98t" Dec 06 05:30:05 crc kubenswrapper[4958]: I1206 05:30:05.761871 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 05:30:05 crc kubenswrapper[4958]: I1206 05:30:05.761938 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 05:30:05 crc kubenswrapper[4958]: E1206 05:30:05.761943 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kb98t" podUID="2c09fca2-7d91-412a-9814-64370d35b3e9" Dec 06 05:30:05 crc kubenswrapper[4958]: I1206 05:30:05.761963 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 05:30:05 crc kubenswrapper[4958]: E1206 05:30:05.762047 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 05:30:05 crc kubenswrapper[4958]: E1206 05:30:05.762124 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 05:30:05 crc kubenswrapper[4958]: E1206 05:30:05.762178 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 05:30:05 crc kubenswrapper[4958]: I1206 05:30:05.815283 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=94.815269037 podStartE2EDuration="1m34.815269037s" podCreationTimestamp="2025-12-06 05:28:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:30:05.814553907 +0000 UTC m=+116.348324680" watchObservedRunningTime="2025-12-06 05:30:05.815269037 +0000 UTC m=+116.349039800" Dec 06 05:30:06 crc kubenswrapper[4958]: I1206 05:30:06.400010 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ml2lr" event={"ID":"6250dc0b-e97b-4b6f-a311-2eecda273c4d","Type":"ContainerStarted","Data":"3bdbe393b76d79dff59d54555de35b3020f0b0f0df9436350b16963afd182926"} Dec 06 05:30:06 crc kubenswrapper[4958]: I1206 05:30:06.400086 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ml2lr" event={"ID":"6250dc0b-e97b-4b6f-a311-2eecda273c4d","Type":"ContainerStarted","Data":"3f18c1e4ab113d342601b24a1896da88fd7d791b7e0980b28493073273b06e32"} Dec 06 05:30:06 crc kubenswrapper[4958]: I1206 05:30:06.420087 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-ml2lr" podStartSLOduration=97.420069322 podStartE2EDuration="1m37.420069322s" podCreationTimestamp="2025-12-06 05:28:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:30:06.419380573 +0000 UTC m=+116.953151366" watchObservedRunningTime="2025-12-06 05:30:06.420069322 +0000 UTC m=+116.953840095" Dec 06 05:30:07 crc kubenswrapper[4958]: I1206 05:30:07.762040 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 05:30:07 crc kubenswrapper[4958]: I1206 05:30:07.762167 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 05:30:07 crc kubenswrapper[4958]: E1206 05:30:07.762176 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 05:30:07 crc kubenswrapper[4958]: I1206 05:30:07.762384 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kb98t" Dec 06 05:30:07 crc kubenswrapper[4958]: E1206 05:30:07.762599 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 05:30:07 crc kubenswrapper[4958]: E1206 05:30:07.762754 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kb98t" podUID="2c09fca2-7d91-412a-9814-64370d35b3e9" Dec 06 05:30:07 crc kubenswrapper[4958]: I1206 05:30:07.763411 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 05:30:07 crc kubenswrapper[4958]: E1206 05:30:07.763721 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 05:30:09 crc kubenswrapper[4958]: E1206 05:30:09.728199 4958 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Dec 06 05:30:09 crc kubenswrapper[4958]: I1206 05:30:09.761296 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 05:30:09 crc kubenswrapper[4958]: I1206 05:30:09.763142 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kb98t" Dec 06 05:30:09 crc kubenswrapper[4958]: I1206 05:30:09.763199 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 05:30:09 crc kubenswrapper[4958]: I1206 05:30:09.763215 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 05:30:09 crc kubenswrapper[4958]: E1206 05:30:09.763308 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kb98t" podUID="2c09fca2-7d91-412a-9814-64370d35b3e9" Dec 06 05:30:09 crc kubenswrapper[4958]: E1206 05:30:09.763464 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 05:30:09 crc kubenswrapper[4958]: E1206 05:30:09.763559 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 05:30:09 crc kubenswrapper[4958]: E1206 05:30:09.763671 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 05:30:09 crc kubenswrapper[4958]: E1206 05:30:09.842973 4958 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 06 05:30:11 crc kubenswrapper[4958]: I1206 05:30:11.761816 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 05:30:11 crc kubenswrapper[4958]: I1206 05:30:11.762453 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 05:30:11 crc kubenswrapper[4958]: I1206 05:30:11.763428 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 05:30:11 crc kubenswrapper[4958]: I1206 05:30:11.763583 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kb98t" Dec 06 05:30:11 crc kubenswrapper[4958]: E1206 05:30:11.763174 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 05:30:11 crc kubenswrapper[4958]: E1206 05:30:11.764353 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 05:30:11 crc kubenswrapper[4958]: E1206 05:30:11.765038 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 05:30:11 crc kubenswrapper[4958]: E1206 05:30:11.766270 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kb98t" podUID="2c09fca2-7d91-412a-9814-64370d35b3e9" Dec 06 05:30:13 crc kubenswrapper[4958]: I1206 05:30:13.761828 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 05:30:13 crc kubenswrapper[4958]: I1206 05:30:13.761943 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kb98t" Dec 06 05:30:13 crc kubenswrapper[4958]: I1206 05:30:13.761834 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 05:30:13 crc kubenswrapper[4958]: E1206 05:30:13.762005 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 05:30:13 crc kubenswrapper[4958]: I1206 05:30:13.762032 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 05:30:13 crc kubenswrapper[4958]: E1206 05:30:13.762209 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kb98t" podUID="2c09fca2-7d91-412a-9814-64370d35b3e9" Dec 06 05:30:13 crc kubenswrapper[4958]: E1206 05:30:13.762318 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 05:30:13 crc kubenswrapper[4958]: E1206 05:30:13.762396 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 05:30:14 crc kubenswrapper[4958]: E1206 05:30:14.844421 4958 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Dec 06 05:30:15 crc kubenswrapper[4958]: I1206 05:30:15.761263 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 05:30:15 crc kubenswrapper[4958]: I1206 05:30:15.761330 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 05:30:15 crc kubenswrapper[4958]: I1206 05:30:15.761355 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kb98t" Dec 06 05:30:15 crc kubenswrapper[4958]: I1206 05:30:15.761271 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 05:30:15 crc kubenswrapper[4958]: E1206 05:30:15.761573 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 05:30:15 crc kubenswrapper[4958]: E1206 05:30:15.761730 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 05:30:15 crc kubenswrapper[4958]: E1206 05:30:15.761918 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kb98t" podUID="2c09fca2-7d91-412a-9814-64370d35b3e9" Dec 06 05:30:15 crc kubenswrapper[4958]: E1206 05:30:15.762046 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 05:30:16 crc kubenswrapper[4958]: I1206 05:30:16.762128 4958 scope.go:117] "RemoveContainer" containerID="5ecb41e730cf6bf34bee3ac4577e76690a314f9466e2bd69d896f99091c3657c" Dec 06 05:30:16 crc kubenswrapper[4958]: I1206 05:30:16.763545 4958 scope.go:117] "RemoveContainer" containerID="3511a58e1529f892844574e8b2faceca960108d44afb02aa4f882d19471c5519" Dec 06 05:30:17 crc kubenswrapper[4958]: I1206 05:30:17.438550 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-f4swt_4c75c3b8-96d9-442e-b3c4-92d10ad33929/ovnkube-controller/3.log" Dec 06 05:30:17 crc kubenswrapper[4958]: I1206 05:30:17.441932 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" event={"ID":"4c75c3b8-96d9-442e-b3c4-92d10ad33929","Type":"ContainerStarted","Data":"73bec7ebc476d53009c9e97d60b92a1f63469eb422461d79727d6b5234b5ce4d"} Dec 06 05:30:17 crc kubenswrapper[4958]: I1206 05:30:17.442379 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" Dec 06 05:30:17 crc kubenswrapper[4958]: I1206 05:30:17.443931 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-wr7h5_fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7/kube-multus/1.log" Dec 06 05:30:17 crc kubenswrapper[4958]: I1206 05:30:17.443979 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-wr7h5" event={"ID":"fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7","Type":"ContainerStarted","Data":"1ab05298aa4f4178d78e78302f50c4335c0cbebb3325d23082e9d5818d092121"} Dec 06 05:30:17 crc kubenswrapper[4958]: I1206 05:30:17.480016 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" podStartSLOduration=108.479995687 podStartE2EDuration="1m48.479995687s" podCreationTimestamp="2025-12-06 05:28:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:30:17.479926696 +0000 UTC m=+128.013697519" watchObservedRunningTime="2025-12-06 05:30:17.479995687 +0000 UTC m=+128.013766440" Dec 06 05:30:17 crc kubenswrapper[4958]: I1206 05:30:17.495989 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-wr7h5" podStartSLOduration=108.495959684 podStartE2EDuration="1m48.495959684s" podCreationTimestamp="2025-12-06 05:28:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:30:17.495309777 +0000 UTC m=+128.029080550" watchObservedRunningTime="2025-12-06 05:30:17.495959684 +0000 UTC m=+128.029730487" Dec 06 05:30:17 crc kubenswrapper[4958]: I1206 05:30:17.603540 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-kb98t"] Dec 06 05:30:17 crc kubenswrapper[4958]: I1206 
05:30:17.603663 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kb98t" Dec 06 05:30:17 crc kubenswrapper[4958]: E1206 05:30:17.603736 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kb98t" podUID="2c09fca2-7d91-412a-9814-64370d35b3e9" Dec 06 05:30:17 crc kubenswrapper[4958]: I1206 05:30:17.761719 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 05:30:17 crc kubenswrapper[4958]: I1206 05:30:17.761793 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 05:30:17 crc kubenswrapper[4958]: I1206 05:30:17.761842 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 05:30:17 crc kubenswrapper[4958]: E1206 05:30:17.761916 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 05:30:17 crc kubenswrapper[4958]: E1206 05:30:17.762017 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 05:30:17 crc kubenswrapper[4958]: E1206 05:30:17.762107 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 05:30:19 crc kubenswrapper[4958]: I1206 05:30:19.762031 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 05:30:19 crc kubenswrapper[4958]: I1206 05:30:19.762120 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kb98t" Dec 06 05:30:19 crc kubenswrapper[4958]: I1206 05:30:19.762156 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 05:30:19 crc kubenswrapper[4958]: I1206 05:30:19.762219 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 05:30:19 crc kubenswrapper[4958]: E1206 05:30:19.763797 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 06 05:30:19 crc kubenswrapper[4958]: E1206 05:30:19.763930 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kb98t" podUID="2c09fca2-7d91-412a-9814-64370d35b3e9" Dec 06 05:30:19 crc kubenswrapper[4958]: E1206 05:30:19.764081 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 06 05:30:19 crc kubenswrapper[4958]: E1206 05:30:19.764249 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 06 05:30:21 crc kubenswrapper[4958]: I1206 05:30:21.761533 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 05:30:21 crc kubenswrapper[4958]: I1206 05:30:21.761593 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kb98t" Dec 06 05:30:21 crc kubenswrapper[4958]: I1206 05:30:21.761714 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 05:30:21 crc kubenswrapper[4958]: I1206 05:30:21.761934 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 05:30:21 crc kubenswrapper[4958]: I1206 05:30:21.764195 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Dec 06 05:30:21 crc kubenswrapper[4958]: I1206 05:30:21.764615 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Dec 06 05:30:21 crc kubenswrapper[4958]: I1206 05:30:21.765041 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Dec 06 05:30:21 crc kubenswrapper[4958]: I1206 05:30:21.765189 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Dec 06 05:30:21 crc kubenswrapper[4958]: I1206 05:30:21.766169 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Dec 06 05:30:21 crc kubenswrapper[4958]: I1206 05:30:21.766543 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Dec 06 05:30:25 crc kubenswrapper[4958]: I1206 05:30:25.817568 4958 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.044379 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-qdd6q"] Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.045031 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-jjjhf"] Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.045193 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qdd6q" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.045384 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-7g42z"] Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.045640 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-s7svh"] Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.045761 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-7g42z" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.045999 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-thm8z"] Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.046102 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-s7svh" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.046435 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-jjjhf" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.047401 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-2hl9j"] Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.047812 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-mshkv"] Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.048083 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-5hcwh"] Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.048348 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mw66x"] Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.048783 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-thm8z" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.048888 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-2hl9j" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.048852 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mw66x" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.049119 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mshkv" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.048798 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-v8c65"] Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.049749 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.050162 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-5hcwh" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.050609 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.052642 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-6lbg7"] Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.052938 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-79gtn"] Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.053081 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-v8c65" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.053188 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-6lbg7" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.053206 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-dnlff"] Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.053259 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-79gtn" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.054249 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rhck7"] Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.054636 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dnlff" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.054795 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-x755r"] Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.055264 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-qp66v"] Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.056002 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-2hl9j"] Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.056021 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-7g42z"] Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.056034 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-s7svh"] Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.056047 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-jjjhf"] Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.056060 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-thm8z"] Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.056072 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-v8c65"] Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.056084 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mw66x"] Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.056094 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-6lbg7"] Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.056105 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-5hcwh"] Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.056117 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rhck7"] Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.056129 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-x755r"] Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.056145 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-qp66v"] 
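The records above all trace back to a single condition: until a CNI configuration file appears in /etc/kubernetes/cni/net.d/, the kubelet keeps node "crc" NotReady and skips syncing every pod that needs the cluster network. Once the ovnkube-node and multus containers restart (05:30:17) and NodeReady is recorded (05:30:25), the SyncLoop ADD/UPDATE burst above follows immediately. As a rough sketch of that readiness test (not the kubelet's actual code; the directory path is taken from the log message itself, while the extension list is an assumption based on common CNI conventions):

```go
// cnicheck.go: minimal illustration of the readiness condition behind the
// repeated "KubeletNotReady ... no CNI configuration file" records above.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	dir := "/etc/kubernetes/cni/net.d" // directory named in the log records
	var found []string
	// Assumed extension set; CNI runtimes conventionally read .conf,
	// .conflist and .json files from the network config directory.
	for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
		matches, err := filepath.Glob(filepath.Join(dir, pat))
		if err != nil {
			fmt.Fprintln(os.Stderr, "bad pattern:", err)
			os.Exit(2)
		}
		found = append(found, matches...)
	}
	if len(found) == 0 {
		// The NotReady case: same condition text as the records above.
		fmt.Printf("network not ready: no CNI configuration file in %s\n", dir)
		os.Exit(1)
	}
	fmt.Println("CNI configuration present:", found)
}
```

Run on the node during the NotReady window, a check like this would keep printing the failure branch; it flips to success once the network provider writes its configuration, which matches the point where the NodeReady event is recorded above.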
Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.056157 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-79gtn"]
Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.056168 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-dnlff"]
Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.056179 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-mshkv"]
Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.056264 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qp66v"
Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.056743 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rhck7"
Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.057207 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-x755r"
Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.069076 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.077701 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.099398 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.099797 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.099962 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.100284 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.100517 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.100780 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.100835 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.100869 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.100931 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.101002 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.101057 4958 reflector.go:368] Caches populated for *v1.ConfigMap from
object-"openshift-console"/"openshift-service-ca.crt" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.101124 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.102297 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.102394 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.102419 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.102614 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.102653 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.102756 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.102717 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.103016 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.103101 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.104076 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-rzshs"] Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.104764 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-rzshs" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.117870 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.125293 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.125509 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.125651 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.125687 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.125819 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.125908 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.126790 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.130334 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.130670 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.130727 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.131002 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.131181 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.131202 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.131279 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.131358 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.131419 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.131433 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 
05:30:26.131537 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.131612 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.131614 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.131690 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.134589 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.134728 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.134795 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.134917 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.134956 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.135034 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.135111 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.135142 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.135161 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.135235 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.135249 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.135282 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.135346 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.135356 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.135371 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Dec 06 05:30:26 crc kubenswrapper[4958]: 
I1206 05:30:26.135404 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.135454 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.135459 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.135493 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.135511 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.135546 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.135610 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.135628 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.135705 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.135811 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.135825 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.135891 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.135974 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.136051 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.136209 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.137282 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.135116 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.135239 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.137491 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 
05:30:26.138020 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.137552 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.137565 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.134918 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.137729 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-wndrl"] Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.137595 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.137744 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.137781 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.137812 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.137846 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.137937 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.138910 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-wndrl" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.141364 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.142490 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7l9wx"] Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.143767 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.144262 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.145194 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.145672 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.145739 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.152439 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-h5znl"] Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.159185 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7l9wx" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.161604 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-nfgl4"] Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.163926 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-h5znl" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.168642 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.168697 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dmrfv"] Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.169010 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nfgl4" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.169235 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dmrfv" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.169462 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.170737 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.172576 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.177006 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.178760 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gm494"] Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.179325 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.179451 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gm494" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.182593 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.183259 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-8s5qc"] Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.184368 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-kbgkk"] Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.184462 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-8s5qc" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.185023 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-kbgkk" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.185301 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-w8bt8"] Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.185532 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.187100 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.187221 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-56k5h"] Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.187308 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-w8bt8" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.188063 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-56k5h" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.188497 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mmpcs"] Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.191895 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-ctv4r"] Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.192049 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mmpcs" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.192410 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-f47jc"] Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.192691 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ctv4r" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.192815 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-ph6g5"] Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.192935 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/04d501b9-571e-4c12-b963-fbe770a27710-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-5hcwh\" (UID: \"04d501b9-571e-4c12-b963-fbe770a27710\") " pod="openshift-authentication/oauth-openshift-558db77b4-5hcwh" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.192975 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bae936cb-2b16-4e6a-b2b1-bc185483cd8f-console-oauth-config\") pod \"console-f9d7485db-7g42z\" (UID: \"bae936cb-2b16-4e6a-b2b1-bc185483cd8f\") " pod="openshift-console/console-f9d7485db-7g42z" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.192995 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6d69346-a75e-4edc-b2d8-ae6c7f612c62-config\") pod \"openshift-apiserver-operator-796bbdcf4f-s7svh\" (UID: \"e6d69346-a75e-4edc-b2d8-ae6c7f612c62\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-s7svh" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.193016 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6qr8\" (UniqueName: \"kubernetes.io/projected/651c6106-fa9c-43ea-b9c2-68b77e1652c8-kube-api-access-j6qr8\") pod \"openshift-config-operator-7777fb866f-qp66v\" (UID: \"651c6106-fa9c-43ea-b9c2-68b77e1652c8\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qp66v" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.193031 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46twc\" (UniqueName: \"kubernetes.io/projected/d0b1738e-3696-4786-b978-4dee25dde9ac-kube-api-access-46twc\") pod \"cluster-image-registry-operator-dc59b4c8b-rhck7\" (UID: \"d0b1738e-3696-4786-b978-4dee25dde9ac\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rhck7" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.193047 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/978a529c-5f37-4f03-92b4-1ef8084e5917-auth-proxy-config\") pod \"machine-approver-56656f9798-qdd6q\" (UID: \"978a529c-5f37-4f03-92b4-1ef8084e5917\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qdd6q" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.193062 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/56e39030-25e1-4a9e-97e3-d84e988ec0da-serving-cert\") pod \"route-controller-manager-6576b87f9c-mshkv\" (UID: \"56e39030-25e1-4a9e-97e3-d84e988ec0da\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mshkv" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.193076 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/651c6106-fa9c-43ea-b9c2-68b77e1652c8-serving-cert\") pod \"openshift-config-operator-7777fb866f-qp66v\" (UID: \"651c6106-fa9c-43ea-b9c2-68b77e1652c8\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qp66v" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.193091 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46dbd477-d07a-4732-a9e3-08e1d49385c3-config\") pod \"machine-api-operator-5694c8668f-jjjhf\" (UID: \"46dbd477-d07a-4732-a9e3-08e1d49385c3\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jjjhf" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.193109 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c2597de-216f-46fb-996c-0f0c2598cacc-config\") pod \"apiserver-76f77b778f-thm8z\" (UID: \"8c2597de-216f-46fb-996c-0f0c2598cacc\") " pod="openshift-apiserver/apiserver-76f77b778f-thm8z" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.193124 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ade65bd8-3f30-4239-b717-a9912ea99316-config\") pod \"controller-manager-879f6c89f-6lbg7\" (UID: \"ade65bd8-3f30-4239-b717-a9912ea99316\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6lbg7" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.193140 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d0b1738e-3696-4786-b978-4dee25dde9ac-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-rhck7\" (UID: \"d0b1738e-3696-4786-b978-4dee25dde9ac\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rhck7" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.193167 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-ph6g5" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.193158 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/56e39030-25e1-4a9e-97e3-d84e988ec0da-client-ca\") pod \"route-controller-manager-6576b87f9c-mshkv\" (UID: \"56e39030-25e1-4a9e-97e3-d84e988ec0da\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mshkv" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.193214 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/04d501b9-571e-4c12-b963-fbe770a27710-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-5hcwh\" (UID: \"04d501b9-571e-4c12-b963-fbe770a27710\") " pod="openshift-authentication/oauth-openshift-558db77b4-5hcwh" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.193233 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/04d501b9-571e-4c12-b963-fbe770a27710-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-5hcwh\" (UID: \"04d501b9-571e-4c12-b963-fbe770a27710\") " pod="openshift-authentication/oauth-openshift-558db77b4-5hcwh" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.193248 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbb2l\" (UniqueName: \"kubernetes.io/projected/d123f533-79f0-4797-acae-bb101594ea67-kube-api-access-wbb2l\") pod \"cluster-samples-operator-665b6dd947-mw66x\" (UID: \"d123f533-79f0-4797-acae-bb101594ea67\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mw66x" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.193263 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f5ca2592-609c-489b-bbbb-51d8535f8e68-etcd-client\") pod \"apiserver-7bbb656c7d-dnlff\" (UID: \"f5ca2592-609c-489b-bbbb-51d8535f8e68\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dnlff" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.193277 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1095edf8-32a1-431c-9f73-e8738668d563-serving-cert\") pod \"console-operator-58897d9998-2hl9j\" (UID: \"1095edf8-32a1-431c-9f73-e8738668d563\") " pod="openshift-console-operator/console-operator-58897d9998-2hl9j" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.193302 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c2597de-216f-46fb-996c-0f0c2598cacc-serving-cert\") pod \"apiserver-76f77b778f-thm8z\" (UID: \"8c2597de-216f-46fb-996c-0f0c2598cacc\") " pod="openshift-apiserver/apiserver-76f77b778f-thm8z" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.193317 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bae936cb-2b16-4e6a-b2b1-bc185483cd8f-console-config\") pod \"console-f9d7485db-7g42z\" (UID: \"bae936cb-2b16-4e6a-b2b1-bc185483cd8f\") " 
pod="openshift-console/console-f9d7485db-7g42z" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.193332 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6dbn\" (UniqueName: \"kubernetes.io/projected/e6d69346-a75e-4edc-b2d8-ae6c7f612c62-kube-api-access-k6dbn\") pod \"openshift-apiserver-operator-796bbdcf4f-s7svh\" (UID: \"e6d69346-a75e-4edc-b2d8-ae6c7f612c62\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-s7svh" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.193345 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f5ca2592-609c-489b-bbbb-51d8535f8e68-serving-cert\") pod \"apiserver-7bbb656c7d-dnlff\" (UID: \"f5ca2592-609c-489b-bbbb-51d8535f8e68\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dnlff" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.193360 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abbb6238-233b-43b5-a7cc-a142ff3fce2a-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-x755r\" (UID: \"abbb6238-233b-43b5-a7cc-a142ff3fce2a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-x755r" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.193377 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-f47jc" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.193378 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/04d501b9-571e-4c12-b963-fbe770a27710-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-5hcwh\" (UID: \"04d501b9-571e-4c12-b963-fbe770a27710\") " pod="openshift-authentication/oauth-openshift-558db77b4-5hcwh" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.193497 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed3eb5df-37e1-4849-96f5-5dea9dad7ee2-service-ca-bundle\") pod \"authentication-operator-69f744f599-v8c65\" (UID: \"ed3eb5df-37e1-4849-96f5-5dea9dad7ee2\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-v8c65" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.193524 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f5ca2592-609c-489b-bbbb-51d8535f8e68-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-dnlff\" (UID: \"f5ca2592-609c-489b-bbbb-51d8535f8e68\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dnlff" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.193541 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bae936cb-2b16-4e6a-b2b1-bc185483cd8f-service-ca\") pod \"console-f9d7485db-7g42z\" (UID: \"bae936cb-2b16-4e6a-b2b1-bc185483cd8f\") " pod="openshift-console/console-f9d7485db-7g42z" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.193566 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/04d501b9-571e-4c12-b963-fbe770a27710-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-5hcwh\" (UID: \"04d501b9-571e-4c12-b963-fbe770a27710\") " pod="openshift-authentication/oauth-openshift-558db77b4-5hcwh" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.193603 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/04d501b9-571e-4c12-b963-fbe770a27710-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-5hcwh\" (UID: \"04d501b9-571e-4c12-b963-fbe770a27710\") " pod="openshift-authentication/oauth-openshift-558db77b4-5hcwh" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.193628 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82c4f\" (UniqueName: \"kubernetes.io/projected/04d501b9-571e-4c12-b963-fbe770a27710-kube-api-access-82c4f\") pod \"oauth-openshift-558db77b4-5hcwh\" (UID: \"04d501b9-571e-4c12-b963-fbe770a27710\") " pod="openshift-authentication/oauth-openshift-558db77b4-5hcwh" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.193646 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/46dbd477-d07a-4732-a9e3-08e1d49385c3-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-jjjhf\" (UID: \"46dbd477-d07a-4732-a9e3-08e1d49385c3\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jjjhf" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.193660 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkkf4\" (UniqueName: \"kubernetes.io/projected/56e39030-25e1-4a9e-97e3-d84e988ec0da-kube-api-access-xkkf4\") pod \"route-controller-manager-6576b87f9c-mshkv\" (UID: \"56e39030-25e1-4a9e-97e3-d84e988ec0da\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mshkv" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.193683 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qm9g4\" (UniqueName: \"kubernetes.io/projected/46dbd477-d07a-4732-a9e3-08e1d49385c3-kube-api-access-qm9g4\") pod \"machine-api-operator-5694c8668f-jjjhf\" (UID: \"46dbd477-d07a-4732-a9e3-08e1d49385c3\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jjjhf" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.193701 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cg9rz\" (UniqueName: \"kubernetes.io/projected/abbb6238-233b-43b5-a7cc-a142ff3fce2a-kube-api-access-cg9rz\") pod \"openshift-controller-manager-operator-756b6f6bc6-x755r\" (UID: \"abbb6238-233b-43b5-a7cc-a142ff3fce2a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-x755r" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.193717 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d0b1738e-3696-4786-b978-4dee25dde9ac-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-rhck7\" (UID: \"d0b1738e-3696-4786-b978-4dee25dde9ac\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rhck7" Dec 06 
05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.193732 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8db5\" (UniqueName: \"kubernetes.io/projected/f5ca2592-609c-489b-bbbb-51d8535f8e68-kube-api-access-j8db5\") pod \"apiserver-7bbb656c7d-dnlff\" (UID: \"f5ca2592-609c-489b-bbbb-51d8535f8e68\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dnlff" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.193748 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1095edf8-32a1-431c-9f73-e8738668d563-trusted-ca\") pod \"console-operator-58897d9998-2hl9j\" (UID: \"1095edf8-32a1-431c-9f73-e8738668d563\") " pod="openshift-console-operator/console-operator-58897d9998-2hl9j" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.193762 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/04d501b9-571e-4c12-b963-fbe770a27710-audit-dir\") pod \"oauth-openshift-558db77b4-5hcwh\" (UID: \"04d501b9-571e-4c12-b963-fbe770a27710\") " pod="openshift-authentication/oauth-openshift-558db77b4-5hcwh" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.193777 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v8phv\" (UniqueName: \"kubernetes.io/projected/ade65bd8-3f30-4239-b717-a9912ea99316-kube-api-access-v8phv\") pod \"controller-manager-879f6c89f-6lbg7\" (UID: \"ade65bd8-3f30-4239-b717-a9912ea99316\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6lbg7" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.193795 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/46dbd477-d07a-4732-a9e3-08e1d49385c3-images\") pod \"machine-api-operator-5694c8668f-jjjhf\" (UID: \"46dbd477-d07a-4732-a9e3-08e1d49385c3\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jjjhf" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.193830 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56e39030-25e1-4a9e-97e3-d84e988ec0da-config\") pod \"route-controller-manager-6576b87f9c-mshkv\" (UID: \"56e39030-25e1-4a9e-97e3-d84e988ec0da\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mshkv" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.193857 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/abbb6238-233b-43b5-a7cc-a142ff3fce2a-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-x755r\" (UID: \"abbb6238-233b-43b5-a7cc-a142ff3fce2a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-x755r" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.193881 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5z682"] Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.193920 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wq9jq\" (UniqueName: \"kubernetes.io/projected/1095edf8-32a1-431c-9f73-e8738668d563-kube-api-access-wq9jq\") 
pod \"console-operator-58897d9998-2hl9j\" (UID: \"1095edf8-32a1-431c-9f73-e8738668d563\") " pod="openshift-console-operator/console-operator-58897d9998-2hl9j" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.194019 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8c2597de-216f-46fb-996c-0f0c2598cacc-etcd-serving-ca\") pod \"apiserver-76f77b778f-thm8z\" (UID: \"8c2597de-216f-46fb-996c-0f0c2598cacc\") " pod="openshift-apiserver/apiserver-76f77b778f-thm8z" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.194447 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5z682" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.194610 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/04d501b9-571e-4c12-b963-fbe770a27710-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-5hcwh\" (UID: \"04d501b9-571e-4c12-b963-fbe770a27710\") " pod="openshift-authentication/oauth-openshift-558db77b4-5hcwh" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.194641 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed3eb5df-37e1-4849-96f5-5dea9dad7ee2-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-v8c65\" (UID: \"ed3eb5df-37e1-4849-96f5-5dea9dad7ee2\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-v8c65" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.194661 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f5ca2592-609c-489b-bbbb-51d8535f8e68-audit-dir\") pod \"apiserver-7bbb656c7d-dnlff\" (UID: \"f5ca2592-609c-489b-bbbb-51d8535f8e68\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dnlff" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.194720 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e6d69346-a75e-4edc-b2d8-ae6c7f612c62-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-s7svh\" (UID: \"e6d69346-a75e-4edc-b2d8-ae6c7f612c62\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-s7svh" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.194746 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/651c6106-fa9c-43ea-b9c2-68b77e1652c8-available-featuregates\") pod \"openshift-config-operator-7777fb866f-qp66v\" (UID: \"651c6106-fa9c-43ea-b9c2-68b77e1652c8\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qp66v" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.194774 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ntzz\" (UniqueName: \"kubernetes.io/projected/8c2597de-216f-46fb-996c-0f0c2598cacc-kube-api-access-8ntzz\") pod \"apiserver-76f77b778f-thm8z\" (UID: \"8c2597de-216f-46fb-996c-0f0c2598cacc\") " pod="openshift-apiserver/apiserver-76f77b778f-thm8z" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.194808 4958 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/04d501b9-571e-4c12-b963-fbe770a27710-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-5hcwh\" (UID: \"04d501b9-571e-4c12-b963-fbe770a27710\") " pod="openshift-authentication/oauth-openshift-558db77b4-5hcwh" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.194832 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bae936cb-2b16-4e6a-b2b1-bc185483cd8f-console-serving-cert\") pod \"console-f9d7485db-7g42z\" (UID: \"bae936cb-2b16-4e6a-b2b1-bc185483cd8f\") " pod="openshift-console/console-f9d7485db-7g42z" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.194858 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f5ca2592-609c-489b-bbbb-51d8535f8e68-audit-policies\") pod \"apiserver-7bbb656c7d-dnlff\" (UID: \"f5ca2592-609c-489b-bbbb-51d8535f8e68\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dnlff" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.194887 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/04d501b9-571e-4c12-b963-fbe770a27710-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-5hcwh\" (UID: \"04d501b9-571e-4c12-b963-fbe770a27710\") " pod="openshift-authentication/oauth-openshift-558db77b4-5hcwh" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.194918 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ade65bd8-3f30-4239-b717-a9912ea99316-serving-cert\") pod \"controller-manager-879f6c89f-6lbg7\" (UID: \"ade65bd8-3f30-4239-b717-a9912ea99316\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6lbg7" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.194946 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bae936cb-2b16-4e6a-b2b1-bc185483cd8f-oauth-serving-cert\") pod \"console-f9d7485db-7g42z\" (UID: \"bae936cb-2b16-4e6a-b2b1-bc185483cd8f\") " pod="openshift-console/console-f9d7485db-7g42z" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.194970 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c2597de-216f-46fb-996c-0f0c2598cacc-trusted-ca-bundle\") pod \"apiserver-76f77b778f-thm8z\" (UID: \"8c2597de-216f-46fb-996c-0f0c2598cacc\") " pod="openshift-apiserver/apiserver-76f77b778f-thm8z" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.195015 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9trk\" (UniqueName: \"kubernetes.io/projected/ed3eb5df-37e1-4849-96f5-5dea9dad7ee2-kube-api-access-p9trk\") pod \"authentication-operator-69f744f599-v8c65\" (UID: \"ed3eb5df-37e1-4849-96f5-5dea9dad7ee2\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-v8c65" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.195041 4958 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/8c2597de-216f-46fb-996c-0f0c2598cacc-image-import-ca\") pod \"apiserver-76f77b778f-thm8z\" (UID: \"8c2597de-216f-46fb-996c-0f0c2598cacc\") " pod="openshift-apiserver/apiserver-76f77b778f-thm8z" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.195082 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/8c2597de-216f-46fb-996c-0f0c2598cacc-audit\") pod \"apiserver-76f77b778f-thm8z\" (UID: \"8c2597de-216f-46fb-996c-0f0c2598cacc\") " pod="openshift-apiserver/apiserver-76f77b778f-thm8z" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.195128 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8c2597de-216f-46fb-996c-0f0c2598cacc-encryption-config\") pod \"apiserver-76f77b778f-thm8z\" (UID: \"8c2597de-216f-46fb-996c-0f0c2598cacc\") " pod="openshift-apiserver/apiserver-76f77b778f-thm8z" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.195157 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5ca2592-609c-489b-bbbb-51d8535f8e68-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-dnlff\" (UID: \"f5ca2592-609c-489b-bbbb-51d8535f8e68\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dnlff" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.195186 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f5ca2592-609c-489b-bbbb-51d8535f8e68-encryption-config\") pod \"apiserver-7bbb656c7d-dnlff\" (UID: \"f5ca2592-609c-489b-bbbb-51d8535f8e68\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dnlff" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.195219 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4xsm\" (UniqueName: \"kubernetes.io/projected/9ba34fce-2bfd-4b37-8b67-f2c936cc1b44-kube-api-access-v4xsm\") pod \"downloads-7954f5f757-79gtn\" (UID: \"9ba34fce-2bfd-4b37-8b67-f2c936cc1b44\") " pod="openshift-console/downloads-7954f5f757-79gtn" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.195243 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1095edf8-32a1-431c-9f73-e8738668d563-config\") pod \"console-operator-58897d9998-2hl9j\" (UID: \"1095edf8-32a1-431c-9f73-e8738668d563\") " pod="openshift-console-operator/console-operator-58897d9998-2hl9j" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.195270 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/978a529c-5f37-4f03-92b4-1ef8084e5917-machine-approver-tls\") pod \"machine-approver-56656f9798-qdd6q\" (UID: \"978a529c-5f37-4f03-92b4-1ef8084e5917\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qdd6q" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.195301 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/04d501b9-571e-4c12-b963-fbe770a27710-audit-policies\") pod \"oauth-openshift-558db77b4-5hcwh\" (UID: \"04d501b9-571e-4c12-b963-fbe770a27710\") " pod="openshift-authentication/oauth-openshift-558db77b4-5hcwh" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.195333 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8c2597de-216f-46fb-996c-0f0c2598cacc-node-pullsecrets\") pod \"apiserver-76f77b778f-thm8z\" (UID: \"8c2597de-216f-46fb-996c-0f0c2598cacc\") " pod="openshift-apiserver/apiserver-76f77b778f-thm8z" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.195366 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8c2597de-216f-46fb-996c-0f0c2598cacc-audit-dir\") pod \"apiserver-76f77b778f-thm8z\" (UID: \"8c2597de-216f-46fb-996c-0f0c2598cacc\") " pod="openshift-apiserver/apiserver-76f77b778f-thm8z" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.195404 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/04d501b9-571e-4c12-b963-fbe770a27710-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-5hcwh\" (UID: \"04d501b9-571e-4c12-b963-fbe770a27710\") " pod="openshift-authentication/oauth-openshift-558db77b4-5hcwh" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.195439 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/04d501b9-571e-4c12-b963-fbe770a27710-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-5hcwh\" (UID: \"04d501b9-571e-4c12-b963-fbe770a27710\") " pod="openshift-authentication/oauth-openshift-558db77b4-5hcwh" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.195454 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ade65bd8-3f30-4239-b717-a9912ea99316-client-ca\") pod \"controller-manager-879f6c89f-6lbg7\" (UID: \"ade65bd8-3f30-4239-b717-a9912ea99316\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6lbg7" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.195499 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/d123f533-79f0-4797-acae-bb101594ea67-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-mw66x\" (UID: \"d123f533-79f0-4797-acae-bb101594ea67\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mw66x" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.195523 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8c2597de-216f-46fb-996c-0f0c2598cacc-etcd-client\") pod \"apiserver-76f77b778f-thm8z\" (UID: \"8c2597de-216f-46fb-996c-0f0c2598cacc\") " pod="openshift-apiserver/apiserver-76f77b778f-thm8z" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.195567 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/bae936cb-2b16-4e6a-b2b1-bc185483cd8f-trusted-ca-bundle\") pod \"console-f9d7485db-7g42z\" (UID: \"bae936cb-2b16-4e6a-b2b1-bc185483cd8f\") " pod="openshift-console/console-f9d7485db-7g42z" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.195588 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/978a529c-5f37-4f03-92b4-1ef8084e5917-config\") pod \"machine-approver-56656f9798-qdd6q\" (UID: \"978a529c-5f37-4f03-92b4-1ef8084e5917\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qdd6q" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.195604 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed3eb5df-37e1-4849-96f5-5dea9dad7ee2-config\") pod \"authentication-operator-69f744f599-v8c65\" (UID: \"ed3eb5df-37e1-4849-96f5-5dea9dad7ee2\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-v8c65" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.195620 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed3eb5df-37e1-4849-96f5-5dea9dad7ee2-serving-cert\") pod \"authentication-operator-69f744f599-v8c65\" (UID: \"ed3eb5df-37e1-4849-96f5-5dea9dad7ee2\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-v8c65" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.195635 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jblqp\" (UniqueName: \"kubernetes.io/projected/bae936cb-2b16-4e6a-b2b1-bc185483cd8f-kube-api-access-jblqp\") pod \"console-f9d7485db-7g42z\" (UID: \"bae936cb-2b16-4e6a-b2b1-bc185483cd8f\") " pod="openshift-console/console-f9d7485db-7g42z" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.195658 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/d0b1738e-3696-4786-b978-4dee25dde9ac-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-rhck7\" (UID: \"d0b1738e-3696-4786-b978-4dee25dde9ac\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rhck7" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.195694 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcg5x\" (UniqueName: \"kubernetes.io/projected/978a529c-5f37-4f03-92b4-1ef8084e5917-kube-api-access-gcg5x\") pod \"machine-approver-56656f9798-qdd6q\" (UID: \"978a529c-5f37-4f03-92b4-1ef8084e5917\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qdd6q" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.195698 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9tpxq"] Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.195712 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ade65bd8-3f30-4239-b717-a9912ea99316-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-6lbg7\" (UID: \"ade65bd8-3f30-4239-b717-a9912ea99316\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6lbg7" Dec 06 05:30:26 crc 
kubenswrapper[4958]: I1206 05:30:26.196098 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9tpxq" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.196718 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-5sxpr"] Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.197881 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-5sxpr" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.199132 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-c5v8n"] Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.200000 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-c5v8n" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.201035 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416650-s7hr4"] Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.201794 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416650-s7hr4" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.202455 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-w4gt4"] Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.203087 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-w4gt4" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.203614 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-zckk4"] Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.204054 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-zckk4" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.205013 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-fnjd6"] Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.206393 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-fnjd6" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.207740 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.209172 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-czz58"] Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.213553 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-czz58" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.213957 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-zhvgf"] Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.216288 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-zhvgf" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.217363 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7l9wx"] Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.220033 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-wndrl"] Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.222002 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-rzshs"] Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.223469 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-h5znl"] Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.224934 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-56k5h"] Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.226715 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dmrfv"] Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.228464 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-kbgkk"] Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.230087 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-wk6vl"] Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.231116 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5z682"] Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.231213 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-wk6vl" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.232145 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-fnjd6"] Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.233448 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-8s5qc"] Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.236422 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-ctv4r"] Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.237829 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gm494"] Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.239308 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mmpcs"] Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.240748 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-f47jc"] Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.242080 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416650-s7hr4"] Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.243479 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-czz58"] Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.244724 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-w4gt4"] Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.246160 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9tpxq"] Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.247266 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.247623 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-nfgl4"] Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.248808 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-5sxpr"] Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.249960 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-w8bt8"] Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.250943 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-zhvgf"] Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.251928 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-c5v8n"] Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.253328 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-wk6vl"] Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.267976 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.287677 4958 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.297051 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/04d501b9-571e-4c12-b963-fbe770a27710-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-5hcwh\" (UID: \"04d501b9-571e-4c12-b963-fbe770a27710\") " pod="openshift-authentication/oauth-openshift-558db77b4-5hcwh" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.297083 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/04d501b9-571e-4c12-b963-fbe770a27710-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-5hcwh\" (UID: \"04d501b9-571e-4c12-b963-fbe770a27710\") " pod="openshift-authentication/oauth-openshift-558db77b4-5hcwh" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.297104 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82c4f\" (UniqueName: \"kubernetes.io/projected/04d501b9-571e-4c12-b963-fbe770a27710-kube-api-access-82c4f\") pod \"oauth-openshift-558db77b4-5hcwh\" (UID: \"04d501b9-571e-4c12-b963-fbe770a27710\") " pod="openshift-authentication/oauth-openshift-558db77b4-5hcwh" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.297120 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/46dbd477-d07a-4732-a9e3-08e1d49385c3-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-jjjhf\" (UID: \"46dbd477-d07a-4732-a9e3-08e1d49385c3\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jjjhf" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.297136 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkkf4\" (UniqueName: \"kubernetes.io/projected/56e39030-25e1-4a9e-97e3-d84e988ec0da-kube-api-access-xkkf4\") pod \"route-controller-manager-6576b87f9c-mshkv\" (UID: \"56e39030-25e1-4a9e-97e3-d84e988ec0da\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mshkv" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.297155 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bae936cb-2b16-4e6a-b2b1-bc185483cd8f-service-ca\") pod \"console-f9d7485db-7g42z\" (UID: \"bae936cb-2b16-4e6a-b2b1-bc185483cd8f\") " pod="openshift-console/console-f9d7485db-7g42z" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.297176 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fb5a2f1e-9a90-41bf-9a9d-7cd181140ae0-trusted-ca\") pod \"ingress-operator-5b745b69d9-nfgl4\" (UID: \"fb5a2f1e-9a90-41bf-9a9d-7cd181140ae0\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nfgl4" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.297195 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qm9g4\" (UniqueName: \"kubernetes.io/projected/46dbd477-d07a-4732-a9e3-08e1d49385c3-kube-api-access-qm9g4\") pod \"machine-api-operator-5694c8668f-jjjhf\" (UID: \"46dbd477-d07a-4732-a9e3-08e1d49385c3\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-jjjhf" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.297214 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cg9rz\" (UniqueName: \"kubernetes.io/projected/abbb6238-233b-43b5-a7cc-a142ff3fce2a-kube-api-access-cg9rz\") pod \"openshift-controller-manager-operator-756b6f6bc6-x755r\" (UID: \"abbb6238-233b-43b5-a7cc-a142ff3fce2a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-x755r" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.297233 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d0b1738e-3696-4786-b978-4dee25dde9ac-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-rhck7\" (UID: \"d0b1738e-3696-4786-b978-4dee25dde9ac\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rhck7" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.297248 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5435d984-69a2-4441-a63d-fde03d6a7081-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-dmrfv\" (UID: \"5435d984-69a2-4441-a63d-fde03d6a7081\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dmrfv" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.297272 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1095edf8-32a1-431c-9f73-e8738668d563-trusted-ca\") pod \"console-operator-58897d9998-2hl9j\" (UID: \"1095edf8-32a1-431c-9f73-e8738668d563\") " pod="openshift-console-operator/console-operator-58897d9998-2hl9j" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.297348 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/04d501b9-571e-4c12-b963-fbe770a27710-audit-dir\") pod \"oauth-openshift-558db77b4-5hcwh\" (UID: \"04d501b9-571e-4c12-b963-fbe770a27710\") " pod="openshift-authentication/oauth-openshift-558db77b4-5hcwh" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.297373 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v8phv\" (UniqueName: \"kubernetes.io/projected/ade65bd8-3f30-4239-b717-a9912ea99316-kube-api-access-v8phv\") pod \"controller-manager-879f6c89f-6lbg7\" (UID: \"ade65bd8-3f30-4239-b717-a9912ea99316\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6lbg7" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.297393 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/46dbd477-d07a-4732-a9e3-08e1d49385c3-images\") pod \"machine-api-operator-5694c8668f-jjjhf\" (UID: \"46dbd477-d07a-4732-a9e3-08e1d49385c3\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jjjhf" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.297414 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56e39030-25e1-4a9e-97e3-d84e988ec0da-config\") pod \"route-controller-manager-6576b87f9c-mshkv\" (UID: \"56e39030-25e1-4a9e-97e3-d84e988ec0da\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mshkv" Dec 06 05:30:26 crc 
kubenswrapper[4958]: I1206 05:30:26.297432 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/abbb6238-233b-43b5-a7cc-a142ff3fce2a-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-x755r\" (UID: \"abbb6238-233b-43b5-a7cc-a142ff3fce2a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-x755r" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.297454 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8db5\" (UniqueName: \"kubernetes.io/projected/f5ca2592-609c-489b-bbbb-51d8535f8e68-kube-api-access-j8db5\") pod \"apiserver-7bbb656c7d-dnlff\" (UID: \"f5ca2592-609c-489b-bbbb-51d8535f8e68\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dnlff" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.297500 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wq9jq\" (UniqueName: \"kubernetes.io/projected/1095edf8-32a1-431c-9f73-e8738668d563-kube-api-access-wq9jq\") pod \"console-operator-58897d9998-2hl9j\" (UID: \"1095edf8-32a1-431c-9f73-e8738668d563\") " pod="openshift-console-operator/console-operator-58897d9998-2hl9j" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.297526 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ts577\" (UniqueName: \"kubernetes.io/projected/81369d65-ae42-43fc-a2e6-dbf61d9a86d7-kube-api-access-ts577\") pod \"control-plane-machine-set-operator-78cbb6b69f-c5v8n\" (UID: \"81369d65-ae42-43fc-a2e6-dbf61d9a86d7\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-c5v8n" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.297549 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5435d984-69a2-4441-a63d-fde03d6a7081-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-dmrfv\" (UID: \"5435d984-69a2-4441-a63d-fde03d6a7081\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dmrfv" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.297571 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/05fda290-e73b-468e-b494-6cd912e3cbd8-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-w8bt8\" (UID: \"05fda290-e73b-468e-b494-6cd912e3cbd8\") " pod="openshift-marketplace/marketplace-operator-79b997595-w8bt8" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.297585 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3131ee86-c122-498a-872b-eb20260b6639-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-mmpcs\" (UID: \"3131ee86-c122-498a-872b-eb20260b6639\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mmpcs" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.297952 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bae936cb-2b16-4e6a-b2b1-bc185483cd8f-service-ca\") pod \"console-f9d7485db-7g42z\" (UID: \"bae936cb-2b16-4e6a-b2b1-bc185483cd8f\") " pod="openshift-console/console-f9d7485db-7g42z" Dec 
06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.298022 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/04d501b9-571e-4c12-b963-fbe770a27710-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-5hcwh\" (UID: \"04d501b9-571e-4c12-b963-fbe770a27710\") " pod="openshift-authentication/oauth-openshift-558db77b4-5hcwh" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.298045 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed3eb5df-37e1-4849-96f5-5dea9dad7ee2-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-v8c65\" (UID: \"ed3eb5df-37e1-4849-96f5-5dea9dad7ee2\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-v8c65" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.298063 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8c2597de-216f-46fb-996c-0f0c2598cacc-etcd-serving-ca\") pod \"apiserver-76f77b778f-thm8z\" (UID: \"8c2597de-216f-46fb-996c-0f0c2598cacc\") " pod="openshift-apiserver/apiserver-76f77b778f-thm8z" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.298082 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/81369d65-ae42-43fc-a2e6-dbf61d9a86d7-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-c5v8n\" (UID: \"81369d65-ae42-43fc-a2e6-dbf61d9a86d7\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-c5v8n" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.298098 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4be188a0-8e11-40a8-b38e-d3d5475c982b-serving-cert\") pod \"service-ca-operator-777779d784-kbgkk\" (UID: \"4be188a0-8e11-40a8-b38e-d3d5475c982b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-kbgkk" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.298112 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gzqg\" (UniqueName: \"kubernetes.io/projected/05fda290-e73b-468e-b494-6cd912e3cbd8-kube-api-access-9gzqg\") pod \"marketplace-operator-79b997595-w8bt8\" (UID: \"05fda290-e73b-468e-b494-6cd912e3cbd8\") " pod="openshift-marketplace/marketplace-operator-79b997595-w8bt8" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.298128 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f5ca2592-609c-489b-bbbb-51d8535f8e68-audit-dir\") pod \"apiserver-7bbb656c7d-dnlff\" (UID: \"f5ca2592-609c-489b-bbbb-51d8535f8e68\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dnlff" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.298146 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/651c6106-fa9c-43ea-b9c2-68b77e1652c8-available-featuregates\") pod \"openshift-config-operator-7777fb866f-qp66v\" (UID: \"651c6106-fa9c-43ea-b9c2-68b77e1652c8\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qp66v" Dec 06 05:30:26 crc kubenswrapper[4958]: 
I1206 05:30:26.298162 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e6d69346-a75e-4edc-b2d8-ae6c7f612c62-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-s7svh\" (UID: \"e6d69346-a75e-4edc-b2d8-ae6c7f612c62\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-s7svh" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.298179 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ntzz\" (UniqueName: \"kubernetes.io/projected/8c2597de-216f-46fb-996c-0f0c2598cacc-kube-api-access-8ntzz\") pod \"apiserver-76f77b778f-thm8z\" (UID: \"8c2597de-216f-46fb-996c-0f0c2598cacc\") " pod="openshift-apiserver/apiserver-76f77b778f-thm8z" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.298197 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/04d501b9-571e-4c12-b963-fbe770a27710-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-5hcwh\" (UID: \"04d501b9-571e-4c12-b963-fbe770a27710\") " pod="openshift-authentication/oauth-openshift-558db77b4-5hcwh" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.298213 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bae936cb-2b16-4e6a-b2b1-bc185483cd8f-console-serving-cert\") pod \"console-f9d7485db-7g42z\" (UID: \"bae936cb-2b16-4e6a-b2b1-bc185483cd8f\") " pod="openshift-console/console-f9d7485db-7g42z" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.298228 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f5ca2592-609c-489b-bbbb-51d8535f8e68-audit-policies\") pod \"apiserver-7bbb656c7d-dnlff\" (UID: \"f5ca2592-609c-489b-bbbb-51d8535f8e68\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dnlff" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.298244 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znbn6\" (UniqueName: \"kubernetes.io/projected/fb5a2f1e-9a90-41bf-9a9d-7cd181140ae0-kube-api-access-znbn6\") pod \"ingress-operator-5b745b69d9-nfgl4\" (UID: \"fb5a2f1e-9a90-41bf-9a9d-7cd181140ae0\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nfgl4" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.298262 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/04d501b9-571e-4c12-b963-fbe770a27710-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-5hcwh\" (UID: \"04d501b9-571e-4c12-b963-fbe770a27710\") " pod="openshift-authentication/oauth-openshift-558db77b4-5hcwh" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.298279 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fb5a2f1e-9a90-41bf-9a9d-7cd181140ae0-bound-sa-token\") pod \"ingress-operator-5b745b69d9-nfgl4\" (UID: \"fb5a2f1e-9a90-41bf-9a9d-7cd181140ae0\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nfgl4" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.298297 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/05fda290-e73b-468e-b494-6cd912e3cbd8-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-w8bt8\" (UID: \"05fda290-e73b-468e-b494-6cd912e3cbd8\") " pod="openshift-marketplace/marketplace-operator-79b997595-w8bt8" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.298314 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bae936cb-2b16-4e6a-b2b1-bc185483cd8f-oauth-serving-cert\") pod \"console-f9d7485db-7g42z\" (UID: \"bae936cb-2b16-4e6a-b2b1-bc185483cd8f\") " pod="openshift-console/console-f9d7485db-7g42z" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.298328 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/fb5a2f1e-9a90-41bf-9a9d-7cd181140ae0-metrics-tls\") pod \"ingress-operator-5b745b69d9-nfgl4\" (UID: \"fb5a2f1e-9a90-41bf-9a9d-7cd181140ae0\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nfgl4" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.298344 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztl8b\" (UniqueName: \"kubernetes.io/projected/4baa5071-6d1a-4771-ab23-d68db9f231a3-kube-api-access-ztl8b\") pod \"ingress-canary-8s5qc\" (UID: \"4baa5071-6d1a-4771-ab23-d68db9f231a3\") " pod="openshift-ingress-canary/ingress-canary-8s5qc" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.298360 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ade65bd8-3f30-4239-b717-a9912ea99316-serving-cert\") pod \"controller-manager-879f6c89f-6lbg7\" (UID: \"ade65bd8-3f30-4239-b717-a9912ea99316\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6lbg7" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.298383 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9trk\" (UniqueName: \"kubernetes.io/projected/ed3eb5df-37e1-4849-96f5-5dea9dad7ee2-kube-api-access-p9trk\") pod \"authentication-operator-69f744f599-v8c65\" (UID: \"ed3eb5df-37e1-4849-96f5-5dea9dad7ee2\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-v8c65" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.298398 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/8c2597de-216f-46fb-996c-0f0c2598cacc-image-import-ca\") pod \"apiserver-76f77b778f-thm8z\" (UID: \"8c2597de-216f-46fb-996c-0f0c2598cacc\") " pod="openshift-apiserver/apiserver-76f77b778f-thm8z" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.298415 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c2597de-216f-46fb-996c-0f0c2598cacc-trusted-ca-bundle\") pod \"apiserver-76f77b778f-thm8z\" (UID: \"8c2597de-216f-46fb-996c-0f0c2598cacc\") " pod="openshift-apiserver/apiserver-76f77b778f-thm8z" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.298432 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/8c2597de-216f-46fb-996c-0f0c2598cacc-audit\") pod \"apiserver-76f77b778f-thm8z\" (UID: \"8c2597de-216f-46fb-996c-0f0c2598cacc\") " 
pod="openshift-apiserver/apiserver-76f77b778f-thm8z" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.298448 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8c2597de-216f-46fb-996c-0f0c2598cacc-encryption-config\") pod \"apiserver-76f77b778f-thm8z\" (UID: \"8c2597de-216f-46fb-996c-0f0c2598cacc\") " pod="openshift-apiserver/apiserver-76f77b778f-thm8z" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.298464 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5ca2592-609c-489b-bbbb-51d8535f8e68-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-dnlff\" (UID: \"f5ca2592-609c-489b-bbbb-51d8535f8e68\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dnlff" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.298495 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f5ca2592-609c-489b-bbbb-51d8535f8e68-encryption-config\") pod \"apiserver-7bbb656c7d-dnlff\" (UID: \"f5ca2592-609c-489b-bbbb-51d8535f8e68\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dnlff" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.298512 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4xsm\" (UniqueName: \"kubernetes.io/projected/9ba34fce-2bfd-4b37-8b67-f2c936cc1b44-kube-api-access-v4xsm\") pod \"downloads-7954f5f757-79gtn\" (UID: \"9ba34fce-2bfd-4b37-8b67-f2c936cc1b44\") " pod="openshift-console/downloads-7954f5f757-79gtn" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.298529 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/978a529c-5f37-4f03-92b4-1ef8084e5917-machine-approver-tls\") pod \"machine-approver-56656f9798-qdd6q\" (UID: \"978a529c-5f37-4f03-92b4-1ef8084e5917\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qdd6q" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.298544 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/04d501b9-571e-4c12-b963-fbe770a27710-audit-policies\") pod \"oauth-openshift-558db77b4-5hcwh\" (UID: \"04d501b9-571e-4c12-b963-fbe770a27710\") " pod="openshift-authentication/oauth-openshift-558db77b4-5hcwh" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.298561 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8c2597de-216f-46fb-996c-0f0c2598cacc-node-pullsecrets\") pod \"apiserver-76f77b778f-thm8z\" (UID: \"8c2597de-216f-46fb-996c-0f0c2598cacc\") " pod="openshift-apiserver/apiserver-76f77b778f-thm8z" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.298577 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4be188a0-8e11-40a8-b38e-d3d5475c982b-config\") pod \"service-ca-operator-777779d784-kbgkk\" (UID: \"4be188a0-8e11-40a8-b38e-d3d5475c982b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-kbgkk" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.298592 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1095edf8-32a1-431c-9f73-e8738668d563-config\") pod \"console-operator-58897d9998-2hl9j\" (UID: \"1095edf8-32a1-431c-9f73-e8738668d563\") " pod="openshift-console-operator/console-operator-58897d9998-2hl9j" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.298607 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/04d501b9-571e-4c12-b963-fbe770a27710-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-5hcwh\" (UID: \"04d501b9-571e-4c12-b963-fbe770a27710\") " pod="openshift-authentication/oauth-openshift-558db77b4-5hcwh" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.298623 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/04d501b9-571e-4c12-b963-fbe770a27710-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-5hcwh\" (UID: \"04d501b9-571e-4c12-b963-fbe770a27710\") " pod="openshift-authentication/oauth-openshift-558db77b4-5hcwh" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.298638 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ade65bd8-3f30-4239-b717-a9912ea99316-client-ca\") pod \"controller-manager-879f6c89f-6lbg7\" (UID: \"ade65bd8-3f30-4239-b717-a9912ea99316\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6lbg7" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.298655 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/d123f533-79f0-4797-acae-bb101594ea67-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-mw66x\" (UID: \"d123f533-79f0-4797-acae-bb101594ea67\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mw66x" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.298670 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8c2597de-216f-46fb-996c-0f0c2598cacc-etcd-client\") pod \"apiserver-76f77b778f-thm8z\" (UID: \"8c2597de-216f-46fb-996c-0f0c2598cacc\") " pod="openshift-apiserver/apiserver-76f77b778f-thm8z" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.298684 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8c2597de-216f-46fb-996c-0f0c2598cacc-audit-dir\") pod \"apiserver-76f77b778f-thm8z\" (UID: \"8c2597de-216f-46fb-996c-0f0c2598cacc\") " pod="openshift-apiserver/apiserver-76f77b778f-thm8z" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.298702 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b06c782b-7cfd-4d0d-9f40-590645abde2f-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-gm494\" (UID: \"b06c782b-7cfd-4d0d-9f40-590645abde2f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gm494" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.298720 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bae936cb-2b16-4e6a-b2b1-bc185483cd8f-trusted-ca-bundle\") pod \"console-f9d7485db-7g42z\" (UID: 
\"bae936cb-2b16-4e6a-b2b1-bc185483cd8f\") " pod="openshift-console/console-f9d7485db-7g42z" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.298735 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/978a529c-5f37-4f03-92b4-1ef8084e5917-config\") pod \"machine-approver-56656f9798-qdd6q\" (UID: \"978a529c-5f37-4f03-92b4-1ef8084e5917\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qdd6q" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.298751 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed3eb5df-37e1-4849-96f5-5dea9dad7ee2-config\") pod \"authentication-operator-69f744f599-v8c65\" (UID: \"ed3eb5df-37e1-4849-96f5-5dea9dad7ee2\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-v8c65" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.298767 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed3eb5df-37e1-4849-96f5-5dea9dad7ee2-serving-cert\") pod \"authentication-operator-69f744f599-v8c65\" (UID: \"ed3eb5df-37e1-4849-96f5-5dea9dad7ee2\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-v8c65" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.298783 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jblqp\" (UniqueName: \"kubernetes.io/projected/bae936cb-2b16-4e6a-b2b1-bc185483cd8f-kube-api-access-jblqp\") pod \"console-f9d7485db-7g42z\" (UID: \"bae936cb-2b16-4e6a-b2b1-bc185483cd8f\") " pod="openshift-console/console-f9d7485db-7g42z" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.298801 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/d0b1738e-3696-4786-b978-4dee25dde9ac-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-rhck7\" (UID: \"d0b1738e-3696-4786-b978-4dee25dde9ac\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rhck7" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.298818 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gcg5x\" (UniqueName: \"kubernetes.io/projected/978a529c-5f37-4f03-92b4-1ef8084e5917-kube-api-access-gcg5x\") pod \"machine-approver-56656f9798-qdd6q\" (UID: \"978a529c-5f37-4f03-92b4-1ef8084e5917\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qdd6q" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.298836 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ade65bd8-3f30-4239-b717-a9912ea99316-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-6lbg7\" (UID: \"ade65bd8-3f30-4239-b717-a9912ea99316\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6lbg7" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.298860 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/04d501b9-571e-4c12-b963-fbe770a27710-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-5hcwh\" (UID: \"04d501b9-571e-4c12-b963-fbe770a27710\") " pod="openshift-authentication/oauth-openshift-558db77b4-5hcwh" Dec 06 05:30:26 crc 
kubenswrapper[4958]: I1206 05:30:26.298876 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bae936cb-2b16-4e6a-b2b1-bc185483cd8f-console-oauth-config\") pod \"console-f9d7485db-7g42z\" (UID: \"bae936cb-2b16-4e6a-b2b1-bc185483cd8f\") " pod="openshift-console/console-f9d7485db-7g42z" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.298890 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6d69346-a75e-4edc-b2d8-ae6c7f612c62-config\") pod \"openshift-apiserver-operator-796bbdcf4f-s7svh\" (UID: \"e6d69346-a75e-4edc-b2d8-ae6c7f612c62\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-s7svh" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.298907 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6qr8\" (UniqueName: \"kubernetes.io/projected/651c6106-fa9c-43ea-b9c2-68b77e1652c8-kube-api-access-j6qr8\") pod \"openshift-config-operator-7777fb866f-qp66v\" (UID: \"651c6106-fa9c-43ea-b9c2-68b77e1652c8\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qp66v" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.298923 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46twc\" (UniqueName: \"kubernetes.io/projected/d0b1738e-3696-4786-b978-4dee25dde9ac-kube-api-access-46twc\") pod \"cluster-image-registry-operator-dc59b4c8b-rhck7\" (UID: \"d0b1738e-3696-4786-b978-4dee25dde9ac\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rhck7" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.298939 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgtxd\" (UniqueName: \"kubernetes.io/projected/8edfe72a-ade0-4c45-9bff-9366b7e53c54-kube-api-access-vgtxd\") pod \"migrator-59844c95c7-f47jc\" (UID: \"8edfe72a-ade0-4c45-9bff-9366b7e53c54\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-f47jc" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.298955 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgcn4\" (UniqueName: \"kubernetes.io/projected/3131ee86-c122-498a-872b-eb20260b6639-kube-api-access-rgcn4\") pod \"kube-storage-version-migrator-operator-b67b599dd-mmpcs\" (UID: \"3131ee86-c122-498a-872b-eb20260b6639\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mmpcs" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.298973 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5435d984-69a2-4441-a63d-fde03d6a7081-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-dmrfv\" (UID: \"5435d984-69a2-4441-a63d-fde03d6a7081\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dmrfv" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.298992 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/978a529c-5f37-4f03-92b4-1ef8084e5917-auth-proxy-config\") pod \"machine-approver-56656f9798-qdd6q\" (UID: \"978a529c-5f37-4f03-92b4-1ef8084e5917\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qdd6q" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.299007 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/56e39030-25e1-4a9e-97e3-d84e988ec0da-serving-cert\") pod \"route-controller-manager-6576b87f9c-mshkv\" (UID: \"56e39030-25e1-4a9e-97e3-d84e988ec0da\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mshkv" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.299021 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b06c782b-7cfd-4d0d-9f40-590645abde2f-config\") pod \"kube-controller-manager-operator-78b949d7b-gm494\" (UID: \"b06c782b-7cfd-4d0d-9f40-590645abde2f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gm494" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.299038 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/651c6106-fa9c-43ea-b9c2-68b77e1652c8-serving-cert\") pod \"openshift-config-operator-7777fb866f-qp66v\" (UID: \"651c6106-fa9c-43ea-b9c2-68b77e1652c8\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qp66v" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.299055 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4x9f\" (UniqueName: \"kubernetes.io/projected/e6734918-c336-4932-b571-12cab28ef213-kube-api-access-d4x9f\") pod \"machine-config-controller-84d6567774-h5znl\" (UID: \"e6734918-c336-4932-b571-12cab28ef213\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-h5znl" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.299072 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46dbd477-d07a-4732-a9e3-08e1d49385c3-config\") pod \"machine-api-operator-5694c8668f-jjjhf\" (UID: \"46dbd477-d07a-4732-a9e3-08e1d49385c3\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jjjhf" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.299088 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c2597de-216f-46fb-996c-0f0c2598cacc-config\") pod \"apiserver-76f77b778f-thm8z\" (UID: \"8c2597de-216f-46fb-996c-0f0c2598cacc\") " pod="openshift-apiserver/apiserver-76f77b778f-thm8z" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.299104 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ade65bd8-3f30-4239-b717-a9912ea99316-config\") pod \"controller-manager-879f6c89f-6lbg7\" (UID: \"ade65bd8-3f30-4239-b717-a9912ea99316\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6lbg7" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.299120 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d0b1738e-3696-4786-b978-4dee25dde9ac-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-rhck7\" (UID: \"d0b1738e-3696-4786-b978-4dee25dde9ac\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rhck7" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 
05:30:26.299135 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3131ee86-c122-498a-872b-eb20260b6639-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-mmpcs\" (UID: \"3131ee86-c122-498a-872b-eb20260b6639\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mmpcs" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.299155 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/04d501b9-571e-4c12-b963-fbe770a27710-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-5hcwh\" (UID: \"04d501b9-571e-4c12-b963-fbe770a27710\") " pod="openshift-authentication/oauth-openshift-558db77b4-5hcwh" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.299171 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/04d501b9-571e-4c12-b963-fbe770a27710-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-5hcwh\" (UID: \"04d501b9-571e-4c12-b963-fbe770a27710\") " pod="openshift-authentication/oauth-openshift-558db77b4-5hcwh" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.299188 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbb2l\" (UniqueName: \"kubernetes.io/projected/d123f533-79f0-4797-acae-bb101594ea67-kube-api-access-wbb2l\") pod \"cluster-samples-operator-665b6dd947-mw66x\" (UID: \"d123f533-79f0-4797-acae-bb101594ea67\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mw66x" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.299203 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/56e39030-25e1-4a9e-97e3-d84e988ec0da-client-ca\") pod \"route-controller-manager-6576b87f9c-mshkv\" (UID: \"56e39030-25e1-4a9e-97e3-d84e988ec0da\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mshkv" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.299219 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kn8zz\" (UniqueName: \"kubernetes.io/projected/4be188a0-8e11-40a8-b38e-d3d5475c982b-kube-api-access-kn8zz\") pod \"service-ca-operator-777779d784-kbgkk\" (UID: \"4be188a0-8e11-40a8-b38e-d3d5475c982b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-kbgkk" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.299237 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llztj\" (UniqueName: \"kubernetes.io/projected/e904c67d-5ffa-4e8f-96d6-be8c569a22db-kube-api-access-llztj\") pod \"multus-admission-controller-857f4d67dd-wndrl\" (UID: \"e904c67d-5ffa-4e8f-96d6-be8c569a22db\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-wndrl" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.299280 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f5ca2592-609c-489b-bbbb-51d8535f8e68-etcd-client\") pod \"apiserver-7bbb656c7d-dnlff\" (UID: \"f5ca2592-609c-489b-bbbb-51d8535f8e68\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dnlff" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 
05:30:26.299296 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e6734918-c336-4932-b571-12cab28ef213-proxy-tls\") pod \"machine-config-controller-84d6567774-h5znl\" (UID: \"e6734918-c336-4932-b571-12cab28ef213\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-h5znl" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.299316 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1095edf8-32a1-431c-9f73-e8738668d563-serving-cert\") pod \"console-operator-58897d9998-2hl9j\" (UID: \"1095edf8-32a1-431c-9f73-e8738668d563\") " pod="openshift-console-operator/console-operator-58897d9998-2hl9j" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.299341 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e6734918-c336-4932-b571-12cab28ef213-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-h5znl\" (UID: \"e6734918-c336-4932-b571-12cab28ef213\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-h5znl" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.299357 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e904c67d-5ffa-4e8f-96d6-be8c569a22db-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-wndrl\" (UID: \"e904c67d-5ffa-4e8f-96d6-be8c569a22db\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-wndrl" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.299376 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c2597de-216f-46fb-996c-0f0c2598cacc-serving-cert\") pod \"apiserver-76f77b778f-thm8z\" (UID: \"8c2597de-216f-46fb-996c-0f0c2598cacc\") " pod="openshift-apiserver/apiserver-76f77b778f-thm8z" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.299398 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bae936cb-2b16-4e6a-b2b1-bc185483cd8f-console-config\") pod \"console-f9d7485db-7g42z\" (UID: \"bae936cb-2b16-4e6a-b2b1-bc185483cd8f\") " pod="openshift-console/console-f9d7485db-7g42z" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.299420 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6dbn\" (UniqueName: \"kubernetes.io/projected/e6d69346-a75e-4edc-b2d8-ae6c7f612c62-kube-api-access-k6dbn\") pod \"openshift-apiserver-operator-796bbdcf4f-s7svh\" (UID: \"e6d69346-a75e-4edc-b2d8-ae6c7f612c62\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-s7svh" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.299442 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f5ca2592-609c-489b-bbbb-51d8535f8e68-serving-cert\") pod \"apiserver-7bbb656c7d-dnlff\" (UID: \"f5ca2592-609c-489b-bbbb-51d8535f8e68\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dnlff" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.299462 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/b06c782b-7cfd-4d0d-9f40-590645abde2f-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-gm494\" (UID: \"b06c782b-7cfd-4d0d-9f40-590645abde2f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gm494" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.299507 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/04d501b9-571e-4c12-b963-fbe770a27710-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-5hcwh\" (UID: \"04d501b9-571e-4c12-b963-fbe770a27710\") " pod="openshift-authentication/oauth-openshift-558db77b4-5hcwh" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.299531 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed3eb5df-37e1-4849-96f5-5dea9dad7ee2-service-ca-bundle\") pod \"authentication-operator-69f744f599-v8c65\" (UID: \"ed3eb5df-37e1-4849-96f5-5dea9dad7ee2\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-v8c65" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.299551 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abbb6238-233b-43b5-a7cc-a142ff3fce2a-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-x755r\" (UID: \"abbb6238-233b-43b5-a7cc-a142ff3fce2a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-x755r" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.299567 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4baa5071-6d1a-4771-ab23-d68db9f231a3-cert\") pod \"ingress-canary-8s5qc\" (UID: \"4baa5071-6d1a-4771-ab23-d68db9f231a3\") " pod="openshift-ingress-canary/ingress-canary-8s5qc" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.299583 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f5ca2592-609c-489b-bbbb-51d8535f8e68-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-dnlff\" (UID: \"f5ca2592-609c-489b-bbbb-51d8535f8e68\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dnlff" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.300675 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f5ca2592-609c-489b-bbbb-51d8535f8e68-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-dnlff\" (UID: \"f5ca2592-609c-489b-bbbb-51d8535f8e68\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dnlff" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.302332 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f5ca2592-609c-489b-bbbb-51d8535f8e68-audit-policies\") pod \"apiserver-7bbb656c7d-dnlff\" (UID: \"f5ca2592-609c-489b-bbbb-51d8535f8e68\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dnlff" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.302366 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46dbd477-d07a-4732-a9e3-08e1d49385c3-config\") pod \"machine-api-operator-5694c8668f-jjjhf\" (UID: \"46dbd477-d07a-4732-a9e3-08e1d49385c3\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-jjjhf" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.302392 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/04d501b9-571e-4c12-b963-fbe770a27710-audit-dir\") pod \"oauth-openshift-558db77b4-5hcwh\" (UID: \"04d501b9-571e-4c12-b963-fbe770a27710\") " pod="openshift-authentication/oauth-openshift-558db77b4-5hcwh" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.302579 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/04d501b9-571e-4c12-b963-fbe770a27710-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-5hcwh\" (UID: \"04d501b9-571e-4c12-b963-fbe770a27710\") " pod="openshift-authentication/oauth-openshift-558db77b4-5hcwh" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.302646 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8c2597de-216f-46fb-996c-0f0c2598cacc-config\") pod \"apiserver-76f77b778f-thm8z\" (UID: \"8c2597de-216f-46fb-996c-0f0c2598cacc\") " pod="openshift-apiserver/apiserver-76f77b778f-thm8z" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.303021 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/46dbd477-d07a-4732-a9e3-08e1d49385c3-images\") pod \"machine-api-operator-5694c8668f-jjjhf\" (UID: \"46dbd477-d07a-4732-a9e3-08e1d49385c3\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jjjhf" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.303658 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ade65bd8-3f30-4239-b717-a9912ea99316-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-6lbg7\" (UID: \"ade65bd8-3f30-4239-b717-a9912ea99316\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6lbg7" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.303968 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56e39030-25e1-4a9e-97e3-d84e988ec0da-config\") pod \"route-controller-manager-6576b87f9c-mshkv\" (UID: \"56e39030-25e1-4a9e-97e3-d84e988ec0da\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mshkv" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.304189 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1095edf8-32a1-431c-9f73-e8738668d563-trusted-ca\") pod \"console-operator-58897d9998-2hl9j\" (UID: \"1095edf8-32a1-431c-9f73-e8738668d563\") " pod="openshift-console-operator/console-operator-58897d9998-2hl9j" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.304511 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6d69346-a75e-4edc-b2d8-ae6c7f612c62-config\") pod \"openshift-apiserver-operator-796bbdcf4f-s7svh\" (UID: \"e6d69346-a75e-4edc-b2d8-ae6c7f612c62\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-s7svh" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.304578 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/8c2597de-216f-46fb-996c-0f0c2598cacc-audit-dir\") pod \"apiserver-76f77b778f-thm8z\" (UID: \"8c2597de-216f-46fb-996c-0f0c2598cacc\") " pod="openshift-apiserver/apiserver-76f77b778f-thm8z" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.304664 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ade65bd8-3f30-4239-b717-a9912ea99316-config\") pod \"controller-manager-879f6c89f-6lbg7\" (UID: \"ade65bd8-3f30-4239-b717-a9912ea99316\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6lbg7" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.304895 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/978a529c-5f37-4f03-92b4-1ef8084e5917-auth-proxy-config\") pod \"machine-approver-56656f9798-qdd6q\" (UID: \"978a529c-5f37-4f03-92b4-1ef8084e5917\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qdd6q" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.304919 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/04d501b9-571e-4c12-b963-fbe770a27710-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-5hcwh\" (UID: \"04d501b9-571e-4c12-b963-fbe770a27710\") " pod="openshift-authentication/oauth-openshift-558db77b4-5hcwh" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.304925 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/04d501b9-571e-4c12-b963-fbe770a27710-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-5hcwh\" (UID: \"04d501b9-571e-4c12-b963-fbe770a27710\") " pod="openshift-authentication/oauth-openshift-558db77b4-5hcwh" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.304967 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/8c2597de-216f-46fb-996c-0f0c2598cacc-node-pullsecrets\") pod \"apiserver-76f77b778f-thm8z\" (UID: \"8c2597de-216f-46fb-996c-0f0c2598cacc\") " pod="openshift-apiserver/apiserver-76f77b778f-thm8z" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.305326 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f5ca2592-609c-489b-bbbb-51d8535f8e68-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-dnlff\" (UID: \"f5ca2592-609c-489b-bbbb-51d8535f8e68\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dnlff" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.305820 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d0b1738e-3696-4786-b978-4dee25dde9ac-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-rhck7\" (UID: \"d0b1738e-3696-4786-b978-4dee25dde9ac\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rhck7" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.305972 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bae936cb-2b16-4e6a-b2b1-bc185483cd8f-trusted-ca-bundle\") pod \"console-f9d7485db-7g42z\" (UID: \"bae936cb-2b16-4e6a-b2b1-bc185483cd8f\") " pod="openshift-console/console-f9d7485db-7g42z" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 
05:30:26.306324 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/978a529c-5f37-4f03-92b4-1ef8084e5917-config\") pod \"machine-approver-56656f9798-qdd6q\" (UID: \"978a529c-5f37-4f03-92b4-1ef8084e5917\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qdd6q" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.306423 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed3eb5df-37e1-4849-96f5-5dea9dad7ee2-config\") pod \"authentication-operator-69f744f599-v8c65\" (UID: \"ed3eb5df-37e1-4849-96f5-5dea9dad7ee2\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-v8c65" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.306941 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f5ca2592-609c-489b-bbbb-51d8535f8e68-audit-dir\") pod \"apiserver-7bbb656c7d-dnlff\" (UID: \"f5ca2592-609c-489b-bbbb-51d8535f8e68\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dnlff" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.307085 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/04d501b9-571e-4c12-b963-fbe770a27710-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-5hcwh\" (UID: \"04d501b9-571e-4c12-b963-fbe770a27710\") " pod="openshift-authentication/oauth-openshift-558db77b4-5hcwh" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.307191 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ade65bd8-3f30-4239-b717-a9912ea99316-client-ca\") pod \"controller-manager-879f6c89f-6lbg7\" (UID: \"ade65bd8-3f30-4239-b717-a9912ea99316\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6lbg7" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.307251 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/04d501b9-571e-4c12-b963-fbe770a27710-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-5hcwh\" (UID: \"04d501b9-571e-4c12-b963-fbe770a27710\") " pod="openshift-authentication/oauth-openshift-558db77b4-5hcwh" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.307493 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.307536 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1095edf8-32a1-431c-9f73-e8738668d563-config\") pod \"console-operator-58897d9998-2hl9j\" (UID: \"1095edf8-32a1-431c-9f73-e8738668d563\") " pod="openshift-console-operator/console-operator-58897d9998-2hl9j" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.307790 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/651c6106-fa9c-43ea-b9c2-68b77e1652c8-available-featuregates\") pod \"openshift-config-operator-7777fb866f-qp66v\" (UID: \"651c6106-fa9c-43ea-b9c2-68b77e1652c8\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qp66v" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.308337 4958 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/8c2597de-216f-46fb-996c-0f0c2598cacc-encryption-config\") pod \"apiserver-76f77b778f-thm8z\" (UID: \"8c2597de-216f-46fb-996c-0f0c2598cacc\") " pod="openshift-apiserver/apiserver-76f77b778f-thm8z" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.308482 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/8c2597de-216f-46fb-996c-0f0c2598cacc-etcd-client\") pod \"apiserver-76f77b778f-thm8z\" (UID: \"8c2597de-216f-46fb-996c-0f0c2598cacc\") " pod="openshift-apiserver/apiserver-76f77b778f-thm8z" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.309080 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/46dbd477-d07a-4732-a9e3-08e1d49385c3-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-jjjhf\" (UID: \"46dbd477-d07a-4732-a9e3-08e1d49385c3\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jjjhf" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.309098 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/8c2597de-216f-46fb-996c-0f0c2598cacc-audit\") pod \"apiserver-76f77b778f-thm8z\" (UID: \"8c2597de-216f-46fb-996c-0f0c2598cacc\") " pod="openshift-apiserver/apiserver-76f77b778f-thm8z" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.309174 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bae936cb-2b16-4e6a-b2b1-bc185483cd8f-oauth-serving-cert\") pod \"console-f9d7485db-7g42z\" (UID: \"bae936cb-2b16-4e6a-b2b1-bc185483cd8f\") " pod="openshift-console/console-f9d7485db-7g42z" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.309485 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/8c2597de-216f-46fb-996c-0f0c2598cacc-image-import-ca\") pod \"apiserver-76f77b778f-thm8z\" (UID: \"8c2597de-216f-46fb-996c-0f0c2598cacc\") " pod="openshift-apiserver/apiserver-76f77b778f-thm8z" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.309778 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abbb6238-233b-43b5-a7cc-a142ff3fce2a-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-x755r\" (UID: \"abbb6238-233b-43b5-a7cc-a142ff3fce2a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-x755r" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.310196 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/8c2597de-216f-46fb-996c-0f0c2598cacc-etcd-serving-ca\") pod \"apiserver-76f77b778f-thm8z\" (UID: \"8c2597de-216f-46fb-996c-0f0c2598cacc\") " pod="openshift-apiserver/apiserver-76f77b778f-thm8z" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.310200 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bae936cb-2b16-4e6a-b2b1-bc185483cd8f-console-config\") pod \"console-f9d7485db-7g42z\" (UID: \"bae936cb-2b16-4e6a-b2b1-bc185483cd8f\") " pod="openshift-console/console-f9d7485db-7g42z" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.310221 4958 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/56e39030-25e1-4a9e-97e3-d84e988ec0da-client-ca\") pod \"route-controller-manager-6576b87f9c-mshkv\" (UID: \"56e39030-25e1-4a9e-97e3-d84e988ec0da\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mshkv" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.310276 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/04d501b9-571e-4c12-b963-fbe770a27710-audit-policies\") pod \"oauth-openshift-558db77b4-5hcwh\" (UID: \"04d501b9-571e-4c12-b963-fbe770a27710\") " pod="openshift-authentication/oauth-openshift-558db77b4-5hcwh" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.310814 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed3eb5df-37e1-4849-96f5-5dea9dad7ee2-service-ca-bundle\") pod \"authentication-operator-69f744f599-v8c65\" (UID: \"ed3eb5df-37e1-4849-96f5-5dea9dad7ee2\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-v8c65" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.310839 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/d0b1738e-3696-4786-b978-4dee25dde9ac-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-rhck7\" (UID: \"d0b1738e-3696-4786-b978-4dee25dde9ac\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rhck7" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.310919 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8c2597de-216f-46fb-996c-0f0c2598cacc-trusted-ca-bundle\") pod \"apiserver-76f77b778f-thm8z\" (UID: \"8c2597de-216f-46fb-996c-0f0c2598cacc\") " pod="openshift-apiserver/apiserver-76f77b778f-thm8z" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.311055 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/d123f533-79f0-4797-acae-bb101594ea67-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-mw66x\" (UID: \"d123f533-79f0-4797-acae-bb101594ea67\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mw66x" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.311080 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed3eb5df-37e1-4849-96f5-5dea9dad7ee2-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-v8c65\" (UID: \"ed3eb5df-37e1-4849-96f5-5dea9dad7ee2\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-v8c65" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.311454 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/04d501b9-571e-4c12-b963-fbe770a27710-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-5hcwh\" (UID: \"04d501b9-571e-4c12-b963-fbe770a27710\") " pod="openshift-authentication/oauth-openshift-558db77b4-5hcwh" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.312312 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/56e39030-25e1-4a9e-97e3-d84e988ec0da-serving-cert\") pod \"route-controller-manager-6576b87f9c-mshkv\" (UID: \"56e39030-25e1-4a9e-97e3-d84e988ec0da\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mshkv" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.312417 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/abbb6238-233b-43b5-a7cc-a142ff3fce2a-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-x755r\" (UID: \"abbb6238-233b-43b5-a7cc-a142ff3fce2a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-x755r" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.312629 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ed3eb5df-37e1-4849-96f5-5dea9dad7ee2-serving-cert\") pod \"authentication-operator-69f744f599-v8c65\" (UID: \"ed3eb5df-37e1-4849-96f5-5dea9dad7ee2\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-v8c65" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.312703 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/651c6106-fa9c-43ea-b9c2-68b77e1652c8-serving-cert\") pod \"openshift-config-operator-7777fb866f-qp66v\" (UID: \"651c6106-fa9c-43ea-b9c2-68b77e1652c8\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qp66v" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.312846 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/04d501b9-571e-4c12-b963-fbe770a27710-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-5hcwh\" (UID: \"04d501b9-571e-4c12-b963-fbe770a27710\") " pod="openshift-authentication/oauth-openshift-558db77b4-5hcwh" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.312987 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ade65bd8-3f30-4239-b717-a9912ea99316-serving-cert\") pod \"controller-manager-879f6c89f-6lbg7\" (UID: \"ade65bd8-3f30-4239-b717-a9912ea99316\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6lbg7" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.313020 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bae936cb-2b16-4e6a-b2b1-bc185483cd8f-console-oauth-config\") pod \"console-f9d7485db-7g42z\" (UID: \"bae936cb-2b16-4e6a-b2b1-bc185483cd8f\") " pod="openshift-console/console-f9d7485db-7g42z" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.313459 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/04d501b9-571e-4c12-b963-fbe770a27710-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-5hcwh\" (UID: \"04d501b9-571e-4c12-b963-fbe770a27710\") " pod="openshift-authentication/oauth-openshift-558db77b4-5hcwh" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.313514 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1095edf8-32a1-431c-9f73-e8738668d563-serving-cert\") pod \"console-operator-58897d9998-2hl9j\" (UID: \"1095edf8-32a1-431c-9f73-e8738668d563\") " 
pod="openshift-console-operator/console-operator-58897d9998-2hl9j" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.313943 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/04d501b9-571e-4c12-b963-fbe770a27710-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-5hcwh\" (UID: \"04d501b9-571e-4c12-b963-fbe770a27710\") " pod="openshift-authentication/oauth-openshift-558db77b4-5hcwh" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.314224 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/04d501b9-571e-4c12-b963-fbe770a27710-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-5hcwh\" (UID: \"04d501b9-571e-4c12-b963-fbe770a27710\") " pod="openshift-authentication/oauth-openshift-558db77b4-5hcwh" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.314669 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f5ca2592-609c-489b-bbbb-51d8535f8e68-encryption-config\") pod \"apiserver-7bbb656c7d-dnlff\" (UID: \"f5ca2592-609c-489b-bbbb-51d8535f8e68\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dnlff" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.314986 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8c2597de-216f-46fb-996c-0f0c2598cacc-serving-cert\") pod \"apiserver-76f77b778f-thm8z\" (UID: \"8c2597de-216f-46fb-996c-0f0c2598cacc\") " pod="openshift-apiserver/apiserver-76f77b778f-thm8z" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.315093 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/978a529c-5f37-4f03-92b4-1ef8084e5917-machine-approver-tls\") pod \"machine-approver-56656f9798-qdd6q\" (UID: \"978a529c-5f37-4f03-92b4-1ef8084e5917\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qdd6q" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.315130 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bae936cb-2b16-4e6a-b2b1-bc185483cd8f-console-serving-cert\") pod \"console-f9d7485db-7g42z\" (UID: \"bae936cb-2b16-4e6a-b2b1-bc185483cd8f\") " pod="openshift-console/console-f9d7485db-7g42z" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.315328 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f5ca2592-609c-489b-bbbb-51d8535f8e68-serving-cert\") pod \"apiserver-7bbb656c7d-dnlff\" (UID: \"f5ca2592-609c-489b-bbbb-51d8535f8e68\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dnlff" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.315415 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f5ca2592-609c-489b-bbbb-51d8535f8e68-etcd-client\") pod \"apiserver-7bbb656c7d-dnlff\" (UID: \"f5ca2592-609c-489b-bbbb-51d8535f8e68\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dnlff" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.315428 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/04d501b9-571e-4c12-b963-fbe770a27710-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-5hcwh\" (UID: \"04d501b9-571e-4c12-b963-fbe770a27710\") " pod="openshift-authentication/oauth-openshift-558db77b4-5hcwh" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.316162 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e6d69346-a75e-4edc-b2d8-ae6c7f612c62-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-s7svh\" (UID: \"e6d69346-a75e-4edc-b2d8-ae6c7f612c62\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-s7svh" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.328714 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.361902 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.376921 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.390264 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.400741 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-znbn6\" (UniqueName: \"kubernetes.io/projected/fb5a2f1e-9a90-41bf-9a9d-7cd181140ae0-kube-api-access-znbn6\") pod \"ingress-operator-5b745b69d9-nfgl4\" (UID: \"fb5a2f1e-9a90-41bf-9a9d-7cd181140ae0\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nfgl4" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.400775 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fb5a2f1e-9a90-41bf-9a9d-7cd181140ae0-bound-sa-token\") pod \"ingress-operator-5b745b69d9-nfgl4\" (UID: \"fb5a2f1e-9a90-41bf-9a9d-7cd181140ae0\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nfgl4" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.400791 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/05fda290-e73b-468e-b494-6cd912e3cbd8-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-w8bt8\" (UID: \"05fda290-e73b-468e-b494-6cd912e3cbd8\") " pod="openshift-marketplace/marketplace-operator-79b997595-w8bt8" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.400820 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/fb5a2f1e-9a90-41bf-9a9d-7cd181140ae0-metrics-tls\") pod \"ingress-operator-5b745b69d9-nfgl4\" (UID: \"fb5a2f1e-9a90-41bf-9a9d-7cd181140ae0\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nfgl4" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.400834 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztl8b\" (UniqueName: \"kubernetes.io/projected/4baa5071-6d1a-4771-ab23-d68db9f231a3-kube-api-access-ztl8b\") pod \"ingress-canary-8s5qc\" (UID: \"4baa5071-6d1a-4771-ab23-d68db9f231a3\") " 
pod="openshift-ingress-canary/ingress-canary-8s5qc" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.400867 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4be188a0-8e11-40a8-b38e-d3d5475c982b-config\") pod \"service-ca-operator-777779d784-kbgkk\" (UID: \"4be188a0-8e11-40a8-b38e-d3d5475c982b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-kbgkk" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.400886 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b06c782b-7cfd-4d0d-9f40-590645abde2f-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-gm494\" (UID: \"b06c782b-7cfd-4d0d-9f40-590645abde2f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gm494" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.400932 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgtxd\" (UniqueName: \"kubernetes.io/projected/8edfe72a-ade0-4c45-9bff-9366b7e53c54-kube-api-access-vgtxd\") pod \"migrator-59844c95c7-f47jc\" (UID: \"8edfe72a-ade0-4c45-9bff-9366b7e53c54\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-f47jc" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.400948 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5435d984-69a2-4441-a63d-fde03d6a7081-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-dmrfv\" (UID: \"5435d984-69a2-4441-a63d-fde03d6a7081\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dmrfv" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.400964 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgcn4\" (UniqueName: \"kubernetes.io/projected/3131ee86-c122-498a-872b-eb20260b6639-kube-api-access-rgcn4\") pod \"kube-storage-version-migrator-operator-b67b599dd-mmpcs\" (UID: \"3131ee86-c122-498a-872b-eb20260b6639\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mmpcs" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.400979 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b06c782b-7cfd-4d0d-9f40-590645abde2f-config\") pod \"kube-controller-manager-operator-78b949d7b-gm494\" (UID: \"b06c782b-7cfd-4d0d-9f40-590645abde2f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gm494" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.400996 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4x9f\" (UniqueName: \"kubernetes.io/projected/e6734918-c336-4932-b571-12cab28ef213-kube-api-access-d4x9f\") pod \"machine-config-controller-84d6567774-h5znl\" (UID: \"e6734918-c336-4932-b571-12cab28ef213\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-h5znl" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.401012 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3131ee86-c122-498a-872b-eb20260b6639-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-mmpcs\" (UID: 
\"3131ee86-c122-498a-872b-eb20260b6639\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mmpcs" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.401034 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llztj\" (UniqueName: \"kubernetes.io/projected/e904c67d-5ffa-4e8f-96d6-be8c569a22db-kube-api-access-llztj\") pod \"multus-admission-controller-857f4d67dd-wndrl\" (UID: \"e904c67d-5ffa-4e8f-96d6-be8c569a22db\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-wndrl" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.401049 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kn8zz\" (UniqueName: \"kubernetes.io/projected/4be188a0-8e11-40a8-b38e-d3d5475c982b-kube-api-access-kn8zz\") pod \"service-ca-operator-777779d784-kbgkk\" (UID: \"4be188a0-8e11-40a8-b38e-d3d5475c982b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-kbgkk" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.401063 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e6734918-c336-4932-b571-12cab28ef213-proxy-tls\") pod \"machine-config-controller-84d6567774-h5znl\" (UID: \"e6734918-c336-4932-b571-12cab28ef213\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-h5znl" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.401088 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e6734918-c336-4932-b571-12cab28ef213-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-h5znl\" (UID: \"e6734918-c336-4932-b571-12cab28ef213\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-h5znl" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.401124 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e904c67d-5ffa-4e8f-96d6-be8c569a22db-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-wndrl\" (UID: \"e904c67d-5ffa-4e8f-96d6-be8c569a22db\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-wndrl" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.401145 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b06c782b-7cfd-4d0d-9f40-590645abde2f-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-gm494\" (UID: \"b06c782b-7cfd-4d0d-9f40-590645abde2f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gm494" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.401161 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4baa5071-6d1a-4771-ab23-d68db9f231a3-cert\") pod \"ingress-canary-8s5qc\" (UID: \"4baa5071-6d1a-4771-ab23-d68db9f231a3\") " pod="openshift-ingress-canary/ingress-canary-8s5qc" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.401186 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fb5a2f1e-9a90-41bf-9a9d-7cd181140ae0-trusted-ca\") pod \"ingress-operator-5b745b69d9-nfgl4\" (UID: \"fb5a2f1e-9a90-41bf-9a9d-7cd181140ae0\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nfgl4" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.401204 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5435d984-69a2-4441-a63d-fde03d6a7081-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-dmrfv\" (UID: \"5435d984-69a2-4441-a63d-fde03d6a7081\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dmrfv" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.401253 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ts577\" (UniqueName: \"kubernetes.io/projected/81369d65-ae42-43fc-a2e6-dbf61d9a86d7-kube-api-access-ts577\") pod \"control-plane-machine-set-operator-78cbb6b69f-c5v8n\" (UID: \"81369d65-ae42-43fc-a2e6-dbf61d9a86d7\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-c5v8n" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.401267 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5435d984-69a2-4441-a63d-fde03d6a7081-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-dmrfv\" (UID: \"5435d984-69a2-4441-a63d-fde03d6a7081\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dmrfv" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.401289 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3131ee86-c122-498a-872b-eb20260b6639-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-mmpcs\" (UID: \"3131ee86-c122-498a-872b-eb20260b6639\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mmpcs" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.401305 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/05fda290-e73b-468e-b494-6cd912e3cbd8-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-w8bt8\" (UID: \"05fda290-e73b-468e-b494-6cd912e3cbd8\") " pod="openshift-marketplace/marketplace-operator-79b997595-w8bt8" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.401321 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/81369d65-ae42-43fc-a2e6-dbf61d9a86d7-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-c5v8n\" (UID: \"81369d65-ae42-43fc-a2e6-dbf61d9a86d7\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-c5v8n" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.401338 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4be188a0-8e11-40a8-b38e-d3d5475c982b-serving-cert\") pod \"service-ca-operator-777779d784-kbgkk\" (UID: \"4be188a0-8e11-40a8-b38e-d3d5475c982b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-kbgkk" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.401353 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9gzqg\" (UniqueName: \"kubernetes.io/projected/05fda290-e73b-468e-b494-6cd912e3cbd8-kube-api-access-9gzqg\") pod \"marketplace-operator-79b997595-w8bt8\" 
(UID: \"05fda290-e73b-468e-b494-6cd912e3cbd8\") " pod="openshift-marketplace/marketplace-operator-79b997595-w8bt8" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.402777 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e6734918-c336-4932-b571-12cab28ef213-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-h5znl\" (UID: \"e6734918-c336-4932-b571-12cab28ef213\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-h5znl" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.405278 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e6734918-c336-4932-b571-12cab28ef213-proxy-tls\") pod \"machine-config-controller-84d6567774-h5znl\" (UID: \"e6734918-c336-4932-b571-12cab28ef213\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-h5znl" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.405518 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e904c67d-5ffa-4e8f-96d6-be8c569a22db-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-wndrl\" (UID: \"e904c67d-5ffa-4e8f-96d6-be8c569a22db\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-wndrl" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.407808 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.427794 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.447338 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.456381 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/fb5a2f1e-9a90-41bf-9a9d-7cd181140ae0-metrics-tls\") pod \"ingress-operator-5b745b69d9-nfgl4\" (UID: \"fb5a2f1e-9a90-41bf-9a9d-7cd181140ae0\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nfgl4" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.468302 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.495481 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.503223 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fb5a2f1e-9a90-41bf-9a9d-7cd181140ae0-trusted-ca\") pod \"ingress-operator-5b745b69d9-nfgl4\" (UID: \"fb5a2f1e-9a90-41bf-9a9d-7cd181140ae0\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nfgl4" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.507142 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.527288 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Dec 06 05:30:26 crc kubenswrapper[4958]: 
I1206 05:30:26.547676 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.553294 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5435d984-69a2-4441-a63d-fde03d6a7081-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-dmrfv\" (UID: \"5435d984-69a2-4441-a63d-fde03d6a7081\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dmrfv" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.566786 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.575133 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5435d984-69a2-4441-a63d-fde03d6a7081-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-dmrfv\" (UID: \"5435d984-69a2-4441-a63d-fde03d6a7081\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dmrfv" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.588313 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.607732 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.627095 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.633462 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b06c782b-7cfd-4d0d-9f40-590645abde2f-config\") pod \"kube-controller-manager-operator-78b949d7b-gm494\" (UID: \"b06c782b-7cfd-4d0d-9f40-590645abde2f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gm494" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.648123 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.656621 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b06c782b-7cfd-4d0d-9f40-590645abde2f-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-gm494\" (UID: \"b06c782b-7cfd-4d0d-9f40-590645abde2f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gm494" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.668658 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.677374 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4baa5071-6d1a-4771-ab23-d68db9f231a3-cert\") pod \"ingress-canary-8s5qc\" (UID: \"4baa5071-6d1a-4771-ab23-d68db9f231a3\") " pod="openshift-ingress-canary/ingress-canary-8s5qc" Dec 06 05:30:26 
crc kubenswrapper[4958]: I1206 05:30:26.689242 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.708091 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.727675 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.749015 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.753085 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4be188a0-8e11-40a8-b38e-d3d5475c982b-config\") pod \"service-ca-operator-777779d784-kbgkk\" (UID: \"4be188a0-8e11-40a8-b38e-d3d5475c982b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-kbgkk" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.772135 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.779099 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4be188a0-8e11-40a8-b38e-d3d5475c982b-serving-cert\") pod \"service-ca-operator-777779d784-kbgkk\" (UID: \"4be188a0-8e11-40a8-b38e-d3d5475c982b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-kbgkk" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.794007 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.808460 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.827915 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.847522 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.867660 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.887622 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.898390 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/05fda290-e73b-468e-b494-6cd912e3cbd8-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-w8bt8\" (UID: \"05fda290-e73b-468e-b494-6cd912e3cbd8\") " pod="openshift-marketplace/marketplace-operator-79b997595-w8bt8" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.909056 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.933372 4958 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.943615 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/05fda290-e73b-468e-b494-6cd912e3cbd8-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-w8bt8\" (UID: \"05fda290-e73b-468e-b494-6cd912e3cbd8\") " pod="openshift-marketplace/marketplace-operator-79b997595-w8bt8" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.947240 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.968628 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Dec 06 05:30:26 crc kubenswrapper[4958]: I1206 05:30:26.987377 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Dec 06 05:30:27 crc kubenswrapper[4958]: I1206 05:30:27.007532 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Dec 06 05:30:27 crc kubenswrapper[4958]: I1206 05:30:27.027236 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Dec 06 05:30:27 crc kubenswrapper[4958]: I1206 05:30:27.047255 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Dec 06 05:30:27 crc kubenswrapper[4958]: I1206 05:30:27.067272 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Dec 06 05:30:27 crc kubenswrapper[4958]: I1206 05:30:27.087401 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Dec 06 05:30:27 crc kubenswrapper[4958]: I1206 05:30:27.108416 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Dec 06 05:30:27 crc kubenswrapper[4958]: I1206 05:30:27.127743 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Dec 06 05:30:27 crc kubenswrapper[4958]: I1206 05:30:27.135962 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3131ee86-c122-498a-872b-eb20260b6639-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-mmpcs\" (UID: \"3131ee86-c122-498a-872b-eb20260b6639\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mmpcs" Dec 06 05:30:27 crc kubenswrapper[4958]: I1206 05:30:27.147872 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Dec 06 05:30:27 crc kubenswrapper[4958]: I1206 05:30:27.166838 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Dec 06 05:30:27 crc kubenswrapper[4958]: I1206 05:30:27.173200 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3131ee86-c122-498a-872b-eb20260b6639-config\") pod 
\"kube-storage-version-migrator-operator-b67b599dd-mmpcs\" (UID: \"3131ee86-c122-498a-872b-eb20260b6639\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mmpcs" Dec 06 05:30:27 crc kubenswrapper[4958]: I1206 05:30:27.188322 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Dec 06 05:30:27 crc kubenswrapper[4958]: I1206 05:30:27.206297 4958 request.go:700] Waited for 1.01339569s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/configmaps?fieldSelector=metadata.name%3Dmachine-config-operator-images&limit=500&resourceVersion=0 Dec 06 05:30:27 crc kubenswrapper[4958]: I1206 05:30:27.208077 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Dec 06 05:30:27 crc kubenswrapper[4958]: I1206 05:30:27.227984 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Dec 06 05:30:27 crc kubenswrapper[4958]: I1206 05:30:27.247745 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Dec 06 05:30:27 crc kubenswrapper[4958]: I1206 05:30:27.268570 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Dec 06 05:30:27 crc kubenswrapper[4958]: I1206 05:30:27.287233 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Dec 06 05:30:27 crc kubenswrapper[4958]: I1206 05:30:27.309046 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Dec 06 05:30:27 crc kubenswrapper[4958]: I1206 05:30:27.327723 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Dec 06 05:30:27 crc kubenswrapper[4958]: I1206 05:30:27.347353 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Dec 06 05:30:27 crc kubenswrapper[4958]: I1206 05:30:27.368215 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Dec 06 05:30:27 crc kubenswrapper[4958]: I1206 05:30:27.387333 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Dec 06 05:30:27 crc kubenswrapper[4958]: E1206 05:30:27.402997 4958 secret.go:188] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: failed to sync secret cache: timed out waiting for the condition Dec 06 05:30:27 crc kubenswrapper[4958]: E1206 05:30:27.403106 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/81369d65-ae42-43fc-a2e6-dbf61d9a86d7-control-plane-machine-set-operator-tls podName:81369d65-ae42-43fc-a2e6-dbf61d9a86d7 nodeName:}" failed. No retries permitted until 2025-12-06 05:30:27.903077679 +0000 UTC m=+138.436848472 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/81369d65-ae42-43fc-a2e6-dbf61d9a86d7-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-78cbb6b69f-c5v8n" (UID: "81369d65-ae42-43fc-a2e6-dbf61d9a86d7") : failed to sync secret cache: timed out waiting for the condition Dec 06 05:30:27 crc kubenswrapper[4958]: I1206 05:30:27.409074 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Dec 06 05:30:27 crc kubenswrapper[4958]: I1206 05:30:27.429242 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Dec 06 05:30:27 crc kubenswrapper[4958]: I1206 05:30:27.447756 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Dec 06 05:30:27 crc kubenswrapper[4958]: I1206 05:30:27.468929 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Dec 06 05:30:27 crc kubenswrapper[4958]: I1206 05:30:27.489274 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Dec 06 05:30:27 crc kubenswrapper[4958]: I1206 05:30:27.508324 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Dec 06 05:30:27 crc kubenswrapper[4958]: I1206 05:30:27.529019 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Dec 06 05:30:27 crc kubenswrapper[4958]: I1206 05:30:27.548157 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Dec 06 05:30:27 crc kubenswrapper[4958]: I1206 05:30:27.568016 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Dec 06 05:30:27 crc kubenswrapper[4958]: I1206 05:30:27.588567 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Dec 06 05:30:27 crc kubenswrapper[4958]: I1206 05:30:27.608224 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Dec 06 05:30:27 crc kubenswrapper[4958]: I1206 05:30:27.628440 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Dec 06 05:30:27 crc kubenswrapper[4958]: I1206 05:30:27.648025 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Dec 06 05:30:27 crc kubenswrapper[4958]: I1206 05:30:27.667783 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Dec 06 05:30:27 crc kubenswrapper[4958]: I1206 05:30:27.689961 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Dec 06 05:30:27 crc kubenswrapper[4958]: I1206 05:30:27.729812 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 06 05:30:27 crc kubenswrapper[4958]: I1206 05:30:27.748625 4958 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 06 05:30:27 crc kubenswrapper[4958]: I1206 05:30:27.768335 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Dec 06 05:30:27 crc kubenswrapper[4958]: I1206 05:30:27.788976 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Dec 06 05:30:27 crc kubenswrapper[4958]: I1206 05:30:27.808285 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Dec 06 05:30:27 crc kubenswrapper[4958]: I1206 05:30:27.827102 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Dec 06 05:30:27 crc kubenswrapper[4958]: I1206 05:30:27.847800 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Dec 06 05:30:27 crc kubenswrapper[4958]: I1206 05:30:27.868600 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Dec 06 05:30:27 crc kubenswrapper[4958]: I1206 05:30:27.887651 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Dec 06 05:30:27 crc kubenswrapper[4958]: I1206 05:30:27.907650 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Dec 06 05:30:27 crc kubenswrapper[4958]: I1206 05:30:27.924077 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/81369d65-ae42-43fc-a2e6-dbf61d9a86d7-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-c5v8n\" (UID: \"81369d65-ae42-43fc-a2e6-dbf61d9a86d7\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-c5v8n" Dec 06 05:30:27 crc kubenswrapper[4958]: I1206 05:30:27.928344 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/81369d65-ae42-43fc-a2e6-dbf61d9a86d7-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-c5v8n\" (UID: \"81369d65-ae42-43fc-a2e6-dbf61d9a86d7\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-c5v8n" Dec 06 05:30:27 crc kubenswrapper[4958]: I1206 05:30:27.928401 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Dec 06 05:30:27 crc kubenswrapper[4958]: I1206 05:30:27.949063 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Dec 06 05:30:27 crc kubenswrapper[4958]: I1206 05:30:27.968646 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Dec 06 05:30:27 crc kubenswrapper[4958]: I1206 05:30:27.989058 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Dec 06 05:30:28 crc kubenswrapper[4958]: I1206 05:30:28.008391 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Dec 06 05:30:28 crc kubenswrapper[4958]: I1206 05:30:28.028572 4958 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"hostpath-provisioner"/"openshift-service-ca.crt" Dec 06 05:30:28 crc kubenswrapper[4958]: I1206 05:30:28.048852 4958 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Dec 06 05:30:28 crc kubenswrapper[4958]: I1206 05:30:28.068301 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Dec 06 05:30:28 crc kubenswrapper[4958]: I1206 05:30:28.121213 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qm9g4\" (UniqueName: \"kubernetes.io/projected/46dbd477-d07a-4732-a9e3-08e1d49385c3-kube-api-access-qm9g4\") pod \"machine-api-operator-5694c8668f-jjjhf\" (UID: \"46dbd477-d07a-4732-a9e3-08e1d49385c3\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-jjjhf" Dec 06 05:30:28 crc kubenswrapper[4958]: I1206 05:30:28.149545 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cg9rz\" (UniqueName: \"kubernetes.io/projected/abbb6238-233b-43b5-a7cc-a142ff3fce2a-kube-api-access-cg9rz\") pod \"openshift-controller-manager-operator-756b6f6bc6-x755r\" (UID: \"abbb6238-233b-43b5-a7cc-a142ff3fce2a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-x755r" Dec 06 05:30:28 crc kubenswrapper[4958]: I1206 05:30:28.167731 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d0b1738e-3696-4786-b978-4dee25dde9ac-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-rhck7\" (UID: \"d0b1738e-3696-4786-b978-4dee25dde9ac\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rhck7" Dec 06 05:30:28 crc kubenswrapper[4958]: I1206 05:30:28.188349 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82c4f\" (UniqueName: \"kubernetes.io/projected/04d501b9-571e-4c12-b963-fbe770a27710-kube-api-access-82c4f\") pod \"oauth-openshift-558db77b4-5hcwh\" (UID: \"04d501b9-571e-4c12-b963-fbe770a27710\") " pod="openshift-authentication/oauth-openshift-558db77b4-5hcwh" Dec 06 05:30:28 crc kubenswrapper[4958]: I1206 05:30:28.202693 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkkf4\" (UniqueName: \"kubernetes.io/projected/56e39030-25e1-4a9e-97e3-d84e988ec0da-kube-api-access-xkkf4\") pod \"route-controller-manager-6576b87f9c-mshkv\" (UID: \"56e39030-25e1-4a9e-97e3-d84e988ec0da\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mshkv" Dec 06 05:30:28 crc kubenswrapper[4958]: I1206 05:30:28.206343 4958 request.go:700] Waited for 1.90386792s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/serviceaccounts/openshift-controller-manager-sa/token Dec 06 05:30:28 crc kubenswrapper[4958]: I1206 05:30:28.224944 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-jjjhf" Dec 06 05:30:28 crc kubenswrapper[4958]: I1206 05:30:28.228733 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v8phv\" (UniqueName: \"kubernetes.io/projected/ade65bd8-3f30-4239-b717-a9912ea99316-kube-api-access-v8phv\") pod \"controller-manager-879f6c89f-6lbg7\" (UID: \"ade65bd8-3f30-4239-b717-a9912ea99316\") " pod="openshift-controller-manager/controller-manager-879f6c89f-6lbg7" Dec 06 05:30:28 crc kubenswrapper[4958]: I1206 05:30:28.243691 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gcg5x\" (UniqueName: \"kubernetes.io/projected/978a529c-5f37-4f03-92b4-1ef8084e5917-kube-api-access-gcg5x\") pod \"machine-approver-56656f9798-qdd6q\" (UID: \"978a529c-5f37-4f03-92b4-1ef8084e5917\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qdd6q" Dec 06 05:30:28 crc kubenswrapper[4958]: I1206 05:30:28.260675 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mshkv" Dec 06 05:30:28 crc kubenswrapper[4958]: I1206 05:30:28.264407 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6qr8\" (UniqueName: \"kubernetes.io/projected/651c6106-fa9c-43ea-b9c2-68b77e1652c8-kube-api-access-j6qr8\") pod \"openshift-config-operator-7777fb866f-qp66v\" (UID: \"651c6106-fa9c-43ea-b9c2-68b77e1652c8\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qp66v" Dec 06 05:30:28 crc kubenswrapper[4958]: I1206 05:30:28.281189 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-5hcwh" Dec 06 05:30:28 crc kubenswrapper[4958]: I1206 05:30:28.283463 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jblqp\" (UniqueName: \"kubernetes.io/projected/bae936cb-2b16-4e6a-b2b1-bc185483cd8f-kube-api-access-jblqp\") pod \"console-f9d7485db-7g42z\" (UID: \"bae936cb-2b16-4e6a-b2b1-bc185483cd8f\") " pod="openshift-console/console-f9d7485db-7g42z" Dec 06 05:30:28 crc kubenswrapper[4958]: I1206 05:30:28.305878 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-6lbg7" Dec 06 05:30:28 crc kubenswrapper[4958]: I1206 05:30:28.307531 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46twc\" (UniqueName: \"kubernetes.io/projected/d0b1738e-3696-4786-b978-4dee25dde9ac-kube-api-access-46twc\") pod \"cluster-image-registry-operator-dc59b4c8b-rhck7\" (UID: \"d0b1738e-3696-4786-b978-4dee25dde9ac\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rhck7" Dec 06 05:30:28 crc kubenswrapper[4958]: I1206 05:30:28.325862 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8db5\" (UniqueName: \"kubernetes.io/projected/f5ca2592-609c-489b-bbbb-51d8535f8e68-kube-api-access-j8db5\") pod \"apiserver-7bbb656c7d-dnlff\" (UID: \"f5ca2592-609c-489b-bbbb-51d8535f8e68\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dnlff" Dec 06 05:30:28 crc kubenswrapper[4958]: I1206 05:30:28.343113 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dnlff" Dec 06 05:30:28 crc kubenswrapper[4958]: I1206 05:30:28.347538 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wq9jq\" (UniqueName: \"kubernetes.io/projected/1095edf8-32a1-431c-9f73-e8738668d563-kube-api-access-wq9jq\") pod \"console-operator-58897d9998-2hl9j\" (UID: \"1095edf8-32a1-431c-9f73-e8738668d563\") " pod="openshift-console-operator/console-operator-58897d9998-2hl9j" Dec 06 05:30:28 crc kubenswrapper[4958]: I1206 05:30:28.353298 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qp66v" Dec 06 05:30:28 crc kubenswrapper[4958]: I1206 05:30:28.360011 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rhck7" Dec 06 05:30:28 crc kubenswrapper[4958]: I1206 05:30:28.363405 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-x755r" Dec 06 05:30:28 crc kubenswrapper[4958]: I1206 05:30:28.366196 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k6dbn\" (UniqueName: \"kubernetes.io/projected/e6d69346-a75e-4edc-b2d8-ae6c7f612c62-kube-api-access-k6dbn\") pod \"openshift-apiserver-operator-796bbdcf4f-s7svh\" (UID: \"e6d69346-a75e-4edc-b2d8-ae6c7f612c62\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-s7svh" Dec 06 05:30:28 crc kubenswrapper[4958]: I1206 05:30:28.385401 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbb2l\" (UniqueName: \"kubernetes.io/projected/d123f533-79f0-4797-acae-bb101594ea67-kube-api-access-wbb2l\") pod \"cluster-samples-operator-665b6dd947-mw66x\" (UID: \"d123f533-79f0-4797-acae-bb101594ea67\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mw66x" Dec 06 05:30:28 crc kubenswrapper[4958]: I1206 05:30:28.404586 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9trk\" (UniqueName: \"kubernetes.io/projected/ed3eb5df-37e1-4849-96f5-5dea9dad7ee2-kube-api-access-p9trk\") pod \"authentication-operator-69f744f599-v8c65\" (UID: \"ed3eb5df-37e1-4849-96f5-5dea9dad7ee2\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-v8c65" Dec 06 05:30:28 crc kubenswrapper[4958]: I1206 05:30:28.430887 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ntzz\" (UniqueName: \"kubernetes.io/projected/8c2597de-216f-46fb-996c-0f0c2598cacc-kube-api-access-8ntzz\") pod \"apiserver-76f77b778f-thm8z\" (UID: \"8c2597de-216f-46fb-996c-0f0c2598cacc\") " pod="openshift-apiserver/apiserver-76f77b778f-thm8z" Dec 06 05:30:28 crc kubenswrapper[4958]: I1206 05:30:28.448502 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4xsm\" (UniqueName: \"kubernetes.io/projected/9ba34fce-2bfd-4b37-8b67-f2c936cc1b44-kube-api-access-v4xsm\") pod \"downloads-7954f5f757-79gtn\" (UID: \"9ba34fce-2bfd-4b37-8b67-f2c936cc1b44\") " pod="openshift-console/downloads-7954f5f757-79gtn" Dec 06 05:30:28 crc kubenswrapper[4958]: I1206 05:30:28.465518 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9gzqg\" (UniqueName: 
\"kubernetes.io/projected/05fda290-e73b-468e-b494-6cd912e3cbd8-kube-api-access-9gzqg\") pod \"marketplace-operator-79b997595-w8bt8\" (UID: \"05fda290-e73b-468e-b494-6cd912e3cbd8\") " pod="openshift-marketplace/marketplace-operator-79b997595-w8bt8" Dec 06 05:30:28 crc kubenswrapper[4958]: I1206 05:30:28.476058 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qdd6q" Dec 06 05:30:28 crc kubenswrapper[4958]: I1206 05:30:28.485602 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-jjjhf"] Dec 06 05:30:28 crc kubenswrapper[4958]: I1206 05:30:28.485708 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-znbn6\" (UniqueName: \"kubernetes.io/projected/fb5a2f1e-9a90-41bf-9a9d-7cd181140ae0-kube-api-access-znbn6\") pod \"ingress-operator-5b745b69d9-nfgl4\" (UID: \"fb5a2f1e-9a90-41bf-9a9d-7cd181140ae0\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nfgl4" Dec 06 05:30:28 crc kubenswrapper[4958]: I1206 05:30:28.499380 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-s7svh" Dec 06 05:30:28 crc kubenswrapper[4958]: I1206 05:30:28.502294 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fb5a2f1e-9a90-41bf-9a9d-7cd181140ae0-bound-sa-token\") pod \"ingress-operator-5b745b69d9-nfgl4\" (UID: \"fb5a2f1e-9a90-41bf-9a9d-7cd181140ae0\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nfgl4" Dec 06 05:30:28 crc kubenswrapper[4958]: I1206 05:30:28.508831 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-7g42z" Dec 06 05:30:28 crc kubenswrapper[4958]: I1206 05:30:28.529419 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-w8bt8" Dec 06 05:30:28 crc kubenswrapper[4958]: I1206 05:30:28.531347 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-mshkv"] Dec 06 05:30:28 crc kubenswrapper[4958]: I1206 05:30:28.531697 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztl8b\" (UniqueName: \"kubernetes.io/projected/4baa5071-6d1a-4771-ab23-d68db9f231a3-kube-api-access-ztl8b\") pod \"ingress-canary-8s5qc\" (UID: \"4baa5071-6d1a-4771-ab23-d68db9f231a3\") " pod="openshift-ingress-canary/ingress-canary-8s5qc" Dec 06 05:30:28 crc kubenswrapper[4958]: I1206 05:30:28.548388 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b06c782b-7cfd-4d0d-9f40-590645abde2f-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-gm494\" (UID: \"b06c782b-7cfd-4d0d-9f40-590645abde2f\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gm494" Dec 06 05:30:28 crc kubenswrapper[4958]: I1206 05:30:28.557383 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-5hcwh"] Dec 06 05:30:28 crc kubenswrapper[4958]: I1206 05:30:28.567062 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgtxd\" (UniqueName: \"kubernetes.io/projected/8edfe72a-ade0-4c45-9bff-9366b7e53c54-kube-api-access-vgtxd\") pod \"migrator-59844c95c7-f47jc\" (UID: \"8edfe72a-ade0-4c45-9bff-9366b7e53c54\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-f47jc" Dec 06 05:30:28 crc kubenswrapper[4958]: I1206 05:30:28.580402 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-2hl9j" Dec 06 05:30:28 crc kubenswrapper[4958]: I1206 05:30:28.586162 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5435d984-69a2-4441-a63d-fde03d6a7081-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-dmrfv\" (UID: \"5435d984-69a2-4441-a63d-fde03d6a7081\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dmrfv" Dec 06 05:30:28 crc kubenswrapper[4958]: I1206 05:30:28.590903 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mw66x" Dec 06 05:30:28 crc kubenswrapper[4958]: I1206 05:30:28.606942 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-f47jc" Dec 06 05:30:28 crc kubenswrapper[4958]: I1206 05:30:28.608688 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-6lbg7"] Dec 06 05:30:28 crc kubenswrapper[4958]: I1206 05:30:28.608862 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-thm8z" Dec 06 05:30:28 crc kubenswrapper[4958]: I1206 05:30:28.624852 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-v8c65" Dec 06 05:30:28 crc kubenswrapper[4958]: I1206 05:30:28.629512 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kn8zz\" (UniqueName: \"kubernetes.io/projected/4be188a0-8e11-40a8-b38e-d3d5475c982b-kube-api-access-kn8zz\") pod \"service-ca-operator-777779d784-kbgkk\" (UID: \"4be188a0-8e11-40a8-b38e-d3d5475c982b\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-kbgkk" Dec 06 05:30:28 crc kubenswrapper[4958]: I1206 05:30:28.648535 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4x9f\" (UniqueName: \"kubernetes.io/projected/e6734918-c336-4932-b571-12cab28ef213-kube-api-access-d4x9f\") pod \"machine-config-controller-84d6567774-h5znl\" (UID: \"e6734918-c336-4932-b571-12cab28ef213\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-h5znl" Dec 06 05:30:28 crc kubenswrapper[4958]: I1206 05:30:28.648934 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgcn4\" (UniqueName: \"kubernetes.io/projected/3131ee86-c122-498a-872b-eb20260b6639-kube-api-access-rgcn4\") pod \"kube-storage-version-migrator-operator-b67b599dd-mmpcs\" (UID: \"3131ee86-c122-498a-872b-eb20260b6639\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mmpcs" Dec 06 05:30:28 crc kubenswrapper[4958]: I1206 05:30:28.664689 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-dnlff"] Dec 06 05:30:28 crc kubenswrapper[4958]: I1206 05:30:28.676448 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-79gtn" Dec 06 05:30:28 crc kubenswrapper[4958]: I1206 05:30:28.678577 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llztj\" (UniqueName: \"kubernetes.io/projected/e904c67d-5ffa-4e8f-96d6-be8c569a22db-kube-api-access-llztj\") pod \"multus-admission-controller-857f4d67dd-wndrl\" (UID: \"e904c67d-5ffa-4e8f-96d6-be8c569a22db\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-wndrl" Dec 06 05:30:28 crc kubenswrapper[4958]: I1206 05:30:28.679864 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rhck7"] Dec 06 05:30:28 crc kubenswrapper[4958]: I1206 05:30:28.679964 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-wndrl" Dec 06 05:30:28 crc kubenswrapper[4958]: I1206 05:30:28.697231 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-h5znl" Dec 06 05:30:28 crc kubenswrapper[4958]: I1206 05:30:28.702348 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ts577\" (UniqueName: \"kubernetes.io/projected/81369d65-ae42-43fc-a2e6-dbf61d9a86d7-kube-api-access-ts577\") pod \"control-plane-machine-set-operator-78cbb6b69f-c5v8n\" (UID: \"81369d65-ae42-43fc-a2e6-dbf61d9a86d7\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-c5v8n" Dec 06 05:30:28 crc kubenswrapper[4958]: I1206 05:30:28.710899 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-x755r"] Dec 06 05:30:28 crc kubenswrapper[4958]: I1206 05:30:28.814753 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-s7svh"] Dec 06 05:30:28 crc kubenswrapper[4958]: I1206 05:30:28.834977 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-w8bt8"] Dec 06 05:30:28 crc kubenswrapper[4958]: I1206 05:30:28.863552 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-qp66v"] Dec 06 05:30:28 crc kubenswrapper[4958]: I1206 05:30:28.867957 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-7g42z"] Dec 06 05:30:28 crc kubenswrapper[4958]: I1206 05:30:28.890839 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mw66x"] Dec 06 05:30:28 crc kubenswrapper[4958]: I1206 05:30:28.895390 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-2hl9j"] Dec 06 05:30:29 crc kubenswrapper[4958]: I1206 05:30:29.494869 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qdd6q" event={"ID":"978a529c-5f37-4f03-92b4-1ef8084e5917","Type":"ContainerStarted","Data":"fbe539e21964e4d5b4d16347c4ce1829d18557e8c557eef6f688ea6cdde552cd"} Dec 06 05:30:29 crc kubenswrapper[4958]: I1206 05:30:29.497232 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-jjjhf" event={"ID":"46dbd477-d07a-4732-a9e3-08e1d49385c3","Type":"ContainerStarted","Data":"5802e44f418853434d4ac128561090a68f9dd34b13dbdc73fd36a3e8d91d19cb"} Dec 06 05:30:29 crc kubenswrapper[4958]: I1206 05:30:29.498865 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mshkv" event={"ID":"56e39030-25e1-4a9e-97e3-d84e988ec0da","Type":"ContainerStarted","Data":"2dbf2a41315ebea94715404621c956b84bcbb704e8035963742b5200de4303b5"} Dec 06 05:30:29 crc kubenswrapper[4958]: I1206 05:30:29.500618 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-6lbg7" event={"ID":"ade65bd8-3f30-4239-b717-a9912ea99316","Type":"ContainerStarted","Data":"5abbb26e1cf9e239e674772ffbb9cb64e9d80fe698e3dd896fbc52d86e5cbbd7"} Dec 06 05:30:29 crc kubenswrapper[4958]: I1206 05:30:29.502427 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-5hcwh" 
event={"ID":"04d501b9-571e-4c12-b963-fbe770a27710","Type":"ContainerStarted","Data":"f310a25fc22b53d160fa0bb9da3d9fe5c775679331ede9001f3782c550805af1"} Dec 06 05:30:29 crc kubenswrapper[4958]: I1206 05:30:29.939764 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nfgl4" Dec 06 05:30:29 crc kubenswrapper[4958]: I1206 05:30:29.941224 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mmpcs" Dec 06 05:30:29 crc kubenswrapper[4958]: I1206 05:30:29.942286 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dmrfv" Dec 06 05:30:29 crc kubenswrapper[4958]: I1206 05:30:29.943589 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-c5v8n" Dec 06 05:30:29 crc kubenswrapper[4958]: I1206 05:30:29.940274 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gm494" Dec 06 05:30:29 crc kubenswrapper[4958]: I1206 05:30:29.940580 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-kbgkk" Dec 06 05:30:29 crc kubenswrapper[4958]: I1206 05:30:29.940716 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-8s5qc" Dec 06 05:30:29 crc kubenswrapper[4958]: I1206 05:30:29.956635 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzshs\" (UID: \"a4c40d5e-9b15-4c80-9a23-5047a6dc887c\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzshs" Dec 06 05:30:29 crc kubenswrapper[4958]: I1206 05:30:29.956896 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a4c40d5e-9b15-4c80-9a23-5047a6dc887c-ca-trust-extracted\") pod \"image-registry-697d97f7c8-rzshs\" (UID: \"a4c40d5e-9b15-4c80-9a23-5047a6dc887c\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzshs" Dec 06 05:30:29 crc kubenswrapper[4958]: I1206 05:30:29.956946 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a4c40d5e-9b15-4c80-9a23-5047a6dc887c-registry-tls\") pod \"image-registry-697d97f7c8-rzshs\" (UID: \"a4c40d5e-9b15-4c80-9a23-5047a6dc887c\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzshs" Dec 06 05:30:29 crc kubenswrapper[4958]: E1206 05:30:29.958047 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 05:30:30.458029134 +0000 UTC m=+140.991799897 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzshs" (UID: "a4c40d5e-9b15-4c80-9a23-5047a6dc887c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 05:30:29 crc kubenswrapper[4958]: W1206 05:30:29.994647 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf5ca2592_609c_489b_bbbb_51d8535f8e68.slice/crio-72e1f1cd16da0d86cce4c7c1840dd594b3c676624da0726798e094c27480ce6a WatchSource:0}: Error finding container 72e1f1cd16da0d86cce4c7c1840dd594b3c676624da0726798e094c27480ce6a: Status 404 returned error can't find the container with id 72e1f1cd16da0d86cce4c7c1840dd594b3c676624da0726798e094c27480ce6a Dec 06 05:30:29 crc kubenswrapper[4958]: W1206 05:30:29.995806 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podabbb6238_233b_43b5_a7cc_a142ff3fce2a.slice/crio-1c96c04575b30dd765b5744284f2329f5218daa17c3e12f4ec3e5c2e19d6f0c0 WatchSource:0}: Error finding container 1c96c04575b30dd765b5744284f2329f5218daa17c3e12f4ec3e5c2e19d6f0c0: Status 404 returned error can't find the container with id 1c96c04575b30dd765b5744284f2329f5218daa17c3e12f4ec3e5c2e19d6f0c0 Dec 06 05:30:30 crc kubenswrapper[4958]: W1206 05:30:30.011692 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbae936cb_2b16_4e6a_b2b1_bc185483cd8f.slice/crio-d74575eef3116372cd1bc6b13beb18db2a6b7a8a77ff2dbb0a463577d20ab8d1 WatchSource:0}: Error finding container d74575eef3116372cd1bc6b13beb18db2a6b7a8a77ff2dbb0a463577d20ab8d1: Status 404 returned error can't find the container with id d74575eef3116372cd1bc6b13beb18db2a6b7a8a77ff2dbb0a463577d20ab8d1 Dec 06 05:30:30 crc kubenswrapper[4958]: W1206 05:30:30.013416 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1095edf8_32a1_431c_9f73_e8738668d563.slice/crio-9cda4e28fa961ef86c213c8182dc5defe540e1c0ffe939e050e5e38cccb54afb WatchSource:0}: Error finding container 9cda4e28fa961ef86c213c8182dc5defe540e1c0ffe939e050e5e38cccb54afb: Status 404 returned error can't find the container with id 9cda4e28fa961ef86c213c8182dc5defe540e1c0ffe939e050e5e38cccb54afb Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.058444 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 05:30:30 crc kubenswrapper[4958]: E1206 05:30:30.058579 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 05:30:30.558559298 +0000 UTC m=+141.092330061 (durationBeforeRetry 500ms). 
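
The MountDevice failure above is the notable event in this excerpt: the PVC backing image-registry-697d97f7c8-rzshs uses the kubevirt.io.hostpath-provisioner CSI driver, which has not yet registered with this kubelet; the csi-hostpathplugin-wk6vl pod that provides it is itself still being set up further down, so the operation is parked and retried. One way to inspect what a kubelet currently has registered is the node's CSINode object, which mirrors its plugin-registration list; a sketch assuming in-cluster credentials and the node name "crc" from this log:

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/rest"
    )

    func main() {
    	cfg, err := rest.InClusterConfig() // assumes running on the cluster
    	if err != nil {
    		panic(err)
    	}
    	clientset := kubernetes.NewForConfigOrDie(cfg)

    	// CSINode reflects the drivers registered through the kubelet's
    	// plugin-registration socket; "crc" is this log's node name.
    	csiNode, err := clientset.StorageV1().CSINodes().Get(
    		context.Background(), "crc", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, d := range csiNode.Spec.Drivers {
    		fmt.Println("registered CSI driver:", d.Name)
    	}
    }
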
Dec 06 05:30:30 crc kubenswrapper[4958]: E1206 05:30:30.058579 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 05:30:30.558559298 +0000 UTC m=+141.092330061 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.058805 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/fc16acb8-14a0-4b1d-ba72-9a53f2bdb622-stats-auth\") pod \"router-default-5444994796-ph6g5\" (UID: \"fc16acb8-14a0-4b1d-ba72-9a53f2bdb622\") " pod="openshift-ingress/router-default-5444994796-ph6g5"
Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.058888 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9d4b2a93-8098-465b-936e-9fb43c59a27c-proxy-tls\") pod \"machine-config-operator-74547568cd-ctv4r\" (UID: \"9d4b2a93-8098-465b-936e-9fb43c59a27c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ctv4r"
Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.058976 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e69d0e64-5612-4db6-a1b4-baec54322829-webhook-cert\") pod \"packageserver-d55dfcdfc-7l9wx\" (UID: \"e69d0e64-5612-4db6-a1b4-baec54322829\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7l9wx"
Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.059076 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9d4b2a93-8098-465b-936e-9fb43c59a27c-auth-proxy-config\") pod \"machine-config-operator-74547568cd-ctv4r\" (UID: \"9d4b2a93-8098-465b-936e-9fb43c59a27c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ctv4r"
Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.059102 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e69d0e64-5612-4db6-a1b4-baec54322829-apiservice-cert\") pod \"packageserver-d55dfcdfc-7l9wx\" (UID: \"e69d0e64-5612-4db6-a1b4-baec54322829\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7l9wx"
Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.059204 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fc16acb8-14a0-4b1d-ba72-9a53f2bdb622-metrics-certs\") pod \"router-default-5444994796-ph6g5\" (UID: \"fc16acb8-14a0-4b1d-ba72-9a53f2bdb622\") " pod="openshift-ingress/router-default-5444994796-ph6g5"
Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.059281 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/739f10d8-9484-4a8f-92d2-aa5154384ed1-etcd-ca\") pod \"etcd-operator-b45778765-56k5h\" (UID: \"739f10d8-9484-4a8f-92d2-aa5154384ed1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-56k5h"
Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.059308 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a4c40d5e-9b15-4c80-9a23-5047a6dc887c-bound-sa-token\") pod \"image-registry-697d97f7c8-rzshs\" (UID: \"a4c40d5e-9b15-4c80-9a23-5047a6dc887c\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzshs"
Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.059343 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d85fz\" (UniqueName: \"kubernetes.io/projected/739f10d8-9484-4a8f-92d2-aa5154384ed1-kube-api-access-d85fz\") pod \"etcd-operator-b45778765-56k5h\" (UID: \"739f10d8-9484-4a8f-92d2-aa5154384ed1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-56k5h"
Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.059494 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a4c40d5e-9b15-4c80-9a23-5047a6dc887c-trusted-ca\") pod \"image-registry-697d97f7c8-rzshs\" (UID: \"a4c40d5e-9b15-4c80-9a23-5047a6dc887c\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzshs"
Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.059556 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/aabd2e32-537f-4998-b642-f170ace9bb6a-metrics-tls\") pod \"dns-operator-744455d44c-5sxpr\" (UID: \"aabd2e32-537f-4998-b642-f170ace9bb6a\") " pod="openshift-dns-operator/dns-operator-744455d44c-5sxpr"
Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.059618 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fc16acb8-14a0-4b1d-ba72-9a53f2bdb622-service-ca-bundle\") pod \"router-default-5444994796-ph6g5\" (UID: \"fc16acb8-14a0-4b1d-ba72-9a53f2bdb622\") " pod="openshift-ingress/router-default-5444994796-ph6g5"
Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.059657 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/739f10d8-9484-4a8f-92d2-aa5154384ed1-etcd-client\") pod \"etcd-operator-b45778765-56k5h\" (UID: \"739f10d8-9484-4a8f-92d2-aa5154384ed1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-56k5h"
Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.059680 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/abfcf9f9-314e-4815-bf14-c514914cb314-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-9tpxq\" (UID: \"abfcf9f9-314e-4815-bf14-c514914cb314\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9tpxq"
Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.059833 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b1e02fb0-74f3-43a8-bb6a-cc59018a50c4-srv-cert\") pod \"catalog-operator-68c6474976-5z682\" (UID: \"b1e02fb0-74f3-43a8-bb6a-cc59018a50c4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5z682"
Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.059942 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/e69d0e64-5612-4db6-a1b4-baec54322829-tmpfs\") pod \"packageserver-d55dfcdfc-7l9wx\" (UID: \"e69d0e64-5612-4db6-a1b4-baec54322829\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7l9wx"
Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.060015 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjqj6\" (UniqueName: \"kubernetes.io/projected/e69d0e64-5612-4db6-a1b4-baec54322829-kube-api-access-bjqj6\") pod \"packageserver-d55dfcdfc-7l9wx\" (UID: \"e69d0e64-5612-4db6-a1b4-baec54322829\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7l9wx"
Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.060047 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dw6gj\" (UniqueName: \"kubernetes.io/projected/fc16acb8-14a0-4b1d-ba72-9a53f2bdb622-kube-api-access-dw6gj\") pod \"router-default-5444994796-ph6g5\" (UID: \"fc16acb8-14a0-4b1d-ba72-9a53f2bdb622\") " pod="openshift-ingress/router-default-5444994796-ph6g5"
Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.060114 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/739f10d8-9484-4a8f-92d2-aa5154384ed1-serving-cert\") pod \"etcd-operator-b45778765-56k5h\" (UID: \"739f10d8-9484-4a8f-92d2-aa5154384ed1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-56k5h"
Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.060363 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a4c40d5e-9b15-4c80-9a23-5047a6dc887c-installation-pull-secrets\") pod \"image-registry-697d97f7c8-rzshs\" (UID: \"a4c40d5e-9b15-4c80-9a23-5047a6dc887c\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzshs"
Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.060407 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5grc\" (UniqueName: \"kubernetes.io/projected/9d4b2a93-8098-465b-936e-9fb43c59a27c-kube-api-access-j5grc\") pod \"machine-config-operator-74547568cd-ctv4r\" (UID: \"9d4b2a93-8098-465b-936e-9fb43c59a27c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ctv4r"
Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.060425 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tk8t\" (UniqueName: \"kubernetes.io/projected/aabd2e32-537f-4998-b642-f170ace9bb6a-kube-api-access-6tk8t\") pod \"dns-operator-744455d44c-5sxpr\" (UID: \"aabd2e32-537f-4998-b642-f170ace9bb6a\") " pod="openshift-dns-operator/dns-operator-744455d44c-5sxpr"
Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.060447 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abfcf9f9-314e-4815-bf14-c514914cb314-config\") pod \"kube-apiserver-operator-766d6c64bb-9tpxq\" (UID: \"abfcf9f9-314e-4815-bf14-c514914cb314\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9tpxq"
Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.060465 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a4c40d5e-9b15-4c80-9a23-5047a6dc887c-registry-certificates\") pod \"image-registry-697d97f7c8-rzshs\" (UID: \"a4c40d5e-9b15-4c80-9a23-5047a6dc887c\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzshs"
Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.060499 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b1e02fb0-74f3-43a8-bb6a-cc59018a50c4-profile-collector-cert\") pod \"catalog-operator-68c6474976-5z682\" (UID: \"b1e02fb0-74f3-43a8-bb6a-cc59018a50c4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5z682"
Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.060536 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/739f10d8-9484-4a8f-92d2-aa5154384ed1-config\") pod \"etcd-operator-b45778765-56k5h\" (UID: \"739f10d8-9484-4a8f-92d2-aa5154384ed1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-56k5h"
Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.060552 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/fc16acb8-14a0-4b1d-ba72-9a53f2bdb622-default-certificate\") pod \"router-default-5444994796-ph6g5\" (UID: \"fc16acb8-14a0-4b1d-ba72-9a53f2bdb622\") " pod="openshift-ingress/router-default-5444994796-ph6g5"
Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.060568 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8d6j\" (UniqueName: \"kubernetes.io/projected/a4c40d5e-9b15-4c80-9a23-5047a6dc887c-kube-api-access-b8d6j\") pod \"image-registry-697d97f7c8-rzshs\" (UID: \"a4c40d5e-9b15-4c80-9a23-5047a6dc887c\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzshs"
Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.060599 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nxft\" (UniqueName: \"kubernetes.io/projected/b1e02fb0-74f3-43a8-bb6a-cc59018a50c4-kube-api-access-2nxft\") pod \"catalog-operator-68c6474976-5z682\" (UID: \"b1e02fb0-74f3-43a8-bb6a-cc59018a50c4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5z682"
Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.060639 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/abfcf9f9-314e-4815-bf14-c514914cb314-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-9tpxq\" (UID: \"abfcf9f9-314e-4815-bf14-c514914cb314\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9tpxq"
Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.062599 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a4c40d5e-9b15-4c80-9a23-5047a6dc887c-ca-trust-extracted\") pod \"image-registry-697d97f7c8-rzshs\" (UID: \"a4c40d5e-9b15-4c80-9a23-5047a6dc887c\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzshs"
\"kubernetes.io/projected/a4c40d5e-9b15-4c80-9a23-5047a6dc887c-registry-tls\") pod \"image-registry-697d97f7c8-rzshs\" (UID: \"a4c40d5e-9b15-4c80-9a23-5047a6dc887c\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzshs" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.062881 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/739f10d8-9484-4a8f-92d2-aa5154384ed1-etcd-service-ca\") pod \"etcd-operator-b45778765-56k5h\" (UID: \"739f10d8-9484-4a8f-92d2-aa5154384ed1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-56k5h" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.064378 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzshs\" (UID: \"a4c40d5e-9b15-4c80-9a23-5047a6dc887c\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzshs" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.064481 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/9d4b2a93-8098-465b-936e-9fb43c59a27c-images\") pod \"machine-config-operator-74547568cd-ctv4r\" (UID: \"9d4b2a93-8098-465b-936e-9fb43c59a27c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ctv4r" Dec 06 05:30:30 crc kubenswrapper[4958]: E1206 05:30:30.064666 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 05:30:30.564655786 +0000 UTC m=+141.098426539 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzshs" (UID: "a4c40d5e-9b15-4c80-9a23-5047a6dc887c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.066663 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a4c40d5e-9b15-4c80-9a23-5047a6dc887c-ca-trust-extracted\") pod \"image-registry-697d97f7c8-rzshs\" (UID: \"a4c40d5e-9b15-4c80-9a23-5047a6dc887c\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzshs" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.074851 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a4c40d5e-9b15-4c80-9a23-5047a6dc887c-registry-tls\") pod \"image-registry-697d97f7c8-rzshs\" (UID: \"a4c40d5e-9b15-4c80-9a23-5047a6dc887c\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzshs" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.165961 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.166164 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/fc16acb8-14a0-4b1d-ba72-9a53f2bdb622-stats-auth\") pod \"router-default-5444994796-ph6g5\" (UID: \"fc16acb8-14a0-4b1d-ba72-9a53f2bdb622\") " pod="openshift-ingress/router-default-5444994796-ph6g5" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.166229 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/2a2c1115-6ea4-4e7d-91dc-82ab9147d9c8-csi-data-dir\") pod \"csi-hostpathplugin-wk6vl\" (UID: \"2a2c1115-6ea4-4e7d-91dc-82ab9147d9c8\") " pod="hostpath-provisioner/csi-hostpathplugin-wk6vl" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.166252 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9d4b2a93-8098-465b-936e-9fb43c59a27c-proxy-tls\") pod \"machine-config-operator-74547568cd-ctv4r\" (UID: \"9d4b2a93-8098-465b-936e-9fb43c59a27c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ctv4r" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.166284 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e69d0e64-5612-4db6-a1b4-baec54322829-webhook-cert\") pod \"packageserver-d55dfcdfc-7l9wx\" (UID: \"e69d0e64-5612-4db6-a1b4-baec54322829\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7l9wx" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.166322 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: 
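
The second MountDevice failure for the same PVC, roughly 500ms after the first, is the retry loop at work: the delay starts at 500ms and grows on repeated failures until the driver finally registers. The same shape can be reproduced with apimachinery's wait package; the parameters and the "registers on attempt 3" condition are illustrative:

    package main

    import (
    	"errors"
    	"fmt"
    	"time"

    	"k8s.io/apimachinery/pkg/util/wait"
    )

    func main() {
    	attempts := 0
    	// 500ms initial delay, doubling per failure - the same shape as the
    	// "durationBeforeRetry 500ms" entries above (values illustrative).
    	backoff := wait.Backoff{Duration: 500 * time.Millisecond, Factor: 2.0, Steps: 5}
    	err := wait.ExponentialBackoff(backoff, func() (bool, error) {
    		attempts++
    		if attempts < 3 { // pretend the CSI driver registers on try 3
    			fmt.Printf("attempt %d: driver not registered, retrying\n", attempts)
    			return false, nil
    		}
    		return true, nil
    	})
    	if errors.Is(err, wait.ErrWaitTimeout) {
    		fmt.Println("gave up waiting")
    		return
    	}
    	fmt.Println("mounted after", attempts, "attempts")
    }
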
\"kubernetes.io/host-path/2a2c1115-6ea4-4e7d-91dc-82ab9147d9c8-registration-dir\") pod \"csi-hostpathplugin-wk6vl\" (UID: \"2a2c1115-6ea4-4e7d-91dc-82ab9147d9c8\") " pod="hostpath-provisioner/csi-hostpathplugin-wk6vl" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.166342 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/286d0839-8f8b-4b30-81bc-a1a6ee4738d5-profile-collector-cert\") pod \"olm-operator-6b444d44fb-w4gt4\" (UID: \"286d0839-8f8b-4b30-81bc-a1a6ee4738d5\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-w4gt4" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.166357 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/2a2c1115-6ea4-4e7d-91dc-82ab9147d9c8-mountpoint-dir\") pod \"csi-hostpathplugin-wk6vl\" (UID: \"2a2c1115-6ea4-4e7d-91dc-82ab9147d9c8\") " pod="hostpath-provisioner/csi-hostpathplugin-wk6vl" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.166374 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7699s\" (UniqueName: \"kubernetes.io/projected/687e4231-7d9c-4f35-a38d-f806e9842a0b-kube-api-access-7699s\") pod \"service-ca-9c57cc56f-fnjd6\" (UID: \"687e4231-7d9c-4f35-a38d-f806e9842a0b\") " pod="openshift-service-ca/service-ca-9c57cc56f-fnjd6" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.166388 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/75ea57ba-83e5-4bf4-a781-7f6289063163-config-volume\") pod \"dns-default-zhvgf\" (UID: \"75ea57ba-83e5-4bf4-a781-7f6289063163\") " pod="openshift-dns/dns-default-zhvgf" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.166406 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dpgd\" (UniqueName: \"kubernetes.io/projected/286d0839-8f8b-4b30-81bc-a1a6ee4738d5-kube-api-access-9dpgd\") pod \"olm-operator-6b444d44fb-w4gt4\" (UID: \"286d0839-8f8b-4b30-81bc-a1a6ee4738d5\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-w4gt4" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.166469 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/286d0839-8f8b-4b30-81bc-a1a6ee4738d5-srv-cert\") pod \"olm-operator-6b444d44fb-w4gt4\" (UID: \"286d0839-8f8b-4b30-81bc-a1a6ee4738d5\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-w4gt4" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.166513 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9d4b2a93-8098-465b-936e-9fb43c59a27c-auth-proxy-config\") pod \"machine-config-operator-74547568cd-ctv4r\" (UID: \"9d4b2a93-8098-465b-936e-9fb43c59a27c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ctv4r" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.166530 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hdc9\" (UniqueName: \"kubernetes.io/projected/2a2c1115-6ea4-4e7d-91dc-82ab9147d9c8-kube-api-access-2hdc9\") pod \"csi-hostpathplugin-wk6vl\" (UID: 
\"2a2c1115-6ea4-4e7d-91dc-82ab9147d9c8\") " pod="hostpath-provisioner/csi-hostpathplugin-wk6vl" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.166547 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e69d0e64-5612-4db6-a1b4-baec54322829-apiservice-cert\") pod \"packageserver-d55dfcdfc-7l9wx\" (UID: \"e69d0e64-5612-4db6-a1b4-baec54322829\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7l9wx" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.166593 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86gjv\" (UniqueName: \"kubernetes.io/projected/cde23f5c-5986-4797-9ec6-ff23122d36b8-kube-api-access-86gjv\") pod \"package-server-manager-789f6589d5-czz58\" (UID: \"cde23f5c-5986-4797-9ec6-ff23122d36b8\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-czz58" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.166629 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fc16acb8-14a0-4b1d-ba72-9a53f2bdb622-metrics-certs\") pod \"router-default-5444994796-ph6g5\" (UID: \"fc16acb8-14a0-4b1d-ba72-9a53f2bdb622\") " pod="openshift-ingress/router-default-5444994796-ph6g5" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.166666 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a4c40d5e-9b15-4c80-9a23-5047a6dc887c-bound-sa-token\") pod \"image-registry-697d97f7c8-rzshs\" (UID: \"a4c40d5e-9b15-4c80-9a23-5047a6dc887c\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzshs" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.166683 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/739f10d8-9484-4a8f-92d2-aa5154384ed1-etcd-ca\") pod \"etcd-operator-b45778765-56k5h\" (UID: \"739f10d8-9484-4a8f-92d2-aa5154384ed1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-56k5h" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.166709 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d85fz\" (UniqueName: \"kubernetes.io/projected/739f10d8-9484-4a8f-92d2-aa5154384ed1-kube-api-access-d85fz\") pod \"etcd-operator-b45778765-56k5h\" (UID: \"739f10d8-9484-4a8f-92d2-aa5154384ed1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-56k5h" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.166724 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/75ea57ba-83e5-4bf4-a781-7f6289063163-metrics-tls\") pod \"dns-default-zhvgf\" (UID: \"75ea57ba-83e5-4bf4-a781-7f6289063163\") " pod="openshift-dns/dns-default-zhvgf" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.166743 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27mnf\" (UniqueName: \"kubernetes.io/projected/40faedf1-f03f-4c51-8577-f11f34488d09-kube-api-access-27mnf\") pod \"collect-profiles-29416650-s7hr4\" (UID: \"40faedf1-f03f-4c51-8577-f11f34488d09\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416650-s7hr4" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.166762 4958 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/6ff2ee34-3042-4de9-ad1d-12ea4184a52f-node-bootstrap-token\") pod \"machine-config-server-zckk4\" (UID: \"6ff2ee34-3042-4de9-ad1d-12ea4184a52f\") " pod="openshift-machine-config-operator/machine-config-server-zckk4" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.166787 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sp2cj\" (UniqueName: \"kubernetes.io/projected/75ea57ba-83e5-4bf4-a781-7f6289063163-kube-api-access-sp2cj\") pod \"dns-default-zhvgf\" (UID: \"75ea57ba-83e5-4bf4-a781-7f6289063163\") " pod="openshift-dns/dns-default-zhvgf" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.166805 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a4c40d5e-9b15-4c80-9a23-5047a6dc887c-trusted-ca\") pod \"image-registry-697d97f7c8-rzshs\" (UID: \"a4c40d5e-9b15-4c80-9a23-5047a6dc887c\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzshs" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.166823 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/aabd2e32-537f-4998-b642-f170ace9bb6a-metrics-tls\") pod \"dns-operator-744455d44c-5sxpr\" (UID: \"aabd2e32-537f-4998-b642-f170ace9bb6a\") " pod="openshift-dns-operator/dns-operator-744455d44c-5sxpr" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.166840 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/40faedf1-f03f-4c51-8577-f11f34488d09-config-volume\") pod \"collect-profiles-29416650-s7hr4\" (UID: \"40faedf1-f03f-4c51-8577-f11f34488d09\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416650-s7hr4" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.166890 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fc16acb8-14a0-4b1d-ba72-9a53f2bdb622-service-ca-bundle\") pod \"router-default-5444994796-ph6g5\" (UID: \"fc16acb8-14a0-4b1d-ba72-9a53f2bdb622\") " pod="openshift-ingress/router-default-5444994796-ph6g5" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.166906 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/687e4231-7d9c-4f35-a38d-f806e9842a0b-signing-key\") pod \"service-ca-9c57cc56f-fnjd6\" (UID: \"687e4231-7d9c-4f35-a38d-f806e9842a0b\") " pod="openshift-service-ca/service-ca-9c57cc56f-fnjd6" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.166939 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/687e4231-7d9c-4f35-a38d-f806e9842a0b-signing-cabundle\") pod \"service-ca-9c57cc56f-fnjd6\" (UID: \"687e4231-7d9c-4f35-a38d-f806e9842a0b\") " pod="openshift-service-ca/service-ca-9c57cc56f-fnjd6" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.166963 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/739f10d8-9484-4a8f-92d2-aa5154384ed1-etcd-client\") pod \"etcd-operator-b45778765-56k5h\" (UID: \"739f10d8-9484-4a8f-92d2-aa5154384ed1\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-56k5h" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.166980 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/abfcf9f9-314e-4815-bf14-c514914cb314-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-9tpxq\" (UID: \"abfcf9f9-314e-4815-bf14-c514914cb314\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9tpxq" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.167023 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b1e02fb0-74f3-43a8-bb6a-cc59018a50c4-srv-cert\") pod \"catalog-operator-68c6474976-5z682\" (UID: \"b1e02fb0-74f3-43a8-bb6a-cc59018a50c4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5z682" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.167039 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/6ff2ee34-3042-4de9-ad1d-12ea4184a52f-certs\") pod \"machine-config-server-zckk4\" (UID: \"6ff2ee34-3042-4de9-ad1d-12ea4184a52f\") " pod="openshift-machine-config-operator/machine-config-server-zckk4" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.167055 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/e69d0e64-5612-4db6-a1b4-baec54322829-tmpfs\") pod \"packageserver-d55dfcdfc-7l9wx\" (UID: \"e69d0e64-5612-4db6-a1b4-baec54322829\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7l9wx" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.167101 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjqj6\" (UniqueName: \"kubernetes.io/projected/e69d0e64-5612-4db6-a1b4-baec54322829-kube-api-access-bjqj6\") pod \"packageserver-d55dfcdfc-7l9wx\" (UID: \"e69d0e64-5612-4db6-a1b4-baec54322829\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7l9wx" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.167137 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dw6gj\" (UniqueName: \"kubernetes.io/projected/fc16acb8-14a0-4b1d-ba72-9a53f2bdb622-kube-api-access-dw6gj\") pod \"router-default-5444994796-ph6g5\" (UID: \"fc16acb8-14a0-4b1d-ba72-9a53f2bdb622\") " pod="openshift-ingress/router-default-5444994796-ph6g5" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.167154 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/739f10d8-9484-4a8f-92d2-aa5154384ed1-serving-cert\") pod \"etcd-operator-b45778765-56k5h\" (UID: \"739f10d8-9484-4a8f-92d2-aa5154384ed1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-56k5h" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.167171 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a4c40d5e-9b15-4c80-9a23-5047a6dc887c-installation-pull-secrets\") pod \"image-registry-697d97f7c8-rzshs\" (UID: \"a4c40d5e-9b15-4c80-9a23-5047a6dc887c\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzshs" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.167187 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-j5grc\" (UniqueName: \"kubernetes.io/projected/9d4b2a93-8098-465b-936e-9fb43c59a27c-kube-api-access-j5grc\") pod \"machine-config-operator-74547568cd-ctv4r\" (UID: \"9d4b2a93-8098-465b-936e-9fb43c59a27c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ctv4r" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.167202 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tk8t\" (UniqueName: \"kubernetes.io/projected/aabd2e32-537f-4998-b642-f170ace9bb6a-kube-api-access-6tk8t\") pod \"dns-operator-744455d44c-5sxpr\" (UID: \"aabd2e32-537f-4998-b642-f170ace9bb6a\") " pod="openshift-dns-operator/dns-operator-744455d44c-5sxpr" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.167218 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abfcf9f9-314e-4815-bf14-c514914cb314-config\") pod \"kube-apiserver-operator-766d6c64bb-9tpxq\" (UID: \"abfcf9f9-314e-4815-bf14-c514914cb314\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9tpxq" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.167232 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/2a2c1115-6ea4-4e7d-91dc-82ab9147d9c8-socket-dir\") pod \"csi-hostpathplugin-wk6vl\" (UID: \"2a2c1115-6ea4-4e7d-91dc-82ab9147d9c8\") " pod="hostpath-provisioner/csi-hostpathplugin-wk6vl" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.167250 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a4c40d5e-9b15-4c80-9a23-5047a6dc887c-registry-certificates\") pod \"image-registry-697d97f7c8-rzshs\" (UID: \"a4c40d5e-9b15-4c80-9a23-5047a6dc887c\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzshs" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.167266 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b1e02fb0-74f3-43a8-bb6a-cc59018a50c4-profile-collector-cert\") pod \"catalog-operator-68c6474976-5z682\" (UID: \"b1e02fb0-74f3-43a8-bb6a-cc59018a50c4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5z682" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.167312 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/cde23f5c-5986-4797-9ec6-ff23122d36b8-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-czz58\" (UID: \"cde23f5c-5986-4797-9ec6-ff23122d36b8\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-czz58" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.167333 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/739f10d8-9484-4a8f-92d2-aa5154384ed1-config\") pod \"etcd-operator-b45778765-56k5h\" (UID: \"739f10d8-9484-4a8f-92d2-aa5154384ed1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-56k5h" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.167347 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: 
\"kubernetes.io/secret/fc16acb8-14a0-4b1d-ba72-9a53f2bdb622-default-certificate\") pod \"router-default-5444994796-ph6g5\" (UID: \"fc16acb8-14a0-4b1d-ba72-9a53f2bdb622\") " pod="openshift-ingress/router-default-5444994796-ph6g5" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.167363 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8d6j\" (UniqueName: \"kubernetes.io/projected/a4c40d5e-9b15-4c80-9a23-5047a6dc887c-kube-api-access-b8d6j\") pod \"image-registry-697d97f7c8-rzshs\" (UID: \"a4c40d5e-9b15-4c80-9a23-5047a6dc887c\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzshs" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.167389 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2nxft\" (UniqueName: \"kubernetes.io/projected/b1e02fb0-74f3-43a8-bb6a-cc59018a50c4-kube-api-access-2nxft\") pod \"catalog-operator-68c6474976-5z682\" (UID: \"b1e02fb0-74f3-43a8-bb6a-cc59018a50c4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5z682" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.167408 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/abfcf9f9-314e-4815-bf14-c514914cb314-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-9tpxq\" (UID: \"abfcf9f9-314e-4815-bf14-c514914cb314\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9tpxq" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.167428 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/739f10d8-9484-4a8f-92d2-aa5154384ed1-etcd-service-ca\") pod \"etcd-operator-b45778765-56k5h\" (UID: \"739f10d8-9484-4a8f-92d2-aa5154384ed1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-56k5h" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.167458 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/9d4b2a93-8098-465b-936e-9fb43c59a27c-images\") pod \"machine-config-operator-74547568cd-ctv4r\" (UID: \"9d4b2a93-8098-465b-936e-9fb43c59a27c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ctv4r" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.167531 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/40faedf1-f03f-4c51-8577-f11f34488d09-secret-volume\") pod \"collect-profiles-29416650-s7hr4\" (UID: \"40faedf1-f03f-4c51-8577-f11f34488d09\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416650-s7hr4" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.167547 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/2a2c1115-6ea4-4e7d-91dc-82ab9147d9c8-plugins-dir\") pod \"csi-hostpathplugin-wk6vl\" (UID: \"2a2c1115-6ea4-4e7d-91dc-82ab9147d9c8\") " pod="hostpath-provisioner/csi-hostpathplugin-wk6vl" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.167564 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qdjx\" (UniqueName: \"kubernetes.io/projected/6ff2ee34-3042-4de9-ad1d-12ea4184a52f-kube-api-access-8qdjx\") pod \"machine-config-server-zckk4\" (UID: 
\"6ff2ee34-3042-4de9-ad1d-12ea4184a52f\") " pod="openshift-machine-config-operator/machine-config-server-zckk4" Dec 06 05:30:30 crc kubenswrapper[4958]: E1206 05:30:30.167754 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 05:30:30.667737627 +0000 UTC m=+141.201508390 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.173417 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a4c40d5e-9b15-4c80-9a23-5047a6dc887c-trusted-ca\") pod \"image-registry-697d97f7c8-rzshs\" (UID: \"a4c40d5e-9b15-4c80-9a23-5047a6dc887c\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzshs" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.174451 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/e69d0e64-5612-4db6-a1b4-baec54322829-tmpfs\") pod \"packageserver-d55dfcdfc-7l9wx\" (UID: \"e69d0e64-5612-4db6-a1b4-baec54322829\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7l9wx" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.174692 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/fc16acb8-14a0-4b1d-ba72-9a53f2bdb622-stats-auth\") pod \"router-default-5444994796-ph6g5\" (UID: \"fc16acb8-14a0-4b1d-ba72-9a53f2bdb622\") " pod="openshift-ingress/router-default-5444994796-ph6g5" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.175085 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/9d4b2a93-8098-465b-936e-9fb43c59a27c-images\") pod \"machine-config-operator-74547568cd-ctv4r\" (UID: \"9d4b2a93-8098-465b-936e-9fb43c59a27c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ctv4r" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.175671 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/9d4b2a93-8098-465b-936e-9fb43c59a27c-auth-proxy-config\") pod \"machine-config-operator-74547568cd-ctv4r\" (UID: \"9d4b2a93-8098-465b-936e-9fb43c59a27c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ctv4r" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.177087 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b1e02fb0-74f3-43a8-bb6a-cc59018a50c4-srv-cert\") pod \"catalog-operator-68c6474976-5z682\" (UID: \"b1e02fb0-74f3-43a8-bb6a-cc59018a50c4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5z682" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.177134 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/fc16acb8-14a0-4b1d-ba72-9a53f2bdb622-service-ca-bundle\") pod \"router-default-5444994796-ph6g5\" (UID: \"fc16acb8-14a0-4b1d-ba72-9a53f2bdb622\") " pod="openshift-ingress/router-default-5444994796-ph6g5" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.178328 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b1e02fb0-74f3-43a8-bb6a-cc59018a50c4-profile-collector-cert\") pod \"catalog-operator-68c6474976-5z682\" (UID: \"b1e02fb0-74f3-43a8-bb6a-cc59018a50c4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5z682" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.178630 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/fc16acb8-14a0-4b1d-ba72-9a53f2bdb622-metrics-certs\") pod \"router-default-5444994796-ph6g5\" (UID: \"fc16acb8-14a0-4b1d-ba72-9a53f2bdb622\") " pod="openshift-ingress/router-default-5444994796-ph6g5" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.179465 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/aabd2e32-537f-4998-b642-f170ace9bb6a-metrics-tls\") pod \"dns-operator-744455d44c-5sxpr\" (UID: \"aabd2e32-537f-4998-b642-f170ace9bb6a\") " pod="openshift-dns-operator/dns-operator-744455d44c-5sxpr" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.179573 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/9d4b2a93-8098-465b-936e-9fb43c59a27c-proxy-tls\") pod \"machine-config-operator-74547568cd-ctv4r\" (UID: \"9d4b2a93-8098-465b-936e-9fb43c59a27c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ctv4r" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.179803 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/739f10d8-9484-4a8f-92d2-aa5154384ed1-serving-cert\") pod \"etcd-operator-b45778765-56k5h\" (UID: \"739f10d8-9484-4a8f-92d2-aa5154384ed1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-56k5h" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.181431 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/abfcf9f9-314e-4815-bf14-c514914cb314-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-9tpxq\" (UID: \"abfcf9f9-314e-4815-bf14-c514914cb314\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9tpxq" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.181604 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/fc16acb8-14a0-4b1d-ba72-9a53f2bdb622-default-certificate\") pod \"router-default-5444994796-ph6g5\" (UID: \"fc16acb8-14a0-4b1d-ba72-9a53f2bdb622\") " pod="openshift-ingress/router-default-5444994796-ph6g5" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.181763 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/739f10d8-9484-4a8f-92d2-aa5154384ed1-etcd-client\") pod \"etcd-operator-b45778765-56k5h\" (UID: \"739f10d8-9484-4a8f-92d2-aa5154384ed1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-56k5h" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.183991 4958 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-j5grc\" (UniqueName: \"kubernetes.io/projected/9d4b2a93-8098-465b-936e-9fb43c59a27c-kube-api-access-j5grc\") pod \"machine-config-operator-74547568cd-ctv4r\" (UID: \"9d4b2a93-8098-465b-936e-9fb43c59a27c\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ctv4r" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.185285 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tk8t\" (UniqueName: \"kubernetes.io/projected/aabd2e32-537f-4998-b642-f170ace9bb6a-kube-api-access-6tk8t\") pod \"dns-operator-744455d44c-5sxpr\" (UID: \"aabd2e32-537f-4998-b642-f170ace9bb6a\") " pod="openshift-dns-operator/dns-operator-744455d44c-5sxpr" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.224234 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-v8c65"] Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.227408 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nxft\" (UniqueName: \"kubernetes.io/projected/b1e02fb0-74f3-43a8-bb6a-cc59018a50c4-kube-api-access-2nxft\") pod \"catalog-operator-68c6474976-5z682\" (UID: \"b1e02fb0-74f3-43a8-bb6a-cc59018a50c4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5z682" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.245906 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/abfcf9f9-314e-4815-bf14-c514914cb314-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-9tpxq\" (UID: \"abfcf9f9-314e-4815-bf14-c514914cb314\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9tpxq" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.261924 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjqj6\" (UniqueName: \"kubernetes.io/projected/e69d0e64-5612-4db6-a1b4-baec54322829-kube-api-access-bjqj6\") pod \"packageserver-d55dfcdfc-7l9wx\" (UID: \"e69d0e64-5612-4db6-a1b4-baec54322829\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7l9wx" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.269443 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/cde23f5c-5986-4797-9ec6-ff23122d36b8-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-czz58\" (UID: \"cde23f5c-5986-4797-9ec6-ff23122d36b8\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-czz58" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.269530 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzshs\" (UID: \"a4c40d5e-9b15-4c80-9a23-5047a6dc887c\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzshs" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.269555 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/2a2c1115-6ea4-4e7d-91dc-82ab9147d9c8-plugins-dir\") pod \"csi-hostpathplugin-wk6vl\" (UID: \"2a2c1115-6ea4-4e7d-91dc-82ab9147d9c8\") " 
pod="hostpath-provisioner/csi-hostpathplugin-wk6vl" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.269572 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/40faedf1-f03f-4c51-8577-f11f34488d09-secret-volume\") pod \"collect-profiles-29416650-s7hr4\" (UID: \"40faedf1-f03f-4c51-8577-f11f34488d09\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416650-s7hr4" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.269588 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qdjx\" (UniqueName: \"kubernetes.io/projected/6ff2ee34-3042-4de9-ad1d-12ea4184a52f-kube-api-access-8qdjx\") pod \"machine-config-server-zckk4\" (UID: \"6ff2ee34-3042-4de9-ad1d-12ea4184a52f\") " pod="openshift-machine-config-operator/machine-config-server-zckk4" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.269607 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/2a2c1115-6ea4-4e7d-91dc-82ab9147d9c8-csi-data-dir\") pod \"csi-hostpathplugin-wk6vl\" (UID: \"2a2c1115-6ea4-4e7d-91dc-82ab9147d9c8\") " pod="hostpath-provisioner/csi-hostpathplugin-wk6vl" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.269629 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/2a2c1115-6ea4-4e7d-91dc-82ab9147d9c8-registration-dir\") pod \"csi-hostpathplugin-wk6vl\" (UID: \"2a2c1115-6ea4-4e7d-91dc-82ab9147d9c8\") " pod="hostpath-provisioner/csi-hostpathplugin-wk6vl" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.269651 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/286d0839-8f8b-4b30-81bc-a1a6ee4738d5-profile-collector-cert\") pod \"olm-operator-6b444d44fb-w4gt4\" (UID: \"286d0839-8f8b-4b30-81bc-a1a6ee4738d5\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-w4gt4" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.269668 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/2a2c1115-6ea4-4e7d-91dc-82ab9147d9c8-mountpoint-dir\") pod \"csi-hostpathplugin-wk6vl\" (UID: \"2a2c1115-6ea4-4e7d-91dc-82ab9147d9c8\") " pod="hostpath-provisioner/csi-hostpathplugin-wk6vl" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.269690 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7699s\" (UniqueName: \"kubernetes.io/projected/687e4231-7d9c-4f35-a38d-f806e9842a0b-kube-api-access-7699s\") pod \"service-ca-9c57cc56f-fnjd6\" (UID: \"687e4231-7d9c-4f35-a38d-f806e9842a0b\") " pod="openshift-service-ca/service-ca-9c57cc56f-fnjd6" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.269708 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/75ea57ba-83e5-4bf4-a781-7f6289063163-config-volume\") pod \"dns-default-zhvgf\" (UID: \"75ea57ba-83e5-4bf4-a781-7f6289063163\") " pod="openshift-dns/dns-default-zhvgf" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.269726 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dpgd\" (UniqueName: \"kubernetes.io/projected/286d0839-8f8b-4b30-81bc-a1a6ee4738d5-kube-api-access-9dpgd\") pod 
\"olm-operator-6b444d44fb-w4gt4\" (UID: \"286d0839-8f8b-4b30-81bc-a1a6ee4738d5\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-w4gt4" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.269755 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/286d0839-8f8b-4b30-81bc-a1a6ee4738d5-srv-cert\") pod \"olm-operator-6b444d44fb-w4gt4\" (UID: \"286d0839-8f8b-4b30-81bc-a1a6ee4738d5\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-w4gt4" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.269777 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hdc9\" (UniqueName: \"kubernetes.io/projected/2a2c1115-6ea4-4e7d-91dc-82ab9147d9c8-kube-api-access-2hdc9\") pod \"csi-hostpathplugin-wk6vl\" (UID: \"2a2c1115-6ea4-4e7d-91dc-82ab9147d9c8\") " pod="hostpath-provisioner/csi-hostpathplugin-wk6vl" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.269801 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-86gjv\" (UniqueName: \"kubernetes.io/projected/cde23f5c-5986-4797-9ec6-ff23122d36b8-kube-api-access-86gjv\") pod \"package-server-manager-789f6589d5-czz58\" (UID: \"cde23f5c-5986-4797-9ec6-ff23122d36b8\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-czz58" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.269846 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/75ea57ba-83e5-4bf4-a781-7f6289063163-metrics-tls\") pod \"dns-default-zhvgf\" (UID: \"75ea57ba-83e5-4bf4-a781-7f6289063163\") " pod="openshift-dns/dns-default-zhvgf" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.269866 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27mnf\" (UniqueName: \"kubernetes.io/projected/40faedf1-f03f-4c51-8577-f11f34488d09-kube-api-access-27mnf\") pod \"collect-profiles-29416650-s7hr4\" (UID: \"40faedf1-f03f-4c51-8577-f11f34488d09\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416650-s7hr4" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.269887 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/6ff2ee34-3042-4de9-ad1d-12ea4184a52f-node-bootstrap-token\") pod \"machine-config-server-zckk4\" (UID: \"6ff2ee34-3042-4de9-ad1d-12ea4184a52f\") " pod="openshift-machine-config-operator/machine-config-server-zckk4" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.269904 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sp2cj\" (UniqueName: \"kubernetes.io/projected/75ea57ba-83e5-4bf4-a781-7f6289063163-kube-api-access-sp2cj\") pod \"dns-default-zhvgf\" (UID: \"75ea57ba-83e5-4bf4-a781-7f6289063163\") " pod="openshift-dns/dns-default-zhvgf" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.270357 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/40faedf1-f03f-4c51-8577-f11f34488d09-config-volume\") pod \"collect-profiles-29416650-s7hr4\" (UID: \"40faedf1-f03f-4c51-8577-f11f34488d09\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416650-s7hr4" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.270384 4958 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/687e4231-7d9c-4f35-a38d-f806e9842a0b-signing-key\") pod \"service-ca-9c57cc56f-fnjd6\" (UID: \"687e4231-7d9c-4f35-a38d-f806e9842a0b\") " pod="openshift-service-ca/service-ca-9c57cc56f-fnjd6" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.270400 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/687e4231-7d9c-4f35-a38d-f806e9842a0b-signing-cabundle\") pod \"service-ca-9c57cc56f-fnjd6\" (UID: \"687e4231-7d9c-4f35-a38d-f806e9842a0b\") " pod="openshift-service-ca/service-ca-9c57cc56f-fnjd6" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.270432 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/6ff2ee34-3042-4de9-ad1d-12ea4184a52f-certs\") pod \"machine-config-server-zckk4\" (UID: \"6ff2ee34-3042-4de9-ad1d-12ea4184a52f\") " pod="openshift-machine-config-operator/machine-config-server-zckk4" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.270611 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/2a2c1115-6ea4-4e7d-91dc-82ab9147d9c8-socket-dir\") pod \"csi-hostpathplugin-wk6vl\" (UID: \"2a2c1115-6ea4-4e7d-91dc-82ab9147d9c8\") " pod="hostpath-provisioner/csi-hostpathplugin-wk6vl" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.270609 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/2a2c1115-6ea4-4e7d-91dc-82ab9147d9c8-mountpoint-dir\") pod \"csi-hostpathplugin-wk6vl\" (UID: \"2a2c1115-6ea4-4e7d-91dc-82ab9147d9c8\") " pod="hostpath-provisioner/csi-hostpathplugin-wk6vl" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.270712 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/2a2c1115-6ea4-4e7d-91dc-82ab9147d9c8-csi-data-dir\") pod \"csi-hostpathplugin-wk6vl\" (UID: \"2a2c1115-6ea4-4e7d-91dc-82ab9147d9c8\") " pod="hostpath-provisioner/csi-hostpathplugin-wk6vl" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.271141 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/75ea57ba-83e5-4bf4-a781-7f6289063163-config-volume\") pod \"dns-default-zhvgf\" (UID: \"75ea57ba-83e5-4bf4-a781-7f6289063163\") " pod="openshift-dns/dns-default-zhvgf" Dec 06 05:30:30 crc kubenswrapper[4958]: E1206 05:30:30.271492 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 05:30:30.771444994 +0000 UTC m=+141.305215747 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzshs" (UID: "a4c40d5e-9b15-4c80-9a23-5047a6dc887c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.271634 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/40faedf1-f03f-4c51-8577-f11f34488d09-config-volume\") pod \"collect-profiles-29416650-s7hr4\" (UID: \"40faedf1-f03f-4c51-8577-f11f34488d09\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416650-s7hr4" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.272149 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/2a2c1115-6ea4-4e7d-91dc-82ab9147d9c8-registration-dir\") pod \"csi-hostpathplugin-wk6vl\" (UID: \"2a2c1115-6ea4-4e7d-91dc-82ab9147d9c8\") " pod="hostpath-provisioner/csi-hostpathplugin-wk6vl" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.272165 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/2a2c1115-6ea4-4e7d-91dc-82ab9147d9c8-socket-dir\") pod \"csi-hostpathplugin-wk6vl\" (UID: \"2a2c1115-6ea4-4e7d-91dc-82ab9147d9c8\") " pod="hostpath-provisioner/csi-hostpathplugin-wk6vl" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.272213 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/2a2c1115-6ea4-4e7d-91dc-82ab9147d9c8-plugins-dir\") pod \"csi-hostpathplugin-wk6vl\" (UID: \"2a2c1115-6ea4-4e7d-91dc-82ab9147d9c8\") " pod="hostpath-provisioner/csi-hostpathplugin-wk6vl" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.272414 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/687e4231-7d9c-4f35-a38d-f806e9842a0b-signing-cabundle\") pod \"service-ca-9c57cc56f-fnjd6\" (UID: \"687e4231-7d9c-4f35-a38d-f806e9842a0b\") " pod="openshift-service-ca/service-ca-9c57cc56f-fnjd6" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.275253 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/286d0839-8f8b-4b30-81bc-a1a6ee4738d5-srv-cert\") pod \"olm-operator-6b444d44fb-w4gt4\" (UID: \"286d0839-8f8b-4b30-81bc-a1a6ee4738d5\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-w4gt4" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.275711 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/40faedf1-f03f-4c51-8577-f11f34488d09-secret-volume\") pod \"collect-profiles-29416650-s7hr4\" (UID: \"40faedf1-f03f-4c51-8577-f11f34488d09\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416650-s7hr4" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.277137 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/cde23f5c-5986-4797-9ec6-ff23122d36b8-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-czz58\" (UID: 
\"cde23f5c-5986-4797-9ec6-ff23122d36b8\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-czz58" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.277904 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/6ff2ee34-3042-4de9-ad1d-12ea4184a52f-certs\") pod \"machine-config-server-zckk4\" (UID: \"6ff2ee34-3042-4de9-ad1d-12ea4184a52f\") " pod="openshift-machine-config-operator/machine-config-server-zckk4" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.278113 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/6ff2ee34-3042-4de9-ad1d-12ea4184a52f-node-bootstrap-token\") pod \"machine-config-server-zckk4\" (UID: \"6ff2ee34-3042-4de9-ad1d-12ea4184a52f\") " pod="openshift-machine-config-operator/machine-config-server-zckk4" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.278736 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/286d0839-8f8b-4b30-81bc-a1a6ee4738d5-profile-collector-cert\") pod \"olm-operator-6b444d44fb-w4gt4\" (UID: \"286d0839-8f8b-4b30-81bc-a1a6ee4738d5\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-w4gt4" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.279227 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/687e4231-7d9c-4f35-a38d-f806e9842a0b-signing-key\") pod \"service-ca-9c57cc56f-fnjd6\" (UID: \"687e4231-7d9c-4f35-a38d-f806e9842a0b\") " pod="openshift-service-ca/service-ca-9c57cc56f-fnjd6" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.292433 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-thm8z"] Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.302508 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a4c40d5e-9b15-4c80-9a23-5047a6dc887c-bound-sa-token\") pod \"image-registry-697d97f7c8-rzshs\" (UID: \"a4c40d5e-9b15-4c80-9a23-5047a6dc887c\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzshs" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.313944 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/739f10d8-9484-4a8f-92d2-aa5154384ed1-config\") pod \"etcd-operator-b45778765-56k5h\" (UID: \"739f10d8-9484-4a8f-92d2-aa5154384ed1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-56k5h" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.314075 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a4c40d5e-9b15-4c80-9a23-5047a6dc887c-registry-certificates\") pod \"image-registry-697d97f7c8-rzshs\" (UID: \"a4c40d5e-9b15-4c80-9a23-5047a6dc887c\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzshs" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.317209 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e69d0e64-5612-4db6-a1b4-baec54322829-webhook-cert\") pod \"packageserver-d55dfcdfc-7l9wx\" (UID: \"e69d0e64-5612-4db6-a1b4-baec54322829\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7l9wx" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 
05:30:30.319081 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dw6gj\" (UniqueName: \"kubernetes.io/projected/fc16acb8-14a0-4b1d-ba72-9a53f2bdb622-kube-api-access-dw6gj\") pod \"router-default-5444994796-ph6g5\" (UID: \"fc16acb8-14a0-4b1d-ba72-9a53f2bdb622\") " pod="openshift-ingress/router-default-5444994796-ph6g5" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.322446 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a4c40d5e-9b15-4c80-9a23-5047a6dc887c-installation-pull-secrets\") pod \"image-registry-697d97f7c8-rzshs\" (UID: \"a4c40d5e-9b15-4c80-9a23-5047a6dc887c\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzshs" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.324884 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/739f10d8-9484-4a8f-92d2-aa5154384ed1-etcd-ca\") pod \"etcd-operator-b45778765-56k5h\" (UID: \"739f10d8-9484-4a8f-92d2-aa5154384ed1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-56k5h" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.325229 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/739f10d8-9484-4a8f-92d2-aa5154384ed1-etcd-service-ca\") pod \"etcd-operator-b45778765-56k5h\" (UID: \"739f10d8-9484-4a8f-92d2-aa5154384ed1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-56k5h" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.325835 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/abfcf9f9-314e-4815-bf14-c514914cb314-config\") pod \"kube-apiserver-operator-766d6c64bb-9tpxq\" (UID: \"abfcf9f9-314e-4815-bf14-c514914cb314\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9tpxq" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.327573 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/75ea57ba-83e5-4bf4-a781-7f6289063163-metrics-tls\") pod \"dns-default-zhvgf\" (UID: \"75ea57ba-83e5-4bf4-a781-7f6289063163\") " pod="openshift-dns/dns-default-zhvgf" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.335251 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e69d0e64-5612-4db6-a1b4-baec54322829-apiservice-cert\") pod \"packageserver-d55dfcdfc-7l9wx\" (UID: \"e69d0e64-5612-4db6-a1b4-baec54322829\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7l9wx" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.339716 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8d6j\" (UniqueName: \"kubernetes.io/projected/a4c40d5e-9b15-4c80-9a23-5047a6dc887c-kube-api-access-b8d6j\") pod \"image-registry-697d97f7c8-rzshs\" (UID: \"a4c40d5e-9b15-4c80-9a23-5047a6dc887c\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzshs" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.340886 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-nfgl4"] Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.344192 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d85fz\" (UniqueName: 
\"kubernetes.io/projected/739f10d8-9484-4a8f-92d2-aa5154384ed1-kube-api-access-d85fz\") pod \"etcd-operator-b45778765-56k5h\" (UID: \"739f10d8-9484-4a8f-92d2-aa5154384ed1\") " pod="openshift-etcd-operator/etcd-operator-b45778765-56k5h" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.355876 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ctv4r" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.356091 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-f47jc"] Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.371784 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 05:30:30 crc kubenswrapper[4958]: E1206 05:30:30.376567 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 05:30:30.8728295 +0000 UTC m=+141.406600313 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.388876 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-86gjv\" (UniqueName: \"kubernetes.io/projected/cde23f5c-5986-4797-9ec6-ff23122d36b8-kube-api-access-86gjv\") pod \"package-server-manager-789f6589d5-czz58\" (UID: \"cde23f5c-5986-4797-9ec6-ff23122d36b8\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-czz58" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.391886 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-ph6g5" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.403060 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7699s\" (UniqueName: \"kubernetes.io/projected/687e4231-7d9c-4f35-a38d-f806e9842a0b-kube-api-access-7699s\") pod \"service-ca-9c57cc56f-fnjd6\" (UID: \"687e4231-7d9c-4f35-a38d-f806e9842a0b\") " pod="openshift-service-ca/service-ca-9c57cc56f-fnjd6" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.413318 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5z682" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.414329 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-5sxpr" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.420838 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9tpxq" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.430835 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27mnf\" (UniqueName: \"kubernetes.io/projected/40faedf1-f03f-4c51-8577-f11f34488d09-kube-api-access-27mnf\") pod \"collect-profiles-29416650-s7hr4\" (UID: \"40faedf1-f03f-4c51-8577-f11f34488d09\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416650-s7hr4" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.438681 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416650-s7hr4" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.450079 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dpgd\" (UniqueName: \"kubernetes.io/projected/286d0839-8f8b-4b30-81bc-a1a6ee4738d5-kube-api-access-9dpgd\") pod \"olm-operator-6b444d44fb-w4gt4\" (UID: \"286d0839-8f8b-4b30-81bc-a1a6ee4738d5\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-w4gt4" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.462178 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-fnjd6" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.472043 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hdc9\" (UniqueName: \"kubernetes.io/projected/2a2c1115-6ea4-4e7d-91dc-82ab9147d9c8-kube-api-access-2hdc9\") pod \"csi-hostpathplugin-wk6vl\" (UID: \"2a2c1115-6ea4-4e7d-91dc-82ab9147d9c8\") " pod="hostpath-provisioner/csi-hostpathplugin-wk6vl" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.472788 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzshs\" (UID: \"a4c40d5e-9b15-4c80-9a23-5047a6dc887c\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzshs" Dec 06 05:30:30 crc kubenswrapper[4958]: E1206 05:30:30.473076 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 05:30:30.973064536 +0000 UTC m=+141.506835289 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzshs" (UID: "a4c40d5e-9b15-4c80-9a23-5047a6dc887c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.473354 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-czz58" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.492487 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qdjx\" (UniqueName: \"kubernetes.io/projected/6ff2ee34-3042-4de9-ad1d-12ea4184a52f-kube-api-access-8qdjx\") pod \"machine-config-server-zckk4\" (UID: \"6ff2ee34-3042-4de9-ad1d-12ea4184a52f\") " pod="openshift-machine-config-operator/machine-config-server-zckk4" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.495287 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7l9wx" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.496732 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-wk6vl" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.508386 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sp2cj\" (UniqueName: \"kubernetes.io/projected/75ea57ba-83e5-4bf4-a781-7f6289063163-kube-api-access-sp2cj\") pod \"dns-default-zhvgf\" (UID: \"75ea57ba-83e5-4bf4-a781-7f6289063163\") " pod="openshift-dns/dns-default-zhvgf" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.522052 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-7g42z" event={"ID":"bae936cb-2b16-4e6a-b2b1-bc185483cd8f","Type":"ContainerStarted","Data":"d74575eef3116372cd1bc6b13beb18db2a6b7a8a77ff2dbb0a463577d20ab8d1"} Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.529717 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-s7svh" event={"ID":"e6d69346-a75e-4edc-b2d8-ae6c7f612c62","Type":"ContainerStarted","Data":"46aa3b0f45f9b195451b41212c8bbdb508c3278b9ba359da34a4311d146648ec"} Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.537589 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mw66x" event={"ID":"d123f533-79f0-4797-acae-bb101594ea67","Type":"ContainerStarted","Data":"2d3a7e23a881d9c4614a5be044864ec81fddf953091b8983033c75894b6ddba3"} Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.540408 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-f47jc" event={"ID":"8edfe72a-ade0-4c45-9bff-9366b7e53c54","Type":"ContainerStarted","Data":"19803b2299cbd0717c0c7a97080b7e46de2044fa054e55f8ec6ef21f3737d9cc"} Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.544825 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-w8bt8" event={"ID":"05fda290-e73b-468e-b494-6cd912e3cbd8","Type":"ContainerStarted","Data":"fc06a7b6567207c4b6186340e546615f3f272820b9f3150cb0da40e883f59c01"} Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.558464 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-x755r" event={"ID":"abbb6238-233b-43b5-a7cc-a142ff3fce2a","Type":"ContainerStarted","Data":"1c96c04575b30dd765b5744284f2329f5218daa17c3e12f4ec3e5c2e19d6f0c0"} Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.562005 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dnlff" 
event={"ID":"f5ca2592-609c-489b-bbbb-51d8535f8e68","Type":"ContainerStarted","Data":"72e1f1cd16da0d86cce4c7c1840dd594b3c676624da0726798e094c27480ce6a"} Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.570251 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-2hl9j" event={"ID":"1095edf8-32a1-431c-9f73-e8738668d563","Type":"ContainerStarted","Data":"9cda4e28fa961ef86c213c8182dc5defe540e1c0ffe939e050e5e38cccb54afb"} Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.573405 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 05:30:30 crc kubenswrapper[4958]: E1206 05:30:30.573542 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 05:30:31.073522739 +0000 UTC m=+141.607293502 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.573588 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzshs\" (UID: \"a4c40d5e-9b15-4c80-9a23-5047a6dc887c\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzshs" Dec 06 05:30:30 crc kubenswrapper[4958]: E1206 05:30:30.573950 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 05:30:31.073937339 +0000 UTC m=+141.607708102 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzshs" (UID: "a4c40d5e-9b15-4c80-9a23-5047a6dc887c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.574714 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nfgl4" event={"ID":"fb5a2f1e-9a90-41bf-9a9d-7cd181140ae0","Type":"ContainerStarted","Data":"c6fdbc056333b4cb93520410f35491bf138c8a003a4d7a5a36d0be033a40982e"} Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.598392 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rhck7" event={"ID":"d0b1738e-3696-4786-b978-4dee25dde9ac","Type":"ContainerStarted","Data":"bc769fd2fd609f573ab54f4a110be6d1fe8e4daa1ac9d5f8ab5ac9b0541d310a"} Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.603880 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-v8c65" event={"ID":"ed3eb5df-37e1-4849-96f5-5dea9dad7ee2","Type":"ContainerStarted","Data":"07fdc938ceb486f03e86fe023670ad6b457041157798bccd2583ef0e7e77eb36"} Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.605846 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qp66v" event={"ID":"651c6106-fa9c-43ea-b9c2-68b77e1652c8","Type":"ContainerStarted","Data":"2f03140abe1d5e3d572d918fc2c74495a941ea9a58d99bb957652c937e0b63f9"} Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.607140 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qdd6q" event={"ID":"978a529c-5f37-4f03-92b4-1ef8084e5917","Type":"ContainerStarted","Data":"1fe5a698763194e4280b3ff399b8057d368b3382f3caa994f5b369e53725dcd6"} Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.609340 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-thm8z" event={"ID":"8c2597de-216f-46fb-996c-0f0c2598cacc","Type":"ContainerStarted","Data":"e9e176da09f2f2d3f3f1d662b8bb0fd5daa28d622620f88671b0c812ba9e5288"} Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.610961 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-jjjhf" event={"ID":"46dbd477-d07a-4732-a9e3-08e1d49385c3","Type":"ContainerStarted","Data":"52bfc112ffabc3de0d373ba5dfb11805774cd6f51913df93a9eeb4cea64aa3f7"} Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.612383 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mshkv" event={"ID":"56e39030-25e1-4a9e-97e3-d84e988ec0da","Type":"ContainerStarted","Data":"565b4f635ce2773c778da20fad368f56cd001fd86c4733aa1c5f7f6874b07842"} Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.612692 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mshkv" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.614186 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-authentication/oauth-openshift-558db77b4-5hcwh" event={"ID":"04d501b9-571e-4c12-b963-fbe770a27710","Type":"ContainerStarted","Data":"524bc7ec7a09d4c7aacf3602a6f6c6b01e384776335e63a3f5cc36d9cc6723c0"} Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.614448 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-5hcwh" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.616158 4958 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-5hcwh container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.10:6443/healthz\": dial tcp 10.217.0.10:6443: connect: connection refused" start-of-body= Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.616198 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-5hcwh" podUID="04d501b9-571e-4c12-b963-fbe770a27710" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.10:6443/healthz\": dial tcp 10.217.0.10:6443: connect: connection refused" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.636153 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-56k5h" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.674517 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 05:30:30 crc kubenswrapper[4958]: E1206 05:30:30.674803 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 05:30:31.174788861 +0000 UTC m=+141.708559624 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.685546 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-5hcwh" podStartSLOduration=121.685526602 podStartE2EDuration="2m1.685526602s" podCreationTimestamp="2025-12-06 05:28:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:30:30.684880575 +0000 UTC m=+141.218651338" watchObservedRunningTime="2025-12-06 05:30:30.685526602 +0000 UTC m=+141.219297375" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.721041 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mshkv" podStartSLOduration=120.721024918 podStartE2EDuration="2m0.721024918s" podCreationTimestamp="2025-12-06 05:28:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:30:30.719707934 +0000 UTC m=+141.253478697" watchObservedRunningTime="2025-12-06 05:30:30.721024918 +0000 UTC m=+141.254795681" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.745877 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-w4gt4" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.755343 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-zckk4" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.775779 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzshs\" (UID: \"a4c40d5e-9b15-4c80-9a23-5047a6dc887c\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzshs" Dec 06 05:30:30 crc kubenswrapper[4958]: E1206 05:30:30.777987 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 05:30:31.277963795 +0000 UTC m=+141.811734688 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzshs" (UID: "a4c40d5e-9b15-4c80-9a23-5047a6dc887c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.778076 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-zhvgf" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.793099 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-8s5qc"] Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.816178 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-h5znl"] Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.877336 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 05:30:30 crc kubenswrapper[4958]: E1206 05:30:30.877729 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 05:30:31.377671317 +0000 UTC m=+141.911442080 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.880441 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-wndrl"] Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.937465 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mshkv" Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.978577 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzshs\" (UID: \"a4c40d5e-9b15-4c80-9a23-5047a6dc887c\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzshs" Dec 06 05:30:30 crc kubenswrapper[4958]: E1206 05:30:30.978971 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 05:30:31.47895516 +0000 UTC m=+142.012725923 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzshs" (UID: "a4c40d5e-9b15-4c80-9a23-5047a6dc887c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 05:30:30 crc kubenswrapper[4958]: I1206 05:30:30.987457 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mmpcs"] Dec 06 05:30:31 crc kubenswrapper[4958]: I1206 05:30:31.001640 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gm494"] Dec 06 05:30:31 crc kubenswrapper[4958]: I1206 05:30:31.007368 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-79gtn"] Dec 06 05:30:31 crc kubenswrapper[4958]: I1206 05:30:31.009392 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-kbgkk"] Dec 06 05:30:31 crc kubenswrapper[4958]: I1206 05:30:31.079256 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 05:30:31 crc kubenswrapper[4958]: E1206 05:30:31.079960 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 05:30:31.579943946 +0000 UTC m=+142.113714709 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 05:30:31 crc kubenswrapper[4958]: W1206 05:30:31.100339 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4baa5071_6d1a_4771_ab23_d68db9f231a3.slice/crio-830d40585eb118e2967065da2accaff20369244649e2a12645db0f41ac58c38d WatchSource:0}: Error finding container 830d40585eb118e2967065da2accaff20369244649e2a12645db0f41ac58c38d: Status 404 returned error can't find the container with id 830d40585eb118e2967065da2accaff20369244649e2a12645db0f41ac58c38d Dec 06 05:30:31 crc kubenswrapper[4958]: I1206 05:30:31.125848 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dmrfv"] Dec 06 05:30:31 crc kubenswrapper[4958]: I1206 05:30:31.126797 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-c5v8n"] Dec 06 05:30:31 crc kubenswrapper[4958]: I1206 05:30:31.181659 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzshs\" (UID: \"a4c40d5e-9b15-4c80-9a23-5047a6dc887c\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzshs" Dec 06 05:30:31 crc kubenswrapper[4958]: E1206 05:30:31.181978 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 05:30:31.681966369 +0000 UTC m=+142.215737132 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzshs" (UID: "a4c40d5e-9b15-4c80-9a23-5047a6dc887c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 05:30:31 crc kubenswrapper[4958]: I1206 05:30:31.212618 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-fnjd6"] Dec 06 05:30:31 crc kubenswrapper[4958]: W1206 05:30:31.260656 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod81369d65_ae42_43fc_a2e6_dbf61d9a86d7.slice/crio-e3ef91269e6e78eaa826a4b0574f03959ce6b641f6dbf0550d8d4f6748b9beb8 WatchSource:0}: Error finding container e3ef91269e6e78eaa826a4b0574f03959ce6b641f6dbf0550d8d4f6748b9beb8: Status 404 returned error can't find the container with id e3ef91269e6e78eaa826a4b0574f03959ce6b641f6dbf0550d8d4f6748b9beb8 Dec 06 05:30:31 crc kubenswrapper[4958]: I1206 05:30:31.285091 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 05:30:31 crc kubenswrapper[4958]: E1206 05:30:31.285312 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 05:30:31.785296116 +0000 UTC m=+142.319066879 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 05:30:31 crc kubenswrapper[4958]: I1206 05:30:31.337218 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-5sxpr"] Dec 06 05:30:31 crc kubenswrapper[4958]: I1206 05:30:31.387753 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzshs\" (UID: \"a4c40d5e-9b15-4c80-9a23-5047a6dc887c\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzshs" Dec 06 05:30:31 crc kubenswrapper[4958]: E1206 05:30:31.388572 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 05:30:31.888555241 +0000 UTC m=+142.422326004 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzshs" (UID: "a4c40d5e-9b15-4c80-9a23-5047a6dc887c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 05:30:31 crc kubenswrapper[4958]: I1206 05:30:31.399221 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-ctv4r"] Dec 06 05:30:31 crc kubenswrapper[4958]: I1206 05:30:31.491558 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 05:30:31 crc kubenswrapper[4958]: E1206 05:30:31.491961 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 05:30:31.99194633 +0000 UTC m=+142.525717093 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 05:30:31 crc kubenswrapper[4958]: I1206 05:30:31.552100 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-wk6vl"] Dec 06 05:30:31 crc kubenswrapper[4958]: I1206 05:30:31.592929 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzshs\" (UID: \"a4c40d5e-9b15-4c80-9a23-5047a6dc887c\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzshs" Dec 06 05:30:31 crc kubenswrapper[4958]: E1206 05:30:31.593456 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 05:30:32.093444028 +0000 UTC m=+142.627214791 (durationBeforeRetry 500ms). 
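Every mount and unmount in this loop fails the same way: the kubelet resolves a CSI volume to a driver client by name, and kubevirt.io.hostpath-provisioner is simply not in its registry yet. The "SyncLoop UPDATE" for hostpath-provisioner/csi-hostpathplugin-wk6vl at 05:30:31.552 above is the driver's node plugin pod arriving; in the usual CSI node registration flow, once that pod announces itself over the kubelet's plugin-registration socket, these lookups start succeeding. A minimal sketch of the lookup-by-name pattern; the types, and the socket path in main, are hypothetical rather than the kubelet's csi package:

    package main

    import (
        "fmt"
        "sync"
    )

    // driverRegistry maps a CSI driver name to its node-plugin endpoint,
    // standing in for the kubelet's list of registered CSI drivers.
    type driverRegistry struct {
        mu      sync.RWMutex
        drivers map[string]string // name -> unix socket path
    }

    func (r *driverRegistry) client(name string) (string, error) {
        r.mu.RLock()
        defer r.mu.RUnlock()
        ep, ok := r.drivers[name]
        if !ok {
            return "", fmt.Errorf("driver name %s not found in the list of registered CSI drivers", name)
        }
        return ep, nil
    }

    // register is what the plugin-registration handler would call once the
    // driver pod announces itself over the kubelet's registration socket.
    func (r *driverRegistry) register(name, endpoint string) {
        r.mu.Lock()
        defer r.mu.Unlock()
        r.drivers[name] = endpoint
    }

    func main() {
        reg := &driverRegistry{drivers: map[string]string{}}

        // Before the plugin pod is up: exactly the failure in the log.
        if _, err := reg.client("kubevirt.io.hostpath-provisioner"); err != nil {
            fmt.Println("MountDevice:", err)
        }

        // After csi-hostpathplugin registers, the same lookup succeeds.
        reg.register("kubevirt.io.hostpath-provisioner", "/var/lib/kubelet/plugins/csi-hostpath/csi.sock")
        ep, _ := reg.client("kubevirt.io.hostpath-provisioner")
        fmt.Println("MountDevice: dialing", ep)
    }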
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzshs" (UID: "a4c40d5e-9b15-4c80-9a23-5047a6dc887c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 05:30:31 crc kubenswrapper[4958]: I1206 05:30:31.697889 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 05:30:31 crc kubenswrapper[4958]: E1206 05:30:31.698359 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 05:30:32.198321746 +0000 UTC m=+142.732092519 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 05:30:31 crc kubenswrapper[4958]: I1206 05:30:31.717368 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-h5znl" event={"ID":"e6734918-c336-4932-b571-12cab28ef213","Type":"ContainerStarted","Data":"bc85fbb6a2e531b5cd5e6353cb1b05d76fa459d11a4f3d6bf2f916aca9490931"} Dec 06 05:30:31 crc kubenswrapper[4958]: W1206 05:30:31.719057 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2a2c1115_6ea4_4e7d_91dc_82ab9147d9c8.slice/crio-4566eb68a41fca8cd8bdfc9a556a8953ad5e7998cc36ec4b612d2d426888f10c WatchSource:0}: Error finding container 4566eb68a41fca8cd8bdfc9a556a8953ad5e7998cc36ec4b612d2d426888f10c: Status 404 returned error can't find the container with id 4566eb68a41fca8cd8bdfc9a556a8953ad5e7998cc36ec4b612d2d426888f10c Dec 06 05:30:31 crc kubenswrapper[4958]: I1206 05:30:31.730681 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nfgl4" event={"ID":"fb5a2f1e-9a90-41bf-9a9d-7cd181140ae0","Type":"ContainerStarted","Data":"2b06ddfa77e8c8298dd5afb40b78bed350801090ecc9af051aeca790ce7e86db"} Dec 06 05:30:31 crc kubenswrapper[4958]: I1206 05:30:31.734381 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-w8bt8" event={"ID":"05fda290-e73b-468e-b494-6cd912e3cbd8","Type":"ContainerStarted","Data":"0050edb44eb49ee3b5a06ce4923bf3be16d1ca8fe82249102962b3bb1ad293bc"} Dec 06 05:30:31 crc kubenswrapper[4958]: I1206 05:30:31.735742 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-w8bt8" Dec 06 05:30:31 crc kubenswrapper[4958]: I1206 05:30:31.738538 4958 patch_prober.go:28] 
interesting pod/marketplace-operator-79b997595-w8bt8 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.40:8080/healthz\": dial tcp 10.217.0.40:8080: connect: connection refused" start-of-body= Dec 06 05:30:31 crc kubenswrapper[4958]: I1206 05:30:31.738647 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-w8bt8" podUID="05fda290-e73b-468e-b494-6cd912e3cbd8" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.40:8080/healthz\": dial tcp 10.217.0.40:8080: connect: connection refused" Dec 06 05:30:31 crc kubenswrapper[4958]: I1206 05:30:31.739148 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dmrfv" event={"ID":"5435d984-69a2-4441-a63d-fde03d6a7081","Type":"ContainerStarted","Data":"d18dfbe72a067d0ac30490e20a5c65a4cb1d3c0d80f87343a023ca4d5d4fcc88"} Dec 06 05:30:31 crc kubenswrapper[4958]: I1206 05:30:31.744323 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-8s5qc" event={"ID":"4baa5071-6d1a-4771-ab23-d68db9f231a3","Type":"ContainerStarted","Data":"830d40585eb118e2967065da2accaff20369244649e2a12645db0f41ac58c38d"} Dec 06 05:30:31 crc kubenswrapper[4958]: I1206 05:30:31.753049 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-fnjd6" event={"ID":"687e4231-7d9c-4f35-a38d-f806e9842a0b","Type":"ContainerStarted","Data":"ec54163224b07972b631d8779843340a976e62ce9d14dbb18dab10fcbbd81129"} Dec 06 05:30:31 crc kubenswrapper[4958]: I1206 05:30:31.754285 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-w8bt8" podStartSLOduration=122.754274516 podStartE2EDuration="2m2.754274516s" podCreationTimestamp="2025-12-06 05:28:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:30:31.753715442 +0000 UTC m=+142.287486205" watchObservedRunningTime="2025-12-06 05:30:31.754274516 +0000 UTC m=+142.288045279" Dec 06 05:30:31 crc kubenswrapper[4958]: I1206 05:30:31.758728 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-wndrl" event={"ID":"e904c67d-5ffa-4e8f-96d6-be8c569a22db","Type":"ContainerStarted","Data":"97a1425bd13e5bacff823a53ff5b4034b18adeafd505975b29d5084f4a2e143d"} Dec 06 05:30:31 crc kubenswrapper[4958]: I1206 05:30:31.786140 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mmpcs" event={"ID":"3131ee86-c122-498a-872b-eb20260b6639","Type":"ContainerStarted","Data":"bb3c22955fd2c946f0b0622a81bcba67475067b77aabe7cc4d006cee47c6726a"} Dec 06 05:30:31 crc kubenswrapper[4958]: I1206 05:30:31.786742 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-f47jc" event={"ID":"8edfe72a-ade0-4c45-9bff-9366b7e53c54","Type":"ContainerStarted","Data":"bb628e9022c443e3e1e30454ce1173dc9c6ad01254f0b31624e316eba5413881"} Dec 06 05:30:31 crc kubenswrapper[4958]: I1206 05:30:31.790313 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-jjjhf" 
event={"ID":"46dbd477-d07a-4732-a9e3-08e1d49385c3","Type":"ContainerStarted","Data":"b3cd53838b2fcc163e01a49f5c42240733527110b4e1afea48314e2e215c54d0"} Dec 06 05:30:31 crc kubenswrapper[4958]: I1206 05:30:31.799961 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzshs\" (UID: \"a4c40d5e-9b15-4c80-9a23-5047a6dc887c\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzshs" Dec 06 05:30:31 crc kubenswrapper[4958]: E1206 05:30:31.800310 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 05:30:32.300295317 +0000 UTC m=+142.834066080 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzshs" (UID: "a4c40d5e-9b15-4c80-9a23-5047a6dc887c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 05:30:31 crc kubenswrapper[4958]: I1206 05:30:31.838192 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-jjjhf" podStartSLOduration=122.838176016 podStartE2EDuration="2m2.838176016s" podCreationTimestamp="2025-12-06 05:28:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:30:31.824987581 +0000 UTC m=+142.358758344" watchObservedRunningTime="2025-12-06 05:30:31.838176016 +0000 UTC m=+142.371946779" Dec 06 05:30:31 crc kubenswrapper[4958]: I1206 05:30:31.858183 4958 generic.go:334] "Generic (PLEG): container finished" podID="651c6106-fa9c-43ea-b9c2-68b77e1652c8" containerID="d7055873dbb4dd0d0b16ba6d5f0b41f55f07d8ef1e0e74c78feaa10215c3f570" exitCode=0 Dec 06 05:30:31 crc kubenswrapper[4958]: I1206 05:30:31.860640 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qp66v" event={"ID":"651c6106-fa9c-43ea-b9c2-68b77e1652c8","Type":"ContainerDied","Data":"d7055873dbb4dd0d0b16ba6d5f0b41f55f07d8ef1e0e74c78feaa10215c3f570"} Dec 06 05:30:31 crc kubenswrapper[4958]: I1206 05:30:31.861899 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-ph6g5" event={"ID":"fc16acb8-14a0-4b1d-ba72-9a53f2bdb622","Type":"ContainerStarted","Data":"26817a61f3498cb86ad54ea5bbe5e753dda2c0d3db070887dc25d340bad73829"} Dec 06 05:30:31 crc kubenswrapper[4958]: I1206 05:30:31.872727 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-kbgkk" event={"ID":"4be188a0-8e11-40a8-b38e-d3d5475c982b","Type":"ContainerStarted","Data":"d207eae8591c623f2b74d0530dd432ac878cac57e150059485a645bf05ba30db"} Dec 06 05:30:31 crc kubenswrapper[4958]: I1206 05:30:31.892520 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-s7svh" 
event={"ID":"e6d69346-a75e-4edc-b2d8-ae6c7f612c62","Type":"ContainerStarted","Data":"cbf6e580a0610eaa8eda0280204e6ce359cab169301f308e197f1238f9dacc36"} Dec 06 05:30:31 crc kubenswrapper[4958]: I1206 05:30:31.902945 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 05:30:31 crc kubenswrapper[4958]: E1206 05:30:31.904265 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 05:30:32.40424824 +0000 UTC m=+142.938019003 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 05:30:31 crc kubenswrapper[4958]: I1206 05:30:31.909946 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rhck7" event={"ID":"d0b1738e-3696-4786-b978-4dee25dde9ac","Type":"ContainerStarted","Data":"a403b9fe25c3e88da636f2f20c487f8ce65b984de6de1eb2cba0b90ac1147ac4"} Dec 06 05:30:31 crc kubenswrapper[4958]: I1206 05:30:31.914262 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-s7svh" podStartSLOduration=122.914245251 podStartE2EDuration="2m2.914245251s" podCreationTimestamp="2025-12-06 05:28:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:30:31.9134276 +0000 UTC m=+142.447198373" watchObservedRunningTime="2025-12-06 05:30:31.914245251 +0000 UTC m=+142.448016014" Dec 06 05:30:31 crc kubenswrapper[4958]: I1206 05:30:31.914297 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5z682"] Dec 06 05:30:31 crc kubenswrapper[4958]: I1206 05:30:31.922916 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ctv4r" event={"ID":"9d4b2a93-8098-465b-936e-9fb43c59a27c","Type":"ContainerStarted","Data":"f127da0f795b18223fa363788b43d2a337fabf42e3fd4f7f01966db6af56ec04"} Dec 06 05:30:31 crc kubenswrapper[4958]: I1206 05:30:31.929395 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-2hl9j" event={"ID":"1095edf8-32a1-431c-9f73-e8738668d563","Type":"ContainerStarted","Data":"4a3986abb5168398e2be4bef1e76fc06c15a9fcebc5d77c16f2c02a0b0bf5d17"} Dec 06 05:30:31 crc kubenswrapper[4958]: I1206 05:30:31.939441 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-rhck7" podStartSLOduration=122.939417338 podStartE2EDuration="2m2.939417338s" podCreationTimestamp="2025-12-06 
05:28:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:30:31.935970309 +0000 UTC m=+142.469741072" watchObservedRunningTime="2025-12-06 05:30:31.939417338 +0000 UTC m=+142.473188101" Dec 06 05:30:31 crc kubenswrapper[4958]: I1206 05:30:31.942364 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-x755r" event={"ID":"abbb6238-233b-43b5-a7cc-a142ff3fce2a","Type":"ContainerStarted","Data":"ca8b43d1fcd3633238121b9503d5307e17dc23fb3df1693d2effa59688b456f5"} Dec 06 05:30:31 crc kubenswrapper[4958]: I1206 05:30:31.944188 4958 generic.go:334] "Generic (PLEG): container finished" podID="f5ca2592-609c-489b-bbbb-51d8535f8e68" containerID="4ca2ff352a2d2bbc69e3f3972b2d896fc06e22af15a24c3c15cc177caad7b351" exitCode=0 Dec 06 05:30:31 crc kubenswrapper[4958]: I1206 05:30:31.944441 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dnlff" event={"ID":"f5ca2592-609c-489b-bbbb-51d8535f8e68","Type":"ContainerDied","Data":"4ca2ff352a2d2bbc69e3f3972b2d896fc06e22af15a24c3c15cc177caad7b351"} Dec 06 05:30:31 crc kubenswrapper[4958]: I1206 05:30:31.947680 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-6lbg7" event={"ID":"ade65bd8-3f30-4239-b717-a9912ea99316","Type":"ContainerStarted","Data":"aeeecbda59305f14bfee154b02c085db0c96edeb4a9ed67e3518d4d2afd12e72"} Dec 06 05:30:31 crc kubenswrapper[4958]: I1206 05:30:31.949673 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-6lbg7" Dec 06 05:30:31 crc kubenswrapper[4958]: I1206 05:30:31.958680 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-2hl9j" podStartSLOduration=122.958662411 podStartE2EDuration="2m2.958662411s" podCreationTimestamp="2025-12-06 05:28:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:30:31.957556512 +0000 UTC m=+142.491327275" watchObservedRunningTime="2025-12-06 05:30:31.958662411 +0000 UTC m=+142.492433174" Dec 06 05:30:31 crc kubenswrapper[4958]: I1206 05:30:31.959090 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gm494" event={"ID":"b06c782b-7cfd-4d0d-9f40-590645abde2f","Type":"ContainerStarted","Data":"4cd3051f1b336a2898febb969112d9dfecb5fbb89c9407fc4108e22594f4c362"} Dec 06 05:30:31 crc kubenswrapper[4958]: I1206 05:30:31.977120 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-7g42z" event={"ID":"bae936cb-2b16-4e6a-b2b1-bc185483cd8f","Type":"ContainerStarted","Data":"eb0fe0e98006b13ee7a4a508f1cac29237344772e7558957199f37cf6352de69"} Dec 06 05:30:31 crc kubenswrapper[4958]: I1206 05:30:31.988951 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-6lbg7" Dec 06 05:30:31 crc kubenswrapper[4958]: I1206 05:30:31.997977 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-5sxpr" 
event={"ID":"aabd2e32-537f-4998-b642-f170ace9bb6a","Type":"ContainerStarted","Data":"49473a42fba69959af91166473e8a6897f4d8461090604a37bfcfa9cbfc84694"} Dec 06 05:30:31 crc kubenswrapper[4958]: I1206 05:30:31.998334 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-6lbg7" podStartSLOduration=122.998309315 podStartE2EDuration="2m2.998309315s" podCreationTimestamp="2025-12-06 05:28:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:30:31.994806744 +0000 UTC m=+142.528577507" watchObservedRunningTime="2025-12-06 05:30:31.998309315 +0000 UTC m=+142.532080078" Dec 06 05:30:32 crc kubenswrapper[4958]: I1206 05:30:32.185533 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-79gtn" event={"ID":"9ba34fce-2bfd-4b37-8b67-f2c936cc1b44","Type":"ContainerStarted","Data":"ad78a90004d94d44f7b919e9d7523cb173616979b779ef60a9f732ae7a635bfd"} Dec 06 05:30:32 crc kubenswrapper[4958]: I1206 05:30:32.192072 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzshs\" (UID: \"a4c40d5e-9b15-4c80-9a23-5047a6dc887c\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzshs" Dec 06 05:30:32 crc kubenswrapper[4958]: E1206 05:30:32.193756 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 05:30:32.693737896 +0000 UTC m=+143.227508649 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzshs" (UID: "a4c40d5e-9b15-4c80-9a23-5047a6dc887c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 05:30:32 crc kubenswrapper[4958]: I1206 05:30:32.212174 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7l9wx"] Dec 06 05:30:32 crc kubenswrapper[4958]: I1206 05:30:32.259942 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9tpxq"] Dec 06 05:30:32 crc kubenswrapper[4958]: I1206 05:30:32.281708 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-x755r" podStartSLOduration=123.281680552 podStartE2EDuration="2m3.281680552s" podCreationTimestamp="2025-12-06 05:28:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:30:32.208500381 +0000 UTC m=+142.742271144" watchObservedRunningTime="2025-12-06 05:30:32.281680552 +0000 UTC m=+142.815451305" Dec 06 05:30:32 crc kubenswrapper[4958]: I1206 05:30:32.282953 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-7g42z" podStartSLOduration=123.282946765 podStartE2EDuration="2m3.282946765s" podCreationTimestamp="2025-12-06 05:28:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:30:32.254105472 +0000 UTC m=+142.787876235" watchObservedRunningTime="2025-12-06 05:30:32.282946765 +0000 UTC m=+142.816717528" Dec 06 05:30:32 crc kubenswrapper[4958]: I1206 05:30:32.296081 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 05:30:32 crc kubenswrapper[4958]: E1206 05:30:32.297975 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 05:30:32.797926385 +0000 UTC m=+143.331697148 (durationBeforeRetry 500ms). 
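The pod_startup_latency_tracker lines are internally consistent: podStartSLOduration matches watchObservedRunningTime minus podCreationTimestamp (for openshift-controller-manager-operator above, 05:30:32.281680552 minus 05:28:29 is 123.281680552s), and it equals podStartE2EDuration here, consistent with the zero firstStartedPulling/lastFinishedPulling timestamps meaning no image pull contributed. Checking the arithmetic with the log's own timestamp format:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Timestamps copied from the openshift-controller-manager-operator
        // tracker line above; the layout matches Go's default time format.
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        created, err := time.Parse(layout, "2025-12-06 05:28:29 +0000 UTC")
        if err != nil {
            panic(err)
        }
        observed, err := time.Parse(layout, "2025-12-06 05:30:32.281680552 +0000 UTC")
        if err != nil {
            panic(err)
        }
        fmt.Println(observed.Sub(created)) // 2m3.281680552s == podStartSLOduration
    }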
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 05:30:32 crc kubenswrapper[4958]: I1206 05:30:32.301022 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-czz58"] Dec 06 05:30:32 crc kubenswrapper[4958]: I1206 05:30:32.319308 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-c5v8n" event={"ID":"81369d65-ae42-43fc-a2e6-dbf61d9a86d7","Type":"ContainerStarted","Data":"e3ef91269e6e78eaa826a4b0574f03959ce6b641f6dbf0550d8d4f6748b9beb8"} Dec 06 05:30:32 crc kubenswrapper[4958]: I1206 05:30:32.332445 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-5hcwh" Dec 06 05:30:32 crc kubenswrapper[4958]: I1206 05:30:32.362386 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-v8c65" podStartSLOduration=123.362369247 podStartE2EDuration="2m3.362369247s" podCreationTimestamp="2025-12-06 05:28:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:30:32.332641411 +0000 UTC m=+142.866412174" watchObservedRunningTime="2025-12-06 05:30:32.362369247 +0000 UTC m=+142.896140010" Dec 06 05:30:32 crc kubenswrapper[4958]: I1206 05:30:32.398937 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzshs\" (UID: \"a4c40d5e-9b15-4c80-9a23-5047a6dc887c\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzshs" Dec 06 05:30:32 crc kubenswrapper[4958]: E1206 05:30:32.399428 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 05:30:32.899410694 +0000 UTC m=+143.433181457 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzshs" (UID: "a4c40d5e-9b15-4c80-9a23-5047a6dc887c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 05:30:32 crc kubenswrapper[4958]: I1206 05:30:32.408506 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-zhvgf"] Dec 06 05:30:32 crc kubenswrapper[4958]: W1206 05:30:32.464508 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcde23f5c_5986_4797_9ec6_ff23122d36b8.slice/crio-1291414941eac0a3b2cc534987a99587349cddce51408318e9307b07d6990972 WatchSource:0}: Error finding container 1291414941eac0a3b2cc534987a99587349cddce51408318e9307b07d6990972: Status 404 returned error can't find the container with id 1291414941eac0a3b2cc534987a99587349cddce51408318e9307b07d6990972 Dec 06 05:30:32 crc kubenswrapper[4958]: I1206 05:30:32.486195 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-56k5h"] Dec 06 05:30:32 crc kubenswrapper[4958]: I1206 05:30:32.499747 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 05:30:32 crc kubenswrapper[4958]: E1206 05:30:32.501678 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 05:30:33.001636602 +0000 UTC m=+143.535407365 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 05:30:32 crc kubenswrapper[4958]: I1206 05:30:32.502598 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzshs\" (UID: \"a4c40d5e-9b15-4c80-9a23-5047a6dc887c\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzshs" Dec 06 05:30:32 crc kubenswrapper[4958]: E1206 05:30:32.510558 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 05:30:33.010535074 +0000 UTC m=+143.544305837 (durationBeforeRetry 500ms). 
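The W-level manager.go:1169 entries ("Failed to process watch event ... can't find the container with id ...") are cAdvisor racing container creation: the cgroup for a new crio-<id> container appears under /kubepods.slice and triggers a watch event before the runtime reports the container, so the lookup returns 404 and the event is dropped; the container is picked up by a later poll, which is why each such ID still shows up in a ContainerStarted event afterwards (830d4058..., warned about at 05:30:31.100, is started for ingress-canary-8s5qc at 05:30:31.744 above). A toy illustration of tolerating that ordering, with the hypothetical lookup standing in for the runtime query:

    package main

    import (
        "errors"
        "fmt"
    )

    var errNotFound = errors.New("can't find the container with id")

    // lookup stands in for asking the runtime about a container whose
    // cgroup just triggered a watch event; right after creation it may
    // not be listed yet.
    func lookup(listed map[string]bool, id string) error {
        if !listed[id] {
            return fmt.Errorf("Status 404 returned error %w %s", errNotFound, id)
        }
        return nil
    }

    func main() {
        listed := map[string]bool{} // the runtime's view lags the cgroup watch
        id := "830d40585eb118e2967065da2accaff20369244649e2a12645db0f41ac58c38d"

        // First watch event races the runtime: warn and drop, as cAdvisor does.
        if err := lookup(listed, id); err != nil {
            fmt.Println("W Failed to process watch event:", err)
        }

        listed[id] = true // the runtime catches up
        fmt.Println("next poll:", lookup(listed, id)) // <nil>
    }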
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzshs" (UID: "a4c40d5e-9b15-4c80-9a23-5047a6dc887c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 05:30:32 crc kubenswrapper[4958]: I1206 05:30:32.514445 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416650-s7hr4"] Dec 06 05:30:32 crc kubenswrapper[4958]: I1206 05:30:32.528759 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-w4gt4"] Dec 06 05:30:32 crc kubenswrapper[4958]: I1206 05:30:32.603178 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 05:30:32 crc kubenswrapper[4958]: E1206 05:30:32.618946 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 05:30:33.118905442 +0000 UTC m=+143.652676205 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 05:30:32 crc kubenswrapper[4958]: W1206 05:30:32.662608 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod40faedf1_f03f_4c51_8577_f11f34488d09.slice/crio-e3d0230eee422577b5e2ee34f5928c0075c9d260602c52ad0653401794d2cda7 WatchSource:0}: Error finding container e3d0230eee422577b5e2ee34f5928c0075c9d260602c52ad0653401794d2cda7: Status 404 returned error can't find the container with id e3d0230eee422577b5e2ee34f5928c0075c9d260602c52ad0653401794d2cda7 Dec 06 05:30:32 crc kubenswrapper[4958]: W1206 05:30:32.662835 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod739f10d8_9484_4a8f_92d2_aa5154384ed1.slice/crio-3d71d3d3e051e8ee3ce1050a3e7195dd9e70e25a6c4df9636983305358fd72da WatchSource:0}: Error finding container 3d71d3d3e051e8ee3ce1050a3e7195dd9e70e25a6c4df9636983305358fd72da: Status 404 returned error can't find the container with id 3d71d3d3e051e8ee3ce1050a3e7195dd9e70e25a6c4df9636983305358fd72da Dec 06 05:30:32 crc kubenswrapper[4958]: I1206 05:30:32.704793 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzshs\" (UID: \"a4c40d5e-9b15-4c80-9a23-5047a6dc887c\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-rzshs" Dec 06 05:30:32 crc kubenswrapper[4958]: E1206 05:30:32.705137 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 05:30:33.205119983 +0000 UTC m=+143.738890736 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzshs" (UID: "a4c40d5e-9b15-4c80-9a23-5047a6dc887c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 05:30:32 crc kubenswrapper[4958]: I1206 05:30:32.805343 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 05:30:32 crc kubenswrapper[4958]: E1206 05:30:32.805657 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 05:30:33.305639917 +0000 UTC m=+143.839410670 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 05:30:32 crc kubenswrapper[4958]: I1206 05:30:32.906912 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzshs\" (UID: \"a4c40d5e-9b15-4c80-9a23-5047a6dc887c\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzshs" Dec 06 05:30:32 crc kubenswrapper[4958]: E1206 05:30:32.907512 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 05:30:33.407455084 +0000 UTC m=+143.941225857 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzshs" (UID: "a4c40d5e-9b15-4c80-9a23-5047a6dc887c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 05:30:33 crc kubenswrapper[4958]: I1206 05:30:33.008275 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 05:30:33 crc kubenswrapper[4958]: E1206 05:30:33.008398 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 05:30:33.508373118 +0000 UTC m=+144.042143881 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 05:30:33 crc kubenswrapper[4958]: I1206 05:30:33.008462 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzshs\" (UID: \"a4c40d5e-9b15-4c80-9a23-5047a6dc887c\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzshs" Dec 06 05:30:33 crc kubenswrapper[4958]: E1206 05:30:33.008727 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 05:30:33.508718777 +0000 UTC m=+144.042489540 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzshs" (UID: "a4c40d5e-9b15-4c80-9a23-5047a6dc887c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 05:30:33 crc kubenswrapper[4958]: I1206 05:30:33.109230 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 05:30:33 crc kubenswrapper[4958]: E1206 05:30:33.109568 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 05:30:33.609553179 +0000 UTC m=+144.143323942 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 05:30:33 crc kubenswrapper[4958]: I1206 05:30:33.211444 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzshs\" (UID: \"a4c40d5e-9b15-4c80-9a23-5047a6dc887c\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzshs" Dec 06 05:30:33 crc kubenswrapper[4958]: E1206 05:30:33.211786 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 05:30:33.711773267 +0000 UTC m=+144.245544040 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzshs" (UID: "a4c40d5e-9b15-4c80-9a23-5047a6dc887c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 05:30:33 crc kubenswrapper[4958]: I1206 05:30:33.315472 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 05:30:33 crc kubenswrapper[4958]: E1206 05:30:33.315881 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 05:30:33.815866753 +0000 UTC m=+144.349637516 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 05:30:33 crc kubenswrapper[4958]: I1206 05:30:33.389144 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-wndrl" event={"ID":"e904c67d-5ffa-4e8f-96d6-be8c569a22db","Type":"ContainerStarted","Data":"49598ceac3e363349ed802f9b4e1aed2a20470e9f62edded5063660b56f5afc3"} Dec 06 05:30:33 crc kubenswrapper[4958]: I1206 05:30:33.398123 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-f47jc" event={"ID":"8edfe72a-ade0-4c45-9bff-9366b7e53c54","Type":"ContainerStarted","Data":"4411324a6dc0f10183ed53dd03389401d07c5a51a63e6c4d7ac29d88665d7f10"} Dec 06 05:30:33 crc kubenswrapper[4958]: I1206 05:30:33.432936 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzshs\" (UID: \"a4c40d5e-9b15-4c80-9a23-5047a6dc887c\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzshs" Dec 06 05:30:33 crc kubenswrapper[4958]: E1206 05:30:33.433356 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 05:30:33.93334429 +0000 UTC m=+144.467115043 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzshs" (UID: "a4c40d5e-9b15-4c80-9a23-5047a6dc887c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 05:30:33 crc kubenswrapper[4958]: I1206 05:30:33.449983 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9tpxq" event={"ID":"abfcf9f9-314e-4815-bf14-c514914cb314","Type":"ContainerStarted","Data":"6ac42e9ccddc42f12218d4958611bea4dcd13fbfd9a6119714cd38e8033576ec"} Dec 06 05:30:33 crc kubenswrapper[4958]: I1206 05:30:33.451323 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-f47jc" podStartSLOduration=124.451309499 podStartE2EDuration="2m4.451309499s" podCreationTimestamp="2025-12-06 05:28:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:30:33.449384129 +0000 UTC m=+143.983154892" watchObservedRunningTime="2025-12-06 05:30:33.451309499 +0000 UTC m=+143.985080262" Dec 06 05:30:33 crc kubenswrapper[4958]: I1206 05:30:33.463231 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-fnjd6" event={"ID":"687e4231-7d9c-4f35-a38d-f806e9842a0b","Type":"ContainerStarted","Data":"c9f9f4c2291684d17473e3dfac88745faa2bd55e113b451a9104a2357e9f755e"} Dec 06 05:30:33 crc kubenswrapper[4958]: I1206 05:30:33.501394 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7l9wx" event={"ID":"e69d0e64-5612-4db6-a1b4-baec54322829","Type":"ContainerStarted","Data":"60b4ed04576c2275b92f5c9a4a11876b1dc061b6f1b8db9516d950e1269bde64"} Dec 06 05:30:33 crc kubenswrapper[4958]: I1206 05:30:33.518287 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-fnjd6" podStartSLOduration=123.518271057 podStartE2EDuration="2m3.518271057s" podCreationTimestamp="2025-12-06 05:28:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:30:33.513372468 +0000 UTC m=+144.047143231" watchObservedRunningTime="2025-12-06 05:30:33.518271057 +0000 UTC m=+144.052041820" Dec 06 05:30:33 crc kubenswrapper[4958]: I1206 05:30:33.520033 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-5sxpr" event={"ID":"aabd2e32-537f-4998-b642-f170ace9bb6a","Type":"ContainerStarted","Data":"e0e8e5c091620b030b220f37fc61e900e15624a55f56529e0d55ef5cf6f4a130"} Dec 06 05:30:33 crc kubenswrapper[4958]: I1206 05:30:33.538374 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 05:30:33 crc kubenswrapper[4958]: E1206 05:30:33.539330 4958 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 05:30:34.039308236 +0000 UTC m=+144.573078999 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 05:30:33 crc kubenswrapper[4958]: I1206 05:30:33.540746 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-w4gt4" event={"ID":"286d0839-8f8b-4b30-81bc-a1a6ee4738d5","Type":"ContainerStarted","Data":"d83923d9bfaf669f1a63f9fe75dd49b239d2a943e9c3d3d3a3732d9c1aeedade"} Dec 06 05:30:33 crc kubenswrapper[4958]: I1206 05:30:33.542314 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-zhvgf" event={"ID":"75ea57ba-83e5-4bf4-a781-7f6289063163","Type":"ContainerStarted","Data":"7a2a771d8f22c62fb543d534c8b32a358580180224c80b65cba65863318bfb6c"} Dec 06 05:30:33 crc kubenswrapper[4958]: I1206 05:30:33.638450 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mw66x" event={"ID":"d123f533-79f0-4797-acae-bb101594ea67","Type":"ContainerStarted","Data":"5b6459e55cac4689eb2ca9c0f25736984089cec1eeab9aeb5f1fa6c8359c033a"} Dec 06 05:30:33 crc kubenswrapper[4958]: I1206 05:30:33.638531 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mw66x" event={"ID":"d123f533-79f0-4797-acae-bb101594ea67","Type":"ContainerStarted","Data":"345dc1e646a7519a358353dbd0f06cd82ce183255824a486907a4518b21a4e4f"} Dec 06 05:30:33 crc kubenswrapper[4958]: I1206 05:30:33.642160 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzshs\" (UID: \"a4c40d5e-9b15-4c80-9a23-5047a6dc887c\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzshs" Dec 06 05:30:33 crc kubenswrapper[4958]: E1206 05:30:33.643272 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 05:30:34.143259078 +0000 UTC m=+144.677029841 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzshs" (UID: "a4c40d5e-9b15-4c80-9a23-5047a6dc887c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 05:30:33 crc kubenswrapper[4958]: I1206 05:30:33.690835 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-c5v8n" event={"ID":"81369d65-ae42-43fc-a2e6-dbf61d9a86d7","Type":"ContainerStarted","Data":"40cbf42a7feeba5a9c53ae72b20d6ec2ff6791ca7876497d0983276eefd30fed"} Dec 06 05:30:33 crc kubenswrapper[4958]: I1206 05:30:33.715918 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-mw66x" podStartSLOduration=124.715900785 podStartE2EDuration="2m4.715900785s" podCreationTimestamp="2025-12-06 05:28:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:30:33.700462112 +0000 UTC m=+144.234232875" watchObservedRunningTime="2025-12-06 05:30:33.715900785 +0000 UTC m=+144.249671548" Dec 06 05:30:33 crc kubenswrapper[4958]: I1206 05:30:33.742853 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 05:30:33 crc kubenswrapper[4958]: E1206 05:30:33.743849 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 05:30:34.243831264 +0000 UTC m=+144.777602027 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 05:30:33 crc kubenswrapper[4958]: I1206 05:30:33.766901 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-c5v8n" podStartSLOduration=124.766888565 podStartE2EDuration="2m4.766888565s" podCreationTimestamp="2025-12-06 05:28:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:30:33.765815518 +0000 UTC m=+144.299586281" watchObservedRunningTime="2025-12-06 05:30:33.766888565 +0000 UTC m=+144.300659328" Dec 06 05:30:33 crc kubenswrapper[4958]: I1206 05:30:33.845142 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzshs\" (UID: \"a4c40d5e-9b15-4c80-9a23-5047a6dc887c\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzshs" Dec 06 05:30:33 crc kubenswrapper[4958]: E1206 05:30:33.845692 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 05:30:34.345681702 +0000 UTC m=+144.879452465 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzshs" (UID: "a4c40d5e-9b15-4c80-9a23-5047a6dc887c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 05:30:33 crc kubenswrapper[4958]: I1206 05:30:33.851452 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-zckk4" event={"ID":"6ff2ee34-3042-4de9-ad1d-12ea4184a52f","Type":"ContainerStarted","Data":"438da38cc3e05c7bff03efa3b16f6744e84a9d9da3caa048a4e2a3f626d328da"} Dec 06 05:30:33 crc kubenswrapper[4958]: I1206 05:30:33.851507 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-zckk4" event={"ID":"6ff2ee34-3042-4de9-ad1d-12ea4184a52f","Type":"ContainerStarted","Data":"2b5ee22f6a18e5910afe1c19cee65860dae5101b4d2744efc0ed72ea6e0436d4"} Dec 06 05:30:33 crc kubenswrapper[4958]: I1206 05:30:33.878281 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-zckk4" podStartSLOduration=8.878265552 podStartE2EDuration="8.878265552s" podCreationTimestamp="2025-12-06 05:30:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:30:33.877893353 +0000 UTC m=+144.411664106" watchObservedRunningTime="2025-12-06 05:30:33.878265552 +0000 UTC m=+144.412036305" Dec 06 05:30:33 crc kubenswrapper[4958]: I1206 05:30:33.884748 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-8s5qc" event={"ID":"4baa5071-6d1a-4771-ab23-d68db9f231a3","Type":"ContainerStarted","Data":"0ced8d1eb9e4e9aee27a0e53338a73363a0c12809d59a9594e9b0106fa3668fe"} Dec 06 05:30:33 crc kubenswrapper[4958]: I1206 05:30:33.909261 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-wk6vl" event={"ID":"2a2c1115-6ea4-4e7d-91dc-82ab9147d9c8","Type":"ContainerStarted","Data":"4566eb68a41fca8cd8bdfc9a556a8953ad5e7998cc36ec4b612d2d426888f10c"} Dec 06 05:30:33 crc kubenswrapper[4958]: I1206 05:30:33.942767 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-8s5qc" podStartSLOduration=8.942751365 podStartE2EDuration="8.942751365s" podCreationTimestamp="2025-12-06 05:30:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:30:33.942324295 +0000 UTC m=+144.476095058" watchObservedRunningTime="2025-12-06 05:30:33.942751365 +0000 UTC m=+144.476522128" Dec 06 05:30:33 crc kubenswrapper[4958]: I1206 05:30:33.946081 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 05:30:33 crc kubenswrapper[4958]: E1206 05:30:33.946420 4958 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 05:30:34.446405931 +0000 UTC m=+144.980176694 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 05:30:34 crc kubenswrapper[4958]: I1206 05:30:34.021001 4958 generic.go:334] "Generic (PLEG): container finished" podID="8c2597de-216f-46fb-996c-0f0c2598cacc" containerID="ad0bc8276c53d9427937fbbf4cd3d8683655514ad1caa40904f1c494a256a223" exitCode=0 Dec 06 05:30:34 crc kubenswrapper[4958]: I1206 05:30:34.021204 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-thm8z" event={"ID":"8c2597de-216f-46fb-996c-0f0c2598cacc","Type":"ContainerDied","Data":"ad0bc8276c53d9427937fbbf4cd3d8683655514ad1caa40904f1c494a256a223"} Dec 06 05:30:34 crc kubenswrapper[4958]: I1206 05:30:34.047126 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzshs\" (UID: \"a4c40d5e-9b15-4c80-9a23-5047a6dc887c\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzshs" Dec 06 05:30:34 crc kubenswrapper[4958]: E1206 05:30:34.052917 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 05:30:34.552899261 +0000 UTC m=+145.086670024 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzshs" (UID: "a4c40d5e-9b15-4c80-9a23-5047a6dc887c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 05:30:34 crc kubenswrapper[4958]: I1206 05:30:34.062562 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dnlff" event={"ID":"f5ca2592-609c-489b-bbbb-51d8535f8e68","Type":"ContainerStarted","Data":"a07a5fc7743f483888f429846a455e464904a122f1a3fece3fe99e98dfd38967"} Dec 06 05:30:34 crc kubenswrapper[4958]: I1206 05:30:34.114413 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qdd6q" event={"ID":"978a529c-5f37-4f03-92b4-1ef8084e5917","Type":"ContainerStarted","Data":"70870bcecc5a6439c8d9f6769e78ec01a5c6a996c9dc066bd7ede73f06d87422"} Dec 06 05:30:34 crc kubenswrapper[4958]: I1206 05:30:34.127319 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dnlff" podStartSLOduration=125.127300952 podStartE2EDuration="2m5.127300952s" podCreationTimestamp="2025-12-06 05:28:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:30:34.125179927 +0000 UTC m=+144.658950690" watchObservedRunningTime="2025-12-06 05:30:34.127300952 +0000 UTC m=+144.661071715" Dec 06 05:30:34 crc kubenswrapper[4958]: I1206 05:30:34.144590 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-czz58" event={"ID":"cde23f5c-5986-4797-9ec6-ff23122d36b8","Type":"ContainerStarted","Data":"1291414941eac0a3b2cc534987a99587349cddce51408318e9307b07d6990972"} Dec 06 05:30:34 crc kubenswrapper[4958]: I1206 05:30:34.147958 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 05:30:34 crc kubenswrapper[4958]: E1206 05:30:34.149268 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 05:30:34.649247645 +0000 UTC m=+145.183018408 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 05:30:34 crc kubenswrapper[4958]: I1206 05:30:34.168558 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ctv4r" event={"ID":"9d4b2a93-8098-465b-936e-9fb43c59a27c","Type":"ContainerStarted","Data":"ffafc744255287372c09d950fbeff17f314ad6f0725c5a3786fceac44bd0be93"} Dec 06 05:30:34 crc kubenswrapper[4958]: I1206 05:30:34.183003 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-qdd6q" podStartSLOduration=125.182989536 podStartE2EDuration="2m5.182989536s" podCreationTimestamp="2025-12-06 05:28:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:30:34.181085356 +0000 UTC m=+144.714856109" watchObservedRunningTime="2025-12-06 05:30:34.182989536 +0000 UTC m=+144.716760289" Dec 06 05:30:34 crc kubenswrapper[4958]: I1206 05:30:34.184414 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-56k5h" event={"ID":"739f10d8-9484-4a8f-92d2-aa5154384ed1","Type":"ContainerStarted","Data":"3d71d3d3e051e8ee3ce1050a3e7195dd9e70e25a6c4df9636983305358fd72da"} Dec 06 05:30:34 crc kubenswrapper[4958]: I1206 05:30:34.197813 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-h5znl" event={"ID":"e6734918-c336-4932-b571-12cab28ef213","Type":"ContainerStarted","Data":"04e0e5e1a9c85e8ca6c84f8741b7aa40537b6531f3a2f80cb21c99d84d7e20d2"} Dec 06 05:30:34 crc kubenswrapper[4958]: I1206 05:30:34.244021 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416650-s7hr4" event={"ID":"40faedf1-f03f-4c51-8577-f11f34488d09","Type":"ContainerStarted","Data":"e3d0230eee422577b5e2ee34f5928c0075c9d260602c52ad0653401794d2cda7"} Dec 06 05:30:34 crc kubenswrapper[4958]: I1206 05:30:34.246931 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ctv4r" podStartSLOduration=125.246917104 podStartE2EDuration="2m5.246917104s" podCreationTimestamp="2025-12-06 05:28:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:30:34.245168009 +0000 UTC m=+144.778938792" watchObservedRunningTime="2025-12-06 05:30:34.246917104 +0000 UTC m=+144.780687867" Dec 06 05:30:34 crc kubenswrapper[4958]: I1206 05:30:34.250434 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzshs\" (UID: \"a4c40d5e-9b15-4c80-9a23-5047a6dc887c\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzshs" Dec 06 05:30:34 crc kubenswrapper[4958]: E1206 05:30:34.252303 
4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 05:30:34.752288384 +0000 UTC m=+145.286059147 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzshs" (UID: "a4c40d5e-9b15-4c80-9a23-5047a6dc887c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 05:30:34 crc kubenswrapper[4958]: I1206 05:30:34.258865 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mmpcs" event={"ID":"3131ee86-c122-498a-872b-eb20260b6639","Type":"ContainerStarted","Data":"b7c28a2b63842bd4df4a2247272684456a71b5f2ec94329e153adcad4c7cfbfc"} Dec 06 05:30:34 crc kubenswrapper[4958]: I1206 05:30:34.295172 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nfgl4" event={"ID":"fb5a2f1e-9a90-41bf-9a9d-7cd181140ae0","Type":"ContainerStarted","Data":"c3e9426f9fdba176faf39d8eb332cfd26d035271e86f40439a50f0d0fc99208a"} Dec 06 05:30:34 crc kubenswrapper[4958]: I1206 05:30:34.338879 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5z682" event={"ID":"b1e02fb0-74f3-43a8-bb6a-cc59018a50c4","Type":"ContainerStarted","Data":"827d8a606200ecf851a766f9bac75c7ec5560795b8904f729796cd2bb3c85d84"} Dec 06 05:30:34 crc kubenswrapper[4958]: I1206 05:30:34.340130 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5z682" Dec 06 05:30:34 crc kubenswrapper[4958]: I1206 05:30:34.342565 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29416650-s7hr4" podStartSLOduration=34.342554901 podStartE2EDuration="34.342554901s" podCreationTimestamp="2025-12-06 05:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:30:34.340498447 +0000 UTC m=+144.874269210" watchObservedRunningTime="2025-12-06 05:30:34.342554901 +0000 UTC m=+144.876325664" Dec 06 05:30:34 crc kubenswrapper[4958]: I1206 05:30:34.343066 4958 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-5z682 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body= Dec 06 05:30:34 crc kubenswrapper[4958]: I1206 05:30:34.343126 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5z682" podUID="b1e02fb0-74f3-43a8-bb6a-cc59018a50c4" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.31:8443/healthz\": dial tcp 10.217.0.31:8443: connect: connection refused" Dec 06 05:30:34 crc kubenswrapper[4958]: I1206 05:30:34.364085 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 05:30:34 crc kubenswrapper[4958]: E1206 05:30:34.382755 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 05:30:34.882728039 +0000 UTC m=+145.416498802 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 05:30:34 crc kubenswrapper[4958]: I1206 05:30:34.392803 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-kbgkk" event={"ID":"4be188a0-8e11-40a8-b38e-d3d5475c982b","Type":"ContainerStarted","Data":"278d7105950a7a3a9123cbc349d13bbe85b887471b08af49e7b2caa92235421c"} Dec 06 05:30:34 crc kubenswrapper[4958]: I1206 05:30:34.423520 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-h5znl" podStartSLOduration=125.423467492 podStartE2EDuration="2m5.423467492s" podCreationTimestamp="2025-12-06 05:28:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:30:34.381337012 +0000 UTC m=+144.915107765" watchObservedRunningTime="2025-12-06 05:30:34.423467492 +0000 UTC m=+144.957238255" Dec 06 05:30:34 crc kubenswrapper[4958]: I1206 05:30:34.424334 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5z682" podStartSLOduration=125.424329385 podStartE2EDuration="2m5.424329385s" podCreationTimestamp="2025-12-06 05:28:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:30:34.422758894 +0000 UTC m=+144.956529657" watchObservedRunningTime="2025-12-06 05:30:34.424329385 +0000 UTC m=+144.958100148" Dec 06 05:30:34 crc kubenswrapper[4958]: I1206 05:30:34.461753 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-79gtn" event={"ID":"9ba34fce-2bfd-4b37-8b67-f2c936cc1b44","Type":"ContainerStarted","Data":"f5a9cccbfc76de7a4da144cafd8971a50f02f037a7e2ad3dbb9c371fd2a4552c"} Dec 06 05:30:34 crc kubenswrapper[4958]: I1206 05:30:34.462237 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-79gtn" Dec 06 05:30:34 crc kubenswrapper[4958]: I1206 05:30:34.473110 4958 patch_prober.go:28] interesting pod/downloads-7954f5f757-79gtn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body= Dec 06 05:30:34 crc kubenswrapper[4958]: I1206 05:30:34.473146 4958 
prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-79gtn" podUID="9ba34fce-2bfd-4b37-8b67-f2c936cc1b44" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" Dec 06 05:30:34 crc kubenswrapper[4958]: I1206 05:30:34.475153 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gm494" event={"ID":"b06c782b-7cfd-4d0d-9f40-590645abde2f","Type":"ContainerStarted","Data":"f0c1eef8bf9e07ffc88cb2623d5a9f7a51edea17b76245907c251a981b549b39"} Dec 06 05:30:34 crc kubenswrapper[4958]: I1206 05:30:34.485489 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzshs\" (UID: \"a4c40d5e-9b15-4c80-9a23-5047a6dc887c\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzshs" Dec 06 05:30:34 crc kubenswrapper[4958]: E1206 05:30:34.485793 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 05:30:34.985780169 +0000 UTC m=+145.519550932 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzshs" (UID: "a4c40d5e-9b15-4c80-9a23-5047a6dc887c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 05:30:34 crc kubenswrapper[4958]: I1206 05:30:34.487089 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-nfgl4" podStartSLOduration=125.487074562 podStartE2EDuration="2m5.487074562s" podCreationTimestamp="2025-12-06 05:28:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:30:34.486172159 +0000 UTC m=+145.019942912" watchObservedRunningTime="2025-12-06 05:30:34.487074562 +0000 UTC m=+145.020845325" Dec 06 05:30:34 crc kubenswrapper[4958]: I1206 05:30:34.514872 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mmpcs" podStartSLOduration=125.514847978 podStartE2EDuration="2m5.514847978s" podCreationTimestamp="2025-12-06 05:28:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:30:34.514219841 +0000 UTC m=+145.047990604" watchObservedRunningTime="2025-12-06 05:30:34.514847978 +0000 UTC m=+145.048618731" Dec 06 05:30:34 crc kubenswrapper[4958]: I1206 05:30:34.529454 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qp66v" Dec 06 05:30:34 crc kubenswrapper[4958]: I1206 05:30:34.555045 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-authentication-operator/authentication-operator-69f744f599-v8c65" event={"ID":"ed3eb5df-37e1-4849-96f5-5dea9dad7ee2","Type":"ContainerStarted","Data":"8f4e41c380e71c022611121304647b636d546e2d87a00432cd46a88b68c81e51"} Dec 06 05:30:34 crc kubenswrapper[4958]: I1206 05:30:34.579061 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dmrfv" event={"ID":"5435d984-69a2-4441-a63d-fde03d6a7081","Type":"ContainerStarted","Data":"75c8da4bae93422c362b9d4ff6ae9f61e3e883e987def9585286502353d37aa2"} Dec 06 05:30:34 crc kubenswrapper[4958]: I1206 05:30:34.580588 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-2hl9j" Dec 06 05:30:34 crc kubenswrapper[4958]: I1206 05:30:34.586420 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 05:30:34 crc kubenswrapper[4958]: E1206 05:30:34.587582 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 05:30:35.087566866 +0000 UTC m=+145.621337619 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 05:30:34 crc kubenswrapper[4958]: I1206 05:30:34.606957 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-79gtn" podStartSLOduration=125.60693899 podStartE2EDuration="2m5.60693899s" podCreationTimestamp="2025-12-06 05:28:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:30:34.564322669 +0000 UTC m=+145.098093442" watchObservedRunningTime="2025-12-06 05:30:34.60693899 +0000 UTC m=+145.140709753" Dec 06 05:30:34 crc kubenswrapper[4958]: I1206 05:30:34.608107 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gm494" podStartSLOduration=125.608099621 podStartE2EDuration="2m5.608099621s" podCreationTimestamp="2025-12-06 05:28:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:30:34.606289714 +0000 UTC m=+145.140060467" watchObservedRunningTime="2025-12-06 05:30:34.608099621 +0000 UTC m=+145.141870394" Dec 06 05:30:34 crc kubenswrapper[4958]: I1206 05:30:34.609354 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-w8bt8" Dec 06 05:30:34 crc kubenswrapper[4958]: I1206 05:30:34.614812 4958 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-2hl9j" Dec 06 05:30:34 crc kubenswrapper[4958]: I1206 05:30:34.649423 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-kbgkk" podStartSLOduration=124.649405969 podStartE2EDuration="2m4.649405969s" podCreationTimestamp="2025-12-06 05:28:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:30:34.648548817 +0000 UTC m=+145.182319580" watchObservedRunningTime="2025-12-06 05:30:34.649405969 +0000 UTC m=+145.183176732" Dec 06 05:30:34 crc kubenswrapper[4958]: I1206 05:30:34.693503 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzshs\" (UID: \"a4c40d5e-9b15-4c80-9a23-5047a6dc887c\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzshs" Dec 06 05:30:34 crc kubenswrapper[4958]: E1206 05:30:34.698590 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 05:30:35.198567682 +0000 UTC m=+145.732338655 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzshs" (UID: "a4c40d5e-9b15-4c80-9a23-5047a6dc887c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 05:30:34 crc kubenswrapper[4958]: I1206 05:30:34.711011 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qp66v" podStartSLOduration=125.710984027 podStartE2EDuration="2m5.710984027s" podCreationTimestamp="2025-12-06 05:28:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:30:34.697089404 +0000 UTC m=+145.230860157" watchObservedRunningTime="2025-12-06 05:30:34.710984027 +0000 UTC m=+145.244754780" Dec 06 05:30:34 crc kubenswrapper[4958]: I1206 05:30:34.794751 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 05:30:34 crc kubenswrapper[4958]: E1206 05:30:34.795560 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 05:30:35.295544653 +0000 UTC m=+145.829315416 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 05:30:34 crc kubenswrapper[4958]: I1206 05:30:34.900331 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzshs\" (UID: \"a4c40d5e-9b15-4c80-9a23-5047a6dc887c\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzshs" Dec 06 05:30:34 crc kubenswrapper[4958]: E1206 05:30:34.900694 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 05:30:35.400682287 +0000 UTC m=+145.934453040 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzshs" (UID: "a4c40d5e-9b15-4c80-9a23-5047a6dc887c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 05:30:35 crc kubenswrapper[4958]: I1206 05:30:35.003039 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 05:30:35 crc kubenswrapper[4958]: E1206 05:30:35.003333 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 05:30:35.503319796 +0000 UTC m=+146.037090559 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 05:30:35 crc kubenswrapper[4958]: I1206 05:30:35.106574 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzshs\" (UID: \"a4c40d5e-9b15-4c80-9a23-5047a6dc887c\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzshs" Dec 06 05:30:35 crc kubenswrapper[4958]: E1206 05:30:35.106946 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 05:30:35.60692982 +0000 UTC m=+146.140700583 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzshs" (UID: "a4c40d5e-9b15-4c80-9a23-5047a6dc887c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 05:30:35 crc kubenswrapper[4958]: I1206 05:30:35.208123 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 05:30:35 crc kubenswrapper[4958]: E1206 05:30:35.208345 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 05:30:35.708285976 +0000 UTC m=+146.242056749 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 05:30:35 crc kubenswrapper[4958]: I1206 05:30:35.208911 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzshs\" (UID: \"a4c40d5e-9b15-4c80-9a23-5047a6dc887c\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzshs" Dec 06 05:30:35 crc kubenswrapper[4958]: E1206 05:30:35.209201 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 05:30:35.70918841 +0000 UTC m=+146.242959173 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzshs" (UID: "a4c40d5e-9b15-4c80-9a23-5047a6dc887c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 05:30:35 crc kubenswrapper[4958]: I1206 05:30:35.310181 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 05:30:35 crc kubenswrapper[4958]: E1206 05:30:35.310615 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 05:30:35.810599486 +0000 UTC m=+146.344370249 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 05:30:35 crc kubenswrapper[4958]: I1206 05:30:35.412080 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzshs\" (UID: \"a4c40d5e-9b15-4c80-9a23-5047a6dc887c\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzshs" Dec 06 05:30:35 crc kubenswrapper[4958]: E1206 05:30:35.412621 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 05:30:35.912603709 +0000 UTC m=+146.446374472 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzshs" (UID: "a4c40d5e-9b15-4c80-9a23-5047a6dc887c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 05:30:35 crc kubenswrapper[4958]: I1206 05:30:35.513593 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 05:30:35 crc kubenswrapper[4958]: E1206 05:30:35.513730 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 05:30:36.013703507 +0000 UTC m=+146.547474270 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 05:30:35 crc kubenswrapper[4958]: I1206 05:30:35.513862 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzshs\" (UID: \"a4c40d5e-9b15-4c80-9a23-5047a6dc887c\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzshs" Dec 06 05:30:35 crc kubenswrapper[4958]: E1206 05:30:35.514193 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 05:30:36.01418385 +0000 UTC m=+146.547954613 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzshs" (UID: "a4c40d5e-9b15-4c80-9a23-5047a6dc887c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 05:30:35 crc kubenswrapper[4958]: I1206 05:30:35.584524 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416650-s7hr4" event={"ID":"40faedf1-f03f-4c51-8577-f11f34488d09","Type":"ContainerStarted","Data":"989c48790bf8935d614fc84cf222bf2437aea7bbfbc4742f34be129298cfa81d"} Dec 06 05:30:35 crc kubenswrapper[4958]: I1206 05:30:35.586105 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-czz58" event={"ID":"cde23f5c-5986-4797-9ec6-ff23122d36b8","Type":"ContainerStarted","Data":"d75cad15affac4872d4efda31254eed0ca25d6895556ed8b855c5bf81d744fe1"} Dec 06 05:30:35 crc kubenswrapper[4958]: I1206 05:30:35.586131 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-czz58" event={"ID":"cde23f5c-5986-4797-9ec6-ff23122d36b8","Type":"ContainerStarted","Data":"c5f6cc2edff7179344592e528437e2dc4cc2ebce0b6799952b4c80909bceb221"} Dec 06 05:30:35 crc kubenswrapper[4958]: I1206 05:30:35.586297 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-czz58" Dec 06 05:30:35 crc kubenswrapper[4958]: I1206 05:30:35.587720 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-ctv4r" event={"ID":"9d4b2a93-8098-465b-936e-9fb43c59a27c","Type":"ContainerStarted","Data":"8c3f3d3540643d8dbd50ce57db6ae378bff22b3d0a49a8726d54f24d3f44a8f0"} Dec 06 05:30:35 crc kubenswrapper[4958]: I1206 05:30:35.589291 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5z682" 
event={"ID":"b1e02fb0-74f3-43a8-bb6a-cc59018a50c4","Type":"ContainerStarted","Data":"64d0a3453fa5f9fe79c341b74b089cb70ef2febf6ebe988e2a2269b84602fd40"} Dec 06 05:30:35 crc kubenswrapper[4958]: I1206 05:30:35.590581 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9tpxq" event={"ID":"abfcf9f9-314e-4815-bf14-c514914cb314","Type":"ContainerStarted","Data":"9837ae48ec876656d02c7184fe40e31811bd80972c4c32fed0d8aa97cabadd89"} Dec 06 05:30:35 crc kubenswrapper[4958]: I1206 05:30:35.592748 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-wk6vl" event={"ID":"2a2c1115-6ea4-4e7d-91dc-82ab9147d9c8","Type":"ContainerStarted","Data":"6a0701f1ba7b9f71b8bcbe23be205dd54ae4898218029f553b51bdee21fd9059"} Dec 06 05:30:35 crc kubenswrapper[4958]: I1206 05:30:35.593399 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-5z682" Dec 06 05:30:35 crc kubenswrapper[4958]: I1206 05:30:35.594157 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-wndrl" event={"ID":"e904c67d-5ffa-4e8f-96d6-be8c569a22db","Type":"ContainerStarted","Data":"33a0cecee3075fd9bf50f5b3ef6024ec28696206c7698095b37aa8e24b26f6fe"} Dec 06 05:30:35 crc kubenswrapper[4958]: I1206 05:30:35.595758 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-5sxpr" event={"ID":"aabd2e32-537f-4998-b642-f170ace9bb6a","Type":"ContainerStarted","Data":"3a0771a362e919bc43da01a3a0b1605976d81d845eaeec50cbe26246502d81a5"} Dec 06 05:30:35 crc kubenswrapper[4958]: I1206 05:30:35.597253 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-56k5h" event={"ID":"739f10d8-9484-4a8f-92d2-aa5154384ed1","Type":"ContainerStarted","Data":"09eef2eb18d5d94381e06dc7c137c84d2d7d86944ad49dff76ef7f7a91af0121"} Dec 06 05:30:35 crc kubenswrapper[4958]: I1206 05:30:35.598825 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-w4gt4" event={"ID":"286d0839-8f8b-4b30-81bc-a1a6ee4738d5","Type":"ContainerStarted","Data":"94b8d2cdff4a3d8d3ad5c037b7aff2e6cb8b511463258057a40f211eea2d4b37"} Dec 06 05:30:35 crc kubenswrapper[4958]: I1206 05:30:35.599401 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-w4gt4" Dec 06 05:30:35 crc kubenswrapper[4958]: I1206 05:30:35.600949 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qp66v" event={"ID":"651c6106-fa9c-43ea-b9c2-68b77e1652c8","Type":"ContainerStarted","Data":"275f8d472e136b1efc843e13365cc252248d038a5311d09f51f2d8afcafa46c2"} Dec 06 05:30:35 crc kubenswrapper[4958]: I1206 05:30:35.603664 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-h5znl" event={"ID":"e6734918-c336-4932-b571-12cab28ef213","Type":"ContainerStarted","Data":"eab3437cefb28e55fd96328bdc624d740acb6156b137a26294f7b46812571505"} Dec 06 05:30:35 crc kubenswrapper[4958]: I1206 05:30:35.605588 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-w4gt4" Dec 06 05:30:35 crc kubenswrapper[4958]: I1206 05:30:35.609004 
4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-ph6g5" event={"ID":"fc16acb8-14a0-4b1d-ba72-9a53f2bdb622","Type":"ContainerStarted","Data":"5fb186007bf2cbd2b06ecbbacc80fd7ebfc0647bb20ad00f581c7c35a377c4a5"} Dec 06 05:30:35 crc kubenswrapper[4958]: I1206 05:30:35.610021 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-dmrfv" podStartSLOduration=126.610009711 podStartE2EDuration="2m6.610009711s" podCreationTimestamp="2025-12-06 05:28:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:30:34.834425338 +0000 UTC m=+145.368196101" watchObservedRunningTime="2025-12-06 05:30:35.610009711 +0000 UTC m=+146.143780474" Dec 06 05:30:35 crc kubenswrapper[4958]: I1206 05:30:35.610619 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-czz58" podStartSLOduration=125.610615377 podStartE2EDuration="2m5.610615377s" podCreationTimestamp="2025-12-06 05:28:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:30:35.608785399 +0000 UTC m=+146.142556152" watchObservedRunningTime="2025-12-06 05:30:35.610615377 +0000 UTC m=+146.144386140" Dec 06 05:30:35 crc kubenswrapper[4958]: I1206 05:30:35.617061 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 05:30:35 crc kubenswrapper[4958]: E1206 05:30:35.617513 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 05:30:36.117498496 +0000 UTC m=+146.651269249 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 05:30:35 crc kubenswrapper[4958]: I1206 05:30:35.622667 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7l9wx" event={"ID":"e69d0e64-5612-4db6-a1b4-baec54322829","Type":"ContainerStarted","Data":"4f372c6c873778b73b8b79597e43bdb786d9aee66f52c6810833e2daa8d0ed34"} Dec 06 05:30:35 crc kubenswrapper[4958]: I1206 05:30:35.623134 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7l9wx" Dec 06 05:30:35 crc kubenswrapper[4958]: I1206 05:30:35.630152 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-thm8z" event={"ID":"8c2597de-216f-46fb-996c-0f0c2598cacc","Type":"ContainerStarted","Data":"47549ac89d718b9ab181a57ee9a74e59e88bf8c04d9cd223d870edd61e966b99"} Dec 06 05:30:35 crc kubenswrapper[4958]: I1206 05:30:35.630195 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-thm8z" event={"ID":"8c2597de-216f-46fb-996c-0f0c2598cacc","Type":"ContainerStarted","Data":"c11babc657df6d338a8b8a91abf99a6d30ac13563554309cd313a5257db89570"} Dec 06 05:30:35 crc kubenswrapper[4958]: I1206 05:30:35.630666 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-56k5h" podStartSLOduration=126.63064888 podStartE2EDuration="2m6.63064888s" podCreationTimestamp="2025-12-06 05:28:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:30:35.628584536 +0000 UTC m=+146.162355299" watchObservedRunningTime="2025-12-06 05:30:35.63064888 +0000 UTC m=+146.164419643" Dec 06 05:30:35 crc kubenswrapper[4958]: I1206 05:30:35.633351 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-zhvgf" event={"ID":"75ea57ba-83e5-4bf4-a781-7f6289063163","Type":"ContainerStarted","Data":"08d80ac18de1bc25e16d6c3c1d7b1fa8ed388f0da4ddeda80b171f90a2e7a208"} Dec 06 05:30:35 crc kubenswrapper[4958]: I1206 05:30:35.633385 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-zhvgf" Dec 06 05:30:35 crc kubenswrapper[4958]: I1206 05:30:35.633395 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-zhvgf" event={"ID":"75ea57ba-83e5-4bf4-a781-7f6289063163","Type":"ContainerStarted","Data":"35e8bdff2b859e05c55644ebd19a7a0440bd3edd6f3321940ed55df0436c1f86"} Dec 06 05:30:35 crc kubenswrapper[4958]: I1206 05:30:35.634164 4958 patch_prober.go:28] interesting pod/downloads-7954f5f757-79gtn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body= Dec 06 05:30:35 crc kubenswrapper[4958]: I1206 05:30:35.634199 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-79gtn" 
podUID="9ba34fce-2bfd-4b37-8b67-f2c936cc1b44" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" Dec 06 05:30:35 crc kubenswrapper[4958]: I1206 05:30:35.639965 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7l9wx" Dec 06 05:30:35 crc kubenswrapper[4958]: I1206 05:30:35.682157 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-9tpxq" podStartSLOduration=126.682137713 podStartE2EDuration="2m6.682137713s" podCreationTimestamp="2025-12-06 05:28:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:30:35.68202164 +0000 UTC m=+146.215792403" watchObservedRunningTime="2025-12-06 05:30:35.682137713 +0000 UTC m=+146.215908466" Dec 06 05:30:35 crc kubenswrapper[4958]: I1206 05:30:35.693509 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-wndrl" podStartSLOduration=126.693487359 podStartE2EDuration="2m6.693487359s" podCreationTimestamp="2025-12-06 05:28:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:30:35.660358785 +0000 UTC m=+146.194129548" watchObservedRunningTime="2025-12-06 05:30:35.693487359 +0000 UTC m=+146.227258122" Dec 06 05:30:35 crc kubenswrapper[4958]: I1206 05:30:35.718833 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzshs\" (UID: \"a4c40d5e-9b15-4c80-9a23-5047a6dc887c\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzshs" Dec 06 05:30:35 crc kubenswrapper[4958]: E1206 05:30:35.721309 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 05:30:36.221278825 +0000 UTC m=+146.755049588 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzshs" (UID: "a4c40d5e-9b15-4c80-9a23-5047a6dc887c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 05:30:35 crc kubenswrapper[4958]: I1206 05:30:35.739440 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-5sxpr" podStartSLOduration=126.739424159 podStartE2EDuration="2m6.739424159s" podCreationTimestamp="2025-12-06 05:28:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:30:35.738435162 +0000 UTC m=+146.272205925" watchObservedRunningTime="2025-12-06 05:30:35.739424159 +0000 UTC m=+146.273194922" Dec 06 05:30:35 crc kubenswrapper[4958]: I1206 05:30:35.763800 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-w4gt4" podStartSLOduration=126.763785375 podStartE2EDuration="2m6.763785375s" podCreationTimestamp="2025-12-06 05:28:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:30:35.760192041 +0000 UTC m=+146.293962804" watchObservedRunningTime="2025-12-06 05:30:35.763785375 +0000 UTC m=+146.297556138" Dec 06 05:30:35 crc kubenswrapper[4958]: I1206 05:30:35.790912 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-zhvgf" podStartSLOduration=10.790897782 podStartE2EDuration="10.790897782s" podCreationTimestamp="2025-12-06 05:30:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:30:35.790531123 +0000 UTC m=+146.324301886" watchObservedRunningTime="2025-12-06 05:30:35.790897782 +0000 UTC m=+146.324668545" Dec 06 05:30:35 crc kubenswrapper[4958]: I1206 05:30:35.821367 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 05:30:35 crc kubenswrapper[4958]: E1206 05:30:35.821601 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 05:30:36.321574063 +0000 UTC m=+146.855344826 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 05:30:35 crc kubenswrapper[4958]: I1206 05:30:35.822325 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzshs\" (UID: \"a4c40d5e-9b15-4c80-9a23-5047a6dc887c\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzshs" Dec 06 05:30:35 crc kubenswrapper[4958]: E1206 05:30:35.827345 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-06 05:30:36.327319853 +0000 UTC m=+146.861090616 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-rzshs" (UID: "a4c40d5e-9b15-4c80-9a23-5047a6dc887c") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 05:30:35 crc kubenswrapper[4958]: I1206 05:30:35.831413 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7l9wx" podStartSLOduration=125.831389499 podStartE2EDuration="2m5.831389499s" podCreationTimestamp="2025-12-06 05:28:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:30:35.828256028 +0000 UTC m=+146.362026791" watchObservedRunningTime="2025-12-06 05:30:35.831389499 +0000 UTC m=+146.365160282" Dec 06 05:30:35 crc kubenswrapper[4958]: I1206 05:30:35.925224 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-thm8z" podStartSLOduration=126.925202738 podStartE2EDuration="2m6.925202738s" podCreationTimestamp="2025-12-06 05:28:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:30:35.883288514 +0000 UTC m=+146.417059277" watchObservedRunningTime="2025-12-06 05:30:35.925202738 +0000 UTC m=+146.458973501" Dec 06 05:30:35 crc kubenswrapper[4958]: I1206 05:30:35.926249 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 05:30:35 crc kubenswrapper[4958]: E1206 05:30:35.926617 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-06 05:30:36.426600624 +0000 UTC m=+146.960371387 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 06 05:30:35 crc kubenswrapper[4958]: I1206 05:30:35.937926 4958 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Dec 06 05:30:35 crc kubenswrapper[4958]: I1206 05:30:35.947824 4958 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-12-06T05:30:35.937958441Z","Handler":null,"Name":""} Dec 06 05:30:35 crc kubenswrapper[4958]: I1206 05:30:35.965771 4958 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Dec 06 05:30:35 crc kubenswrapper[4958]: I1206 05:30:35.965808 4958 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Dec 06 05:30:36 crc kubenswrapper[4958]: I1206 05:30:36.027829 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzshs\" (UID: \"a4c40d5e-9b15-4c80-9a23-5047a6dc887c\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzshs" Dec 06 05:30:36 crc kubenswrapper[4958]: I1206 05:30:36.031025 4958 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
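[Editor's illustration, not part of the log: the UnmountVolume/MountVolume retries above keep failing with "driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers" on a 500ms durationBeforeRetry backoff until the plugin socket registers at 05:30:35.965, after which the MountDevice/SetUp operations below succeed. The kubelet does this internally via its plugin watcher; as an external way to observe the same registration, here is a minimal client-go sketch that polls a node's CSINode object for the driver name. The kubeconfig path, node name "crc", polling cadence, and timeout are assumptions for illustration.]

// Sketch: wait for a CSI driver name to appear in a node's CSINode object.
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location; supply the real path for your cluster.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	const node, driver = "crc", "kubevirt.io.hostpath-provisioner"
	// Poll on the same 500ms cadence the kubelet uses between retries
	// (durationBeforeRetry in the log above), for at most one minute.
	for i := 0; i < 120; i++ {
		csiNode, err := cs.StorageV1().CSINodes().Get(context.TODO(), node, metav1.GetOptions{})
		if err == nil {
			for _, d := range csiNode.Spec.Drivers {
				// Drivers are added here once their registration socket is
				// picked up, matching the csi_plugin.go lines in the log.
				if d.Name == driver {
					fmt.Printf("driver %s registered on node %s\n", d.Name, node)
					return
				}
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("driver never registered")
}

[End of illustration; the log resumes below with the now-succeeding mount operations.]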
Dec 06 05:30:36 crc kubenswrapper[4958]: I1206 05:30:36.031058 4958 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzshs\" (UID: \"a4c40d5e-9b15-4c80-9a23-5047a6dc887c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-rzshs" Dec 06 05:30:36 crc kubenswrapper[4958]: I1206 05:30:36.079734 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-rzshs\" (UID: \"a4c40d5e-9b15-4c80-9a23-5047a6dc887c\") " pod="openshift-image-registry/image-registry-697d97f7c8-rzshs" Dec 06 05:30:36 crc kubenswrapper[4958]: I1206 05:30:36.129116 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 06 05:30:36 crc kubenswrapper[4958]: I1206 05:30:36.171330 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-rzshs" Dec 06 05:30:36 crc kubenswrapper[4958]: I1206 05:30:36.191438 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 06 05:30:36 crc kubenswrapper[4958]: I1206 05:30:36.275600 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-ph6g5" podStartSLOduration=127.275582002 podStartE2EDuration="2m7.275582002s" podCreationTimestamp="2025-12-06 05:28:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:30:35.925837764 +0000 UTC m=+146.459608527" watchObservedRunningTime="2025-12-06 05:30:36.275582002 +0000 UTC m=+146.809352765" Dec 06 05:30:36 crc kubenswrapper[4958]: I1206 05:30:36.278259 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-gkp2b"] Dec 06 05:30:36 crc kubenswrapper[4958]: I1206 05:30:36.279180 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gkp2b" Dec 06 05:30:36 crc kubenswrapper[4958]: I1206 05:30:36.280754 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 06 05:30:36 crc kubenswrapper[4958]: I1206 05:30:36.291769 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gkp2b"] Dec 06 05:30:36 crc kubenswrapper[4958]: I1206 05:30:36.392843 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-ph6g5" Dec 06 05:30:36 crc kubenswrapper[4958]: I1206 05:30:36.396190 4958 patch_prober.go:28] interesting pod/router-default-5444994796-ph6g5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 06 05:30:36 crc kubenswrapper[4958]: [-]has-synced failed: reason withheld Dec 06 05:30:36 crc kubenswrapper[4958]: [+]process-running ok Dec 06 05:30:36 crc kubenswrapper[4958]: healthz check failed Dec 06 05:30:36 crc kubenswrapper[4958]: I1206 05:30:36.396271 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ph6g5" podUID="fc16acb8-14a0-4b1d-ba72-9a53f2bdb622" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 06 05:30:36 crc kubenswrapper[4958]: I1206 05:30:36.432966 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/192a9281-a2ae-4251-aaf2-8d1f67d0321c-utilities\") pod \"certified-operators-gkp2b\" (UID: \"192a9281-a2ae-4251-aaf2-8d1f67d0321c\") " pod="openshift-marketplace/certified-operators-gkp2b" Dec 06 05:30:36 crc kubenswrapper[4958]: I1206 05:30:36.433028 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7jrl\" (UniqueName: \"kubernetes.io/projected/192a9281-a2ae-4251-aaf2-8d1f67d0321c-kube-api-access-q7jrl\") pod \"certified-operators-gkp2b\" (UID: \"192a9281-a2ae-4251-aaf2-8d1f67d0321c\") " pod="openshift-marketplace/certified-operators-gkp2b" Dec 06 05:30:36 crc kubenswrapper[4958]: I1206 05:30:36.433054 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/192a9281-a2ae-4251-aaf2-8d1f67d0321c-catalog-content\") pod \"certified-operators-gkp2b\" (UID: \"192a9281-a2ae-4251-aaf2-8d1f67d0321c\") " pod="openshift-marketplace/certified-operators-gkp2b" Dec 06 05:30:36 crc kubenswrapper[4958]: I1206 05:30:36.508210 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qzfg8"] Dec 06 05:30:36 crc kubenswrapper[4958]: I1206 05:30:36.509233 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qzfg8" Dec 06 05:30:36 crc kubenswrapper[4958]: I1206 05:30:36.512709 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 06 05:30:36 crc kubenswrapper[4958]: I1206 05:30:36.519837 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qzfg8"] Dec 06 05:30:36 crc kubenswrapper[4958]: I1206 05:30:36.534460 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7jrl\" (UniqueName: \"kubernetes.io/projected/192a9281-a2ae-4251-aaf2-8d1f67d0321c-kube-api-access-q7jrl\") pod \"certified-operators-gkp2b\" (UID: \"192a9281-a2ae-4251-aaf2-8d1f67d0321c\") " pod="openshift-marketplace/certified-operators-gkp2b" Dec 06 05:30:36 crc kubenswrapper[4958]: I1206 05:30:36.534536 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/192a9281-a2ae-4251-aaf2-8d1f67d0321c-catalog-content\") pod \"certified-operators-gkp2b\" (UID: \"192a9281-a2ae-4251-aaf2-8d1f67d0321c\") " pod="openshift-marketplace/certified-operators-gkp2b" Dec 06 05:30:36 crc kubenswrapper[4958]: I1206 05:30:36.534700 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/192a9281-a2ae-4251-aaf2-8d1f67d0321c-utilities\") pod \"certified-operators-gkp2b\" (UID: \"192a9281-a2ae-4251-aaf2-8d1f67d0321c\") " pod="openshift-marketplace/certified-operators-gkp2b" Dec 06 05:30:36 crc kubenswrapper[4958]: I1206 05:30:36.535690 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/192a9281-a2ae-4251-aaf2-8d1f67d0321c-utilities\") pod \"certified-operators-gkp2b\" (UID: \"192a9281-a2ae-4251-aaf2-8d1f67d0321c\") " pod="openshift-marketplace/certified-operators-gkp2b" Dec 06 05:30:36 crc kubenswrapper[4958]: I1206 05:30:36.536531 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/192a9281-a2ae-4251-aaf2-8d1f67d0321c-catalog-content\") pod \"certified-operators-gkp2b\" (UID: \"192a9281-a2ae-4251-aaf2-8d1f67d0321c\") " pod="openshift-marketplace/certified-operators-gkp2b" Dec 06 05:30:36 crc kubenswrapper[4958]: I1206 05:30:36.584852 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7jrl\" (UniqueName: \"kubernetes.io/projected/192a9281-a2ae-4251-aaf2-8d1f67d0321c-kube-api-access-q7jrl\") pod \"certified-operators-gkp2b\" (UID: \"192a9281-a2ae-4251-aaf2-8d1f67d0321c\") " pod="openshift-marketplace/certified-operators-gkp2b" Dec 06 05:30:36 crc kubenswrapper[4958]: I1206 05:30:36.597999 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gkp2b" Dec 06 05:30:36 crc kubenswrapper[4958]: I1206 05:30:36.637909 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6qtp\" (UniqueName: \"kubernetes.io/projected/71c8096c-9091-428a-a142-185855892fb9-kube-api-access-b6qtp\") pod \"community-operators-qzfg8\" (UID: \"71c8096c-9091-428a-a142-185855892fb9\") " pod="openshift-marketplace/community-operators-qzfg8" Dec 06 05:30:36 crc kubenswrapper[4958]: I1206 05:30:36.638221 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c8096c-9091-428a-a142-185855892fb9-catalog-content\") pod \"community-operators-qzfg8\" (UID: \"71c8096c-9091-428a-a142-185855892fb9\") " pod="openshift-marketplace/community-operators-qzfg8" Dec 06 05:30:36 crc kubenswrapper[4958]: I1206 05:30:36.638260 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8096c-9091-428a-a142-185855892fb9-utilities\") pod \"community-operators-qzfg8\" (UID: \"71c8096c-9091-428a-a142-185855892fb9\") " pod="openshift-marketplace/community-operators-qzfg8" Dec 06 05:30:36 crc kubenswrapper[4958]: I1206 05:30:36.671217 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wcwnn"] Dec 06 05:30:36 crc kubenswrapper[4958]: I1206 05:30:36.672273 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wcwnn" Dec 06 05:30:36 crc kubenswrapper[4958]: I1206 05:30:36.683456 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-wk6vl" event={"ID":"2a2c1115-6ea4-4e7d-91dc-82ab9147d9c8","Type":"ContainerStarted","Data":"6bbeec0f732fc462d24ba7adfa73956a3580b80eb180859a648f33fb8bdd7ec2"} Dec 06 05:30:36 crc kubenswrapper[4958]: I1206 05:30:36.683502 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-wk6vl" event={"ID":"2a2c1115-6ea4-4e7d-91dc-82ab9147d9c8","Type":"ContainerStarted","Data":"b44f2e014c673e14af99d6c7bf839af0535f4e16981e2885856ebfd99106082c"} Dec 06 05:30:36 crc kubenswrapper[4958]: I1206 05:30:36.701560 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wcwnn"] Dec 06 05:30:36 crc kubenswrapper[4958]: I1206 05:30:36.705778 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qp66v" Dec 06 05:30:36 crc kubenswrapper[4958]: I1206 05:30:36.738987 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c8096c-9091-428a-a142-185855892fb9-catalog-content\") pod \"community-operators-qzfg8\" (UID: \"71c8096c-9091-428a-a142-185855892fb9\") " pod="openshift-marketplace/community-operators-qzfg8" Dec 06 05:30:36 crc kubenswrapper[4958]: I1206 05:30:36.739064 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8096c-9091-428a-a142-185855892fb9-utilities\") pod \"community-operators-qzfg8\" (UID: \"71c8096c-9091-428a-a142-185855892fb9\") " pod="openshift-marketplace/community-operators-qzfg8" Dec 06 05:30:36 crc kubenswrapper[4958]: I1206 
05:30:36.739112 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6qtp\" (UniqueName: \"kubernetes.io/projected/71c8096c-9091-428a-a142-185855892fb9-kube-api-access-b6qtp\") pod \"community-operators-qzfg8\" (UID: \"71c8096c-9091-428a-a142-185855892fb9\") " pod="openshift-marketplace/community-operators-qzfg8" Dec 06 05:30:36 crc kubenswrapper[4958]: I1206 05:30:36.739771 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c8096c-9091-428a-a142-185855892fb9-catalog-content\") pod \"community-operators-qzfg8\" (UID: \"71c8096c-9091-428a-a142-185855892fb9\") " pod="openshift-marketplace/community-operators-qzfg8" Dec 06 05:30:36 crc kubenswrapper[4958]: I1206 05:30:36.739862 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8096c-9091-428a-a142-185855892fb9-utilities\") pod \"community-operators-qzfg8\" (UID: \"71c8096c-9091-428a-a142-185855892fb9\") " pod="openshift-marketplace/community-operators-qzfg8" Dec 06 05:30:36 crc kubenswrapper[4958]: I1206 05:30:36.773697 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6qtp\" (UniqueName: \"kubernetes.io/projected/71c8096c-9091-428a-a142-185855892fb9-kube-api-access-b6qtp\") pod \"community-operators-qzfg8\" (UID: \"71c8096c-9091-428a-a142-185855892fb9\") " pod="openshift-marketplace/community-operators-qzfg8" Dec 06 05:30:36 crc kubenswrapper[4958]: I1206 05:30:36.840959 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/215a3047-0656-45e9-aa87-9f4987ed3b83-catalog-content\") pod \"certified-operators-wcwnn\" (UID: \"215a3047-0656-45e9-aa87-9f4987ed3b83\") " pod="openshift-marketplace/certified-operators-wcwnn" Dec 06 05:30:36 crc kubenswrapper[4958]: I1206 05:30:36.841077 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/215a3047-0656-45e9-aa87-9f4987ed3b83-utilities\") pod \"certified-operators-wcwnn\" (UID: \"215a3047-0656-45e9-aa87-9f4987ed3b83\") " pod="openshift-marketplace/certified-operators-wcwnn" Dec 06 05:30:36 crc kubenswrapper[4958]: I1206 05:30:36.841189 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2dvz\" (UniqueName: \"kubernetes.io/projected/215a3047-0656-45e9-aa87-9f4987ed3b83-kube-api-access-b2dvz\") pod \"certified-operators-wcwnn\" (UID: \"215a3047-0656-45e9-aa87-9f4987ed3b83\") " pod="openshift-marketplace/certified-operators-wcwnn" Dec 06 05:30:36 crc kubenswrapper[4958]: I1206 05:30:36.841535 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-rzshs"] Dec 06 05:30:36 crc kubenswrapper[4958]: I1206 05:30:36.874767 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zhltk"] Dec 06 05:30:36 crc kubenswrapper[4958]: I1206 05:30:36.877089 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zhltk" Dec 06 05:30:36 crc kubenswrapper[4958]: W1206 05:30:36.895096 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda4c40d5e_9b15_4c80_9a23_5047a6dc887c.slice/crio-52202e4f6b1a9509319195c8a2d0b85b9813e86a5c7df2b31aeea719ac27dabf WatchSource:0}: Error finding container 52202e4f6b1a9509319195c8a2d0b85b9813e86a5c7df2b31aeea719ac27dabf: Status 404 returned error can't find the container with id 52202e4f6b1a9509319195c8a2d0b85b9813e86a5c7df2b31aeea719ac27dabf Dec 06 05:30:36 crc kubenswrapper[4958]: I1206 05:30:36.897360 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zhltk"] Dec 06 05:30:36 crc kubenswrapper[4958]: I1206 05:30:36.909178 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qzfg8" Dec 06 05:30:36 crc kubenswrapper[4958]: I1206 05:30:36.947874 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/215a3047-0656-45e9-aa87-9f4987ed3b83-catalog-content\") pod \"certified-operators-wcwnn\" (UID: \"215a3047-0656-45e9-aa87-9f4987ed3b83\") " pod="openshift-marketplace/certified-operators-wcwnn" Dec 06 05:30:36 crc kubenswrapper[4958]: I1206 05:30:36.948219 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/215a3047-0656-45e9-aa87-9f4987ed3b83-utilities\") pod \"certified-operators-wcwnn\" (UID: \"215a3047-0656-45e9-aa87-9f4987ed3b83\") " pod="openshift-marketplace/certified-operators-wcwnn" Dec 06 05:30:36 crc kubenswrapper[4958]: I1206 05:30:36.948249 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2dvz\" (UniqueName: \"kubernetes.io/projected/215a3047-0656-45e9-aa87-9f4987ed3b83-kube-api-access-b2dvz\") pod \"certified-operators-wcwnn\" (UID: \"215a3047-0656-45e9-aa87-9f4987ed3b83\") " pod="openshift-marketplace/certified-operators-wcwnn" Dec 06 05:30:36 crc kubenswrapper[4958]: I1206 05:30:36.949167 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/215a3047-0656-45e9-aa87-9f4987ed3b83-catalog-content\") pod \"certified-operators-wcwnn\" (UID: \"215a3047-0656-45e9-aa87-9f4987ed3b83\") " pod="openshift-marketplace/certified-operators-wcwnn" Dec 06 05:30:36 crc kubenswrapper[4958]: I1206 05:30:36.949383 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/215a3047-0656-45e9-aa87-9f4987ed3b83-utilities\") pod \"certified-operators-wcwnn\" (UID: \"215a3047-0656-45e9-aa87-9f4987ed3b83\") " pod="openshift-marketplace/certified-operators-wcwnn" Dec 06 05:30:36 crc kubenswrapper[4958]: I1206 05:30:36.978019 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2dvz\" (UniqueName: \"kubernetes.io/projected/215a3047-0656-45e9-aa87-9f4987ed3b83-kube-api-access-b2dvz\") pod \"certified-operators-wcwnn\" (UID: \"215a3047-0656-45e9-aa87-9f4987ed3b83\") " pod="openshift-marketplace/certified-operators-wcwnn" Dec 06 05:30:37 crc kubenswrapper[4958]: I1206 05:30:37.051104 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/d68ddb6d-ffb8-42dc-b2d4-a0ec5a42db59-catalog-content\") pod \"community-operators-zhltk\" (UID: \"d68ddb6d-ffb8-42dc-b2d4-a0ec5a42db59\") " pod="openshift-marketplace/community-operators-zhltk" Dec 06 05:30:37 crc kubenswrapper[4958]: I1206 05:30:37.051155 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxqkb\" (UniqueName: \"kubernetes.io/projected/d68ddb6d-ffb8-42dc-b2d4-a0ec5a42db59-kube-api-access-cxqkb\") pod \"community-operators-zhltk\" (UID: \"d68ddb6d-ffb8-42dc-b2d4-a0ec5a42db59\") " pod="openshift-marketplace/community-operators-zhltk" Dec 06 05:30:37 crc kubenswrapper[4958]: I1206 05:30:37.051306 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d68ddb6d-ffb8-42dc-b2d4-a0ec5a42db59-utilities\") pod \"community-operators-zhltk\" (UID: \"d68ddb6d-ffb8-42dc-b2d4-a0ec5a42db59\") " pod="openshift-marketplace/community-operators-zhltk" Dec 06 05:30:37 crc kubenswrapper[4958]: I1206 05:30:37.059972 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wcwnn" Dec 06 05:30:37 crc kubenswrapper[4958]: I1206 05:30:37.119525 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gkp2b"] Dec 06 05:30:37 crc kubenswrapper[4958]: I1206 05:30:37.153756 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d68ddb6d-ffb8-42dc-b2d4-a0ec5a42db59-catalog-content\") pod \"community-operators-zhltk\" (UID: \"d68ddb6d-ffb8-42dc-b2d4-a0ec5a42db59\") " pod="openshift-marketplace/community-operators-zhltk" Dec 06 05:30:37 crc kubenswrapper[4958]: I1206 05:30:37.153811 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxqkb\" (UniqueName: \"kubernetes.io/projected/d68ddb6d-ffb8-42dc-b2d4-a0ec5a42db59-kube-api-access-cxqkb\") pod \"community-operators-zhltk\" (UID: \"d68ddb6d-ffb8-42dc-b2d4-a0ec5a42db59\") " pod="openshift-marketplace/community-operators-zhltk" Dec 06 05:30:37 crc kubenswrapper[4958]: I1206 05:30:37.153868 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d68ddb6d-ffb8-42dc-b2d4-a0ec5a42db59-utilities\") pod \"community-operators-zhltk\" (UID: \"d68ddb6d-ffb8-42dc-b2d4-a0ec5a42db59\") " pod="openshift-marketplace/community-operators-zhltk" Dec 06 05:30:37 crc kubenswrapper[4958]: I1206 05:30:37.154176 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d68ddb6d-ffb8-42dc-b2d4-a0ec5a42db59-catalog-content\") pod \"community-operators-zhltk\" (UID: \"d68ddb6d-ffb8-42dc-b2d4-a0ec5a42db59\") " pod="openshift-marketplace/community-operators-zhltk" Dec 06 05:30:37 crc kubenswrapper[4958]: I1206 05:30:37.154313 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d68ddb6d-ffb8-42dc-b2d4-a0ec5a42db59-utilities\") pod \"community-operators-zhltk\" (UID: \"d68ddb6d-ffb8-42dc-b2d4-a0ec5a42db59\") " pod="openshift-marketplace/community-operators-zhltk" Dec 06 05:30:37 crc kubenswrapper[4958]: I1206 05:30:37.179988 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxqkb\" (UniqueName: 
\"kubernetes.io/projected/d68ddb6d-ffb8-42dc-b2d4-a0ec5a42db59-kube-api-access-cxqkb\") pod \"community-operators-zhltk\" (UID: \"d68ddb6d-ffb8-42dc-b2d4-a0ec5a42db59\") " pod="openshift-marketplace/community-operators-zhltk" Dec 06 05:30:37 crc kubenswrapper[4958]: I1206 05:30:37.254644 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 05:30:37 crc kubenswrapper[4958]: I1206 05:30:37.254681 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 05:30:37 crc kubenswrapper[4958]: I1206 05:30:37.254730 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 05:30:37 crc kubenswrapper[4958]: I1206 05:30:37.255008 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 05:30:37 crc kubenswrapper[4958]: I1206 05:30:37.257738 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 05:30:37 crc kubenswrapper[4958]: I1206 05:30:37.260609 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 05:30:37 crc kubenswrapper[4958]: I1206 05:30:37.262202 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 05:30:37 crc kubenswrapper[4958]: I1206 05:30:37.268579 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod 
\"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 05:30:37 crc kubenswrapper[4958]: I1206 05:30:37.291643 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zhltk" Dec 06 05:30:37 crc kubenswrapper[4958]: I1206 05:30:37.332235 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qzfg8"] Dec 06 05:30:37 crc kubenswrapper[4958]: I1206 05:30:37.377892 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 06 05:30:37 crc kubenswrapper[4958]: I1206 05:30:37.388072 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 06 05:30:37 crc kubenswrapper[4958]: I1206 05:30:37.404983 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 05:30:37 crc kubenswrapper[4958]: I1206 05:30:37.406337 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wcwnn"] Dec 06 05:30:37 crc kubenswrapper[4958]: I1206 05:30:37.406388 4958 patch_prober.go:28] interesting pod/router-default-5444994796-ph6g5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 06 05:30:37 crc kubenswrapper[4958]: [-]has-synced failed: reason withheld Dec 06 05:30:37 crc kubenswrapper[4958]: [+]process-running ok Dec 06 05:30:37 crc kubenswrapper[4958]: healthz check failed Dec 06 05:30:37 crc kubenswrapper[4958]: I1206 05:30:37.406425 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ph6g5" podUID="fc16acb8-14a0-4b1d-ba72-9a53f2bdb622" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 06 05:30:37 crc kubenswrapper[4958]: W1206 05:30:37.432626 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod215a3047_0656_45e9_aa87_9f4987ed3b83.slice/crio-306b9c681d5bfbb179f27bfc4adf49bc3fe36202e190d4fced4e69c511e67a22 WatchSource:0}: Error finding container 306b9c681d5bfbb179f27bfc4adf49bc3fe36202e190d4fced4e69c511e67a22: Status 404 returned error can't find the container with id 306b9c681d5bfbb179f27bfc4adf49bc3fe36202e190d4fced4e69c511e67a22 Dec 06 05:30:37 crc kubenswrapper[4958]: I1206 05:30:37.586996 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zhltk"] Dec 06 05:30:37 crc kubenswrapper[4958]: I1206 05:30:37.705242 4958 generic.go:334] "Generic (PLEG): container finished" podID="215a3047-0656-45e9-aa87-9f4987ed3b83" containerID="3b41560435ad11e36ffe4205890b4f0487a9481518c821e651ee7faec5d303a4" exitCode=0 Dec 06 05:30:37 crc kubenswrapper[4958]: I1206 05:30:37.705342 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wcwnn" event={"ID":"215a3047-0656-45e9-aa87-9f4987ed3b83","Type":"ContainerDied","Data":"3b41560435ad11e36ffe4205890b4f0487a9481518c821e651ee7faec5d303a4"} Dec 06 05:30:37 crc kubenswrapper[4958]: I1206 05:30:37.705391 4958 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/certified-operators-wcwnn" event={"ID":"215a3047-0656-45e9-aa87-9f4987ed3b83","Type":"ContainerStarted","Data":"306b9c681d5bfbb179f27bfc4adf49bc3fe36202e190d4fced4e69c511e67a22"} Dec 06 05:30:37 crc kubenswrapper[4958]: I1206 05:30:37.707452 4958 generic.go:334] "Generic (PLEG): container finished" podID="40faedf1-f03f-4c51-8577-f11f34488d09" containerID="989c48790bf8935d614fc84cf222bf2437aea7bbfbc4742f34be129298cfa81d" exitCode=0 Dec 06 05:30:37 crc kubenswrapper[4958]: I1206 05:30:37.707556 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416650-s7hr4" event={"ID":"40faedf1-f03f-4c51-8577-f11f34488d09","Type":"ContainerDied","Data":"989c48790bf8935d614fc84cf222bf2437aea7bbfbc4742f34be129298cfa81d"} Dec 06 05:30:37 crc kubenswrapper[4958]: I1206 05:30:37.708613 4958 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 06 05:30:37 crc kubenswrapper[4958]: I1206 05:30:37.710856 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-rzshs" event={"ID":"a4c40d5e-9b15-4c80-9a23-5047a6dc887c","Type":"ContainerStarted","Data":"84e37542a147a3265bcd503b56b5b435185222ff4c5857a191ef9943108a8842"} Dec 06 05:30:37 crc kubenswrapper[4958]: I1206 05:30:37.710904 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-rzshs" event={"ID":"a4c40d5e-9b15-4c80-9a23-5047a6dc887c","Type":"ContainerStarted","Data":"52202e4f6b1a9509319195c8a2d0b85b9813e86a5c7df2b31aeea719ac27dabf"} Dec 06 05:30:37 crc kubenswrapper[4958]: I1206 05:30:37.711633 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-rzshs" Dec 06 05:30:37 crc kubenswrapper[4958]: I1206 05:30:37.717243 4958 generic.go:334] "Generic (PLEG): container finished" podID="192a9281-a2ae-4251-aaf2-8d1f67d0321c" containerID="926d856043a7f9a145c6634917017def3d9089bd665840dcafd43a5dd388dc93" exitCode=0 Dec 06 05:30:37 crc kubenswrapper[4958]: I1206 05:30:37.717351 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gkp2b" event={"ID":"192a9281-a2ae-4251-aaf2-8d1f67d0321c","Type":"ContainerDied","Data":"926d856043a7f9a145c6634917017def3d9089bd665840dcafd43a5dd388dc93"} Dec 06 05:30:37 crc kubenswrapper[4958]: I1206 05:30:37.717380 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gkp2b" event={"ID":"192a9281-a2ae-4251-aaf2-8d1f67d0321c","Type":"ContainerStarted","Data":"6db3c7dba0493262550bf8db53336fb292a846532e5d70dbb484e6d236d12a64"} Dec 06 05:30:37 crc kubenswrapper[4958]: I1206 05:30:37.719734 4958 generic.go:334] "Generic (PLEG): container finished" podID="71c8096c-9091-428a-a142-185855892fb9" containerID="ef1f4a3f095b158659e5795d961e45acda517a1d97c59be068ab195640db00a0" exitCode=0 Dec 06 05:30:37 crc kubenswrapper[4958]: I1206 05:30:37.719783 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qzfg8" event={"ID":"71c8096c-9091-428a-a142-185855892fb9","Type":"ContainerDied","Data":"ef1f4a3f095b158659e5795d961e45acda517a1d97c59be068ab195640db00a0"} Dec 06 05:30:37 crc kubenswrapper[4958]: I1206 05:30:37.719813 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qzfg8" 
event={"ID":"71c8096c-9091-428a-a142-185855892fb9","Type":"ContainerStarted","Data":"946250caac0d46e0fdcbb0a6d7688a73211a81c4b60ae344b79ab08e0e7bdc01"} Dec 06 05:30:37 crc kubenswrapper[4958]: I1206 05:30:37.740995 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-wk6vl" event={"ID":"2a2c1115-6ea4-4e7d-91dc-82ab9147d9c8","Type":"ContainerStarted","Data":"8b6cc8c9bc4ca00fb6d76ff88889d093962732cc7d0df64e51b32a33b8d77dc0"} Dec 06 05:30:37 crc kubenswrapper[4958]: I1206 05:30:37.772483 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-rzshs" podStartSLOduration=128.772451881 podStartE2EDuration="2m8.772451881s" podCreationTimestamp="2025-12-06 05:28:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:30:37.767248374 +0000 UTC m=+148.301019137" watchObservedRunningTime="2025-12-06 05:30:37.772451881 +0000 UTC m=+148.306222634" Dec 06 05:30:37 crc kubenswrapper[4958]: I1206 05:30:37.794125 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Dec 06 05:30:37 crc kubenswrapper[4958]: I1206 05:30:37.830387 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-wk6vl" podStartSLOduration=11.830369243 podStartE2EDuration="11.830369243s" podCreationTimestamp="2025-12-06 05:30:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:30:37.827901838 +0000 UTC m=+148.361672601" watchObservedRunningTime="2025-12-06 05:30:37.830369243 +0000 UTC m=+148.364140006" Dec 06 05:30:37 crc kubenswrapper[4958]: W1206 05:30:37.839614 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-27b017d81d2ff99a31e96a11e48cc7b97fdd765bd65ed4287f3a6706469e5936 WatchSource:0}: Error finding container 27b017d81d2ff99a31e96a11e48cc7b97fdd765bd65ed4287f3a6706469e5936: Status 404 returned error can't find the container with id 27b017d81d2ff99a31e96a11e48cc7b97fdd765bd65ed4287f3a6706469e5936 Dec 06 05:30:38 crc kubenswrapper[4958]: W1206 05:30:38.096087 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-e245240ddaf218e06246a680a937ba6a81689ed8e96df958e7b22ecffe9de94b WatchSource:0}: Error finding container e245240ddaf218e06246a680a937ba6a81689ed8e96df958e7b22ecffe9de94b: Status 404 returned error can't find the container with id e245240ddaf218e06246a680a937ba6a81689ed8e96df958e7b22ecffe9de94b Dec 06 05:30:38 crc kubenswrapper[4958]: W1206 05:30:38.098851 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-53d3250422d1c8b74b0dc6e6d77063973e3b998d7445e5c3c46cfe57a6b81653 WatchSource:0}: Error finding container 53d3250422d1c8b74b0dc6e6d77063973e3b998d7445e5c3c46cfe57a6b81653: Status 404 returned error can't find the container with id 53d3250422d1c8b74b0dc6e6d77063973e3b998d7445e5c3c46cfe57a6b81653 Dec 06 05:30:38 crc kubenswrapper[4958]: I1206 
05:30:38.261158 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-ggxxn"] Dec 06 05:30:38 crc kubenswrapper[4958]: I1206 05:30:38.262672 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ggxxn" Dec 06 05:30:38 crc kubenswrapper[4958]: I1206 05:30:38.267228 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 06 05:30:38 crc kubenswrapper[4958]: I1206 05:30:38.273032 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ggxxn"] Dec 06 05:30:38 crc kubenswrapper[4958]: I1206 05:30:38.344419 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dnlff" Dec 06 05:30:38 crc kubenswrapper[4958]: I1206 05:30:38.344462 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dnlff" Dec 06 05:30:38 crc kubenswrapper[4958]: I1206 05:30:38.351633 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dnlff" Dec 06 05:30:38 crc kubenswrapper[4958]: I1206 05:30:38.397104 4958 patch_prober.go:28] interesting pod/router-default-5444994796-ph6g5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 06 05:30:38 crc kubenswrapper[4958]: [-]has-synced failed: reason withheld Dec 06 05:30:38 crc kubenswrapper[4958]: [+]process-running ok Dec 06 05:30:38 crc kubenswrapper[4958]: healthz check failed Dec 06 05:30:38 crc kubenswrapper[4958]: I1206 05:30:38.397188 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ph6g5" podUID="fc16acb8-14a0-4b1d-ba72-9a53f2bdb622" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 06 05:30:38 crc kubenswrapper[4958]: I1206 05:30:38.402093 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97d8e3a5-e7e8-46c8-8346-1fddba1a0b6a-catalog-content\") pod \"redhat-marketplace-ggxxn\" (UID: \"97d8e3a5-e7e8-46c8-8346-1fddba1a0b6a\") " pod="openshift-marketplace/redhat-marketplace-ggxxn" Dec 06 05:30:38 crc kubenswrapper[4958]: I1206 05:30:38.402239 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97d8e3a5-e7e8-46c8-8346-1fddba1a0b6a-utilities\") pod \"redhat-marketplace-ggxxn\" (UID: \"97d8e3a5-e7e8-46c8-8346-1fddba1a0b6a\") " pod="openshift-marketplace/redhat-marketplace-ggxxn" Dec 06 05:30:38 crc kubenswrapper[4958]: I1206 05:30:38.402336 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnxp8\" (UniqueName: \"kubernetes.io/projected/97d8e3a5-e7e8-46c8-8346-1fddba1a0b6a-kube-api-access-gnxp8\") pod \"redhat-marketplace-ggxxn\" (UID: \"97d8e3a5-e7e8-46c8-8346-1fddba1a0b6a\") " pod="openshift-marketplace/redhat-marketplace-ggxxn" Dec 06 05:30:38 crc kubenswrapper[4958]: I1206 05:30:38.503420 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/97d8e3a5-e7e8-46c8-8346-1fddba1a0b6a-utilities\") pod \"redhat-marketplace-ggxxn\" (UID: \"97d8e3a5-e7e8-46c8-8346-1fddba1a0b6a\") " pod="openshift-marketplace/redhat-marketplace-ggxxn" Dec 06 05:30:38 crc kubenswrapper[4958]: I1206 05:30:38.503561 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnxp8\" (UniqueName: \"kubernetes.io/projected/97d8e3a5-e7e8-46c8-8346-1fddba1a0b6a-kube-api-access-gnxp8\") pod \"redhat-marketplace-ggxxn\" (UID: \"97d8e3a5-e7e8-46c8-8346-1fddba1a0b6a\") " pod="openshift-marketplace/redhat-marketplace-ggxxn" Dec 06 05:30:38 crc kubenswrapper[4958]: I1206 05:30:38.503657 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97d8e3a5-e7e8-46c8-8346-1fddba1a0b6a-catalog-content\") pod \"redhat-marketplace-ggxxn\" (UID: \"97d8e3a5-e7e8-46c8-8346-1fddba1a0b6a\") " pod="openshift-marketplace/redhat-marketplace-ggxxn" Dec 06 05:30:38 crc kubenswrapper[4958]: I1206 05:30:38.504917 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97d8e3a5-e7e8-46c8-8346-1fddba1a0b6a-utilities\") pod \"redhat-marketplace-ggxxn\" (UID: \"97d8e3a5-e7e8-46c8-8346-1fddba1a0b6a\") " pod="openshift-marketplace/redhat-marketplace-ggxxn" Dec 06 05:30:38 crc kubenswrapper[4958]: I1206 05:30:38.505305 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97d8e3a5-e7e8-46c8-8346-1fddba1a0b6a-catalog-content\") pod \"redhat-marketplace-ggxxn\" (UID: \"97d8e3a5-e7e8-46c8-8346-1fddba1a0b6a\") " pod="openshift-marketplace/redhat-marketplace-ggxxn" Dec 06 05:30:38 crc kubenswrapper[4958]: I1206 05:30:38.509409 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-7g42z" Dec 06 05:30:38 crc kubenswrapper[4958]: I1206 05:30:38.509874 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-7g42z" Dec 06 05:30:38 crc kubenswrapper[4958]: I1206 05:30:38.510521 4958 patch_prober.go:28] interesting pod/console-f9d7485db-7g42z container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.6:8443/health\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Dec 06 05:30:38 crc kubenswrapper[4958]: I1206 05:30:38.510572 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-7g42z" podUID="bae936cb-2b16-4e6a-b2b1-bc185483cd8f" containerName="console" probeResult="failure" output="Get \"https://10.217.0.6:8443/health\": dial tcp 10.217.0.6:8443: connect: connection refused" Dec 06 05:30:38 crc kubenswrapper[4958]: I1206 05:30:38.530786 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnxp8\" (UniqueName: \"kubernetes.io/projected/97d8e3a5-e7e8-46c8-8346-1fddba1a0b6a-kube-api-access-gnxp8\") pod \"redhat-marketplace-ggxxn\" (UID: \"97d8e3a5-e7e8-46c8-8346-1fddba1a0b6a\") " pod="openshift-marketplace/redhat-marketplace-ggxxn" Dec 06 05:30:38 crc kubenswrapper[4958]: I1206 05:30:38.610582 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-thm8z" Dec 06 05:30:38 crc kubenswrapper[4958]: I1206 05:30:38.611327 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-apiserver/apiserver-76f77b778f-thm8z" Dec 06 05:30:38 crc kubenswrapper[4958]: I1206 05:30:38.620664 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-thm8z" Dec 06 05:30:38 crc kubenswrapper[4958]: I1206 05:30:38.661436 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-t5crs"] Dec 06 05:30:38 crc kubenswrapper[4958]: I1206 05:30:38.662607 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t5crs" Dec 06 05:30:38 crc kubenswrapper[4958]: I1206 05:30:38.673628 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-t5crs"] Dec 06 05:30:38 crc kubenswrapper[4958]: I1206 05:30:38.677407 4958 patch_prober.go:28] interesting pod/downloads-7954f5f757-79gtn container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body= Dec 06 05:30:38 crc kubenswrapper[4958]: I1206 05:30:38.677454 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-79gtn" podUID="9ba34fce-2bfd-4b37-8b67-f2c936cc1b44" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" Dec 06 05:30:38 crc kubenswrapper[4958]: I1206 05:30:38.678575 4958 patch_prober.go:28] interesting pod/downloads-7954f5f757-79gtn container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" start-of-body= Dec 06 05:30:38 crc kubenswrapper[4958]: I1206 05:30:38.678604 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-79gtn" podUID="9ba34fce-2bfd-4b37-8b67-f2c936cc1b44" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.17:8080/\": dial tcp 10.217.0.17:8080: connect: connection refused" Dec 06 05:30:38 crc kubenswrapper[4958]: I1206 05:30:38.775615 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"3e709d8e6ab8bc88d4315d97adb274d71e05426fc0b36b25fd7c9b823c0ca3e0"} Dec 06 05:30:38 crc kubenswrapper[4958]: I1206 05:30:38.775945 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"27b017d81d2ff99a31e96a11e48cc7b97fdd765bd65ed4287f3a6706469e5936"} Dec 06 05:30:38 crc kubenswrapper[4958]: I1206 05:30:38.780602 4958 generic.go:334] "Generic (PLEG): container finished" podID="d68ddb6d-ffb8-42dc-b2d4-a0ec5a42db59" containerID="c670ed7f65d6dc5dd7acb1f065fef47613602ce26e0a578a20ba5837089230bc" exitCode=0 Dec 06 05:30:38 crc kubenswrapper[4958]: I1206 05:30:38.780773 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zhltk" event={"ID":"d68ddb6d-ffb8-42dc-b2d4-a0ec5a42db59","Type":"ContainerDied","Data":"c670ed7f65d6dc5dd7acb1f065fef47613602ce26e0a578a20ba5837089230bc"} Dec 06 05:30:38 crc kubenswrapper[4958]: I1206 05:30:38.780848 4958 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/community-operators-zhltk" event={"ID":"d68ddb6d-ffb8-42dc-b2d4-a0ec5a42db59","Type":"ContainerStarted","Data":"794a6f0f339f675a20d8d57b7b4fba7dbd4b8fd75e4ce7f5f7ba0c9b18b4445b"} Dec 06 05:30:38 crc kubenswrapper[4958]: I1206 05:30:38.783129 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ggxxn" Dec 06 05:30:38 crc kubenswrapper[4958]: I1206 05:30:38.783565 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"69ef99348c179bd6b20e70ee58e09b1e402dec087d22941382d8da357911f1bb"} Dec 06 05:30:38 crc kubenswrapper[4958]: I1206 05:30:38.783599 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"e245240ddaf218e06246a680a937ba6a81689ed8e96df958e7b22ecffe9de94b"} Dec 06 05:30:38 crc kubenswrapper[4958]: I1206 05:30:38.783972 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 06 05:30:38 crc kubenswrapper[4958]: I1206 05:30:38.786715 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"dbe080764b92ae10f495767e4de0c278c67f85ccd0f9cfff871f17c463598421"} Dec 06 05:30:38 crc kubenswrapper[4958]: I1206 05:30:38.786747 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"53d3250422d1c8b74b0dc6e6d77063973e3b998d7445e5c3c46cfe57a6b81653"} Dec 06 05:30:38 crc kubenswrapper[4958]: I1206 05:30:38.794367 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-thm8z" Dec 06 05:30:38 crc kubenswrapper[4958]: I1206 05:30:38.794630 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-dnlff" Dec 06 05:30:38 crc kubenswrapper[4958]: I1206 05:30:38.817096 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhllh\" (UniqueName: \"kubernetes.io/projected/16ec5793-681c-4935-a298-734c214e23c8-kube-api-access-bhllh\") pod \"redhat-marketplace-t5crs\" (UID: \"16ec5793-681c-4935-a298-734c214e23c8\") " pod="openshift-marketplace/redhat-marketplace-t5crs" Dec 06 05:30:38 crc kubenswrapper[4958]: I1206 05:30:38.817158 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16ec5793-681c-4935-a298-734c214e23c8-utilities\") pod \"redhat-marketplace-t5crs\" (UID: \"16ec5793-681c-4935-a298-734c214e23c8\") " pod="openshift-marketplace/redhat-marketplace-t5crs" Dec 06 05:30:38 crc kubenswrapper[4958]: I1206 05:30:38.817281 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16ec5793-681c-4935-a298-734c214e23c8-catalog-content\") pod \"redhat-marketplace-t5crs\" (UID: \"16ec5793-681c-4935-a298-734c214e23c8\") " 
pod="openshift-marketplace/redhat-marketplace-t5crs" Dec 06 05:30:38 crc kubenswrapper[4958]: I1206 05:30:38.920502 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhllh\" (UniqueName: \"kubernetes.io/projected/16ec5793-681c-4935-a298-734c214e23c8-kube-api-access-bhllh\") pod \"redhat-marketplace-t5crs\" (UID: \"16ec5793-681c-4935-a298-734c214e23c8\") " pod="openshift-marketplace/redhat-marketplace-t5crs" Dec 06 05:30:38 crc kubenswrapper[4958]: I1206 05:30:38.920587 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16ec5793-681c-4935-a298-734c214e23c8-utilities\") pod \"redhat-marketplace-t5crs\" (UID: \"16ec5793-681c-4935-a298-734c214e23c8\") " pod="openshift-marketplace/redhat-marketplace-t5crs" Dec 06 05:30:38 crc kubenswrapper[4958]: I1206 05:30:38.920954 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16ec5793-681c-4935-a298-734c214e23c8-catalog-content\") pod \"redhat-marketplace-t5crs\" (UID: \"16ec5793-681c-4935-a298-734c214e23c8\") " pod="openshift-marketplace/redhat-marketplace-t5crs" Dec 06 05:30:38 crc kubenswrapper[4958]: I1206 05:30:38.924252 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16ec5793-681c-4935-a298-734c214e23c8-utilities\") pod \"redhat-marketplace-t5crs\" (UID: \"16ec5793-681c-4935-a298-734c214e23c8\") " pod="openshift-marketplace/redhat-marketplace-t5crs" Dec 06 05:30:38 crc kubenswrapper[4958]: I1206 05:30:38.926727 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16ec5793-681c-4935-a298-734c214e23c8-catalog-content\") pod \"redhat-marketplace-t5crs\" (UID: \"16ec5793-681c-4935-a298-734c214e23c8\") " pod="openshift-marketplace/redhat-marketplace-t5crs" Dec 06 05:30:38 crc kubenswrapper[4958]: I1206 05:30:38.949186 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 06 05:30:38 crc kubenswrapper[4958]: I1206 05:30:38.955977 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 06 05:30:38 crc kubenswrapper[4958]: I1206 05:30:38.971382 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Dec 06 05:30:38 crc kubenswrapper[4958]: I1206 05:30:38.972378 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Dec 06 05:30:38 crc kubenswrapper[4958]: I1206 05:30:38.972537 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 06 05:30:38 crc kubenswrapper[4958]: I1206 05:30:38.973552 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhllh\" (UniqueName: \"kubernetes.io/projected/16ec5793-681c-4935-a298-734c214e23c8-kube-api-access-bhllh\") pod \"redhat-marketplace-t5crs\" (UID: \"16ec5793-681c-4935-a298-734c214e23c8\") " pod="openshift-marketplace/redhat-marketplace-t5crs" Dec 06 05:30:39 crc kubenswrapper[4958]: I1206 05:30:39.018889 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t5crs" Dec 06 05:30:39 crc kubenswrapper[4958]: I1206 05:30:39.124640 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9bd012fb-8716-4c81-a02e-628859510ea1-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"9bd012fb-8716-4c81-a02e-628859510ea1\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 06 05:30:39 crc kubenswrapper[4958]: I1206 05:30:39.125086 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9bd012fb-8716-4c81-a02e-628859510ea1-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"9bd012fb-8716-4c81-a02e-628859510ea1\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 06 05:30:39 crc kubenswrapper[4958]: I1206 05:30:39.169045 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416650-s7hr4" Dec 06 05:30:39 crc kubenswrapper[4958]: I1206 05:30:39.180439 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-ggxxn"] Dec 06 05:30:39 crc kubenswrapper[4958]: W1206 05:30:39.221847 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod97d8e3a5_e7e8_46c8_8346_1fddba1a0b6a.slice/crio-60a65314bd573661c2c4ba44c56bb074ab8b48e46655d8566844f2c83e008e35 WatchSource:0}: Error finding container 60a65314bd573661c2c4ba44c56bb074ab8b48e46655d8566844f2c83e008e35: Status 404 returned error can't find the container with id 60a65314bd573661c2c4ba44c56bb074ab8b48e46655d8566844f2c83e008e35 Dec 06 05:30:39 crc kubenswrapper[4958]: I1206 05:30:39.226321 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9bd012fb-8716-4c81-a02e-628859510ea1-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"9bd012fb-8716-4c81-a02e-628859510ea1\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 06 05:30:39 crc kubenswrapper[4958]: I1206 05:30:39.226355 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9bd012fb-8716-4c81-a02e-628859510ea1-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"9bd012fb-8716-4c81-a02e-628859510ea1\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 06 05:30:39 crc kubenswrapper[4958]: I1206 05:30:39.226437 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9bd012fb-8716-4c81-a02e-628859510ea1-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"9bd012fb-8716-4c81-a02e-628859510ea1\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 06 05:30:39 crc kubenswrapper[4958]: I1206 05:30:39.253633 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9bd012fb-8716-4c81-a02e-628859510ea1-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"9bd012fb-8716-4c81-a02e-628859510ea1\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 06 05:30:39 crc kubenswrapper[4958]: I1206 05:30:39.300621 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 06 05:30:39 crc kubenswrapper[4958]: I1206 05:30:39.327927 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/40faedf1-f03f-4c51-8577-f11f34488d09-secret-volume\") pod \"40faedf1-f03f-4c51-8577-f11f34488d09\" (UID: \"40faedf1-f03f-4c51-8577-f11f34488d09\") " Dec 06 05:30:39 crc kubenswrapper[4958]: I1206 05:30:39.327972 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/40faedf1-f03f-4c51-8577-f11f34488d09-config-volume\") pod \"40faedf1-f03f-4c51-8577-f11f34488d09\" (UID: \"40faedf1-f03f-4c51-8577-f11f34488d09\") " Dec 06 05:30:39 crc kubenswrapper[4958]: I1206 05:30:39.327995 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-27mnf\" (UniqueName: \"kubernetes.io/projected/40faedf1-f03f-4c51-8577-f11f34488d09-kube-api-access-27mnf\") pod \"40faedf1-f03f-4c51-8577-f11f34488d09\" (UID: \"40faedf1-f03f-4c51-8577-f11f34488d09\") " Dec 06 05:30:39 crc kubenswrapper[4958]: I1206 05:30:39.329185 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40faedf1-f03f-4c51-8577-f11f34488d09-config-volume" (OuterVolumeSpecName: "config-volume") pod "40faedf1-f03f-4c51-8577-f11f34488d09" (UID: "40faedf1-f03f-4c51-8577-f11f34488d09"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:30:39 crc kubenswrapper[4958]: I1206 05:30:39.337093 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40faedf1-f03f-4c51-8577-f11f34488d09-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "40faedf1-f03f-4c51-8577-f11f34488d09" (UID: "40faedf1-f03f-4c51-8577-f11f34488d09"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:30:39 crc kubenswrapper[4958]: I1206 05:30:39.338612 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40faedf1-f03f-4c51-8577-f11f34488d09-kube-api-access-27mnf" (OuterVolumeSpecName: "kube-api-access-27mnf") pod "40faedf1-f03f-4c51-8577-f11f34488d09" (UID: "40faedf1-f03f-4c51-8577-f11f34488d09"). InnerVolumeSpecName "kube-api-access-27mnf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:30:39 crc kubenswrapper[4958]: I1206 05:30:39.402858 4958 patch_prober.go:28] interesting pod/router-default-5444994796-ph6g5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 06 05:30:39 crc kubenswrapper[4958]: [-]has-synced failed: reason withheld Dec 06 05:30:39 crc kubenswrapper[4958]: [+]process-running ok Dec 06 05:30:39 crc kubenswrapper[4958]: healthz check failed Dec 06 05:30:39 crc kubenswrapper[4958]: I1206 05:30:39.403152 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ph6g5" podUID="fc16acb8-14a0-4b1d-ba72-9a53f2bdb622" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 06 05:30:39 crc kubenswrapper[4958]: I1206 05:30:39.433614 4958 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/40faedf1-f03f-4c51-8577-f11f34488d09-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 06 05:30:39 crc kubenswrapper[4958]: I1206 05:30:39.433661 4958 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/40faedf1-f03f-4c51-8577-f11f34488d09-config-volume\") on node \"crc\" DevicePath \"\"" Dec 06 05:30:39 crc kubenswrapper[4958]: I1206 05:30:39.433674 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-27mnf\" (UniqueName: \"kubernetes.io/projected/40faedf1-f03f-4c51-8577-f11f34488d09-kube-api-access-27mnf\") on node \"crc\" DevicePath \"\"" Dec 06 05:30:39 crc kubenswrapper[4958]: I1206 05:30:39.477633 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jrnjs"] Dec 06 05:30:39 crc kubenswrapper[4958]: E1206 05:30:39.477870 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40faedf1-f03f-4c51-8577-f11f34488d09" containerName="collect-profiles" Dec 06 05:30:39 crc kubenswrapper[4958]: I1206 05:30:39.477885 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="40faedf1-f03f-4c51-8577-f11f34488d09" containerName="collect-profiles" Dec 06 05:30:39 crc kubenswrapper[4958]: I1206 05:30:39.478013 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="40faedf1-f03f-4c51-8577-f11f34488d09" containerName="collect-profiles" Dec 06 05:30:39 crc kubenswrapper[4958]: I1206 05:30:39.478862 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jrnjs" Dec 06 05:30:39 crc kubenswrapper[4958]: I1206 05:30:39.484300 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 06 05:30:39 crc kubenswrapper[4958]: I1206 05:30:39.498801 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jrnjs"] Dec 06 05:30:39 crc kubenswrapper[4958]: I1206 05:30:39.499006 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-t5crs"] Dec 06 05:30:39 crc kubenswrapper[4958]: I1206 05:30:39.636911 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3786f843-0226-4fdc-8511-62659463b3fb-catalog-content\") pod \"redhat-operators-jrnjs\" (UID: \"3786f843-0226-4fdc-8511-62659463b3fb\") " pod="openshift-marketplace/redhat-operators-jrnjs" Dec 06 05:30:39 crc kubenswrapper[4958]: I1206 05:30:39.636980 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6zz7\" (UniqueName: \"kubernetes.io/projected/3786f843-0226-4fdc-8511-62659463b3fb-kube-api-access-j6zz7\") pod \"redhat-operators-jrnjs\" (UID: \"3786f843-0226-4fdc-8511-62659463b3fb\") " pod="openshift-marketplace/redhat-operators-jrnjs" Dec 06 05:30:39 crc kubenswrapper[4958]: I1206 05:30:39.637003 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3786f843-0226-4fdc-8511-62659463b3fb-utilities\") pod \"redhat-operators-jrnjs\" (UID: \"3786f843-0226-4fdc-8511-62659463b3fb\") " pod="openshift-marketplace/redhat-operators-jrnjs" Dec 06 05:30:39 crc kubenswrapper[4958]: I1206 05:30:39.723778 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 06 05:30:39 crc kubenswrapper[4958]: I1206 05:30:39.737638 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3786f843-0226-4fdc-8511-62659463b3fb-catalog-content\") pod \"redhat-operators-jrnjs\" (UID: \"3786f843-0226-4fdc-8511-62659463b3fb\") " pod="openshift-marketplace/redhat-operators-jrnjs" Dec 06 05:30:39 crc kubenswrapper[4958]: I1206 05:30:39.737692 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6zz7\" (UniqueName: \"kubernetes.io/projected/3786f843-0226-4fdc-8511-62659463b3fb-kube-api-access-j6zz7\") pod \"redhat-operators-jrnjs\" (UID: \"3786f843-0226-4fdc-8511-62659463b3fb\") " pod="openshift-marketplace/redhat-operators-jrnjs" Dec 06 05:30:39 crc kubenswrapper[4958]: I1206 05:30:39.737712 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3786f843-0226-4fdc-8511-62659463b3fb-utilities\") pod \"redhat-operators-jrnjs\" (UID: \"3786f843-0226-4fdc-8511-62659463b3fb\") " pod="openshift-marketplace/redhat-operators-jrnjs" Dec 06 05:30:39 crc kubenswrapper[4958]: I1206 05:30:39.739119 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3786f843-0226-4fdc-8511-62659463b3fb-catalog-content\") pod \"redhat-operators-jrnjs\" (UID: \"3786f843-0226-4fdc-8511-62659463b3fb\") " 
pod="openshift-marketplace/redhat-operators-jrnjs" Dec 06 05:30:39 crc kubenswrapper[4958]: I1206 05:30:39.745678 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3786f843-0226-4fdc-8511-62659463b3fb-utilities\") pod \"redhat-operators-jrnjs\" (UID: \"3786f843-0226-4fdc-8511-62659463b3fb\") " pod="openshift-marketplace/redhat-operators-jrnjs" Dec 06 05:30:39 crc kubenswrapper[4958]: I1206 05:30:39.766909 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6zz7\" (UniqueName: \"kubernetes.io/projected/3786f843-0226-4fdc-8511-62659463b3fb-kube-api-access-j6zz7\") pod \"redhat-operators-jrnjs\" (UID: \"3786f843-0226-4fdc-8511-62659463b3fb\") " pod="openshift-marketplace/redhat-operators-jrnjs" Dec 06 05:30:39 crc kubenswrapper[4958]: I1206 05:30:39.813295 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jrnjs" Dec 06 05:30:39 crc kubenswrapper[4958]: I1206 05:30:39.824342 4958 generic.go:334] "Generic (PLEG): container finished" podID="97d8e3a5-e7e8-46c8-8346-1fddba1a0b6a" containerID="632266ab8baacf57971acf5250bcb24dd2c7b86daef3d52b02f171d85a45fbcf" exitCode=0 Dec 06 05:30:39 crc kubenswrapper[4958]: I1206 05:30:39.824443 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ggxxn" event={"ID":"97d8e3a5-e7e8-46c8-8346-1fddba1a0b6a","Type":"ContainerDied","Data":"632266ab8baacf57971acf5250bcb24dd2c7b86daef3d52b02f171d85a45fbcf"} Dec 06 05:30:39 crc kubenswrapper[4958]: I1206 05:30:39.824510 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ggxxn" event={"ID":"97d8e3a5-e7e8-46c8-8346-1fddba1a0b6a","Type":"ContainerStarted","Data":"60a65314bd573661c2c4ba44c56bb074ab8b48e46655d8566844f2c83e008e35"} Dec 06 05:30:39 crc kubenswrapper[4958]: I1206 05:30:39.832306 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416650-s7hr4" event={"ID":"40faedf1-f03f-4c51-8577-f11f34488d09","Type":"ContainerDied","Data":"e3d0230eee422577b5e2ee34f5928c0075c9d260602c52ad0653401794d2cda7"} Dec 06 05:30:39 crc kubenswrapper[4958]: I1206 05:30:39.832353 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e3d0230eee422577b5e2ee34f5928c0075c9d260602c52ad0653401794d2cda7" Dec 06 05:30:39 crc kubenswrapper[4958]: I1206 05:30:39.832448 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416650-s7hr4" Dec 06 05:30:39 crc kubenswrapper[4958]: I1206 05:30:39.838848 4958 generic.go:334] "Generic (PLEG): container finished" podID="16ec5793-681c-4935-a298-734c214e23c8" containerID="c04f2b7285ec390fa09ca001b464301e0c955e9ae63426ead7da10e23673b455" exitCode=0 Dec 06 05:30:39 crc kubenswrapper[4958]: I1206 05:30:39.839640 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t5crs" event={"ID":"16ec5793-681c-4935-a298-734c214e23c8","Type":"ContainerDied","Data":"c04f2b7285ec390fa09ca001b464301e0c955e9ae63426ead7da10e23673b455"} Dec 06 05:30:39 crc kubenswrapper[4958]: I1206 05:30:39.839662 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t5crs" event={"ID":"16ec5793-681c-4935-a298-734c214e23c8","Type":"ContainerStarted","Data":"9547d0c36844c903fe65685f02b5973cabb71815557953908f0325f551653146"} Dec 06 05:30:39 crc kubenswrapper[4958]: I1206 05:30:39.858753 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5f5s9"] Dec 06 05:30:39 crc kubenswrapper[4958]: I1206 05:30:39.859838 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5f5s9" Dec 06 05:30:39 crc kubenswrapper[4958]: I1206 05:30:39.868599 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5f5s9"] Dec 06 05:30:40 crc kubenswrapper[4958]: I1206 05:30:40.041001 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/305608d0-0aae-4507-8058-dc7837555b6c-utilities\") pod \"redhat-operators-5f5s9\" (UID: \"305608d0-0aae-4507-8058-dc7837555b6c\") " pod="openshift-marketplace/redhat-operators-5f5s9" Dec 06 05:30:40 crc kubenswrapper[4958]: I1206 05:30:40.041371 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/305608d0-0aae-4507-8058-dc7837555b6c-catalog-content\") pod \"redhat-operators-5f5s9\" (UID: \"305608d0-0aae-4507-8058-dc7837555b6c\") " pod="openshift-marketplace/redhat-operators-5f5s9" Dec 06 05:30:40 crc kubenswrapper[4958]: I1206 05:30:40.041414 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64m4t\" (UniqueName: \"kubernetes.io/projected/305608d0-0aae-4507-8058-dc7837555b6c-kube-api-access-64m4t\") pod \"redhat-operators-5f5s9\" (UID: \"305608d0-0aae-4507-8058-dc7837555b6c\") " pod="openshift-marketplace/redhat-operators-5f5s9" Dec 06 05:30:40 crc kubenswrapper[4958]: I1206 05:30:40.123402 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" Dec 06 05:30:40 crc kubenswrapper[4958]: I1206 05:30:40.143427 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/305608d0-0aae-4507-8058-dc7837555b6c-utilities\") pod \"redhat-operators-5f5s9\" (UID: \"305608d0-0aae-4507-8058-dc7837555b6c\") " pod="openshift-marketplace/redhat-operators-5f5s9" Dec 06 05:30:40 crc kubenswrapper[4958]: I1206 05:30:40.143465 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/305608d0-0aae-4507-8058-dc7837555b6c-catalog-content\") pod \"redhat-operators-5f5s9\" (UID: \"305608d0-0aae-4507-8058-dc7837555b6c\") " pod="openshift-marketplace/redhat-operators-5f5s9" Dec 06 05:30:40 crc kubenswrapper[4958]: I1206 05:30:40.143506 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64m4t\" (UniqueName: \"kubernetes.io/projected/305608d0-0aae-4507-8058-dc7837555b6c-kube-api-access-64m4t\") pod \"redhat-operators-5f5s9\" (UID: \"305608d0-0aae-4507-8058-dc7837555b6c\") " pod="openshift-marketplace/redhat-operators-5f5s9" Dec 06 05:30:40 crc kubenswrapper[4958]: I1206 05:30:40.144032 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/305608d0-0aae-4507-8058-dc7837555b6c-utilities\") pod \"redhat-operators-5f5s9\" (UID: \"305608d0-0aae-4507-8058-dc7837555b6c\") " pod="openshift-marketplace/redhat-operators-5f5s9" Dec 06 05:30:40 crc kubenswrapper[4958]: I1206 05:30:40.144067 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/305608d0-0aae-4507-8058-dc7837555b6c-catalog-content\") pod \"redhat-operators-5f5s9\" (UID: \"305608d0-0aae-4507-8058-dc7837555b6c\") " pod="openshift-marketplace/redhat-operators-5f5s9" Dec 06 05:30:40 crc kubenswrapper[4958]: I1206 05:30:40.146107 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jrnjs"] Dec 06 05:30:40 crc kubenswrapper[4958]: I1206 05:30:40.168150 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64m4t\" (UniqueName: \"kubernetes.io/projected/305608d0-0aae-4507-8058-dc7837555b6c-kube-api-access-64m4t\") pod \"redhat-operators-5f5s9\" (UID: \"305608d0-0aae-4507-8058-dc7837555b6c\") " pod="openshift-marketplace/redhat-operators-5f5s9" Dec 06 05:30:40 crc kubenswrapper[4958]: I1206 05:30:40.201886 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5f5s9" Dec 06 05:30:40 crc kubenswrapper[4958]: I1206 05:30:40.392265 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-ph6g5" Dec 06 05:30:40 crc kubenswrapper[4958]: I1206 05:30:40.403831 4958 patch_prober.go:28] interesting pod/router-default-5444994796-ph6g5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 06 05:30:40 crc kubenswrapper[4958]: [-]has-synced failed: reason withheld Dec 06 05:30:40 crc kubenswrapper[4958]: [+]process-running ok Dec 06 05:30:40 crc kubenswrapper[4958]: healthz check failed Dec 06 05:30:40 crc kubenswrapper[4958]: I1206 05:30:40.403877 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ph6g5" podUID="fc16acb8-14a0-4b1d-ba72-9a53f2bdb622" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 06 05:30:40 crc kubenswrapper[4958]: I1206 05:30:40.488867 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5f5s9"] Dec 06 05:30:40 crc kubenswrapper[4958]: I1206 05:30:40.849819 4958 generic.go:334] "Generic (PLEG): container finished" podID="3786f843-0226-4fdc-8511-62659463b3fb" containerID="d7a1f3dd79b691266ada54ad3285279e438da7ffb866f435a0d3f8a463f98d1a" exitCode=0 Dec 06 05:30:40 crc kubenswrapper[4958]: I1206 05:30:40.849897 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jrnjs" event={"ID":"3786f843-0226-4fdc-8511-62659463b3fb","Type":"ContainerDied","Data":"d7a1f3dd79b691266ada54ad3285279e438da7ffb866f435a0d3f8a463f98d1a"} Dec 06 05:30:40 crc kubenswrapper[4958]: I1206 05:30:40.850085 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jrnjs" event={"ID":"3786f843-0226-4fdc-8511-62659463b3fb","Type":"ContainerStarted","Data":"a09d11ee85ce2eae3703fcc61e164931ac19a604e96ab873ee288b195557599f"} Dec 06 05:30:40 crc kubenswrapper[4958]: I1206 05:30:40.857218 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"9bd012fb-8716-4c81-a02e-628859510ea1","Type":"ContainerStarted","Data":"a2213c678b5f1dd553caf5ff0f879cb72d90817d09295ae919771ed35ccfd04b"} Dec 06 05:30:40 crc kubenswrapper[4958]: I1206 05:30:40.857263 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"9bd012fb-8716-4c81-a02e-628859510ea1","Type":"ContainerStarted","Data":"0842802ce66a82469dc291ac9cdfcf20e011f15c6dba26939c88c45794460583"} Dec 06 05:30:40 crc kubenswrapper[4958]: I1206 05:30:40.873096 4958 generic.go:334] "Generic (PLEG): container finished" podID="305608d0-0aae-4507-8058-dc7837555b6c" containerID="cf82f5b57f5a99790ab0292ecbda951e3cdac6e5d6621abf8d1f184df13a4519" exitCode=0 Dec 06 05:30:40 crc kubenswrapper[4958]: I1206 05:30:40.873140 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5f5s9" event={"ID":"305608d0-0aae-4507-8058-dc7837555b6c","Type":"ContainerDied","Data":"cf82f5b57f5a99790ab0292ecbda951e3cdac6e5d6621abf8d1f184df13a4519"} Dec 06 05:30:40 crc kubenswrapper[4958]: I1206 05:30:40.873190 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-5f5s9" event={"ID":"305608d0-0aae-4507-8058-dc7837555b6c","Type":"ContainerStarted","Data":"fce9978e746aa97d90110ec55980a640e5429567bc96136538ea0579df33504a"} Dec 06 05:30:40 crc kubenswrapper[4958]: I1206 05:30:40.893857 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.89383758 podStartE2EDuration="2.89383758s" podCreationTimestamp="2025-12-06 05:30:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:30:40.886874068 +0000 UTC m=+151.420644831" watchObservedRunningTime="2025-12-06 05:30:40.89383758 +0000 UTC m=+151.427608343" Dec 06 05:30:41 crc kubenswrapper[4958]: I1206 05:30:41.396168 4958 patch_prober.go:28] interesting pod/router-default-5444994796-ph6g5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 06 05:30:41 crc kubenswrapper[4958]: [-]has-synced failed: reason withheld Dec 06 05:30:41 crc kubenswrapper[4958]: [+]process-running ok Dec 06 05:30:41 crc kubenswrapper[4958]: healthz check failed Dec 06 05:30:41 crc kubenswrapper[4958]: I1206 05:30:41.396231 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ph6g5" podUID="fc16acb8-14a0-4b1d-ba72-9a53f2bdb622" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 06 05:30:41 crc kubenswrapper[4958]: I1206 05:30:41.903954 4958 generic.go:334] "Generic (PLEG): container finished" podID="9bd012fb-8716-4c81-a02e-628859510ea1" containerID="a2213c678b5f1dd553caf5ff0f879cb72d90817d09295ae919771ed35ccfd04b" exitCode=0 Dec 06 05:30:41 crc kubenswrapper[4958]: I1206 05:30:41.904141 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"9bd012fb-8716-4c81-a02e-628859510ea1","Type":"ContainerDied","Data":"a2213c678b5f1dd553caf5ff0f879cb72d90817d09295ae919771ed35ccfd04b"} Dec 06 05:30:42 crc kubenswrapper[4958]: I1206 05:30:42.395101 4958 patch_prober.go:28] interesting pod/router-default-5444994796-ph6g5 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 06 05:30:42 crc kubenswrapper[4958]: [-]has-synced failed: reason withheld Dec 06 05:30:42 crc kubenswrapper[4958]: [+]process-running ok Dec 06 05:30:42 crc kubenswrapper[4958]: healthz check failed Dec 06 05:30:42 crc kubenswrapper[4958]: I1206 05:30:42.395164 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-ph6g5" podUID="fc16acb8-14a0-4b1d-ba72-9a53f2bdb622" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 06 05:30:43 crc kubenswrapper[4958]: I1206 05:30:43.245797 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 06 05:30:43 crc kubenswrapper[4958]: I1206 05:30:43.289003 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 06 05:30:43 crc kubenswrapper[4958]: E1206 05:30:43.289201 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9bd012fb-8716-4c81-a02e-628859510ea1" containerName="pruner" Dec 06 05:30:43 crc kubenswrapper[4958]: I1206 05:30:43.289211 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bd012fb-8716-4c81-a02e-628859510ea1" containerName="pruner" Dec 06 05:30:43 crc kubenswrapper[4958]: I1206 05:30:43.289315 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="9bd012fb-8716-4c81-a02e-628859510ea1" containerName="pruner" Dec 06 05:30:43 crc kubenswrapper[4958]: I1206 05:30:43.291800 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 06 05:30:43 crc kubenswrapper[4958]: I1206 05:30:43.295623 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 06 05:30:43 crc kubenswrapper[4958]: I1206 05:30:43.295655 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Dec 06 05:30:43 crc kubenswrapper[4958]: I1206 05:30:43.295693 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 06 05:30:43 crc kubenswrapper[4958]: I1206 05:30:43.389548 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9bd012fb-8716-4c81-a02e-628859510ea1-kubelet-dir\") pod \"9bd012fb-8716-4c81-a02e-628859510ea1\" (UID: \"9bd012fb-8716-4c81-a02e-628859510ea1\") " Dec 06 05:30:43 crc kubenswrapper[4958]: I1206 05:30:43.389697 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9bd012fb-8716-4c81-a02e-628859510ea1-kube-api-access\") pod \"9bd012fb-8716-4c81-a02e-628859510ea1\" (UID: \"9bd012fb-8716-4c81-a02e-628859510ea1\") " Dec 06 05:30:43 crc kubenswrapper[4958]: I1206 05:30:43.389888 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cd726e0b-17ef-42eb-8d8d-7da2d56d18e4-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"cd726e0b-17ef-42eb-8d8d-7da2d56d18e4\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 06 05:30:43 crc kubenswrapper[4958]: I1206 05:30:43.389921 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cd726e0b-17ef-42eb-8d8d-7da2d56d18e4-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"cd726e0b-17ef-42eb-8d8d-7da2d56d18e4\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 06 05:30:43 crc kubenswrapper[4958]: I1206 05:30:43.390035 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9bd012fb-8716-4c81-a02e-628859510ea1-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "9bd012fb-8716-4c81-a02e-628859510ea1" (UID: "9bd012fb-8716-4c81-a02e-628859510ea1"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 05:30:43 crc kubenswrapper[4958]: I1206 05:30:43.396521 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bd012fb-8716-4c81-a02e-628859510ea1-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "9bd012fb-8716-4c81-a02e-628859510ea1" (UID: "9bd012fb-8716-4c81-a02e-628859510ea1"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:30:43 crc kubenswrapper[4958]: I1206 05:30:43.401109 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-ph6g5" Dec 06 05:30:43 crc kubenswrapper[4958]: I1206 05:30:43.404546 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-ph6g5" Dec 06 05:30:43 crc kubenswrapper[4958]: I1206 05:30:43.491655 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cd726e0b-17ef-42eb-8d8d-7da2d56d18e4-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"cd726e0b-17ef-42eb-8d8d-7da2d56d18e4\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 06 05:30:43 crc kubenswrapper[4958]: I1206 05:30:43.491998 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cd726e0b-17ef-42eb-8d8d-7da2d56d18e4-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"cd726e0b-17ef-42eb-8d8d-7da2d56d18e4\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 06 05:30:43 crc kubenswrapper[4958]: I1206 05:30:43.492114 4958 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9bd012fb-8716-4c81-a02e-628859510ea1-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 06 05:30:43 crc kubenswrapper[4958]: I1206 05:30:43.492132 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9bd012fb-8716-4c81-a02e-628859510ea1-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 06 05:30:43 crc kubenswrapper[4958]: I1206 05:30:43.491832 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cd726e0b-17ef-42eb-8d8d-7da2d56d18e4-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"cd726e0b-17ef-42eb-8d8d-7da2d56d18e4\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 06 05:30:43 crc kubenswrapper[4958]: I1206 05:30:43.509806 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cd726e0b-17ef-42eb-8d8d-7da2d56d18e4-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"cd726e0b-17ef-42eb-8d8d-7da2d56d18e4\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 06 05:30:43 crc kubenswrapper[4958]: I1206 05:30:43.610743 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 06 05:30:43 crc kubenswrapper[4958]: I1206 05:30:43.960983 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 06 05:30:43 crc kubenswrapper[4958]: I1206 05:30:43.985503 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"9bd012fb-8716-4c81-a02e-628859510ea1","Type":"ContainerDied","Data":"0842802ce66a82469dc291ac9cdfcf20e011f15c6dba26939c88c45794460583"} Dec 06 05:30:43 crc kubenswrapper[4958]: I1206 05:30:43.985560 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0842802ce66a82469dc291ac9cdfcf20e011f15c6dba26939c88c45794460583" Dec 06 05:30:43 crc kubenswrapper[4958]: I1206 05:30:43.985564 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 06 05:30:44 crc kubenswrapper[4958]: W1206 05:30:44.072300 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podcd726e0b_17ef_42eb_8d8d_7da2d56d18e4.slice/crio-f43202ee6a38fa818c413048b3331420687b9961bb6a1db26a1b0397d4a70a52 WatchSource:0}: Error finding container f43202ee6a38fa818c413048b3331420687b9961bb6a1db26a1b0397d4a70a52: Status 404 returned error can't find the container with id f43202ee6a38fa818c413048b3331420687b9961bb6a1db26a1b0397d4a70a52 Dec 06 05:30:45 crc kubenswrapper[4958]: I1206 05:30:45.009551 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"cd726e0b-17ef-42eb-8d8d-7da2d56d18e4","Type":"ContainerStarted","Data":"fba9e10647911660a25d3654ee86b883f72424bbe44998f605fdf9821fa3bdb8"} Dec 06 05:30:45 crc kubenswrapper[4958]: I1206 05:30:45.010455 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"cd726e0b-17ef-42eb-8d8d-7da2d56d18e4","Type":"ContainerStarted","Data":"f43202ee6a38fa818c413048b3331420687b9961bb6a1db26a1b0397d4a70a52"} Dec 06 05:30:45 crc kubenswrapper[4958]: I1206 05:30:45.789887 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-zhvgf" Dec 06 05:30:46 crc kubenswrapper[4958]: I1206 05:30:46.021849 4958 generic.go:334] "Generic (PLEG): container finished" podID="cd726e0b-17ef-42eb-8d8d-7da2d56d18e4" containerID="fba9e10647911660a25d3654ee86b883f72424bbe44998f605fdf9821fa3bdb8" exitCode=0 Dec 06 05:30:46 crc kubenswrapper[4958]: I1206 05:30:46.021889 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"cd726e0b-17ef-42eb-8d8d-7da2d56d18e4","Type":"ContainerDied","Data":"fba9e10647911660a25d3654ee86b883f72424bbe44998f605fdf9821fa3bdb8"} Dec 06 05:30:48 crc kubenswrapper[4958]: I1206 05:30:48.513569 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-7g42z" Dec 06 05:30:48 crc kubenswrapper[4958]: I1206 05:30:48.517797 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-7g42z" Dec 06 05:30:48 crc kubenswrapper[4958]: I1206 05:30:48.695274 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-79gtn" Dec 06 05:30:51 crc kubenswrapper[4958]: I1206 05:30:51.315247 4958 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2c09fca2-7d91-412a-9814-64370d35b3e9-metrics-certs\") pod \"network-metrics-daemon-kb98t\" (UID: \"2c09fca2-7d91-412a-9814-64370d35b3e9\") " pod="openshift-multus/network-metrics-daemon-kb98t" Dec 06 05:30:51 crc kubenswrapper[4958]: I1206 05:30:51.392876 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2c09fca2-7d91-412a-9814-64370d35b3e9-metrics-certs\") pod \"network-metrics-daemon-kb98t\" (UID: \"2c09fca2-7d91-412a-9814-64370d35b3e9\") " pod="openshift-multus/network-metrics-daemon-kb98t" Dec 06 05:30:51 crc kubenswrapper[4958]: I1206 05:30:51.492835 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kb98t" Dec 06 05:30:55 crc kubenswrapper[4958]: I1206 05:30:55.706188 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 06 05:30:55 crc kubenswrapper[4958]: I1206 05:30:55.802317 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cd726e0b-17ef-42eb-8d8d-7da2d56d18e4-kubelet-dir\") pod \"cd726e0b-17ef-42eb-8d8d-7da2d56d18e4\" (UID: \"cd726e0b-17ef-42eb-8d8d-7da2d56d18e4\") " Dec 06 05:30:55 crc kubenswrapper[4958]: I1206 05:30:55.802517 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cd726e0b-17ef-42eb-8d8d-7da2d56d18e4-kube-api-access\") pod \"cd726e0b-17ef-42eb-8d8d-7da2d56d18e4\" (UID: \"cd726e0b-17ef-42eb-8d8d-7da2d56d18e4\") " Dec 06 05:30:55 crc kubenswrapper[4958]: I1206 05:30:55.802516 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cd726e0b-17ef-42eb-8d8d-7da2d56d18e4-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "cd726e0b-17ef-42eb-8d8d-7da2d56d18e4" (UID: "cd726e0b-17ef-42eb-8d8d-7da2d56d18e4"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 05:30:55 crc kubenswrapper[4958]: I1206 05:30:55.802779 4958 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cd726e0b-17ef-42eb-8d8d-7da2d56d18e4-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 06 05:30:55 crc kubenswrapper[4958]: I1206 05:30:55.807791 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd726e0b-17ef-42eb-8d8d-7da2d56d18e4-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "cd726e0b-17ef-42eb-8d8d-7da2d56d18e4" (UID: "cd726e0b-17ef-42eb-8d8d-7da2d56d18e4"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:30:55 crc kubenswrapper[4958]: I1206 05:30:55.904353 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cd726e0b-17ef-42eb-8d8d-7da2d56d18e4-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 06 05:30:56 crc kubenswrapper[4958]: I1206 05:30:56.077419 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"cd726e0b-17ef-42eb-8d8d-7da2d56d18e4","Type":"ContainerDied","Data":"f43202ee6a38fa818c413048b3331420687b9961bb6a1db26a1b0397d4a70a52"} Dec 06 05:30:56 crc kubenswrapper[4958]: I1206 05:30:56.077499 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f43202ee6a38fa818c413048b3331420687b9961bb6a1db26a1b0397d4a70a52" Dec 06 05:30:56 crc kubenswrapper[4958]: I1206 05:30:56.077499 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 06 05:30:56 crc kubenswrapper[4958]: I1206 05:30:56.180655 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-rzshs" Dec 06 05:31:09 crc kubenswrapper[4958]: I1206 05:31:09.866509 4958 patch_prober.go:28] interesting pod/machine-config-daemon-5ktnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 05:31:09 crc kubenswrapper[4958]: I1206 05:31:09.867136 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 05:31:10 crc kubenswrapper[4958]: I1206 05:31:10.478750 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-czz58" Dec 06 05:31:17 crc kubenswrapper[4958]: I1206 05:31:17.090005 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 06 05:31:17 crc kubenswrapper[4958]: E1206 05:31:17.090754 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd726e0b-17ef-42eb-8d8d-7da2d56d18e4" containerName="pruner" Dec 06 05:31:17 crc kubenswrapper[4958]: I1206 05:31:17.090787 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd726e0b-17ef-42eb-8d8d-7da2d56d18e4" containerName="pruner" Dec 06 05:31:17 crc kubenswrapper[4958]: I1206 05:31:17.091028 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd726e0b-17ef-42eb-8d8d-7da2d56d18e4" containerName="pruner" Dec 06 05:31:17 crc kubenswrapper[4958]: I1206 05:31:17.091862 4958 util.go:30] "No sandbox for pod can be found. 
Dec 06 05:31:17 crc kubenswrapper[4958]: I1206 05:31:17.096330 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Dec 06 05:31:17 crc kubenswrapper[4958]: I1206 05:31:17.096459 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Dec 06 05:31:17 crc kubenswrapper[4958]: I1206 05:31:17.113549 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Dec 06 05:31:17 crc kubenswrapper[4958]: I1206 05:31:17.214742 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9615d4ca-94c6-43fe-a7e2-c5a34620275f-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"9615d4ca-94c6-43fe-a7e2-c5a34620275f\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Dec 06 05:31:17 crc kubenswrapper[4958]: I1206 05:31:17.214861 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9615d4ca-94c6-43fe-a7e2-c5a34620275f-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"9615d4ca-94c6-43fe-a7e2-c5a34620275f\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Dec 06 05:31:17 crc kubenswrapper[4958]: I1206 05:31:17.316214 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9615d4ca-94c6-43fe-a7e2-c5a34620275f-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"9615d4ca-94c6-43fe-a7e2-c5a34620275f\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Dec 06 05:31:17 crc kubenswrapper[4958]: I1206 05:31:17.316363 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9615d4ca-94c6-43fe-a7e2-c5a34620275f-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"9615d4ca-94c6-43fe-a7e2-c5a34620275f\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Dec 06 05:31:17 crc kubenswrapper[4958]: I1206 05:31:17.316383 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9615d4ca-94c6-43fe-a7e2-c5a34620275f-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"9615d4ca-94c6-43fe-a7e2-c5a34620275f\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Dec 06 05:31:17 crc kubenswrapper[4958]: I1206 05:31:17.353310 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9615d4ca-94c6-43fe-a7e2-c5a34620275f-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"9615d4ca-94c6-43fe-a7e2-c5a34620275f\") " pod="openshift-kube-apiserver/revision-pruner-9-crc"
Dec 06 05:31:17 crc kubenswrapper[4958]: I1206 05:31:17.410565 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 06 05:31:17 crc kubenswrapper[4958]: I1206 05:31:17.415613 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Dec 06 05:31:22 crc kubenswrapper[4958]: I1206 05:31:22.100639 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Dec 06 05:31:22 crc kubenswrapper[4958]: I1206 05:31:22.102214 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Dec 06 05:31:22 crc kubenswrapper[4958]: I1206 05:31:22.111648 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Dec 06 05:31:22 crc kubenswrapper[4958]: I1206 05:31:22.292287 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5a3d6dd5-04c5-427e-9655-052b17f8c9d2-var-lock\") pod \"installer-9-crc\" (UID: \"5a3d6dd5-04c5-427e-9655-052b17f8c9d2\") " pod="openshift-kube-apiserver/installer-9-crc"
Dec 06 05:31:22 crc kubenswrapper[4958]: I1206 05:31:22.292371 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5a3d6dd5-04c5-427e-9655-052b17f8c9d2-kube-api-access\") pod \"installer-9-crc\" (UID: \"5a3d6dd5-04c5-427e-9655-052b17f8c9d2\") " pod="openshift-kube-apiserver/installer-9-crc"
Dec 06 05:31:22 crc kubenswrapper[4958]: I1206 05:31:22.292578 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5a3d6dd5-04c5-427e-9655-052b17f8c9d2-kubelet-dir\") pod \"installer-9-crc\" (UID: \"5a3d6dd5-04c5-427e-9655-052b17f8c9d2\") " pod="openshift-kube-apiserver/installer-9-crc"
Dec 06 05:31:22 crc kubenswrapper[4958]: I1206 05:31:22.393570 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5a3d6dd5-04c5-427e-9655-052b17f8c9d2-kubelet-dir\") pod \"installer-9-crc\" (UID: \"5a3d6dd5-04c5-427e-9655-052b17f8c9d2\") " pod="openshift-kube-apiserver/installer-9-crc"
Dec 06 05:31:22 crc kubenswrapper[4958]: I1206 05:31:22.393692 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5a3d6dd5-04c5-427e-9655-052b17f8c9d2-kubelet-dir\") pod \"installer-9-crc\" (UID: \"5a3d6dd5-04c5-427e-9655-052b17f8c9d2\") " pod="openshift-kube-apiserver/installer-9-crc"
Dec 06 05:31:22 crc kubenswrapper[4958]: I1206 05:31:22.393820 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5a3d6dd5-04c5-427e-9655-052b17f8c9d2-var-lock\") pod \"installer-9-crc\" (UID: \"5a3d6dd5-04c5-427e-9655-052b17f8c9d2\") " pod="openshift-kube-apiserver/installer-9-crc"
Dec 06 05:31:22 crc kubenswrapper[4958]: I1206 05:31:22.393870 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5a3d6dd5-04c5-427e-9655-052b17f8c9d2-var-lock\") pod \"installer-9-crc\" (UID: \"5a3d6dd5-04c5-427e-9655-052b17f8c9d2\") " pod="openshift-kube-apiserver/installer-9-crc"
Dec 06 05:31:22 crc kubenswrapper[4958]: I1206 05:31:22.393877 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5a3d6dd5-04c5-427e-9655-052b17f8c9d2-kube-api-access\") pod \"installer-9-crc\" (UID: \"5a3d6dd5-04c5-427e-9655-052b17f8c9d2\") " pod="openshift-kube-apiserver/installer-9-crc"
Dec 06 05:31:22 crc kubenswrapper[4958]: I1206 05:31:22.425724 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5a3d6dd5-04c5-427e-9655-052b17f8c9d2-kube-api-access\") pod \"installer-9-crc\" (UID: \"5a3d6dd5-04c5-427e-9655-052b17f8c9d2\") " pod="openshift-kube-apiserver/installer-9-crc"
Dec 06 05:31:22 crc kubenswrapper[4958]: I1206 05:31:22.723927 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Dec 06 05:31:26 crc kubenswrapper[4958]: E1206 05:31:26.347413 4958 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18"
Dec 06 05:31:26 crc kubenswrapper[4958]: E1206 05:31:26.347936 4958 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q7jrl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-gkp2b_openshift-marketplace(192a9281-a2ae-4251-aaf2-8d1f67d0321c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Dec 06 05:31:26 crc kubenswrapper[4958]: E1206 05:31:26.349185 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-gkp2b" podUID="192a9281-a2ae-4251-aaf2-8d1f67d0321c"
Dec 06 05:31:27 crc kubenswrapper[4958]: E1206 05:31:27.458202 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-gkp2b" podUID="192a9281-a2ae-4251-aaf2-8d1f67d0321c"
\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-gkp2b" podUID="192a9281-a2ae-4251-aaf2-8d1f67d0321c" Dec 06 05:31:27 crc kubenswrapper[4958]: E1206 05:31:27.516605 4958 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Dec 06 05:31:27 crc kubenswrapper[4958]: E1206 05:31:27.516847 4958 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gnxp8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-ggxxn_openshift-marketplace(97d8e3a5-e7e8-46c8-8346-1fddba1a0b6a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 06 05:31:27 crc kubenswrapper[4958]: E1206 05:31:27.519061 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-ggxxn" podUID="97d8e3a5-e7e8-46c8-8346-1fddba1a0b6a" Dec 06 05:31:30 crc kubenswrapper[4958]: E1206 05:31:30.385355 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-ggxxn" podUID="97d8e3a5-e7e8-46c8-8346-1fddba1a0b6a" Dec 06 05:31:30 crc kubenswrapper[4958]: E1206 05:31:30.484175 4958 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Dec 06 05:31:30 crc kubenswrapper[4958]: E1206 05:31:30.484328 4958 
Dec 06 05:31:30 crc kubenswrapper[4958]: E1206 05:31:30.485500 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-wcwnn" podUID="215a3047-0656-45e9-aa87-9f4987ed3b83"
Dec 06 05:31:30 crc kubenswrapper[4958]: E1206 05:31:30.517689 4958 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18"
Dec 06 05:31:30 crc kubenswrapper[4958]: E1206 05:31:30.517875 4958 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j6zz7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-jrnjs_openshift-marketplace(3786f843-0226-4fdc-8511-62659463b3fb): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Dec 06 05:31:30 crc kubenswrapper[4958]: E1206 05:31:30.519352 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-jrnjs" podUID="3786f843-0226-4fdc-8511-62659463b3fb"
Dec 06 05:31:31 crc kubenswrapper[4958]: E1206 05:31:31.730438 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-jrnjs" podUID="3786f843-0226-4fdc-8511-62659463b3fb"
Dec 06 05:31:31 crc kubenswrapper[4958]: E1206 05:31:31.730736 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-wcwnn" podUID="215a3047-0656-45e9-aa87-9f4987ed3b83"
Dec 06 05:31:31 crc kubenswrapper[4958]: E1206 05:31:31.831870 4958 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18"
Dec 06 05:31:31 crc kubenswrapper[4958]: E1206 05:31:31.832060 4958 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cxqkb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-zhltk_openshift-marketplace(d68ddb6d-ffb8-42dc-b2d4-a0ec5a42db59): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Dec 06 05:31:31 crc kubenswrapper[4958]: E1206 05:31:31.833240 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-zhltk" podUID="d68ddb6d-ffb8-42dc-b2d4-a0ec5a42db59"
Dec 06 05:31:31 crc kubenswrapper[4958]: E1206 05:31:31.840703 4958 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18"
Dec 06 05:31:31 crc kubenswrapper[4958]: E1206 05:31:31.840831 4958 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-64m4t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-5f5s9_openshift-marketplace(305608d0-0aae-4507-8058-dc7837555b6c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Dec 06 05:31:31 crc kubenswrapper[4958]: E1206 05:31:31.842142 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-5f5s9" podUID="305608d0-0aae-4507-8058-dc7837555b6c"
Dec 06 05:31:31 crc kubenswrapper[4958]: E1206 05:31:31.903593 4958 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Dec 06 05:31:31 crc kubenswrapper[4958]: E1206 05:31:31.904021 4958 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bhllh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-t5crs_openshift-marketplace(16ec5793-681c-4935-a298-734c214e23c8): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Dec 06 05:31:31 crc kubenswrapper[4958]: E1206 05:31:31.905249 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-t5crs" podUID="16ec5793-681c-4935-a298-734c214e23c8"
Dec 06 05:31:31 crc kubenswrapper[4958]: E1206 05:31:31.913961 4958 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18"
Dec 06 05:31:31 crc kubenswrapper[4958]: E1206 05:31:31.914060 4958 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b6qtp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-qzfg8_openshift-marketplace(71c8096c-9091-428a-a142-185855892fb9): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Dec 06 05:31:31 crc kubenswrapper[4958]: E1206 05:31:31.915256 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-qzfg8" podUID="71c8096c-9091-428a-a142-185855892fb9"
Dec 06 05:31:31 crc kubenswrapper[4958]: I1206 05:31:31.955979 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Dec 06 05:31:31 crc kubenswrapper[4958]: I1206 05:31:31.987989 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-kb98t"]
Dec 06 05:31:32 crc kubenswrapper[4958]: I1206 05:31:32.023676 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Dec 06 05:31:32 crc kubenswrapper[4958]: I1206 05:31:32.345930 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"9615d4ca-94c6-43fe-a7e2-c5a34620275f","Type":"ContainerStarted","Data":"f42cb6d682ca0db358932a38c0e89d4d6eb4926582b1f2687915a71be83f7f2b"}
Dec 06 05:31:32 crc kubenswrapper[4958]: I1206 05:31:32.346406 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"9615d4ca-94c6-43fe-a7e2-c5a34620275f","Type":"ContainerStarted","Data":"3bc5c2e9110da8fcb4ef3730ade277cd72c56bab0f5f7a57461fa1046a58c0a4"}
Dec 06 05:31:32 crc kubenswrapper[4958]: I1206 05:31:32.351735 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-kb98t" event={"ID":"2c09fca2-7d91-412a-9814-64370d35b3e9","Type":"ContainerStarted","Data":"e290de023804faebd92175a59e6c1af7f3d395eae89054b3cea1dae4af3374e0"}
Dec 06 05:31:32 crc kubenswrapper[4958]: I1206 05:31:32.351772 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-kb98t" event={"ID":"2c09fca2-7d91-412a-9814-64370d35b3e9","Type":"ContainerStarted","Data":"953c5e0e26c813eb7d7a04ac5f46797c7682fe721e4a8464c0572aa10fca0e5d"}
Dec 06 05:31:32 crc kubenswrapper[4958]: I1206 05:31:32.353266 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"5a3d6dd5-04c5-427e-9655-052b17f8c9d2","Type":"ContainerStarted","Data":"4565d25d75df4d4bbc65389688a62bbeeca5f128a13b4f784063e516a5af9bfa"}
Dec 06 05:31:32 crc kubenswrapper[4958]: E1206 05:31:32.366115 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-zhltk" podUID="d68ddb6d-ffb8-42dc-b2d4-a0ec5a42db59"
Dec 06 05:31:32 crc kubenswrapper[4958]: E1206 05:31:32.366456 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-t5crs" podUID="16ec5793-681c-4935-a298-734c214e23c8"
Dec 06 05:31:32 crc kubenswrapper[4958]: E1206 05:31:32.366561 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-qzfg8" podUID="71c8096c-9091-428a-a142-185855892fb9"
Dec 06 05:31:32 crc kubenswrapper[4958]: E1206 05:31:32.368853 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-5f5s9" podUID="305608d0-0aae-4507-8058-dc7837555b6c"
Dec 06 05:31:32 crc kubenswrapper[4958]: I1206 05:31:32.372037 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=15.37202072 podStartE2EDuration="15.37202072s" podCreationTimestamp="2025-12-06 05:31:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:31:32.370149445 +0000 UTC m=+202.903920208" watchObservedRunningTime="2025-12-06 05:31:32.37202072 +0000 UTC m=+202.905791493"
Dec 06 05:31:33 crc kubenswrapper[4958]: I1206 05:31:33.368919 4958 generic.go:334] "Generic (PLEG): container finished" podID="9615d4ca-94c6-43fe-a7e2-c5a34620275f" containerID="f42cb6d682ca0db358932a38c0e89d4d6eb4926582b1f2687915a71be83f7f2b" exitCode=0
Dec 06 05:31:33 crc kubenswrapper[4958]: I1206 05:31:33.369525 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"9615d4ca-94c6-43fe-a7e2-c5a34620275f","Type":"ContainerDied","Data":"f42cb6d682ca0db358932a38c0e89d4d6eb4926582b1f2687915a71be83f7f2b"}
Dec 06 05:31:33 crc kubenswrapper[4958]: I1206 05:31:33.372711 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-kb98t" event={"ID":"2c09fca2-7d91-412a-9814-64370d35b3e9","Type":"ContainerStarted","Data":"68a8d992a46da1f56404e0f6fbd22fea942ee7093742ba64780f73d4332fb70b"}
event={"ID":"2c09fca2-7d91-412a-9814-64370d35b3e9","Type":"ContainerStarted","Data":"68a8d992a46da1f56404e0f6fbd22fea942ee7093742ba64780f73d4332fb70b"} Dec 06 05:31:33 crc kubenswrapper[4958]: I1206 05:31:33.374260 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"5a3d6dd5-04c5-427e-9655-052b17f8c9d2","Type":"ContainerStarted","Data":"39717396d216743a4344a2b1066b0b2514201fe836f1d64af81dd77e54459c4c"} Dec 06 05:31:33 crc kubenswrapper[4958]: I1206 05:31:33.424751 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=11.424733434 podStartE2EDuration="11.424733434s" podCreationTimestamp="2025-12-06 05:31:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:31:33.422743545 +0000 UTC m=+203.956514318" watchObservedRunningTime="2025-12-06 05:31:33.424733434 +0000 UTC m=+203.958504197" Dec 06 05:31:33 crc kubenswrapper[4958]: I1206 05:31:33.426199 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-kb98t" podStartSLOduration=184.426190464 podStartE2EDuration="3m4.426190464s" podCreationTimestamp="2025-12-06 05:28:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:31:33.404162075 +0000 UTC m=+203.937932838" watchObservedRunningTime="2025-12-06 05:31:33.426190464 +0000 UTC m=+203.959961227" Dec 06 05:31:34 crc kubenswrapper[4958]: I1206 05:31:34.611791 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 06 05:31:34 crc kubenswrapper[4958]: I1206 05:31:34.687555 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9615d4ca-94c6-43fe-a7e2-c5a34620275f-kube-api-access\") pod \"9615d4ca-94c6-43fe-a7e2-c5a34620275f\" (UID: \"9615d4ca-94c6-43fe-a7e2-c5a34620275f\") " Dec 06 05:31:34 crc kubenswrapper[4958]: I1206 05:31:34.687627 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9615d4ca-94c6-43fe-a7e2-c5a34620275f-kubelet-dir\") pod \"9615d4ca-94c6-43fe-a7e2-c5a34620275f\" (UID: \"9615d4ca-94c6-43fe-a7e2-c5a34620275f\") " Dec 06 05:31:34 crc kubenswrapper[4958]: I1206 05:31:34.687754 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9615d4ca-94c6-43fe-a7e2-c5a34620275f-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "9615d4ca-94c6-43fe-a7e2-c5a34620275f" (UID: "9615d4ca-94c6-43fe-a7e2-c5a34620275f"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 05:31:34 crc kubenswrapper[4958]: I1206 05:31:34.687969 4958 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9615d4ca-94c6-43fe-a7e2-c5a34620275f-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 06 05:31:34 crc kubenswrapper[4958]: I1206 05:31:34.696632 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9615d4ca-94c6-43fe-a7e2-c5a34620275f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "9615d4ca-94c6-43fe-a7e2-c5a34620275f" (UID: "9615d4ca-94c6-43fe-a7e2-c5a34620275f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:31:34 crc kubenswrapper[4958]: I1206 05:31:34.789750 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9615d4ca-94c6-43fe-a7e2-c5a34620275f-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 06 05:31:35 crc kubenswrapper[4958]: I1206 05:31:35.387740 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"9615d4ca-94c6-43fe-a7e2-c5a34620275f","Type":"ContainerDied","Data":"3bc5c2e9110da8fcb4ef3730ade277cd72c56bab0f5f7a57461fa1046a58c0a4"} Dec 06 05:31:35 crc kubenswrapper[4958]: I1206 05:31:35.387777 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3bc5c2e9110da8fcb4ef3730ade277cd72c56bab0f5f7a57461fa1046a58c0a4" Dec 06 05:31:35 crc kubenswrapper[4958]: I1206 05:31:35.387800 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 06 05:31:39 crc kubenswrapper[4958]: I1206 05:31:39.865848 4958 patch_prober.go:28] interesting pod/machine-config-daemon-5ktnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 05:31:39 crc kubenswrapper[4958]: I1206 05:31:39.866300 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 05:31:43 crc kubenswrapper[4958]: I1206 05:31:43.431413 4958 generic.go:334] "Generic (PLEG): container finished" podID="192a9281-a2ae-4251-aaf2-8d1f67d0321c" containerID="41c165dfe26dc276aeba2063a1d281c485cee7248f0ee0a39398258fe4458dde" exitCode=0 Dec 06 05:31:43 crc kubenswrapper[4958]: I1206 05:31:43.431613 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gkp2b" event={"ID":"192a9281-a2ae-4251-aaf2-8d1f67d0321c","Type":"ContainerDied","Data":"41c165dfe26dc276aeba2063a1d281c485cee7248f0ee0a39398258fe4458dde"} Dec 06 05:31:44 crc kubenswrapper[4958]: I1206 05:31:44.438705 4958 generic.go:334] "Generic (PLEG): container finished" podID="215a3047-0656-45e9-aa87-9f4987ed3b83" containerID="8ce448b85cdf9b8db80dbe9cc8d4e10eb474975443b333a08575519bb06a07ee" exitCode=0 Dec 06 05:31:44 crc kubenswrapper[4958]: I1206 05:31:44.438784 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wcwnn" 
event={"ID":"215a3047-0656-45e9-aa87-9f4987ed3b83","Type":"ContainerDied","Data":"8ce448b85cdf9b8db80dbe9cc8d4e10eb474975443b333a08575519bb06a07ee"} Dec 06 05:31:44 crc kubenswrapper[4958]: I1206 05:31:44.441584 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gkp2b" event={"ID":"192a9281-a2ae-4251-aaf2-8d1f67d0321c","Type":"ContainerStarted","Data":"2ad61f11124bd576a84f7cc68c034b5853345cba634de067a10507a4da19c98c"} Dec 06 05:31:44 crc kubenswrapper[4958]: I1206 05:31:44.443617 4958 generic.go:334] "Generic (PLEG): container finished" podID="97d8e3a5-e7e8-46c8-8346-1fddba1a0b6a" containerID="bdc2a3c68e9e42612a2b32594f6e8d3c51d324234f1e245fcd57638d17e6520c" exitCode=0 Dec 06 05:31:44 crc kubenswrapper[4958]: I1206 05:31:44.443656 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ggxxn" event={"ID":"97d8e3a5-e7e8-46c8-8346-1fddba1a0b6a","Type":"ContainerDied","Data":"bdc2a3c68e9e42612a2b32594f6e8d3c51d324234f1e245fcd57638d17e6520c"} Dec 06 05:31:44 crc kubenswrapper[4958]: I1206 05:31:44.475665 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-gkp2b" podStartSLOduration=2.35276004 podStartE2EDuration="1m8.475646851s" podCreationTimestamp="2025-12-06 05:30:36 +0000 UTC" firstStartedPulling="2025-12-06 05:30:37.718921704 +0000 UTC m=+148.252692467" lastFinishedPulling="2025-12-06 05:31:43.841808515 +0000 UTC m=+214.375579278" observedRunningTime="2025-12-06 05:31:44.470441472 +0000 UTC m=+215.004212235" watchObservedRunningTime="2025-12-06 05:31:44.475646851 +0000 UTC m=+215.009417614" Dec 06 05:31:45 crc kubenswrapper[4958]: I1206 05:31:45.452775 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wcwnn" event={"ID":"215a3047-0656-45e9-aa87-9f4987ed3b83","Type":"ContainerStarted","Data":"1a883d34976dcfe492b91aed6a8f814f325c0321361c59c9d5ec5f60e4121770"} Dec 06 05:31:45 crc kubenswrapper[4958]: I1206 05:31:45.454244 4958 generic.go:334] "Generic (PLEG): container finished" podID="d68ddb6d-ffb8-42dc-b2d4-a0ec5a42db59" containerID="a6e1dd79f1f9fb9454ae1d700e4793f9354fc433091def3cc5e401241c41b134" exitCode=0 Dec 06 05:31:45 crc kubenswrapper[4958]: I1206 05:31:45.454288 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zhltk" event={"ID":"d68ddb6d-ffb8-42dc-b2d4-a0ec5a42db59","Type":"ContainerDied","Data":"a6e1dd79f1f9fb9454ae1d700e4793f9354fc433091def3cc5e401241c41b134"} Dec 06 05:31:45 crc kubenswrapper[4958]: I1206 05:31:45.473404 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wcwnn" podStartSLOduration=1.958698677 podStartE2EDuration="1m9.473386441s" podCreationTimestamp="2025-12-06 05:30:36 +0000 UTC" firstStartedPulling="2025-12-06 05:30:37.708294946 +0000 UTC m=+148.242065709" lastFinishedPulling="2025-12-06 05:31:45.22298271 +0000 UTC m=+215.756753473" observedRunningTime="2025-12-06 05:31:45.472294643 +0000 UTC m=+216.006065416" watchObservedRunningTime="2025-12-06 05:31:45.473386441 +0000 UTC m=+216.007157204" Dec 06 05:31:46 crc kubenswrapper[4958]: I1206 05:31:46.460565 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jrnjs" event={"ID":"3786f843-0226-4fdc-8511-62659463b3fb","Type":"ContainerStarted","Data":"3853734440ebc0b4f90cee5e281a956fa4e49049214deea5dce4c42381e1666d"} Dec 06 05:31:46 crc 
kubenswrapper[4958]: I1206 05:31:46.463858 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zhltk" event={"ID":"d68ddb6d-ffb8-42dc-b2d4-a0ec5a42db59","Type":"ContainerStarted","Data":"9edec71c9ed4f487dce6a1aff52c88fd08149091f6d8547f7c21b83ef5a90b2f"} Dec 06 05:31:46 crc kubenswrapper[4958]: I1206 05:31:46.466793 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ggxxn" event={"ID":"97d8e3a5-e7e8-46c8-8346-1fddba1a0b6a","Type":"ContainerStarted","Data":"2f5d4e675828a5283fcfdf7b106dbdad9f5c5d3f9899941f29baf1965c196bce"} Dec 06 05:31:46 crc kubenswrapper[4958]: I1206 05:31:46.468876 4958 generic.go:334] "Generic (PLEG): container finished" podID="305608d0-0aae-4507-8058-dc7837555b6c" containerID="f9ae9bec015cd768cb69c63d50d94ca1322cc9258e2d173e256d440160c468f1" exitCode=0 Dec 06 05:31:46 crc kubenswrapper[4958]: I1206 05:31:46.468908 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5f5s9" event={"ID":"305608d0-0aae-4507-8058-dc7837555b6c","Type":"ContainerDied","Data":"f9ae9bec015cd768cb69c63d50d94ca1322cc9258e2d173e256d440160c468f1"} Dec 06 05:31:46 crc kubenswrapper[4958]: I1206 05:31:46.500140 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zhltk" podStartSLOduration=3.346892194 podStartE2EDuration="1m10.50012111s" podCreationTimestamp="2025-12-06 05:30:36 +0000 UTC" firstStartedPulling="2025-12-06 05:30:38.783949871 +0000 UTC m=+149.317720634" lastFinishedPulling="2025-12-06 05:31:45.937178767 +0000 UTC m=+216.470949550" observedRunningTime="2025-12-06 05:31:46.497569692 +0000 UTC m=+217.031340455" watchObservedRunningTime="2025-12-06 05:31:46.50012111 +0000 UTC m=+217.033891873" Dec 06 05:31:46 crc kubenswrapper[4958]: I1206 05:31:46.514315 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-ggxxn" podStartSLOduration=3.060604566 podStartE2EDuration="1m8.514296318s" podCreationTimestamp="2025-12-06 05:30:38 +0000 UTC" firstStartedPulling="2025-12-06 05:30:39.843391612 +0000 UTC m=+150.377162375" lastFinishedPulling="2025-12-06 05:31:45.297083364 +0000 UTC m=+215.830854127" observedRunningTime="2025-12-06 05:31:46.513866814 +0000 UTC m=+217.047637587" watchObservedRunningTime="2025-12-06 05:31:46.514296318 +0000 UTC m=+217.048067081" Dec 06 05:31:46 crc kubenswrapper[4958]: I1206 05:31:46.598488 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-gkp2b" Dec 06 05:31:46 crc kubenswrapper[4958]: I1206 05:31:46.598550 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-gkp2b" Dec 06 05:31:46 crc kubenswrapper[4958]: I1206 05:31:46.671257 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-gkp2b" Dec 06 05:31:47 crc kubenswrapper[4958]: I1206 05:31:47.060984 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wcwnn" Dec 06 05:31:47 crc kubenswrapper[4958]: I1206 05:31:47.061187 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-wcwnn" Dec 06 05:31:47 crc kubenswrapper[4958]: I1206 05:31:47.096561 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/certified-operators-wcwnn" Dec 06 05:31:47 crc kubenswrapper[4958]: I1206 05:31:47.291743 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-zhltk" Dec 06 05:31:47 crc kubenswrapper[4958]: I1206 05:31:47.291830 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zhltk" Dec 06 05:31:47 crc kubenswrapper[4958]: I1206 05:31:47.476698 4958 generic.go:334] "Generic (PLEG): container finished" podID="3786f843-0226-4fdc-8511-62659463b3fb" containerID="3853734440ebc0b4f90cee5e281a956fa4e49049214deea5dce4c42381e1666d" exitCode=0 Dec 06 05:31:47 crc kubenswrapper[4958]: I1206 05:31:47.476791 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jrnjs" event={"ID":"3786f843-0226-4fdc-8511-62659463b3fb","Type":"ContainerDied","Data":"3853734440ebc0b4f90cee5e281a956fa4e49049214deea5dce4c42381e1666d"} Dec 06 05:31:48 crc kubenswrapper[4958]: I1206 05:31:48.325754 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-zhltk" podUID="d68ddb6d-ffb8-42dc-b2d4-a0ec5a42db59" containerName="registry-server" probeResult="failure" output=< Dec 06 05:31:48 crc kubenswrapper[4958]: timeout: failed to connect service ":50051" within 1s Dec 06 05:31:48 crc kubenswrapper[4958]: > Dec 06 05:31:48 crc kubenswrapper[4958]: I1206 05:31:48.490367 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5f5s9" event={"ID":"305608d0-0aae-4507-8058-dc7837555b6c","Type":"ContainerStarted","Data":"469c889dd34f9d990278e05e3b17c63fd3285c1ee7d13305bfb213ba511667b5"} Dec 06 05:31:48 crc kubenswrapper[4958]: I1206 05:31:48.510725 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5f5s9" podStartSLOduration=3.522637101 podStartE2EDuration="1m9.51070184s" podCreationTimestamp="2025-12-06 05:30:39 +0000 UTC" firstStartedPulling="2025-12-06 05:30:40.87584653 +0000 UTC m=+151.409617293" lastFinishedPulling="2025-12-06 05:31:46.863911269 +0000 UTC m=+217.397682032" observedRunningTime="2025-12-06 05:31:48.508879207 +0000 UTC m=+219.042649970" watchObservedRunningTime="2025-12-06 05:31:48.51070184 +0000 UTC m=+219.044472613" Dec 06 05:31:48 crc kubenswrapper[4958]: I1206 05:31:48.783716 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-ggxxn" Dec 06 05:31:48 crc kubenswrapper[4958]: I1206 05:31:48.783765 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-ggxxn" Dec 06 05:31:48 crc kubenswrapper[4958]: I1206 05:31:48.822926 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-ggxxn" Dec 06 05:31:50 crc kubenswrapper[4958]: I1206 05:31:50.202934 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5f5s9" Dec 06 05:31:50 crc kubenswrapper[4958]: I1206 05:31:50.203212 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5f5s9" Dec 06 05:31:51 crc kubenswrapper[4958]: I1206 05:31:51.252757 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5f5s9" podUID="305608d0-0aae-4507-8058-dc7837555b6c" 
containerName="registry-server" probeResult="failure" output=< Dec 06 05:31:51 crc kubenswrapper[4958]: timeout: failed to connect service ":50051" within 1s Dec 06 05:31:51 crc kubenswrapper[4958]: > Dec 06 05:31:56 crc kubenswrapper[4958]: I1206 05:31:56.536151 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qzfg8" event={"ID":"71c8096c-9091-428a-a142-185855892fb9","Type":"ContainerStarted","Data":"18582cbf45603c0dcbb14fb6fc11347888f3924c6e9d53e267bf0be19a6ed306"} Dec 06 05:31:56 crc kubenswrapper[4958]: I1206 05:31:56.666110 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-gkp2b" Dec 06 05:31:57 crc kubenswrapper[4958]: I1206 05:31:57.106298 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wcwnn" Dec 06 05:31:57 crc kubenswrapper[4958]: I1206 05:31:57.149854 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wcwnn"] Dec 06 05:31:57 crc kubenswrapper[4958]: I1206 05:31:57.354579 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zhltk" Dec 06 05:31:57 crc kubenswrapper[4958]: I1206 05:31:57.356186 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-5hcwh"] Dec 06 05:31:57 crc kubenswrapper[4958]: I1206 05:31:57.435109 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zhltk" Dec 06 05:31:57 crc kubenswrapper[4958]: I1206 05:31:57.542162 4958 generic.go:334] "Generic (PLEG): container finished" podID="16ec5793-681c-4935-a298-734c214e23c8" containerID="56fcafd5104458aa7a12e82dc6e73046b5ec8b741bf50d5b65ebe3de28e6456f" exitCode=0 Dec 06 05:31:57 crc kubenswrapper[4958]: I1206 05:31:57.542233 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t5crs" event={"ID":"16ec5793-681c-4935-a298-734c214e23c8","Type":"ContainerDied","Data":"56fcafd5104458aa7a12e82dc6e73046b5ec8b741bf50d5b65ebe3de28e6456f"} Dec 06 05:31:57 crc kubenswrapper[4958]: I1206 05:31:57.544894 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jrnjs" event={"ID":"3786f843-0226-4fdc-8511-62659463b3fb","Type":"ContainerStarted","Data":"babaf338e69f2cde91dcf9450dabc498aa7a6160dcfeaedbf62e2029cef885e8"} Dec 06 05:31:57 crc kubenswrapper[4958]: I1206 05:31:57.548735 4958 generic.go:334] "Generic (PLEG): container finished" podID="71c8096c-9091-428a-a142-185855892fb9" containerID="18582cbf45603c0dcbb14fb6fc11347888f3924c6e9d53e267bf0be19a6ed306" exitCode=0 Dec 06 05:31:57 crc kubenswrapper[4958]: I1206 05:31:57.548825 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qzfg8" event={"ID":"71c8096c-9091-428a-a142-185855892fb9","Type":"ContainerDied","Data":"18582cbf45603c0dcbb14fb6fc11347888f3924c6e9d53e267bf0be19a6ed306"} Dec 06 05:31:57 crc kubenswrapper[4958]: I1206 05:31:57.549107 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-wcwnn" podUID="215a3047-0656-45e9-aa87-9f4987ed3b83" containerName="registry-server" containerID="cri-o://1a883d34976dcfe492b91aed6a8f814f325c0321361c59c9d5ec5f60e4121770" gracePeriod=2 Dec 06 05:31:57 crc kubenswrapper[4958]: I1206 05:31:57.581992 
Dec 06 05:31:58 crc kubenswrapper[4958]: I1206 05:31:58.556122 4958 generic.go:334] "Generic (PLEG): container finished" podID="215a3047-0656-45e9-aa87-9f4987ed3b83" containerID="1a883d34976dcfe492b91aed6a8f814f325c0321361c59c9d5ec5f60e4121770" exitCode=0
Dec 06 05:31:58 crc kubenswrapper[4958]: I1206 05:31:58.556402 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wcwnn" event={"ID":"215a3047-0656-45e9-aa87-9f4987ed3b83","Type":"ContainerDied","Data":"1a883d34976dcfe492b91aed6a8f814f325c0321361c59c9d5ec5f60e4121770"}
Dec 06 05:31:58 crc kubenswrapper[4958]: I1206 05:31:58.836953 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-ggxxn"
Dec 06 05:31:59 crc kubenswrapper[4958]: I1206 05:31:59.112251 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wcwnn"
Dec 06 05:31:59 crc kubenswrapper[4958]: I1206 05:31:59.225046 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/215a3047-0656-45e9-aa87-9f4987ed3b83-utilities\") pod \"215a3047-0656-45e9-aa87-9f4987ed3b83\" (UID: \"215a3047-0656-45e9-aa87-9f4987ed3b83\") "
Dec 06 05:31:59 crc kubenswrapper[4958]: I1206 05:31:59.225128 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b2dvz\" (UniqueName: \"kubernetes.io/projected/215a3047-0656-45e9-aa87-9f4987ed3b83-kube-api-access-b2dvz\") pod \"215a3047-0656-45e9-aa87-9f4987ed3b83\" (UID: \"215a3047-0656-45e9-aa87-9f4987ed3b83\") "
Dec 06 05:31:59 crc kubenswrapper[4958]: I1206 05:31:59.225211 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/215a3047-0656-45e9-aa87-9f4987ed3b83-catalog-content\") pod \"215a3047-0656-45e9-aa87-9f4987ed3b83\" (UID: \"215a3047-0656-45e9-aa87-9f4987ed3b83\") "
Dec 06 05:31:59 crc kubenswrapper[4958]: I1206 05:31:59.226593 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/215a3047-0656-45e9-aa87-9f4987ed3b83-utilities" (OuterVolumeSpecName: "utilities") pod "215a3047-0656-45e9-aa87-9f4987ed3b83" (UID: "215a3047-0656-45e9-aa87-9f4987ed3b83"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 06 05:31:59 crc kubenswrapper[4958]: I1206 05:31:59.235739 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/215a3047-0656-45e9-aa87-9f4987ed3b83-kube-api-access-b2dvz" (OuterVolumeSpecName: "kube-api-access-b2dvz") pod "215a3047-0656-45e9-aa87-9f4987ed3b83" (UID: "215a3047-0656-45e9-aa87-9f4987ed3b83"). InnerVolumeSpecName "kube-api-access-b2dvz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 05:31:59 crc kubenswrapper[4958]: I1206 05:31:59.326970 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/215a3047-0656-45e9-aa87-9f4987ed3b83-utilities\") on node \"crc\" DevicePath \"\""
Dec 06 05:31:59 crc kubenswrapper[4958]: I1206 05:31:59.327017 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b2dvz\" (UniqueName: \"kubernetes.io/projected/215a3047-0656-45e9-aa87-9f4987ed3b83-kube-api-access-b2dvz\") on node \"crc\" DevicePath \"\""
Dec 06 05:31:59 crc kubenswrapper[4958]: I1206 05:31:59.575187 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wcwnn" event={"ID":"215a3047-0656-45e9-aa87-9f4987ed3b83","Type":"ContainerDied","Data":"306b9c681d5bfbb179f27bfc4adf49bc3fe36202e190d4fced4e69c511e67a22"}
Dec 06 05:31:59 crc kubenswrapper[4958]: I1206 05:31:59.575287 4958 scope.go:117] "RemoveContainer" containerID="1a883d34976dcfe492b91aed6a8f814f325c0321361c59c9d5ec5f60e4121770"
Dec 06 05:31:59 crc kubenswrapper[4958]: I1206 05:31:59.575647 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wcwnn"
Dec 06 05:31:59 crc kubenswrapper[4958]: I1206 05:31:59.607944 4958 scope.go:117] "RemoveContainer" containerID="8ce448b85cdf9b8db80dbe9cc8d4e10eb474975443b333a08575519bb06a07ee"
Dec 06 05:31:59 crc kubenswrapper[4958]: I1206 05:31:59.656605 4958 scope.go:117] "RemoveContainer" containerID="3b41560435ad11e36ffe4205890b4f0487a9481518c821e651ee7faec5d303a4"
Dec 06 05:31:59 crc kubenswrapper[4958]: I1206 05:31:59.707061 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zhltk"]
Dec 06 05:31:59 crc kubenswrapper[4958]: I1206 05:31:59.707357 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-zhltk" podUID="d68ddb6d-ffb8-42dc-b2d4-a0ec5a42db59" containerName="registry-server" containerID="cri-o://9edec71c9ed4f487dce6a1aff52c88fd08149091f6d8547f7c21b83ef5a90b2f" gracePeriod=2
Dec 06 05:31:59 crc kubenswrapper[4958]: I1206 05:31:59.813846 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jrnjs"
Dec 06 05:31:59 crc kubenswrapper[4958]: I1206 05:31:59.813902 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jrnjs"
Dec 06 05:32:00 crc kubenswrapper[4958]: I1206 05:32:00.262666 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5f5s9"
Dec 06 05:32:00 crc kubenswrapper[4958]: I1206 05:32:00.331630 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5f5s9"
Dec 06 05:32:00 crc kubenswrapper[4958]: I1206 05:32:00.834741 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/215a3047-0656-45e9-aa87-9f4987ed3b83-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "215a3047-0656-45e9-aa87-9f4987ed3b83" (UID: "215a3047-0656-45e9-aa87-9f4987ed3b83"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 05:32:00 crc kubenswrapper[4958]: I1206 05:32:00.849607 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/215a3047-0656-45e9-aa87-9f4987ed3b83-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 05:32:00 crc kubenswrapper[4958]: I1206 05:32:00.863802 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jrnjs" podUID="3786f843-0226-4fdc-8511-62659463b3fb" containerName="registry-server" probeResult="failure" output=< Dec 06 05:32:00 crc kubenswrapper[4958]: timeout: failed to connect service ":50051" within 1s Dec 06 05:32:00 crc kubenswrapper[4958]: > Dec 06 05:32:01 crc kubenswrapper[4958]: I1206 05:32:01.117528 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wcwnn"] Dec 06 05:32:01 crc kubenswrapper[4958]: I1206 05:32:01.124642 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-wcwnn"] Dec 06 05:32:01 crc kubenswrapper[4958]: I1206 05:32:01.592353 4958 generic.go:334] "Generic (PLEG): container finished" podID="d68ddb6d-ffb8-42dc-b2d4-a0ec5a42db59" containerID="9edec71c9ed4f487dce6a1aff52c88fd08149091f6d8547f7c21b83ef5a90b2f" exitCode=0 Dec 06 05:32:01 crc kubenswrapper[4958]: I1206 05:32:01.592457 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zhltk" event={"ID":"d68ddb6d-ffb8-42dc-b2d4-a0ec5a42db59","Type":"ContainerDied","Data":"9edec71c9ed4f487dce6a1aff52c88fd08149091f6d8547f7c21b83ef5a90b2f"} Dec 06 05:32:01 crc kubenswrapper[4958]: I1206 05:32:01.771601 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="215a3047-0656-45e9-aa87-9f4987ed3b83" path="/var/lib/kubelet/pods/215a3047-0656-45e9-aa87-9f4987ed3b83/volumes" Dec 06 05:32:02 crc kubenswrapper[4958]: I1206 05:32:02.103859 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5f5s9"] Dec 06 05:32:02 crc kubenswrapper[4958]: I1206 05:32:02.105611 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5f5s9" podUID="305608d0-0aae-4507-8058-dc7837555b6c" containerName="registry-server" containerID="cri-o://469c889dd34f9d990278e05e3b17c63fd3285c1ee7d13305bfb213ba511667b5" gracePeriod=2 Dec 06 05:32:02 crc kubenswrapper[4958]: I1206 05:32:02.342679 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-zhltk" Dec 06 05:32:02 crc kubenswrapper[4958]: I1206 05:32:02.369638 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d68ddb6d-ffb8-42dc-b2d4-a0ec5a42db59-catalog-content\") pod \"d68ddb6d-ffb8-42dc-b2d4-a0ec5a42db59\" (UID: \"d68ddb6d-ffb8-42dc-b2d4-a0ec5a42db59\") " Dec 06 05:32:02 crc kubenswrapper[4958]: I1206 05:32:02.369982 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cxqkb\" (UniqueName: \"kubernetes.io/projected/d68ddb6d-ffb8-42dc-b2d4-a0ec5a42db59-kube-api-access-cxqkb\") pod \"d68ddb6d-ffb8-42dc-b2d4-a0ec5a42db59\" (UID: \"d68ddb6d-ffb8-42dc-b2d4-a0ec5a42db59\") " Dec 06 05:32:02 crc kubenswrapper[4958]: I1206 05:32:02.370046 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d68ddb6d-ffb8-42dc-b2d4-a0ec5a42db59-utilities\") pod \"d68ddb6d-ffb8-42dc-b2d4-a0ec5a42db59\" (UID: \"d68ddb6d-ffb8-42dc-b2d4-a0ec5a42db59\") " Dec 06 05:32:02 crc kubenswrapper[4958]: I1206 05:32:02.371816 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d68ddb6d-ffb8-42dc-b2d4-a0ec5a42db59-utilities" (OuterVolumeSpecName: "utilities") pod "d68ddb6d-ffb8-42dc-b2d4-a0ec5a42db59" (UID: "d68ddb6d-ffb8-42dc-b2d4-a0ec5a42db59"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 05:32:02 crc kubenswrapper[4958]: I1206 05:32:02.381690 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d68ddb6d-ffb8-42dc-b2d4-a0ec5a42db59-kube-api-access-cxqkb" (OuterVolumeSpecName: "kube-api-access-cxqkb") pod "d68ddb6d-ffb8-42dc-b2d4-a0ec5a42db59" (UID: "d68ddb6d-ffb8-42dc-b2d4-a0ec5a42db59"). InnerVolumeSpecName "kube-api-access-cxqkb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:32:02 crc kubenswrapper[4958]: I1206 05:32:02.426292 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d68ddb6d-ffb8-42dc-b2d4-a0ec5a42db59-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d68ddb6d-ffb8-42dc-b2d4-a0ec5a42db59" (UID: "d68ddb6d-ffb8-42dc-b2d4-a0ec5a42db59"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 05:32:02 crc kubenswrapper[4958]: I1206 05:32:02.471840 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d68ddb6d-ffb8-42dc-b2d4-a0ec5a42db59-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 05:32:02 crc kubenswrapper[4958]: I1206 05:32:02.471866 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cxqkb\" (UniqueName: \"kubernetes.io/projected/d68ddb6d-ffb8-42dc-b2d4-a0ec5a42db59-kube-api-access-cxqkb\") on node \"crc\" DevicePath \"\"" Dec 06 05:32:02 crc kubenswrapper[4958]: I1206 05:32:02.471880 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d68ddb6d-ffb8-42dc-b2d4-a0ec5a42db59-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 05:32:02 crc kubenswrapper[4958]: I1206 05:32:02.601063 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zhltk" event={"ID":"d68ddb6d-ffb8-42dc-b2d4-a0ec5a42db59","Type":"ContainerDied","Data":"794a6f0f339f675a20d8d57b7b4fba7dbd4b8fd75e4ce7f5f7ba0c9b18b4445b"} Dec 06 05:32:02 crc kubenswrapper[4958]: I1206 05:32:02.601164 4958 scope.go:117] "RemoveContainer" containerID="9edec71c9ed4f487dce6a1aff52c88fd08149091f6d8547f7c21b83ef5a90b2f" Dec 06 05:32:02 crc kubenswrapper[4958]: I1206 05:32:02.601100 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zhltk" Dec 06 05:32:02 crc kubenswrapper[4958]: I1206 05:32:02.632317 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zhltk"] Dec 06 05:32:02 crc kubenswrapper[4958]: I1206 05:32:02.635208 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-zhltk"] Dec 06 05:32:03 crc kubenswrapper[4958]: I1206 05:32:03.773707 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d68ddb6d-ffb8-42dc-b2d4-a0ec5a42db59" path="/var/lib/kubelet/pods/d68ddb6d-ffb8-42dc-b2d4-a0ec5a42db59/volumes" Dec 06 05:32:04 crc kubenswrapper[4958]: I1206 05:32:04.355699 4958 scope.go:117] "RemoveContainer" containerID="a6e1dd79f1f9fb9454ae1d700e4793f9354fc433091def3cc5e401241c41b134" Dec 06 05:32:04 crc kubenswrapper[4958]: I1206 05:32:04.372876 4958 scope.go:117] "RemoveContainer" containerID="c670ed7f65d6dc5dd7acb1f065fef47613602ce26e0a578a20ba5837089230bc" Dec 06 05:32:04 crc kubenswrapper[4958]: I1206 05:32:04.630715 4958 generic.go:334] "Generic (PLEG): container finished" podID="305608d0-0aae-4507-8058-dc7837555b6c" containerID="469c889dd34f9d990278e05e3b17c63fd3285c1ee7d13305bfb213ba511667b5" exitCode=0 Dec 06 05:32:04 crc kubenswrapper[4958]: I1206 05:32:04.630777 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5f5s9" event={"ID":"305608d0-0aae-4507-8058-dc7837555b6c","Type":"ContainerDied","Data":"469c889dd34f9d990278e05e3b17c63fd3285c1ee7d13305bfb213ba511667b5"} Dec 06 05:32:05 crc kubenswrapper[4958]: I1206 05:32:05.311936 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5f5s9" Dec 06 05:32:05 crc kubenswrapper[4958]: I1206 05:32:05.432171 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/305608d0-0aae-4507-8058-dc7837555b6c-catalog-content\") pod \"305608d0-0aae-4507-8058-dc7837555b6c\" (UID: \"305608d0-0aae-4507-8058-dc7837555b6c\") " Dec 06 05:32:05 crc kubenswrapper[4958]: I1206 05:32:05.432341 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64m4t\" (UniqueName: \"kubernetes.io/projected/305608d0-0aae-4507-8058-dc7837555b6c-kube-api-access-64m4t\") pod \"305608d0-0aae-4507-8058-dc7837555b6c\" (UID: \"305608d0-0aae-4507-8058-dc7837555b6c\") " Dec 06 05:32:05 crc kubenswrapper[4958]: I1206 05:32:05.432378 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/305608d0-0aae-4507-8058-dc7837555b6c-utilities\") pod \"305608d0-0aae-4507-8058-dc7837555b6c\" (UID: \"305608d0-0aae-4507-8058-dc7837555b6c\") " Dec 06 05:32:05 crc kubenswrapper[4958]: I1206 05:32:05.433791 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/305608d0-0aae-4507-8058-dc7837555b6c-utilities" (OuterVolumeSpecName: "utilities") pod "305608d0-0aae-4507-8058-dc7837555b6c" (UID: "305608d0-0aae-4507-8058-dc7837555b6c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 05:32:05 crc kubenswrapper[4958]: I1206 05:32:05.437589 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/305608d0-0aae-4507-8058-dc7837555b6c-kube-api-access-64m4t" (OuterVolumeSpecName: "kube-api-access-64m4t") pod "305608d0-0aae-4507-8058-dc7837555b6c" (UID: "305608d0-0aae-4507-8058-dc7837555b6c"). InnerVolumeSpecName "kube-api-access-64m4t". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:32:05 crc kubenswrapper[4958]: I1206 05:32:05.534426 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-64m4t\" (UniqueName: \"kubernetes.io/projected/305608d0-0aae-4507-8058-dc7837555b6c-kube-api-access-64m4t\") on node \"crc\" DevicePath \"\"" Dec 06 05:32:05 crc kubenswrapper[4958]: I1206 05:32:05.534768 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/305608d0-0aae-4507-8058-dc7837555b6c-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 05:32:05 crc kubenswrapper[4958]: I1206 05:32:05.639435 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5f5s9" event={"ID":"305608d0-0aae-4507-8058-dc7837555b6c","Type":"ContainerDied","Data":"fce9978e746aa97d90110ec55980a640e5429567bc96136538ea0579df33504a"} Dec 06 05:32:05 crc kubenswrapper[4958]: I1206 05:32:05.639522 4958 scope.go:117] "RemoveContainer" containerID="469c889dd34f9d990278e05e3b17c63fd3285c1ee7d13305bfb213ba511667b5" Dec 06 05:32:05 crc kubenswrapper[4958]: I1206 05:32:05.639598 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5f5s9" Dec 06 05:32:06 crc kubenswrapper[4958]: I1206 05:32:06.153905 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/305608d0-0aae-4507-8058-dc7837555b6c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "305608d0-0aae-4507-8058-dc7837555b6c" (UID: "305608d0-0aae-4507-8058-dc7837555b6c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 05:32:06 crc kubenswrapper[4958]: I1206 05:32:06.243409 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/305608d0-0aae-4507-8058-dc7837555b6c-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 05:32:06 crc kubenswrapper[4958]: I1206 05:32:06.289498 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5f5s9"] Dec 06 05:32:06 crc kubenswrapper[4958]: I1206 05:32:06.296832 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5f5s9"] Dec 06 05:32:06 crc kubenswrapper[4958]: I1206 05:32:06.462241 4958 scope.go:117] "RemoveContainer" containerID="f9ae9bec015cd768cb69c63d50d94ca1322cc9258e2d173e256d440160c468f1" Dec 06 05:32:06 crc kubenswrapper[4958]: I1206 05:32:06.498100 4958 scope.go:117] "RemoveContainer" containerID="cf82f5b57f5a99790ab0292ecbda951e3cdac6e5d6621abf8d1f184df13a4519" Dec 06 05:32:06 crc kubenswrapper[4958]: I1206 05:32:06.650162 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qzfg8" event={"ID":"71c8096c-9091-428a-a142-185855892fb9","Type":"ContainerStarted","Data":"01c875a8396c94cf11bce4405b883e848f8a7ac00be10674e4062d4cf7c88dbc"} Dec 06 05:32:06 crc kubenswrapper[4958]: I1206 05:32:06.679865 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qzfg8" podStartSLOduration=4.044578287 podStartE2EDuration="1m30.679840516s" podCreationTimestamp="2025-12-06 05:30:36 +0000 UTC" firstStartedPulling="2025-12-06 05:30:37.720872694 +0000 UTC m=+148.254643457" lastFinishedPulling="2025-12-06 05:32:04.356134923 +0000 UTC m=+234.889905686" observedRunningTime="2025-12-06 05:32:06.673487796 +0000 UTC m=+237.207258599" watchObservedRunningTime="2025-12-06 05:32:06.679840516 +0000 UTC m=+237.213611319" Dec 06 05:32:06 crc kubenswrapper[4958]: I1206 05:32:06.911867 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qzfg8" Dec 06 05:32:06 crc kubenswrapper[4958]: I1206 05:32:06.911960 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qzfg8" Dec 06 05:32:07 crc kubenswrapper[4958]: I1206 05:32:07.662964 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t5crs" event={"ID":"16ec5793-681c-4935-a298-734c214e23c8","Type":"ContainerStarted","Data":"e39e141f75cfcdfd873fd4204faea3ffe57d9249ce7e574d450add8a3d2a2d78"} Dec 06 05:32:07 crc kubenswrapper[4958]: I1206 05:32:07.689733 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-t5crs" podStartSLOduration=3.034691851 podStartE2EDuration="1m29.689695103s" podCreationTimestamp="2025-12-06 05:30:38 +0000 UTC" firstStartedPulling="2025-12-06 05:30:39.843639068 +0000 UTC m=+150.377409821" 
lastFinishedPulling="2025-12-06 05:32:06.4986423 +0000 UTC m=+237.032413073" observedRunningTime="2025-12-06 05:32:07.68902971 +0000 UTC m=+238.222800523" watchObservedRunningTime="2025-12-06 05:32:07.689695103 +0000 UTC m=+238.223465876" Dec 06 05:32:07 crc kubenswrapper[4958]: I1206 05:32:07.770621 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="305608d0-0aae-4507-8058-dc7837555b6c" path="/var/lib/kubelet/pods/305608d0-0aae-4507-8058-dc7837555b6c/volumes" Dec 06 05:32:07 crc kubenswrapper[4958]: I1206 05:32:07.977858 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-qzfg8" podUID="71c8096c-9091-428a-a142-185855892fb9" containerName="registry-server" probeResult="failure" output=< Dec 06 05:32:07 crc kubenswrapper[4958]: timeout: failed to connect service ":50051" within 1s Dec 06 05:32:07 crc kubenswrapper[4958]: > Dec 06 05:32:09 crc kubenswrapper[4958]: I1206 05:32:09.019819 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-t5crs" Dec 06 05:32:09 crc kubenswrapper[4958]: I1206 05:32:09.020284 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-t5crs" Dec 06 05:32:09 crc kubenswrapper[4958]: I1206 05:32:09.089329 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-t5crs" Dec 06 05:32:09 crc kubenswrapper[4958]: I1206 05:32:09.866364 4958 patch_prober.go:28] interesting pod/machine-config-daemon-5ktnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 05:32:09 crc kubenswrapper[4958]: I1206 05:32:09.866459 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 05:32:09 crc kubenswrapper[4958]: I1206 05:32:09.866594 4958 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" Dec 06 05:32:09 crc kubenswrapper[4958]: I1206 05:32:09.867961 4958 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fd2256c5c47e4b4d5c0d27e0a687c531d3d4e2162c3439a57acb90784599b441"} pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 06 05:32:09 crc kubenswrapper[4958]: I1206 05:32:09.868195 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" containerName="machine-config-daemon" containerID="cri-o://fd2256c5c47e4b4d5c0d27e0a687c531d3d4e2162c3439a57acb90784599b441" gracePeriod=600 Dec 06 05:32:09 crc kubenswrapper[4958]: I1206 05:32:09.899142 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jrnjs" Dec 06 05:32:09 crc kubenswrapper[4958]: I1206 05:32:09.947016 4958 kubelet.go:2542] "SyncLoop (probe)" 
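[Editor's note] The recurring Probe failed records above, with output "timeout: failed to connect service \":50051\" within 1s", come from startup/readiness probes against the gRPC health endpoint that the marketplace registry-server containers expose on port 50051. Until a registry container has finished extracting its catalog and starts listening, nothing accepts the connection, the probe times out within its 1s budget, and the same pods flip to "started"/"ready" a few seconds later. The wording is consistent with grpc-health-probe output, though the log itself does not name the probe binary. Below is a minimal client-side sketch of that kind of check, assuming the standard grpc-go health-checking API; it is illustrative only and not taken from this log.

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	healthpb "google.golang.org/grpc/health/grpc_health_v1"
)

func main() {
	// 1s budget, mirroring the probe timeout seen in the log.
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()

	// Port 50051, as probed above; plaintext gRPC inside the pod
	// (hypothetical target address, for illustration).
	conn, err := grpc.NewClient("localhost:50051",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		fmt.Println("connect error:", err)
		return
	}
	defer conn.Close()

	// An empty Service name asks about the server's overall health.
	resp, err := healthpb.NewHealthClient(conn).Check(ctx, &healthpb.HealthCheckRequest{})
	if err != nil {
		// Before the registry is listening this fails, the analogue of
		// the "failed to connect service within 1s" lines above.
		fmt.Println("health check failed:", err)
		return
	}
	fmt.Println("status:", resp.GetStatus())
}

[End of editor's note; the journal resumes below.]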
Dec 06 05:32:10 crc kubenswrapper[4958]: I1206 05:32:10.040242 4958 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Dec 06 05:32:10 crc kubenswrapper[4958]: E1206 05:32:10.040645 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="305608d0-0aae-4507-8058-dc7837555b6c" containerName="registry-server"
Dec 06 05:32:10 crc kubenswrapper[4958]: I1206 05:32:10.040669 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="305608d0-0aae-4507-8058-dc7837555b6c" containerName="registry-server"
Dec 06 05:32:10 crc kubenswrapper[4958]: E1206 05:32:10.040691 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="215a3047-0656-45e9-aa87-9f4987ed3b83" containerName="extract-content"
Dec 06 05:32:10 crc kubenswrapper[4958]: I1206 05:32:10.040704 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="215a3047-0656-45e9-aa87-9f4987ed3b83" containerName="extract-content"
Dec 06 05:32:10 crc kubenswrapper[4958]: E1206 05:32:10.040726 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d68ddb6d-ffb8-42dc-b2d4-a0ec5a42db59" containerName="registry-server"
Dec 06 05:32:10 crc kubenswrapper[4958]: I1206 05:32:10.040738 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="d68ddb6d-ffb8-42dc-b2d4-a0ec5a42db59" containerName="registry-server"
Dec 06 05:32:10 crc kubenswrapper[4958]: E1206 05:32:10.040756 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="305608d0-0aae-4507-8058-dc7837555b6c" containerName="extract-utilities"
Dec 06 05:32:10 crc kubenswrapper[4958]: I1206 05:32:10.040769 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="305608d0-0aae-4507-8058-dc7837555b6c" containerName="extract-utilities"
Dec 06 05:32:10 crc kubenswrapper[4958]: E1206 05:32:10.040796 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="305608d0-0aae-4507-8058-dc7837555b6c" containerName="extract-content"
Dec 06 05:32:10 crc kubenswrapper[4958]: I1206 05:32:10.040808 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="305608d0-0aae-4507-8058-dc7837555b6c" containerName="extract-content"
Dec 06 05:32:10 crc kubenswrapper[4958]: E1206 05:32:10.040824 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d68ddb6d-ffb8-42dc-b2d4-a0ec5a42db59" containerName="extract-content"
Dec 06 05:32:10 crc kubenswrapper[4958]: I1206 05:32:10.040836 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="d68ddb6d-ffb8-42dc-b2d4-a0ec5a42db59" containerName="extract-content"
Dec 06 05:32:10 crc kubenswrapper[4958]: E1206 05:32:10.040865 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9615d4ca-94c6-43fe-a7e2-c5a34620275f" containerName="pruner"
Dec 06 05:32:10 crc kubenswrapper[4958]: I1206 05:32:10.040877 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="9615d4ca-94c6-43fe-a7e2-c5a34620275f" containerName="pruner"
Dec 06 05:32:10 crc kubenswrapper[4958]: E1206 05:32:10.040896 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="215a3047-0656-45e9-aa87-9f4987ed3b83" containerName="registry-server"
Dec 06 05:32:10 crc kubenswrapper[4958]: I1206 05:32:10.040933 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="215a3047-0656-45e9-aa87-9f4987ed3b83" containerName="registry-server"
Dec 06 05:32:10 crc kubenswrapper[4958]: E1206 05:32:10.040958 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="215a3047-0656-45e9-aa87-9f4987ed3b83" containerName="extract-utilities"
Dec 06 05:32:10 crc kubenswrapper[4958]: I1206 05:32:10.040974 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="215a3047-0656-45e9-aa87-9f4987ed3b83" containerName="extract-utilities"
Dec 06 05:32:10 crc kubenswrapper[4958]: E1206 05:32:10.040998 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d68ddb6d-ffb8-42dc-b2d4-a0ec5a42db59" containerName="extract-utilities"
Dec 06 05:32:10 crc kubenswrapper[4958]: I1206 05:32:10.041012 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="d68ddb6d-ffb8-42dc-b2d4-a0ec5a42db59" containerName="extract-utilities"
Dec 06 05:32:10 crc kubenswrapper[4958]: I1206 05:32:10.041240 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="9615d4ca-94c6-43fe-a7e2-c5a34620275f" containerName="pruner"
Dec 06 05:32:10 crc kubenswrapper[4958]: I1206 05:32:10.041265 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="215a3047-0656-45e9-aa87-9f4987ed3b83" containerName="registry-server"
Dec 06 05:32:10 crc kubenswrapper[4958]: I1206 05:32:10.041289 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="305608d0-0aae-4507-8058-dc7837555b6c" containerName="registry-server"
Dec 06 05:32:10 crc kubenswrapper[4958]: I1206 05:32:10.041319 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="d68ddb6d-ffb8-42dc-b2d4-a0ec5a42db59" containerName="registry-server"
Dec 06 05:32:10 crc kubenswrapper[4958]: I1206 05:32:10.042167 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 06 05:32:10 crc kubenswrapper[4958]: I1206 05:32:10.044188 4958 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Dec 06 05:32:10 crc kubenswrapper[4958]: I1206 05:32:10.044557 4958 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Dec 06 05:32:10 crc kubenswrapper[4958]: I1206 05:32:10.044677 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://e1ac8c174d2f2cc2b18fa67b1969e4ec3130a7a3ea2e1de302b6607334c8295a" gracePeriod=15
Dec 06 05:32:10 crc kubenswrapper[4958]: I1206 05:32:10.044786 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://9b2ffa013378c2ca81671e521b37d30af422fd6a418d17e7c2ca14b2da5557d9" gracePeriod=15
Dec 06 05:32:10 crc kubenswrapper[4958]: I1206 05:32:10.044833 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://a49f9fd54866385a2dd4c0218a2b5a828817807c2688301ea29004ba0578d556" gracePeriod=15
Dec 06 05:32:10 crc kubenswrapper[4958]: I1206 05:32:10.044868 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://799383b7dc23318bd8bdb994e83afadc90864baf5e694f5a7ef2208c6771660c" gracePeriod=15
Dec 06 05:32:10 crc kubenswrapper[4958]: I1206 05:32:10.044824 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://11c9829c79fbd1ef57f374e8bf829162c2ae48074f4d02cc609d5f9b8791c61a" gracePeriod=15
Dec 06 05:32:10 crc kubenswrapper[4958]: E1206 05:32:10.045202 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Dec 06 05:32:10 crc kubenswrapper[4958]: I1206 05:32:10.045235 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Dec 06 05:32:10 crc kubenswrapper[4958]: E1206 05:32:10.045258 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup"
Dec 06 05:32:10 crc kubenswrapper[4958]: I1206 05:32:10.045272 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup"
Dec 06 05:32:10 crc kubenswrapper[4958]: E1206 05:32:10.045285 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Dec 06 05:32:10 crc kubenswrapper[4958]: I1206 05:32:10.045299 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Dec 06 05:32:10 crc kubenswrapper[4958]: E1206 05:32:10.045330 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Dec 06 05:32:10 crc kubenswrapper[4958]: I1206 05:32:10.045344 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Dec 06 05:32:10 crc kubenswrapper[4958]: E1206 05:32:10.045360 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Dec 06 05:32:10 crc kubenswrapper[4958]: I1206 05:32:10.045373 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Dec 06 05:32:10 crc kubenswrapper[4958]: E1206 05:32:10.045388 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Dec 06 05:32:10 crc kubenswrapper[4958]: I1206 05:32:10.045401 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Dec 06 05:32:10 crc kubenswrapper[4958]: I1206 05:32:10.045677 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Dec 06 05:32:10 crc kubenswrapper[4958]: I1206 05:32:10.045703 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Dec 06 05:32:10 crc kubenswrapper[4958]: I1206 05:32:10.045718 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Dec 06 05:32:10 crc kubenswrapper[4958]: I1206 05:32:10.045734 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Dec 06 05:32:10 crc kubenswrapper[4958]: I1206 05:32:10.045755 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Dec 06 05:32:10 crc kubenswrapper[4958]: I1206 05:32:10.116196 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Dec 06 05:32:10 crc kubenswrapper[4958]: I1206 05:32:10.199045 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 06 05:32:10 crc kubenswrapper[4958]: I1206 05:32:10.199099 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 06 05:32:10 crc kubenswrapper[4958]: I1206 05:32:10.199138 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 06 05:32:10 crc kubenswrapper[4958]: I1206 05:32:10.199158 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 06 05:32:10 crc kubenswrapper[4958]: I1206 05:32:10.199199 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 06 05:32:10 crc kubenswrapper[4958]: I1206 05:32:10.199219 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 06 05:32:10 crc kubenswrapper[4958]: I1206 05:32:10.199235 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 06 05:32:10 crc kubenswrapper[4958]: I1206 05:32:10.199285 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 06 05:32:10 crc kubenswrapper[4958]: I1206 05:32:10.300364 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 06 05:32:10 crc kubenswrapper[4958]: I1206 05:32:10.300426 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 06 05:32:10 crc kubenswrapper[4958]: I1206 05:32:10.300452 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 06 05:32:10 crc kubenswrapper[4958]: I1206 05:32:10.300486 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 06 05:32:10 crc kubenswrapper[4958]: I1206 05:32:10.300490 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 06 05:32:10 crc kubenswrapper[4958]: I1206 05:32:10.300505 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 06 05:32:10 crc kubenswrapper[4958]: I1206 05:32:10.300527 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 06 05:32:10 crc kubenswrapper[4958]: I1206 05:32:10.300532 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 06 05:32:10 crc kubenswrapper[4958]: I1206 05:32:10.300552 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 06 05:32:10 crc kubenswrapper[4958]: I1206 05:32:10.300564 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 06 05:32:10 crc kubenswrapper[4958]: I1206 05:32:10.300577 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 06 05:32:10 crc kubenswrapper[4958]: I1206 05:32:10.300599 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 06 05:32:10 crc kubenswrapper[4958]: I1206 05:32:10.300606 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 06 05:32:10 crc kubenswrapper[4958]: I1206 05:32:10.300622 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 06 05:32:10 crc kubenswrapper[4958]: I1206 05:32:10.300592 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 06 05:32:10 crc kubenswrapper[4958]: I1206 05:32:10.300701 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 06 05:32:10 crc kubenswrapper[4958]: I1206 05:32:10.398064 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 06 05:32:10 crc kubenswrapper[4958]: W1206 05:32:10.433539 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-b5e9c9fa3740610c88f86351f46754d8f5766f609b7f32b07e683167b5e7e46a WatchSource:0}: Error finding container b5e9c9fa3740610c88f86351f46754d8f5766f609b7f32b07e683167b5e7e46a: Status 404 returned error can't find the container with id b5e9c9fa3740610c88f86351f46754d8f5766f609b7f32b07e683167b5e7e46a
Dec 06 05:32:10 crc kubenswrapper[4958]: I1206 05:32:10.688277 4958 generic.go:334] "Generic (PLEG): container finished" podID="c13528c0-da5d-4d55-9155-2c29c33edfc4" containerID="fd2256c5c47e4b4d5c0d27e0a687c531d3d4e2162c3439a57acb90784599b441" exitCode=0
Dec 06 05:32:10 crc kubenswrapper[4958]: I1206 05:32:10.688342 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" event={"ID":"c13528c0-da5d-4d55-9155-2c29c33edfc4","Type":"ContainerDied","Data":"fd2256c5c47e4b4d5c0d27e0a687c531d3d4e2162c3439a57acb90784599b441"}
Dec 06 05:32:10 crc kubenswrapper[4958]: I1206 05:32:10.691673 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"b5e9c9fa3740610c88f86351f46754d8f5766f609b7f32b07e683167b5e7e46a"}
Dec 06 05:32:10 crc kubenswrapper[4958]: I1206 05:32:10.695100 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Dec 06 05:32:10 crc kubenswrapper[4958]: I1206 05:32:10.696172 4958 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="11c9829c79fbd1ef57f374e8bf829162c2ae48074f4d02cc609d5f9b8791c61a" exitCode=0
Dec 06 05:32:10 crc kubenswrapper[4958]: I1206 05:32:10.696189 4958 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="9b2ffa013378c2ca81671e521b37d30af422fd6a418d17e7c2ca14b2da5557d9" exitCode=0
Dec 06 05:32:10 crc kubenswrapper[4958]: I1206 05:32:10.696197 4958 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a49f9fd54866385a2dd4c0218a2b5a828817807c2688301ea29004ba0578d556" exitCode=0
Dec 06 05:32:10 crc kubenswrapper[4958]: I1206 05:32:10.696205 4958 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="799383b7dc23318bd8bdb994e83afadc90864baf5e694f5a7ef2208c6771660c" exitCode=2
Dec 06 05:32:10 crc kubenswrapper[4958]: I1206 05:32:10.697895 4958 generic.go:334] "Generic (PLEG): container finished" podID="5a3d6dd5-04c5-427e-9655-052b17f8c9d2" containerID="39717396d216743a4344a2b1066b0b2514201fe836f1d64af81dd77e54459c4c" exitCode=0
Dec 06 05:32:10 crc kubenswrapper[4958]: I1206 05:32:10.698166 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"5a3d6dd5-04c5-427e-9655-052b17f8c9d2","Type":"ContainerDied","Data":"39717396d216743a4344a2b1066b0b2514201fe836f1d64af81dd77e54459c4c"}
Dec 06 05:32:10 crc kubenswrapper[4958]: I1206 05:32:10.699346 4958 status_manager.go:851] "Failed to get status for pod" podUID="5a3d6dd5-04c5-427e-9655-052b17f8c9d2" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.20:6443: connect: connection refused"
Dec 06 05:32:10 crc kubenswrapper[4958]: I1206 05:32:10.700106 4958 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.20:6443: connect: connection refused"
Dec 06 05:32:10 crc kubenswrapper[4958]: I1206 05:32:10.700702 4958 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.20:6443: connect: connection refused"
Dec 06 05:32:10 crc kubenswrapper[4958]: E1206 05:32:10.787462 4958 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.20:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187e895bab6af795 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Created,Message:Created container startup-monitor,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-06 05:32:10.786756501 +0000 UTC m=+241.320527284,LastTimestamp:2025-12-06 05:32:10.786756501 +0000 UTC m=+241.320527284,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 06 05:32:10 crc kubenswrapper[4958]: I1206 05:32:10.923254 4958 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body=
Dec 06 05:32:10 crc kubenswrapper[4958]: I1206 05:32:10.923333 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused"
Dec 06 05:32:11 crc kubenswrapper[4958]: I1206 05:32:11.709276 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"c422b9bac3611b8f30ba0208a64095d9961061db5d7922b718a23b4bbc725f67"}
Dec 06 05:32:11 crc kubenswrapper[4958]: I1206 05:32:11.711689 4958 status_manager.go:851] "Failed to get status for pod" podUID="5a3d6dd5-04c5-427e-9655-052b17f8c9d2" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.20:6443: connect: connection refused"
Dec 06 05:32:11 crc kubenswrapper[4958]: I1206 05:32:11.712206 4958 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.20:6443: connect: connection refused"
Dec 06 05:32:11 crc kubenswrapper[4958]: I1206 05:32:11.712814 4958 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.20:6443: connect: connection refused"
Dec 06 05:32:11 crc kubenswrapper[4958]: I1206 05:32:11.713531 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" event={"ID":"c13528c0-da5d-4d55-9155-2c29c33edfc4","Type":"ContainerStarted","Data":"c13fab5817b994a3289049eb9239d18e450fee0a186ac2014829ccee71c35589"}
Dec 06 05:32:11 crc kubenswrapper[4958]: I1206 05:32:11.714171 4958 status_manager.go:851] "Failed to get status for pod" podUID="5a3d6dd5-04c5-427e-9655-052b17f8c9d2" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.20:6443: connect: connection refused"
Dec 06 05:32:11 crc kubenswrapper[4958]: I1206 05:32:11.714826 4958 status_manager.go:851] "Failed to get status for pod" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-5ktnh\": dial tcp 38.102.83.20:6443: connect: connection refused"
Dec 06 05:32:11 crc kubenswrapper[4958]: I1206 05:32:11.715423 4958 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.20:6443: connect: connection refused"
Dec 06 05:32:11 crc kubenswrapper[4958]: I1206 05:32:11.716007 4958 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.20:6443: connect: connection refused"
Dec 06 05:32:12 crc kubenswrapper[4958]: I1206 05:32:12.448952 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Dec 06 05:32:12 crc kubenswrapper[4958]: I1206 05:32:12.450013 4958 status_manager.go:851] "Failed to get status for pod" podUID="5a3d6dd5-04c5-427e-9655-052b17f8c9d2" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.20:6443: connect: connection refused"
Dec 06 05:32:12 crc kubenswrapper[4958]: I1206 05:32:12.450502 4958 status_manager.go:851] "Failed to get status for pod" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-5ktnh\": dial tcp 38.102.83.20:6443: connect: connection refused"
Dec 06 05:32:12 crc kubenswrapper[4958]: I1206 05:32:12.450753 4958 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.20:6443: connect: connection refused"
Dec 06 05:32:12 crc kubenswrapper[4958]: I1206 05:32:12.648114 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5a3d6dd5-04c5-427e-9655-052b17f8c9d2-var-lock\") pod \"5a3d6dd5-04c5-427e-9655-052b17f8c9d2\" (UID: \"5a3d6dd5-04c5-427e-9655-052b17f8c9d2\") "
Dec 06 05:32:12 crc kubenswrapper[4958]: I1206 05:32:12.648577 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5a3d6dd5-04c5-427e-9655-052b17f8c9d2-kubelet-dir\") pod \"5a3d6dd5-04c5-427e-9655-052b17f8c9d2\" (UID: \"5a3d6dd5-04c5-427e-9655-052b17f8c9d2\") "
Dec 06 05:32:12 crc kubenswrapper[4958]: I1206 05:32:12.648640 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5a3d6dd5-04c5-427e-9655-052b17f8c9d2-kube-api-access\") pod \"5a3d6dd5-04c5-427e-9655-052b17f8c9d2\" (UID: \"5a3d6dd5-04c5-427e-9655-052b17f8c9d2\") "
Dec 06 05:32:12 crc kubenswrapper[4958]: I1206 05:32:12.650986 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5a3d6dd5-04c5-427e-9655-052b17f8c9d2-var-lock" (OuterVolumeSpecName: "var-lock") pod "5a3d6dd5-04c5-427e-9655-052b17f8c9d2" (UID: "5a3d6dd5-04c5-427e-9655-052b17f8c9d2"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 06 05:32:12 crc kubenswrapper[4958]: I1206 05:32:12.651107 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5a3d6dd5-04c5-427e-9655-052b17f8c9d2-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "5a3d6dd5-04c5-427e-9655-052b17f8c9d2" (UID: "5a3d6dd5-04c5-427e-9655-052b17f8c9d2"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 06 05:32:12 crc kubenswrapper[4958]: I1206 05:32:12.659895 4958 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/5a3d6dd5-04c5-427e-9655-052b17f8c9d2-var-lock\") on node \"crc\" DevicePath \"\""
Dec 06 05:32:12 crc kubenswrapper[4958]: I1206 05:32:12.659944 4958 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5a3d6dd5-04c5-427e-9655-052b17f8c9d2-kubelet-dir\") on node \"crc\" DevicePath \"\""
Dec 06 05:32:12 crc kubenswrapper[4958]: I1206 05:32:12.686308 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a3d6dd5-04c5-427e-9655-052b17f8c9d2-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "5a3d6dd5-04c5-427e-9655-052b17f8c9d2" (UID: "5a3d6dd5-04c5-427e-9655-052b17f8c9d2"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 05:32:12 crc kubenswrapper[4958]: I1206 05:32:12.720853 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"5a3d6dd5-04c5-427e-9655-052b17f8c9d2","Type":"ContainerDied","Data":"4565d25d75df4d4bbc65389688a62bbeeca5f128a13b4f784063e516a5af9bfa"}
Dec 06 05:32:12 crc kubenswrapper[4958]: I1206 05:32:12.720920 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4565d25d75df4d4bbc65389688a62bbeeca5f128a13b4f784063e516a5af9bfa"
Dec 06 05:32:12 crc kubenswrapper[4958]: I1206 05:32:12.721157 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Dec 06 05:32:12 crc kubenswrapper[4958]: I1206 05:32:12.740004 4958 status_manager.go:851] "Failed to get status for pod" podUID="5a3d6dd5-04c5-427e-9655-052b17f8c9d2" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.20:6443: connect: connection refused"
Dec 06 05:32:12 crc kubenswrapper[4958]: I1206 05:32:12.740385 4958 status_manager.go:851] "Failed to get status for pod" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-5ktnh\": dial tcp 38.102.83.20:6443: connect: connection refused"
Dec 06 05:32:12 crc kubenswrapper[4958]: I1206 05:32:12.740771 4958 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.20:6443: connect: connection refused"
Dec 06 05:32:12 crc kubenswrapper[4958]: I1206 05:32:12.761545 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5a3d6dd5-04c5-427e-9655-052b17f8c9d2-kube-api-access\") on node \"crc\" DevicePath \"\""
Dec 06 05:32:12 crc kubenswrapper[4958]: I1206 05:32:12.950872 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Dec 06 05:32:12 crc kubenswrapper[4958]: I1206 05:32:12.952166 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 06 05:32:12 crc kubenswrapper[4958]: I1206 05:32:12.953220 4958 status_manager.go:851] "Failed to get status for pod" podUID="5a3d6dd5-04c5-427e-9655-052b17f8c9d2" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.20:6443: connect: connection refused"
Dec 06 05:32:12 crc kubenswrapper[4958]: I1206 05:32:12.953863 4958 status_manager.go:851] "Failed to get status for pod" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-5ktnh\": dial tcp 38.102.83.20:6443: connect: connection refused"
Dec 06 05:32:12 crc kubenswrapper[4958]: I1206 05:32:12.954434 4958 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.20:6443: connect: connection refused"
Dec 06 05:32:12 crc kubenswrapper[4958]: I1206 05:32:12.955112 4958 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.20:6443: connect: connection refused"
Dec 06 05:32:12 crc kubenswrapper[4958]: I1206 05:32:12.968155 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Dec 06 05:32:12 crc kubenswrapper[4958]: I1206 05:32:12.968228 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Dec 06 05:32:12 crc kubenswrapper[4958]: I1206 05:32:12.968312 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 06 05:32:12 crc kubenswrapper[4958]: I1206 05:32:12.968386 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Dec 06 05:32:12 crc kubenswrapper[4958]: I1206 05:32:12.968443 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 06 05:32:12 crc kubenswrapper[4958]: I1206 05:32:12.968505 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 06 05:32:12 crc kubenswrapper[4958]: I1206 05:32:12.968830 4958 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\""
Dec 06 05:32:12 crc kubenswrapper[4958]: I1206 05:32:12.968866 4958 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\""
Dec 06 05:32:12 crc kubenswrapper[4958]: I1206 05:32:12.968891 4958 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\""
Dec 06 05:32:13 crc kubenswrapper[4958]: I1206 05:32:13.738135 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Dec 06 05:32:13 crc kubenswrapper[4958]: I1206 05:32:13.739973 4958 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e1ac8c174d2f2cc2b18fa67b1969e4ec3130a7a3ea2e1de302b6607334c8295a" exitCode=0
Dec 06 05:32:13 crc kubenswrapper[4958]: I1206 05:32:13.740032 4958 scope.go:117] "RemoveContainer" containerID="11c9829c79fbd1ef57f374e8bf829162c2ae48074f4d02cc609d5f9b8791c61a"
Dec 06 05:32:13 crc kubenswrapper[4958]: I1206 05:32:13.740200 4958 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 05:32:13 crc kubenswrapper[4958]: I1206 05:32:13.769526 4958 scope.go:117] "RemoveContainer" containerID="9b2ffa013378c2ca81671e521b37d30af422fd6a418d17e7c2ca14b2da5557d9" Dec 06 05:32:13 crc kubenswrapper[4958]: I1206 05:32:13.773683 4958 status_manager.go:851] "Failed to get status for pod" podUID="5a3d6dd5-04c5-427e-9655-052b17f8c9d2" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.20:6443: connect: connection refused" Dec 06 05:32:13 crc kubenswrapper[4958]: I1206 05:32:13.773917 4958 status_manager.go:851] "Failed to get status for pod" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-5ktnh\": dial tcp 38.102.83.20:6443: connect: connection refused" Dec 06 05:32:13 crc kubenswrapper[4958]: I1206 05:32:13.774146 4958 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.20:6443: connect: connection refused" Dec 06 05:32:13 crc kubenswrapper[4958]: I1206 05:32:13.774499 4958 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.20:6443: connect: connection refused" Dec 06 05:32:13 crc kubenswrapper[4958]: I1206 05:32:13.775610 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Dec 06 05:32:13 crc kubenswrapper[4958]: I1206 05:32:13.797030 4958 scope.go:117] "RemoveContainer" containerID="a49f9fd54866385a2dd4c0218a2b5a828817807c2688301ea29004ba0578d556" Dec 06 05:32:13 crc kubenswrapper[4958]: I1206 05:32:13.816152 4958 scope.go:117] "RemoveContainer" containerID="799383b7dc23318bd8bdb994e83afadc90864baf5e694f5a7ef2208c6771660c" Dec 06 05:32:13 crc kubenswrapper[4958]: I1206 05:32:13.834748 4958 scope.go:117] "RemoveContainer" containerID="e1ac8c174d2f2cc2b18fa67b1969e4ec3130a7a3ea2e1de302b6607334c8295a" Dec 06 05:32:13 crc kubenswrapper[4958]: I1206 05:32:13.855667 4958 scope.go:117] "RemoveContainer" containerID="4001890a8c107a9365e884bae7b1d10f11ea1208ada74b61dcd4cf3db767f3fb" Dec 06 05:32:13 crc kubenswrapper[4958]: I1206 05:32:13.885882 4958 scope.go:117] "RemoveContainer" containerID="11c9829c79fbd1ef57f374e8bf829162c2ae48074f4d02cc609d5f9b8791c61a" Dec 06 05:32:13 crc kubenswrapper[4958]: E1206 05:32:13.886985 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11c9829c79fbd1ef57f374e8bf829162c2ae48074f4d02cc609d5f9b8791c61a\": container with ID starting with 11c9829c79fbd1ef57f374e8bf829162c2ae48074f4d02cc609d5f9b8791c61a not found: ID does not exist" containerID="11c9829c79fbd1ef57f374e8bf829162c2ae48074f4d02cc609d5f9b8791c61a" Dec 06 05:32:13 crc kubenswrapper[4958]: I1206 05:32:13.887059 4958 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11c9829c79fbd1ef57f374e8bf829162c2ae48074f4d02cc609d5f9b8791c61a"} err="failed to get container status \"11c9829c79fbd1ef57f374e8bf829162c2ae48074f4d02cc609d5f9b8791c61a\": rpc error: code = NotFound desc = could not find container \"11c9829c79fbd1ef57f374e8bf829162c2ae48074f4d02cc609d5f9b8791c61a\": container with ID starting with 11c9829c79fbd1ef57f374e8bf829162c2ae48074f4d02cc609d5f9b8791c61a not found: ID does not exist" Dec 06 05:32:13 crc kubenswrapper[4958]: I1206 05:32:13.887102 4958 scope.go:117] "RemoveContainer" containerID="9b2ffa013378c2ca81671e521b37d30af422fd6a418d17e7c2ca14b2da5557d9" Dec 06 05:32:13 crc kubenswrapper[4958]: E1206 05:32:13.887541 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b2ffa013378c2ca81671e521b37d30af422fd6a418d17e7c2ca14b2da5557d9\": container with ID starting with 9b2ffa013378c2ca81671e521b37d30af422fd6a418d17e7c2ca14b2da5557d9 not found: ID does not exist" containerID="9b2ffa013378c2ca81671e521b37d30af422fd6a418d17e7c2ca14b2da5557d9" Dec 06 05:32:13 crc kubenswrapper[4958]: I1206 05:32:13.887587 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b2ffa013378c2ca81671e521b37d30af422fd6a418d17e7c2ca14b2da5557d9"} err="failed to get container status \"9b2ffa013378c2ca81671e521b37d30af422fd6a418d17e7c2ca14b2da5557d9\": rpc error: code = NotFound desc = could not find container \"9b2ffa013378c2ca81671e521b37d30af422fd6a418d17e7c2ca14b2da5557d9\": container with ID starting with 9b2ffa013378c2ca81671e521b37d30af422fd6a418d17e7c2ca14b2da5557d9 not found: ID does not exist" Dec 06 05:32:13 crc kubenswrapper[4958]: I1206 05:32:13.887617 4958 scope.go:117] "RemoveContainer" containerID="a49f9fd54866385a2dd4c0218a2b5a828817807c2688301ea29004ba0578d556" Dec 06 05:32:13 crc kubenswrapper[4958]: E1206 05:32:13.888060 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a49f9fd54866385a2dd4c0218a2b5a828817807c2688301ea29004ba0578d556\": container with ID starting with a49f9fd54866385a2dd4c0218a2b5a828817807c2688301ea29004ba0578d556 not found: ID does not exist" containerID="a49f9fd54866385a2dd4c0218a2b5a828817807c2688301ea29004ba0578d556" Dec 06 05:32:13 crc kubenswrapper[4958]: I1206 05:32:13.888118 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a49f9fd54866385a2dd4c0218a2b5a828817807c2688301ea29004ba0578d556"} err="failed to get container status \"a49f9fd54866385a2dd4c0218a2b5a828817807c2688301ea29004ba0578d556\": rpc error: code = NotFound desc = could not find container \"a49f9fd54866385a2dd4c0218a2b5a828817807c2688301ea29004ba0578d556\": container with ID starting with a49f9fd54866385a2dd4c0218a2b5a828817807c2688301ea29004ba0578d556 not found: ID does not exist" Dec 06 05:32:13 crc kubenswrapper[4958]: I1206 05:32:13.888157 4958 scope.go:117] "RemoveContainer" containerID="799383b7dc23318bd8bdb994e83afadc90864baf5e694f5a7ef2208c6771660c" Dec 06 05:32:13 crc kubenswrapper[4958]: E1206 05:32:13.888736 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"799383b7dc23318bd8bdb994e83afadc90864baf5e694f5a7ef2208c6771660c\": container with ID starting with 799383b7dc23318bd8bdb994e83afadc90864baf5e694f5a7ef2208c6771660c not found: ID does 
not exist" containerID="799383b7dc23318bd8bdb994e83afadc90864baf5e694f5a7ef2208c6771660c" Dec 06 05:32:13 crc kubenswrapper[4958]: I1206 05:32:13.888787 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"799383b7dc23318bd8bdb994e83afadc90864baf5e694f5a7ef2208c6771660c"} err="failed to get container status \"799383b7dc23318bd8bdb994e83afadc90864baf5e694f5a7ef2208c6771660c\": rpc error: code = NotFound desc = could not find container \"799383b7dc23318bd8bdb994e83afadc90864baf5e694f5a7ef2208c6771660c\": container with ID starting with 799383b7dc23318bd8bdb994e83afadc90864baf5e694f5a7ef2208c6771660c not found: ID does not exist" Dec 06 05:32:13 crc kubenswrapper[4958]: I1206 05:32:13.888819 4958 scope.go:117] "RemoveContainer" containerID="e1ac8c174d2f2cc2b18fa67b1969e4ec3130a7a3ea2e1de302b6607334c8295a" Dec 06 05:32:13 crc kubenswrapper[4958]: E1206 05:32:13.890668 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1ac8c174d2f2cc2b18fa67b1969e4ec3130a7a3ea2e1de302b6607334c8295a\": container with ID starting with e1ac8c174d2f2cc2b18fa67b1969e4ec3130a7a3ea2e1de302b6607334c8295a not found: ID does not exist" containerID="e1ac8c174d2f2cc2b18fa67b1969e4ec3130a7a3ea2e1de302b6607334c8295a" Dec 06 05:32:13 crc kubenswrapper[4958]: I1206 05:32:13.890723 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1ac8c174d2f2cc2b18fa67b1969e4ec3130a7a3ea2e1de302b6607334c8295a"} err="failed to get container status \"e1ac8c174d2f2cc2b18fa67b1969e4ec3130a7a3ea2e1de302b6607334c8295a\": rpc error: code = NotFound desc = could not find container \"e1ac8c174d2f2cc2b18fa67b1969e4ec3130a7a3ea2e1de302b6607334c8295a\": container with ID starting with e1ac8c174d2f2cc2b18fa67b1969e4ec3130a7a3ea2e1de302b6607334c8295a not found: ID does not exist" Dec 06 05:32:13 crc kubenswrapper[4958]: I1206 05:32:13.890764 4958 scope.go:117] "RemoveContainer" containerID="4001890a8c107a9365e884bae7b1d10f11ea1208ada74b61dcd4cf3db767f3fb" Dec 06 05:32:13 crc kubenswrapper[4958]: E1206 05:32:13.891282 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4001890a8c107a9365e884bae7b1d10f11ea1208ada74b61dcd4cf3db767f3fb\": container with ID starting with 4001890a8c107a9365e884bae7b1d10f11ea1208ada74b61dcd4cf3db767f3fb not found: ID does not exist" containerID="4001890a8c107a9365e884bae7b1d10f11ea1208ada74b61dcd4cf3db767f3fb" Dec 06 05:32:13 crc kubenswrapper[4958]: I1206 05:32:13.891327 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4001890a8c107a9365e884bae7b1d10f11ea1208ada74b61dcd4cf3db767f3fb"} err="failed to get container status \"4001890a8c107a9365e884bae7b1d10f11ea1208ada74b61dcd4cf3db767f3fb\": rpc error: code = NotFound desc = could not find container \"4001890a8c107a9365e884bae7b1d10f11ea1208ada74b61dcd4cf3db767f3fb\": container with ID starting with 4001890a8c107a9365e884bae7b1d10f11ea1208ada74b61dcd4cf3db767f3fb not found: ID does not exist" Dec 06 05:32:14 crc kubenswrapper[4958]: E1206 05:32:14.805439 4958 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.20:6443: connect: connection refused" Dec 06 05:32:14 crc kubenswrapper[4958]: E1206 05:32:14.806178 4958 controller.go:195] "Failed to 
update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.20:6443: connect: connection refused" Dec 06 05:32:14 crc kubenswrapper[4958]: E1206 05:32:14.806876 4958 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.20:6443: connect: connection refused" Dec 06 05:32:14 crc kubenswrapper[4958]: E1206 05:32:14.807408 4958 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.20:6443: connect: connection refused" Dec 06 05:32:14 crc kubenswrapper[4958]: E1206 05:32:14.807990 4958 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.20:6443: connect: connection refused" Dec 06 05:32:14 crc kubenswrapper[4958]: I1206 05:32:14.808058 4958 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Dec 06 05:32:14 crc kubenswrapper[4958]: E1206 05:32:14.808662 4958 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.20:6443: connect: connection refused" interval="200ms" Dec 06 05:32:15 crc kubenswrapper[4958]: E1206 05:32:15.010209 4958 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.20:6443: connect: connection refused" interval="400ms" Dec 06 05:32:15 crc kubenswrapper[4958]: E1206 05:32:15.410604 4958 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.20:6443: connect: connection refused" interval="800ms" Dec 06 05:32:16 crc kubenswrapper[4958]: E1206 05:32:16.213425 4958 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.20:6443: connect: connection refused" interval="1.6s" Dec 06 05:32:16 crc kubenswrapper[4958]: I1206 05:32:16.956445 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-qzfg8" Dec 06 05:32:16 crc kubenswrapper[4958]: I1206 05:32:16.957071 4958 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.20:6443: connect: connection refused" Dec 06 05:32:16 crc kubenswrapper[4958]: I1206 05:32:16.957391 4958 status_manager.go:851] "Failed to get status for pod" podUID="71c8096c-9091-428a-a142-185855892fb9" pod="openshift-marketplace/community-operators-qzfg8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-qzfg8\": dial tcp 
38.102.83.20:6443: connect: connection refused" Dec 06 05:32:16 crc kubenswrapper[4958]: I1206 05:32:16.957823 4958 status_manager.go:851] "Failed to get status for pod" podUID="5a3d6dd5-04c5-427e-9655-052b17f8c9d2" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.20:6443: connect: connection refused" Dec 06 05:32:16 crc kubenswrapper[4958]: I1206 05:32:16.958082 4958 status_manager.go:851] "Failed to get status for pod" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-5ktnh\": dial tcp 38.102.83.20:6443: connect: connection refused" Dec 06 05:32:16 crc kubenswrapper[4958]: I1206 05:32:16.998399 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qzfg8" Dec 06 05:32:16 crc kubenswrapper[4958]: I1206 05:32:16.998921 4958 status_manager.go:851] "Failed to get status for pod" podUID="71c8096c-9091-428a-a142-185855892fb9" pod="openshift-marketplace/community-operators-qzfg8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-qzfg8\": dial tcp 38.102.83.20:6443: connect: connection refused" Dec 06 05:32:16 crc kubenswrapper[4958]: I1206 05:32:16.999211 4958 status_manager.go:851] "Failed to get status for pod" podUID="5a3d6dd5-04c5-427e-9655-052b17f8c9d2" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.20:6443: connect: connection refused" Dec 06 05:32:16 crc kubenswrapper[4958]: I1206 05:32:16.999592 4958 status_manager.go:851] "Failed to get status for pod" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-5ktnh\": dial tcp 38.102.83.20:6443: connect: connection refused" Dec 06 05:32:17 crc kubenswrapper[4958]: I1206 05:32:17.000065 4958 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.20:6443: connect: connection refused" Dec 06 05:32:17 crc kubenswrapper[4958]: E1206 05:32:17.209948 4958 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.20:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.187e895bab6af795 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Created,Message:Created container startup-monitor,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-06 05:32:10.786756501 +0000 UTC m=+241.320527284,LastTimestamp:2025-12-06 
05:32:10.786756501 +0000 UTC m=+241.320527284,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 06 05:32:17 crc kubenswrapper[4958]: E1206 05:32:17.814270 4958 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.20:6443: connect: connection refused" interval="3.2s" Dec 06 05:32:19 crc kubenswrapper[4958]: I1206 05:32:19.086663 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-t5crs" Dec 06 05:32:19 crc kubenswrapper[4958]: I1206 05:32:19.087456 4958 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.20:6443: connect: connection refused" Dec 06 05:32:19 crc kubenswrapper[4958]: I1206 05:32:19.088150 4958 status_manager.go:851] "Failed to get status for pod" podUID="16ec5793-681c-4935-a298-734c214e23c8" pod="openshift-marketplace/redhat-marketplace-t5crs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-t5crs\": dial tcp 38.102.83.20:6443: connect: connection refused" Dec 06 05:32:19 crc kubenswrapper[4958]: I1206 05:32:19.088833 4958 status_manager.go:851] "Failed to get status for pod" podUID="71c8096c-9091-428a-a142-185855892fb9" pod="openshift-marketplace/community-operators-qzfg8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-qzfg8\": dial tcp 38.102.83.20:6443: connect: connection refused" Dec 06 05:32:19 crc kubenswrapper[4958]: I1206 05:32:19.089657 4958 status_manager.go:851] "Failed to get status for pod" podUID="5a3d6dd5-04c5-427e-9655-052b17f8c9d2" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.20:6443: connect: connection refused" Dec 06 05:32:19 crc kubenswrapper[4958]: I1206 05:32:19.090223 4958 status_manager.go:851] "Failed to get status for pod" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-5ktnh\": dial tcp 38.102.83.20:6443: connect: connection refused" Dec 06 05:32:19 crc kubenswrapper[4958]: I1206 05:32:19.767087 4958 status_manager.go:851] "Failed to get status for pod" podUID="71c8096c-9091-428a-a142-185855892fb9" pod="openshift-marketplace/community-operators-qzfg8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-qzfg8\": dial tcp 38.102.83.20:6443: connect: connection refused" Dec 06 05:32:19 crc kubenswrapper[4958]: I1206 05:32:19.767999 4958 status_manager.go:851] "Failed to get status for pod" podUID="5a3d6dd5-04c5-427e-9655-052b17f8c9d2" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.20:6443: connect: connection refused" Dec 06 05:32:19 crc kubenswrapper[4958]: I1206 
05:32:19.768823 4958 status_manager.go:851] "Failed to get status for pod" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-5ktnh\": dial tcp 38.102.83.20:6443: connect: connection refused" Dec 06 05:32:19 crc kubenswrapper[4958]: I1206 05:32:19.769770 4958 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.20:6443: connect: connection refused" Dec 06 05:32:19 crc kubenswrapper[4958]: I1206 05:32:19.770579 4958 status_manager.go:851] "Failed to get status for pod" podUID="16ec5793-681c-4935-a298-734c214e23c8" pod="openshift-marketplace/redhat-marketplace-t5crs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-t5crs\": dial tcp 38.102.83.20:6443: connect: connection refused" Dec 06 05:32:21 crc kubenswrapper[4958]: E1206 05:32:21.015748 4958 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.20:6443: connect: connection refused" interval="6.4s" Dec 06 05:32:22 crc kubenswrapper[4958]: I1206 05:32:22.384442 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-5hcwh" podUID="04d501b9-571e-4c12-b963-fbe770a27710" containerName="oauth-openshift" containerID="cri-o://524bc7ec7a09d4c7aacf3602a6f6c6b01e384776335e63a3f5cc36d9cc6723c0" gracePeriod=15 Dec 06 05:32:22 crc kubenswrapper[4958]: I1206 05:32:22.780549 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-5hcwh" Dec 06 05:32:22 crc kubenswrapper[4958]: I1206 05:32:22.781261 4958 status_manager.go:851] "Failed to get status for pod" podUID="71c8096c-9091-428a-a142-185855892fb9" pod="openshift-marketplace/community-operators-qzfg8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-qzfg8\": dial tcp 38.102.83.20:6443: connect: connection refused" Dec 06 05:32:22 crc kubenswrapper[4958]: I1206 05:32:22.781791 4958 status_manager.go:851] "Failed to get status for pod" podUID="5a3d6dd5-04c5-427e-9655-052b17f8c9d2" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.20:6443: connect: connection refused" Dec 06 05:32:22 crc kubenswrapper[4958]: I1206 05:32:22.782320 4958 status_manager.go:851] "Failed to get status for pod" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-5ktnh\": dial tcp 38.102.83.20:6443: connect: connection refused" Dec 06 05:32:22 crc kubenswrapper[4958]: I1206 05:32:22.782873 4958 status_manager.go:851] "Failed to get status for pod" podUID="04d501b9-571e-4c12-b963-fbe770a27710" pod="openshift-authentication/oauth-openshift-558db77b4-5hcwh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-5hcwh\": dial tcp 38.102.83.20:6443: connect: connection refused" Dec 06 05:32:22 crc kubenswrapper[4958]: I1206 05:32:22.783228 4958 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.20:6443: connect: connection refused" Dec 06 05:32:22 crc kubenswrapper[4958]: I1206 05:32:22.783489 4958 status_manager.go:851] "Failed to get status for pod" podUID="16ec5793-681c-4935-a298-734c214e23c8" pod="openshift-marketplace/redhat-marketplace-t5crs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-t5crs\": dial tcp 38.102.83.20:6443: connect: connection refused" Dec 06 05:32:22 crc kubenswrapper[4958]: I1206 05:32:22.799886 4958 generic.go:334] "Generic (PLEG): container finished" podID="04d501b9-571e-4c12-b963-fbe770a27710" containerID="524bc7ec7a09d4c7aacf3602a6f6c6b01e384776335e63a3f5cc36d9cc6723c0" exitCode=0 Dec 06 05:32:22 crc kubenswrapper[4958]: I1206 05:32:22.799934 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-5hcwh" event={"ID":"04d501b9-571e-4c12-b963-fbe770a27710","Type":"ContainerDied","Data":"524bc7ec7a09d4c7aacf3602a6f6c6b01e384776335e63a3f5cc36d9cc6723c0"} Dec 06 05:32:22 crc kubenswrapper[4958]: I1206 05:32:22.799967 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-5hcwh" Dec 06 05:32:22 crc kubenswrapper[4958]: I1206 05:32:22.799990 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-5hcwh" event={"ID":"04d501b9-571e-4c12-b963-fbe770a27710","Type":"ContainerDied","Data":"f310a25fc22b53d160fa0bb9da3d9fe5c775679331ede9001f3782c550805af1"} Dec 06 05:32:22 crc kubenswrapper[4958]: I1206 05:32:22.800012 4958 scope.go:117] "RemoveContainer" containerID="524bc7ec7a09d4c7aacf3602a6f6c6b01e384776335e63a3f5cc36d9cc6723c0" Dec 06 05:32:22 crc kubenswrapper[4958]: I1206 05:32:22.800825 4958 status_manager.go:851] "Failed to get status for pod" podUID="71c8096c-9091-428a-a142-185855892fb9" pod="openshift-marketplace/community-operators-qzfg8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-qzfg8\": dial tcp 38.102.83.20:6443: connect: connection refused" Dec 06 05:32:22 crc kubenswrapper[4958]: I1206 05:32:22.801164 4958 status_manager.go:851] "Failed to get status for pod" podUID="5a3d6dd5-04c5-427e-9655-052b17f8c9d2" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.20:6443: connect: connection refused" Dec 06 05:32:22 crc kubenswrapper[4958]: I1206 05:32:22.801513 4958 status_manager.go:851] "Failed to get status for pod" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-5ktnh\": dial tcp 38.102.83.20:6443: connect: connection refused" Dec 06 05:32:22 crc kubenswrapper[4958]: I1206 05:32:22.801792 4958 status_manager.go:851] "Failed to get status for pod" podUID="04d501b9-571e-4c12-b963-fbe770a27710" pod="openshift-authentication/oauth-openshift-558db77b4-5hcwh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-5hcwh\": dial tcp 38.102.83.20:6443: connect: connection refused" Dec 06 05:32:22 crc kubenswrapper[4958]: I1206 05:32:22.802416 4958 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.20:6443: connect: connection refused" Dec 06 05:32:22 crc kubenswrapper[4958]: I1206 05:32:22.802735 4958 status_manager.go:851] "Failed to get status for pod" podUID="16ec5793-681c-4935-a298-734c214e23c8" pod="openshift-marketplace/redhat-marketplace-t5crs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-t5crs\": dial tcp 38.102.83.20:6443: connect: connection refused" Dec 06 05:32:22 crc kubenswrapper[4958]: I1206 05:32:22.824443 4958 scope.go:117] "RemoveContainer" containerID="524bc7ec7a09d4c7aacf3602a6f6c6b01e384776335e63a3f5cc36d9cc6723c0" Dec 06 05:32:22 crc kubenswrapper[4958]: E1206 05:32:22.824970 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"524bc7ec7a09d4c7aacf3602a6f6c6b01e384776335e63a3f5cc36d9cc6723c0\": container with ID starting with 
524bc7ec7a09d4c7aacf3602a6f6c6b01e384776335e63a3f5cc36d9cc6723c0 not found: ID does not exist" containerID="524bc7ec7a09d4c7aacf3602a6f6c6b01e384776335e63a3f5cc36d9cc6723c0" Dec 06 05:32:22 crc kubenswrapper[4958]: I1206 05:32:22.825021 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"524bc7ec7a09d4c7aacf3602a6f6c6b01e384776335e63a3f5cc36d9cc6723c0"} err="failed to get container status \"524bc7ec7a09d4c7aacf3602a6f6c6b01e384776335e63a3f5cc36d9cc6723c0\": rpc error: code = NotFound desc = could not find container \"524bc7ec7a09d4c7aacf3602a6f6c6b01e384776335e63a3f5cc36d9cc6723c0\": container with ID starting with 524bc7ec7a09d4c7aacf3602a6f6c6b01e384776335e63a3f5cc36d9cc6723c0 not found: ID does not exist" Dec 06 05:32:22 crc kubenswrapper[4958]: I1206 05:32:22.904092 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/04d501b9-571e-4c12-b963-fbe770a27710-v4-0-config-user-template-error\") pod \"04d501b9-571e-4c12-b963-fbe770a27710\" (UID: \"04d501b9-571e-4c12-b963-fbe770a27710\") " Dec 06 05:32:22 crc kubenswrapper[4958]: I1206 05:32:22.904184 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/04d501b9-571e-4c12-b963-fbe770a27710-v4-0-config-user-template-login\") pod \"04d501b9-571e-4c12-b963-fbe770a27710\" (UID: \"04d501b9-571e-4c12-b963-fbe770a27710\") " Dec 06 05:32:22 crc kubenswrapper[4958]: I1206 05:32:22.904218 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/04d501b9-571e-4c12-b963-fbe770a27710-v4-0-config-user-idp-0-file-data\") pod \"04d501b9-571e-4c12-b963-fbe770a27710\" (UID: \"04d501b9-571e-4c12-b963-fbe770a27710\") " Dec 06 05:32:22 crc kubenswrapper[4958]: I1206 05:32:22.904276 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-82c4f\" (UniqueName: \"kubernetes.io/projected/04d501b9-571e-4c12-b963-fbe770a27710-kube-api-access-82c4f\") pod \"04d501b9-571e-4c12-b963-fbe770a27710\" (UID: \"04d501b9-571e-4c12-b963-fbe770a27710\") " Dec 06 05:32:22 crc kubenswrapper[4958]: I1206 05:32:22.904389 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/04d501b9-571e-4c12-b963-fbe770a27710-v4-0-config-system-cliconfig\") pod \"04d501b9-571e-4c12-b963-fbe770a27710\" (UID: \"04d501b9-571e-4c12-b963-fbe770a27710\") " Dec 06 05:32:22 crc kubenswrapper[4958]: I1206 05:32:22.904431 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/04d501b9-571e-4c12-b963-fbe770a27710-v4-0-config-system-ocp-branding-template\") pod \"04d501b9-571e-4c12-b963-fbe770a27710\" (UID: \"04d501b9-571e-4c12-b963-fbe770a27710\") " Dec 06 05:32:22 crc kubenswrapper[4958]: I1206 05:32:22.904467 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/04d501b9-571e-4c12-b963-fbe770a27710-v4-0-config-system-session\") pod \"04d501b9-571e-4c12-b963-fbe770a27710\" (UID: \"04d501b9-571e-4c12-b963-fbe770a27710\") " Dec 06 05:32:22 crc kubenswrapper[4958]: I1206 05:32:22.904555 4958 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/04d501b9-571e-4c12-b963-fbe770a27710-v4-0-config-system-service-ca\") pod \"04d501b9-571e-4c12-b963-fbe770a27710\" (UID: \"04d501b9-571e-4c12-b963-fbe770a27710\") " Dec 06 05:32:22 crc kubenswrapper[4958]: I1206 05:32:22.904599 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/04d501b9-571e-4c12-b963-fbe770a27710-v4-0-config-user-template-provider-selection\") pod \"04d501b9-571e-4c12-b963-fbe770a27710\" (UID: \"04d501b9-571e-4c12-b963-fbe770a27710\") " Dec 06 05:32:22 crc kubenswrapper[4958]: I1206 05:32:22.904643 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/04d501b9-571e-4c12-b963-fbe770a27710-v4-0-config-system-serving-cert\") pod \"04d501b9-571e-4c12-b963-fbe770a27710\" (UID: \"04d501b9-571e-4c12-b963-fbe770a27710\") " Dec 06 05:32:22 crc kubenswrapper[4958]: I1206 05:32:22.904697 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/04d501b9-571e-4c12-b963-fbe770a27710-v4-0-config-system-router-certs\") pod \"04d501b9-571e-4c12-b963-fbe770a27710\" (UID: \"04d501b9-571e-4c12-b963-fbe770a27710\") " Dec 06 05:32:22 crc kubenswrapper[4958]: I1206 05:32:22.904732 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/04d501b9-571e-4c12-b963-fbe770a27710-v4-0-config-system-trusted-ca-bundle\") pod \"04d501b9-571e-4c12-b963-fbe770a27710\" (UID: \"04d501b9-571e-4c12-b963-fbe770a27710\") " Dec 06 05:32:22 crc kubenswrapper[4958]: I1206 05:32:22.904771 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/04d501b9-571e-4c12-b963-fbe770a27710-audit-dir\") pod \"04d501b9-571e-4c12-b963-fbe770a27710\" (UID: \"04d501b9-571e-4c12-b963-fbe770a27710\") " Dec 06 05:32:22 crc kubenswrapper[4958]: I1206 05:32:22.904825 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/04d501b9-571e-4c12-b963-fbe770a27710-audit-policies\") pod \"04d501b9-571e-4c12-b963-fbe770a27710\" (UID: \"04d501b9-571e-4c12-b963-fbe770a27710\") " Dec 06 05:32:22 crc kubenswrapper[4958]: I1206 05:32:22.905426 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04d501b9-571e-4c12-b963-fbe770a27710-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "04d501b9-571e-4c12-b963-fbe770a27710" (UID: "04d501b9-571e-4c12-b963-fbe770a27710"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:32:22 crc kubenswrapper[4958]: I1206 05:32:22.905535 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04d501b9-571e-4c12-b963-fbe770a27710-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "04d501b9-571e-4c12-b963-fbe770a27710" (UID: "04d501b9-571e-4c12-b963-fbe770a27710"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 05:32:22 crc kubenswrapper[4958]: I1206 05:32:22.905742 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04d501b9-571e-4c12-b963-fbe770a27710-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "04d501b9-571e-4c12-b963-fbe770a27710" (UID: "04d501b9-571e-4c12-b963-fbe770a27710"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:32:22 crc kubenswrapper[4958]: I1206 05:32:22.906824 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04d501b9-571e-4c12-b963-fbe770a27710-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "04d501b9-571e-4c12-b963-fbe770a27710" (UID: "04d501b9-571e-4c12-b963-fbe770a27710"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:32:22 crc kubenswrapper[4958]: I1206 05:32:22.906963 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04d501b9-571e-4c12-b963-fbe770a27710-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "04d501b9-571e-4c12-b963-fbe770a27710" (UID: "04d501b9-571e-4c12-b963-fbe770a27710"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:32:22 crc kubenswrapper[4958]: I1206 05:32:22.914253 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04d501b9-571e-4c12-b963-fbe770a27710-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "04d501b9-571e-4c12-b963-fbe770a27710" (UID: "04d501b9-571e-4c12-b963-fbe770a27710"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:32:22 crc kubenswrapper[4958]: I1206 05:32:22.915242 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04d501b9-571e-4c12-b963-fbe770a27710-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "04d501b9-571e-4c12-b963-fbe770a27710" (UID: "04d501b9-571e-4c12-b963-fbe770a27710"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:32:22 crc kubenswrapper[4958]: I1206 05:32:22.915843 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04d501b9-571e-4c12-b963-fbe770a27710-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "04d501b9-571e-4c12-b963-fbe770a27710" (UID: "04d501b9-571e-4c12-b963-fbe770a27710"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:32:22 crc kubenswrapper[4958]: I1206 05:32:22.916061 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04d501b9-571e-4c12-b963-fbe770a27710-kube-api-access-82c4f" (OuterVolumeSpecName: "kube-api-access-82c4f") pod "04d501b9-571e-4c12-b963-fbe770a27710" (UID: "04d501b9-571e-4c12-b963-fbe770a27710"). InnerVolumeSpecName "kube-api-access-82c4f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:32:22 crc kubenswrapper[4958]: I1206 05:32:22.916781 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04d501b9-571e-4c12-b963-fbe770a27710-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "04d501b9-571e-4c12-b963-fbe770a27710" (UID: "04d501b9-571e-4c12-b963-fbe770a27710"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:32:22 crc kubenswrapper[4958]: I1206 05:32:22.917105 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04d501b9-571e-4c12-b963-fbe770a27710-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "04d501b9-571e-4c12-b963-fbe770a27710" (UID: "04d501b9-571e-4c12-b963-fbe770a27710"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:32:22 crc kubenswrapper[4958]: I1206 05:32:22.917810 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04d501b9-571e-4c12-b963-fbe770a27710-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "04d501b9-571e-4c12-b963-fbe770a27710" (UID: "04d501b9-571e-4c12-b963-fbe770a27710"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:32:22 crc kubenswrapper[4958]: I1206 05:32:22.920904 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04d501b9-571e-4c12-b963-fbe770a27710-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "04d501b9-571e-4c12-b963-fbe770a27710" (UID: "04d501b9-571e-4c12-b963-fbe770a27710"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:32:22 crc kubenswrapper[4958]: I1206 05:32:22.924950 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04d501b9-571e-4c12-b963-fbe770a27710-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "04d501b9-571e-4c12-b963-fbe770a27710" (UID: "04d501b9-571e-4c12-b963-fbe770a27710"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:32:23 crc kubenswrapper[4958]: I1206 05:32:23.006365 4958 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/04d501b9-571e-4c12-b963-fbe770a27710-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 06 05:32:23 crc kubenswrapper[4958]: I1206 05:32:23.006614 4958 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/04d501b9-571e-4c12-b963-fbe770a27710-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 06 05:32:23 crc kubenswrapper[4958]: I1206 05:32:23.006648 4958 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/04d501b9-571e-4c12-b963-fbe770a27710-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 06 05:32:23 crc kubenswrapper[4958]: I1206 05:32:23.006667 4958 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/04d501b9-571e-4c12-b963-fbe770a27710-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 06 05:32:23 crc kubenswrapper[4958]: I1206 05:32:23.006687 4958 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/04d501b9-571e-4c12-b963-fbe770a27710-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 05:32:23 crc kubenswrapper[4958]: I1206 05:32:23.006705 4958 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/04d501b9-571e-4c12-b963-fbe770a27710-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 06 05:32:23 crc kubenswrapper[4958]: I1206 05:32:23.006722 4958 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/04d501b9-571e-4c12-b963-fbe770a27710-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 06 05:32:23 crc kubenswrapper[4958]: I1206 05:32:23.006743 4958 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/04d501b9-571e-4c12-b963-fbe770a27710-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 06 05:32:23 crc kubenswrapper[4958]: I1206 05:32:23.006760 4958 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/04d501b9-571e-4c12-b963-fbe770a27710-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 06 05:32:23 crc kubenswrapper[4958]: I1206 05:32:23.006777 4958 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/04d501b9-571e-4c12-b963-fbe770a27710-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 06 05:32:23 crc kubenswrapper[4958]: I1206 05:32:23.006796 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-82c4f\" (UniqueName: \"kubernetes.io/projected/04d501b9-571e-4c12-b963-fbe770a27710-kube-api-access-82c4f\") on node \"crc\" DevicePath \"\"" Dec 06 05:32:23 crc kubenswrapper[4958]: I1206 05:32:23.006813 4958 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/04d501b9-571e-4c12-b963-fbe770a27710-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 06 05:32:23 crc kubenswrapper[4958]: I1206 05:32:23.006832 4958 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/04d501b9-571e-4c12-b963-fbe770a27710-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 06 05:32:23 crc kubenswrapper[4958]: I1206 05:32:23.006855 4958 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/04d501b9-571e-4c12-b963-fbe770a27710-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 06 05:32:23 crc kubenswrapper[4958]: I1206 05:32:23.126144 4958 status_manager.go:851] "Failed to get status for pod" podUID="5a3d6dd5-04c5-427e-9655-052b17f8c9d2" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.20:6443: connect: connection refused" Dec 06 05:32:23 crc kubenswrapper[4958]: I1206 05:32:23.126803 4958 status_manager.go:851] "Failed to get status for pod" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-5ktnh\": dial tcp 38.102.83.20:6443: connect: connection refused" Dec 06 05:32:23 crc kubenswrapper[4958]: I1206 05:32:23.127248 4958 status_manager.go:851] "Failed to get status for pod" podUID="04d501b9-571e-4c12-b963-fbe770a27710" pod="openshift-authentication/oauth-openshift-558db77b4-5hcwh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-5hcwh\": dial tcp 38.102.83.20:6443: connect: connection refused" Dec 06 05:32:23 crc kubenswrapper[4958]: I1206 05:32:23.127785 4958 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.20:6443: connect: connection refused" Dec 06 05:32:23 crc kubenswrapper[4958]: I1206 05:32:23.128436 4958 status_manager.go:851] "Failed to get status for pod" podUID="16ec5793-681c-4935-a298-734c214e23c8" pod="openshift-marketplace/redhat-marketplace-t5crs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-t5crs\": dial tcp 38.102.83.20:6443: connect: connection refused" Dec 06 05:32:23 crc kubenswrapper[4958]: I1206 05:32:23.129204 4958 status_manager.go:851] "Failed to get status for pod" podUID="71c8096c-9091-428a-a142-185855892fb9" pod="openshift-marketplace/community-operators-qzfg8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-qzfg8\": dial tcp 38.102.83.20:6443: connect: connection refused" Dec 06 05:32:24 crc kubenswrapper[4958]: I1206 05:32:24.761572 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 05:32:24 crc kubenswrapper[4958]: I1206 05:32:24.762738 4958 status_manager.go:851] "Failed to get status for pod" podUID="71c8096c-9091-428a-a142-185855892fb9" pod="openshift-marketplace/community-operators-qzfg8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-qzfg8\": dial tcp 38.102.83.20:6443: connect: connection refused" Dec 06 05:32:24 crc kubenswrapper[4958]: I1206 05:32:24.763394 4958 status_manager.go:851] "Failed to get status for pod" podUID="5a3d6dd5-04c5-427e-9655-052b17f8c9d2" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.20:6443: connect: connection refused" Dec 06 05:32:24 crc kubenswrapper[4958]: I1206 05:32:24.763980 4958 status_manager.go:851] "Failed to get status for pod" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-5ktnh\": dial tcp 38.102.83.20:6443: connect: connection refused" Dec 06 05:32:24 crc kubenswrapper[4958]: I1206 05:32:24.764503 4958 status_manager.go:851] "Failed to get status for pod" podUID="04d501b9-571e-4c12-b963-fbe770a27710" pod="openshift-authentication/oauth-openshift-558db77b4-5hcwh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-5hcwh\": dial tcp 38.102.83.20:6443: connect: connection refused" Dec 06 05:32:24 crc kubenswrapper[4958]: I1206 05:32:24.765221 4958 status_manager.go:851] "Failed to get status for pod" podUID="16ec5793-681c-4935-a298-734c214e23c8" pod="openshift-marketplace/redhat-marketplace-t5crs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-t5crs\": dial tcp 38.102.83.20:6443: connect: connection refused" Dec 06 05:32:24 crc kubenswrapper[4958]: I1206 05:32:24.765711 4958 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.20:6443: connect: connection refused" Dec 06 05:32:24 crc kubenswrapper[4958]: I1206 05:32:24.784270 4958 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3bbf3a5d-dfb7-4f26-a3a5-5a1198cffc78" Dec 06 05:32:24 crc kubenswrapper[4958]: I1206 05:32:24.784352 4958 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3bbf3a5d-dfb7-4f26-a3a5-5a1198cffc78" Dec 06 05:32:24 crc kubenswrapper[4958]: E1206 05:32:24.785091 4958 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.20:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 05:32:24 crc kubenswrapper[4958]: I1206 05:32:24.785823 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 05:32:24 crc kubenswrapper[4958]: W1206 05:32:24.820329 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-8a8e54c41cecfe64249ab35a64d7829b7153d604e54c5046cf3895631a0aa20c WatchSource:0}: Error finding container 8a8e54c41cecfe64249ab35a64d7829b7153d604e54c5046cf3895631a0aa20c: Status 404 returned error can't find the container with id 8a8e54c41cecfe64249ab35a64d7829b7153d604e54c5046cf3895631a0aa20c Dec 06 05:32:24 crc kubenswrapper[4958]: I1206 05:32:24.835354 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 06 05:32:24 crc kubenswrapper[4958]: I1206 05:32:24.835445 4958 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="dddeac64993fef3f326c165eeeb95bb4b5b9961f07f2b935e592b5aac9a51a32" exitCode=1 Dec 06 05:32:24 crc kubenswrapper[4958]: I1206 05:32:24.835561 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"dddeac64993fef3f326c165eeeb95bb4b5b9961f07f2b935e592b5aac9a51a32"} Dec 06 05:32:24 crc kubenswrapper[4958]: I1206 05:32:24.836768 4958 scope.go:117] "RemoveContainer" containerID="dddeac64993fef3f326c165eeeb95bb4b5b9961f07f2b935e592b5aac9a51a32" Dec 06 05:32:24 crc kubenswrapper[4958]: I1206 05:32:24.836950 4958 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.20:6443: connect: connection refused" Dec 06 05:32:24 crc kubenswrapper[4958]: I1206 05:32:24.837563 4958 status_manager.go:851] "Failed to get status for pod" podUID="16ec5793-681c-4935-a298-734c214e23c8" pod="openshift-marketplace/redhat-marketplace-t5crs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-t5crs\": dial tcp 38.102.83.20:6443: connect: connection refused" Dec 06 05:32:24 crc kubenswrapper[4958]: I1206 05:32:24.838280 4958 status_manager.go:851] "Failed to get status for pod" podUID="71c8096c-9091-428a-a142-185855892fb9" pod="openshift-marketplace/community-operators-qzfg8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-qzfg8\": dial tcp 38.102.83.20:6443: connect: connection refused" Dec 06 05:32:24 crc kubenswrapper[4958]: I1206 05:32:24.838733 4958 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.20:6443: connect: connection refused" Dec 06 05:32:24 crc kubenswrapper[4958]: I1206 05:32:24.839085 4958 status_manager.go:851] "Failed to get status for pod" podUID="5a3d6dd5-04c5-427e-9655-052b17f8c9d2" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.20:6443: connect: connection refused" Dec 06 05:32:24 crc kubenswrapper[4958]: I1206 05:32:24.839394 4958 status_manager.go:851] "Failed to get status for pod" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-5ktnh\": dial tcp 38.102.83.20:6443: connect: connection refused" Dec 06 05:32:24 crc kubenswrapper[4958]: I1206 05:32:24.839757 4958 status_manager.go:851] "Failed to get status for pod" podUID="04d501b9-571e-4c12-b963-fbe770a27710" pod="openshift-authentication/oauth-openshift-558db77b4-5hcwh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-5hcwh\": dial tcp 38.102.83.20:6443: connect: connection refused" Dec 06 05:32:25 crc kubenswrapper[4958]: I1206 05:32:25.846778 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 06 05:32:25 crc kubenswrapper[4958]: I1206 05:32:25.847252 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"5e2341402712cb0648941b3815c6e15ad07ddf967f80d1a58fe83010e97ee619"} Dec 06 05:32:25 crc kubenswrapper[4958]: I1206 05:32:25.848929 4958 status_manager.go:851] "Failed to get status for pod" podUID="71c8096c-9091-428a-a142-185855892fb9" pod="openshift-marketplace/community-operators-qzfg8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-qzfg8\": dial tcp 38.102.83.20:6443: connect: connection refused" Dec 06 05:32:25 crc kubenswrapper[4958]: I1206 05:32:25.849874 4958 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.20:6443: connect: connection refused" Dec 06 05:32:25 crc kubenswrapper[4958]: I1206 05:32:25.850732 4958 status_manager.go:851] "Failed to get status for pod" podUID="5a3d6dd5-04c5-427e-9655-052b17f8c9d2" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.20:6443: connect: connection refused" Dec 06 05:32:25 crc kubenswrapper[4958]: I1206 05:32:25.851240 4958 status_manager.go:851] "Failed to get status for pod" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-5ktnh\": dial tcp 38.102.83.20:6443: connect: connection refused" Dec 06 05:32:25 crc kubenswrapper[4958]: I1206 05:32:25.851716 4958 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="4cc093e5761229346fc2abbdedc62807ff16d9cff67ccabcc4778c0b34500754" exitCode=0 Dec 06 05:32:25 crc kubenswrapper[4958]: I1206 05:32:25.851775 4958 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"4cc093e5761229346fc2abbdedc62807ff16d9cff67ccabcc4778c0b34500754"} Dec 06 05:32:25 crc kubenswrapper[4958]: I1206 05:32:25.851819 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"8a8e54c41cecfe64249ab35a64d7829b7153d604e54c5046cf3895631a0aa20c"} Dec 06 05:32:25 crc kubenswrapper[4958]: I1206 05:32:25.852190 4958 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3bbf3a5d-dfb7-4f26-a3a5-5a1198cffc78" Dec 06 05:32:25 crc kubenswrapper[4958]: I1206 05:32:25.852223 4958 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3bbf3a5d-dfb7-4f26-a3a5-5a1198cffc78" Dec 06 05:32:25 crc kubenswrapper[4958]: I1206 05:32:25.852204 4958 status_manager.go:851] "Failed to get status for pod" podUID="04d501b9-571e-4c12-b963-fbe770a27710" pod="openshift-authentication/oauth-openshift-558db77b4-5hcwh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-5hcwh\": dial tcp 38.102.83.20:6443: connect: connection refused" Dec 06 05:32:25 crc kubenswrapper[4958]: E1206 05:32:25.852723 4958 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.20:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 05:32:25 crc kubenswrapper[4958]: I1206 05:32:25.853455 4958 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.20:6443: connect: connection refused" Dec 06 05:32:25 crc kubenswrapper[4958]: I1206 05:32:25.853976 4958 status_manager.go:851] "Failed to get status for pod" podUID="16ec5793-681c-4935-a298-734c214e23c8" pod="openshift-marketplace/redhat-marketplace-t5crs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-t5crs\": dial tcp 38.102.83.20:6443: connect: connection refused" Dec 06 05:32:25 crc kubenswrapper[4958]: I1206 05:32:25.854524 4958 status_manager.go:851] "Failed to get status for pod" podUID="71c8096c-9091-428a-a142-185855892fb9" pod="openshift-marketplace/community-operators-qzfg8" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-qzfg8\": dial tcp 38.102.83.20:6443: connect: connection refused" Dec 06 05:32:25 crc kubenswrapper[4958]: I1206 05:32:25.854968 4958 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.20:6443: connect: connection refused" Dec 06 05:32:25 crc kubenswrapper[4958]: I1206 05:32:25.855565 4958 status_manager.go:851] "Failed to get status for pod" podUID="5a3d6dd5-04c5-427e-9655-052b17f8c9d2" 
pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.20:6443: connect: connection refused" Dec 06 05:32:25 crc kubenswrapper[4958]: I1206 05:32:25.856301 4958 status_manager.go:851] "Failed to get status for pod" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-5ktnh\": dial tcp 38.102.83.20:6443: connect: connection refused" Dec 06 05:32:25 crc kubenswrapper[4958]: I1206 05:32:25.856964 4958 status_manager.go:851] "Failed to get status for pod" podUID="04d501b9-571e-4c12-b963-fbe770a27710" pod="openshift-authentication/oauth-openshift-558db77b4-5hcwh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-5hcwh\": dial tcp 38.102.83.20:6443: connect: connection refused" Dec 06 05:32:25 crc kubenswrapper[4958]: I1206 05:32:25.857427 4958 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.20:6443: connect: connection refused" Dec 06 05:32:25 crc kubenswrapper[4958]: I1206 05:32:25.857947 4958 status_manager.go:851] "Failed to get status for pod" podUID="16ec5793-681c-4935-a298-734c214e23c8" pod="openshift-marketplace/redhat-marketplace-t5crs" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-t5crs\": dial tcp 38.102.83.20:6443: connect: connection refused" Dec 06 05:32:26 crc kubenswrapper[4958]: I1206 05:32:26.866969 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"872e3232cc415bbd41899ee6890f92ecd9dfcd6b9b3940caaa5d0770a4a9b83b"} Dec 06 05:32:26 crc kubenswrapper[4958]: I1206 05:32:26.867272 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"d50aa81e955a23cce7ad44068ab4af372ac3a0b7d72dba4925b2688e926e8fb5"} Dec 06 05:32:26 crc kubenswrapper[4958]: I1206 05:32:26.867284 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"d27e3ea6d20fd50abc4b376306c68ed3fe8ffc2be051d4f86c08812c788add5e"} Dec 06 05:32:26 crc kubenswrapper[4958]: I1206 05:32:26.867294 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"ee7199c75c72bf6267fd2f9cce4058c18b96b606e11be17f276973c55bbdff2c"} Dec 06 05:32:27 crc kubenswrapper[4958]: I1206 05:32:27.875844 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"e679c95c7593efff61d1a818a1206727253025f0d13694b292505cbe3b48b7a0"} Dec 06 05:32:27 crc kubenswrapper[4958]: I1206 05:32:27.876682 4958 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 05:32:27 crc kubenswrapper[4958]: I1206 05:32:27.876867 4958 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3bbf3a5d-dfb7-4f26-a3a5-5a1198cffc78" Dec 06 05:32:27 crc kubenswrapper[4958]: I1206 05:32:27.876985 4958 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3bbf3a5d-dfb7-4f26-a3a5-5a1198cffc78" Dec 06 05:32:29 crc kubenswrapper[4958]: I1206 05:32:29.786914 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 05:32:29 crc kubenswrapper[4958]: I1206 05:32:29.787498 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 05:32:29 crc kubenswrapper[4958]: I1206 05:32:29.796367 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 05:32:31 crc kubenswrapper[4958]: I1206 05:32:31.993483 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 06 05:32:31 crc kubenswrapper[4958]: I1206 05:32:31.997487 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 06 05:32:32 crc kubenswrapper[4958]: I1206 05:32:32.894194 4958 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 05:32:32 crc kubenswrapper[4958]: I1206 05:32:32.906634 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 06 05:32:33 crc kubenswrapper[4958]: I1206 05:32:33.054069 4958 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="0e5c73a9-42f3-4292-92e7-5c4f8ba136c9" Dec 06 05:32:33 crc kubenswrapper[4958]: I1206 05:32:33.913238 4958 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3bbf3a5d-dfb7-4f26-a3a5-5a1198cffc78" Dec 06 05:32:33 crc kubenswrapper[4958]: I1206 05:32:33.914178 4958 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3bbf3a5d-dfb7-4f26-a3a5-5a1198cffc78" Dec 06 05:32:33 crc kubenswrapper[4958]: I1206 05:32:33.917282 4958 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="0e5c73a9-42f3-4292-92e7-5c4f8ba136c9" Dec 06 05:32:33 crc kubenswrapper[4958]: I1206 05:32:33.919534 4958 status_manager.go:308] "Container readiness changed before pod has synced" pod="openshift-kube-apiserver/kube-apiserver-crc" containerID="cri-o://ee7199c75c72bf6267fd2f9cce4058c18b96b606e11be17f276973c55bbdff2c" Dec 06 05:32:33 crc kubenswrapper[4958]: I1206 05:32:33.919561 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 05:32:34 crc kubenswrapper[4958]: I1206 05:32:34.933803 4958 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3bbf3a5d-dfb7-4f26-a3a5-5a1198cffc78" Dec 06 05:32:34 crc kubenswrapper[4958]: 
I1206 05:32:34.934266 4958 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3bbf3a5d-dfb7-4f26-a3a5-5a1198cffc78" Dec 06 05:32:34 crc kubenswrapper[4958]: I1206 05:32:34.939273 4958 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="0e5c73a9-42f3-4292-92e7-5c4f8ba136c9" Dec 06 05:32:42 crc kubenswrapper[4958]: I1206 05:32:42.169211 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Dec 06 05:32:42 crc kubenswrapper[4958]: I1206 05:32:42.571259 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Dec 06 05:32:42 crc kubenswrapper[4958]: I1206 05:32:42.571391 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Dec 06 05:32:42 crc kubenswrapper[4958]: I1206 05:32:42.641249 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Dec 06 05:32:43 crc kubenswrapper[4958]: I1206 05:32:43.058422 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 06 05:32:43 crc kubenswrapper[4958]: I1206 05:32:43.128687 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Dec 06 05:32:43 crc kubenswrapper[4958]: I1206 05:32:43.129782 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Dec 06 05:32:43 crc kubenswrapper[4958]: I1206 05:32:43.635012 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Dec 06 05:32:43 crc kubenswrapper[4958]: I1206 05:32:43.785992 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Dec 06 05:32:43 crc kubenswrapper[4958]: I1206 05:32:43.790145 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Dec 06 05:32:43 crc kubenswrapper[4958]: I1206 05:32:43.816816 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Dec 06 05:32:43 crc kubenswrapper[4958]: I1206 05:32:43.912062 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Dec 06 05:32:43 crc kubenswrapper[4958]: I1206 05:32:43.991113 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Dec 06 05:32:44 crc kubenswrapper[4958]: I1206 05:32:44.182841 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Dec 06 05:32:44 crc kubenswrapper[4958]: I1206 05:32:44.328665 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Dec 06 05:32:44 crc kubenswrapper[4958]: I1206 05:32:44.361671 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Dec 06 05:32:44 crc kubenswrapper[4958]: I1206 05:32:44.371948 4958 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Dec 06 05:32:44 crc kubenswrapper[4958]: I1206 05:32:44.732349 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Dec 06 05:32:44 crc kubenswrapper[4958]: I1206 05:32:44.743316 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 06 05:32:45 crc kubenswrapper[4958]: I1206 05:32:45.421770 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Dec 06 05:32:45 crc kubenswrapper[4958]: I1206 05:32:45.467693 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Dec 06 05:32:45 crc kubenswrapper[4958]: I1206 05:32:45.494856 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Dec 06 05:32:45 crc kubenswrapper[4958]: I1206 05:32:45.546014 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Dec 06 05:32:45 crc kubenswrapper[4958]: I1206 05:32:45.679833 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Dec 06 05:32:45 crc kubenswrapper[4958]: I1206 05:32:45.706878 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Dec 06 05:32:45 crc kubenswrapper[4958]: I1206 05:32:45.886738 4958 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Dec 06 05:32:46 crc kubenswrapper[4958]: I1206 05:32:46.037730 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Dec 06 05:32:46 crc kubenswrapper[4958]: I1206 05:32:46.063029 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Dec 06 05:32:46 crc kubenswrapper[4958]: I1206 05:32:46.165178 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Dec 06 05:32:46 crc kubenswrapper[4958]: I1206 05:32:46.239439 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Dec 06 05:32:46 crc kubenswrapper[4958]: I1206 05:32:46.298891 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 06 05:32:46 crc kubenswrapper[4958]: I1206 05:32:46.403014 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Dec 06 05:32:46 crc kubenswrapper[4958]: I1206 05:32:46.411302 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Dec 06 05:32:46 crc kubenswrapper[4958]: I1206 05:32:46.527538 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Dec 06 05:32:46 crc kubenswrapper[4958]: I1206 05:32:46.529245 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Dec 06 05:32:46 crc kubenswrapper[4958]: I1206 05:32:46.541143 4958 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-etcd-operator"/"kube-root-ca.crt" Dec 06 05:32:46 crc kubenswrapper[4958]: I1206 05:32:46.582736 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Dec 06 05:32:46 crc kubenswrapper[4958]: I1206 05:32:46.665426 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Dec 06 05:32:46 crc kubenswrapper[4958]: I1206 05:32:46.693041 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Dec 06 05:32:46 crc kubenswrapper[4958]: I1206 05:32:46.707555 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Dec 06 05:32:46 crc kubenswrapper[4958]: I1206 05:32:46.713353 4958 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Dec 06 05:32:46 crc kubenswrapper[4958]: I1206 05:32:46.724509 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Dec 06 05:32:46 crc kubenswrapper[4958]: I1206 05:32:46.749046 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Dec 06 05:32:46 crc kubenswrapper[4958]: I1206 05:32:46.792121 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Dec 06 05:32:46 crc kubenswrapper[4958]: I1206 05:32:46.844562 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Dec 06 05:32:46 crc kubenswrapper[4958]: I1206 05:32:46.922030 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Dec 06 05:32:46 crc kubenswrapper[4958]: I1206 05:32:46.959549 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Dec 06 05:32:46 crc kubenswrapper[4958]: I1206 05:32:46.975998 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Dec 06 05:32:47 crc kubenswrapper[4958]: I1206 05:32:47.009126 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Dec 06 05:32:47 crc kubenswrapper[4958]: I1206 05:32:47.058957 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Dec 06 05:32:47 crc kubenswrapper[4958]: I1206 05:32:47.059105 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 06 05:32:47 crc kubenswrapper[4958]: I1206 05:32:47.095552 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Dec 06 05:32:47 crc kubenswrapper[4958]: I1206 05:32:47.118955 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Dec 06 05:32:47 crc kubenswrapper[4958]: I1206 05:32:47.211358 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Dec 06 05:32:47 crc kubenswrapper[4958]: I1206 05:32:47.289946 4958 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Dec 06 05:32:47 crc kubenswrapper[4958]: I1206 05:32:47.301823 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Dec 06 05:32:47 crc kubenswrapper[4958]: I1206 05:32:47.448970 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Dec 06 05:32:47 crc kubenswrapper[4958]: I1206 05:32:47.465986 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Dec 06 05:32:47 crc kubenswrapper[4958]: I1206 05:32:47.498633 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Dec 06 05:32:47 crc kubenswrapper[4958]: I1206 05:32:47.587212 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Dec 06 05:32:47 crc kubenswrapper[4958]: I1206 05:32:47.678266 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Dec 06 05:32:47 crc kubenswrapper[4958]: I1206 05:32:47.710026 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Dec 06 05:32:47 crc kubenswrapper[4958]: I1206 05:32:47.736234 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Dec 06 05:32:47 crc kubenswrapper[4958]: I1206 05:32:47.836826 4958 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Dec 06 05:32:47 crc kubenswrapper[4958]: I1206 05:32:47.838546 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=37.838530128 podStartE2EDuration="37.838530128s" podCreationTimestamp="2025-12-06 05:32:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:32:33.025209977 +0000 UTC m=+263.558980740" watchObservedRunningTime="2025-12-06 05:32:47.838530128 +0000 UTC m=+278.372300901" Dec 06 05:32:47 crc kubenswrapper[4958]: I1206 05:32:47.841910 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-558db77b4-5hcwh"] Dec 06 05:32:47 crc kubenswrapper[4958]: I1206 05:32:47.841960 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 06 05:32:47 crc kubenswrapper[4958]: I1206 05:32:47.848696 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 06 05:32:47 crc kubenswrapper[4958]: I1206 05:32:47.861517 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=15.861496518 podStartE2EDuration="15.861496518s" podCreationTimestamp="2025-12-06 05:32:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:32:47.861416655 +0000 UTC m=+278.395187458" watchObservedRunningTime="2025-12-06 05:32:47.861496518 +0000 UTC m=+278.395267291" Dec 06 
05:32:47 crc kubenswrapper[4958]: I1206 05:32:47.878148 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Dec 06 05:32:47 crc kubenswrapper[4958]: I1206 05:32:47.882100 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Dec 06 05:32:48 crc kubenswrapper[4958]: I1206 05:32:48.051954 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Dec 06 05:32:48 crc kubenswrapper[4958]: I1206 05:32:48.067293 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Dec 06 05:32:48 crc kubenswrapper[4958]: I1206 05:32:48.070408 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Dec 06 05:32:48 crc kubenswrapper[4958]: I1206 05:32:48.080993 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Dec 06 05:32:48 crc kubenswrapper[4958]: I1206 05:32:48.114703 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Dec 06 05:32:48 crc kubenswrapper[4958]: I1206 05:32:48.229270 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Dec 06 05:32:48 crc kubenswrapper[4958]: I1206 05:32:48.289502 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Dec 06 05:32:48 crc kubenswrapper[4958]: I1206 05:32:48.311645 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Dec 06 05:32:48 crc kubenswrapper[4958]: I1206 05:32:48.347066 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Dec 06 05:32:48 crc kubenswrapper[4958]: I1206 05:32:48.369617 4958 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Dec 06 05:32:48 crc kubenswrapper[4958]: I1206 05:32:48.406190 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Dec 06 05:32:48 crc kubenswrapper[4958]: I1206 05:32:48.492669 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Dec 06 05:32:48 crc kubenswrapper[4958]: I1206 05:32:48.852776 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Dec 06 05:32:48 crc kubenswrapper[4958]: I1206 05:32:48.909126 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Dec 06 05:32:49 crc kubenswrapper[4958]: I1206 05:32:49.088110 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Dec 06 05:32:49 crc kubenswrapper[4958]: I1206 05:32:49.105408 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Dec 06 05:32:49 crc kubenswrapper[4958]: I1206 05:32:49.156770 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Dec 06 05:32:49 crc kubenswrapper[4958]: I1206 
05:32:49.164102 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Dec 06 05:32:49 crc kubenswrapper[4958]: I1206 05:32:49.262742 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Dec 06 05:32:49 crc kubenswrapper[4958]: I1206 05:32:49.345302 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Dec 06 05:32:49 crc kubenswrapper[4958]: I1206 05:32:49.388974 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Dec 06 05:32:49 crc kubenswrapper[4958]: I1206 05:32:49.407912 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Dec 06 05:32:49 crc kubenswrapper[4958]: I1206 05:32:49.419047 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Dec 06 05:32:49 crc kubenswrapper[4958]: I1206 05:32:49.423822 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Dec 06 05:32:49 crc kubenswrapper[4958]: I1206 05:32:49.483381 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Dec 06 05:32:49 crc kubenswrapper[4958]: I1206 05:32:49.497231 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Dec 06 05:32:49 crc kubenswrapper[4958]: I1206 05:32:49.566962 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Dec 06 05:32:49 crc kubenswrapper[4958]: I1206 05:32:49.616332 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Dec 06 05:32:49 crc kubenswrapper[4958]: I1206 05:32:49.649412 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Dec 06 05:32:49 crc kubenswrapper[4958]: I1206 05:32:49.651836 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Dec 06 05:32:49 crc kubenswrapper[4958]: I1206 05:32:49.671074 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Dec 06 05:32:49 crc kubenswrapper[4958]: I1206 05:32:49.768732 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04d501b9-571e-4c12-b963-fbe770a27710" path="/var/lib/kubelet/pods/04d501b9-571e-4c12-b963-fbe770a27710/volumes" Dec 06 05:32:49 crc kubenswrapper[4958]: I1206 05:32:49.802254 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Dec 06 05:32:49 crc kubenswrapper[4958]: I1206 05:32:49.824284 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Dec 06 05:32:49 crc kubenswrapper[4958]: I1206 05:32:49.835655 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 06 05:32:49 crc kubenswrapper[4958]: I1206 05:32:49.915084 4958 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Dec 06 05:32:49 crc 
kubenswrapper[4958]: I1206 05:32:49.919686 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Dec 06 05:32:50 crc kubenswrapper[4958]: I1206 05:32:50.061108 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Dec 06 05:32:50 crc kubenswrapper[4958]: I1206 05:32:50.081197 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 06 05:32:50 crc kubenswrapper[4958]: I1206 05:32:50.120139 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Dec 06 05:32:50 crc kubenswrapper[4958]: I1206 05:32:50.125575 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Dec 06 05:32:50 crc kubenswrapper[4958]: I1206 05:32:50.130619 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Dec 06 05:32:50 crc kubenswrapper[4958]: I1206 05:32:50.360898 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Dec 06 05:32:50 crc kubenswrapper[4958]: I1206 05:32:50.460893 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Dec 06 05:32:50 crc kubenswrapper[4958]: I1206 05:32:50.561957 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Dec 06 05:32:50 crc kubenswrapper[4958]: I1206 05:32:50.561991 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Dec 06 05:32:50 crc kubenswrapper[4958]: I1206 05:32:50.576546 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Dec 06 05:32:50 crc kubenswrapper[4958]: I1206 05:32:50.587145 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Dec 06 05:32:50 crc kubenswrapper[4958]: I1206 05:32:50.635091 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Dec 06 05:32:50 crc kubenswrapper[4958]: I1206 05:32:50.697822 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Dec 06 05:32:50 crc kubenswrapper[4958]: I1206 05:32:50.812519 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Dec 06 05:32:50 crc kubenswrapper[4958]: I1206 05:32:50.836785 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Dec 06 05:32:50 crc kubenswrapper[4958]: I1206 05:32:50.870903 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Dec 06 05:32:50 crc kubenswrapper[4958]: I1206 05:32:50.871074 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-76766fc778-hgv72"] Dec 06 05:32:50 crc kubenswrapper[4958]: E1206 05:32:50.871571 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a3d6dd5-04c5-427e-9655-052b17f8c9d2" containerName="installer" Dec 06 05:32:50 crc 
kubenswrapper[4958]: I1206 05:32:50.871624 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a3d6dd5-04c5-427e-9655-052b17f8c9d2" containerName="installer" Dec 06 05:32:50 crc kubenswrapper[4958]: E1206 05:32:50.871657 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04d501b9-571e-4c12-b963-fbe770a27710" containerName="oauth-openshift" Dec 06 05:32:50 crc kubenswrapper[4958]: I1206 05:32:50.871675 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="04d501b9-571e-4c12-b963-fbe770a27710" containerName="oauth-openshift" Dec 06 05:32:50 crc kubenswrapper[4958]: I1206 05:32:50.871943 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a3d6dd5-04c5-427e-9655-052b17f8c9d2" containerName="installer" Dec 06 05:32:50 crc kubenswrapper[4958]: I1206 05:32:50.871991 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="04d501b9-571e-4c12-b963-fbe770a27710" containerName="oauth-openshift" Dec 06 05:32:50 crc kubenswrapper[4958]: I1206 05:32:50.872819 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-76766fc778-hgv72" Dec 06 05:32:50 crc kubenswrapper[4958]: I1206 05:32:50.879436 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Dec 06 05:32:50 crc kubenswrapper[4958]: I1206 05:32:50.879818 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Dec 06 05:32:50 crc kubenswrapper[4958]: I1206 05:32:50.880578 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Dec 06 05:32:50 crc kubenswrapper[4958]: I1206 05:32:50.880759 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Dec 06 05:32:50 crc kubenswrapper[4958]: I1206 05:32:50.880885 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Dec 06 05:32:50 crc kubenswrapper[4958]: I1206 05:32:50.880922 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Dec 06 05:32:50 crc kubenswrapper[4958]: I1206 05:32:50.881028 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Dec 06 05:32:50 crc kubenswrapper[4958]: I1206 05:32:50.881175 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Dec 06 05:32:50 crc kubenswrapper[4958]: I1206 05:32:50.881247 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Dec 06 05:32:50 crc kubenswrapper[4958]: I1206 05:32:50.881441 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Dec 06 05:32:50 crc kubenswrapper[4958]: I1206 05:32:50.881502 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Dec 06 05:32:50 crc kubenswrapper[4958]: I1206 05:32:50.881979 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Dec 06 05:32:50 crc kubenswrapper[4958]: I1206 05:32:50.891016 4958 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-user-template-login" Dec 06 05:32:50 crc kubenswrapper[4958]: I1206 05:32:50.891370 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 06 05:32:50 crc kubenswrapper[4958]: I1206 05:32:50.900983 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 06 05:32:50 crc kubenswrapper[4958]: I1206 05:32:50.979171 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Dec 06 05:32:50 crc kubenswrapper[4958]: I1206 05:32:50.983287 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Dec 06 05:32:50 crc kubenswrapper[4958]: I1206 05:32:50.997075 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cf5da698-f252-4430-998b-b9d17d7d91d6-audit-dir\") pod \"oauth-openshift-76766fc778-hgv72\" (UID: \"cf5da698-f252-4430-998b-b9d17d7d91d6\") " pod="openshift-authentication/oauth-openshift-76766fc778-hgv72" Dec 06 05:32:50 crc kubenswrapper[4958]: I1206 05:32:50.997560 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/cf5da698-f252-4430-998b-b9d17d7d91d6-v4-0-config-system-session\") pod \"oauth-openshift-76766fc778-hgv72\" (UID: \"cf5da698-f252-4430-998b-b9d17d7d91d6\") " pod="openshift-authentication/oauth-openshift-76766fc778-hgv72" Dec 06 05:32:50 crc kubenswrapper[4958]: I1206 05:32:50.997886 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/cf5da698-f252-4430-998b-b9d17d7d91d6-v4-0-config-user-template-login\") pod \"oauth-openshift-76766fc778-hgv72\" (UID: \"cf5da698-f252-4430-998b-b9d17d7d91d6\") " pod="openshift-authentication/oauth-openshift-76766fc778-hgv72" Dec 06 05:32:50 crc kubenswrapper[4958]: I1206 05:32:50.998265 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cf5da698-f252-4430-998b-b9d17d7d91d6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-76766fc778-hgv72\" (UID: \"cf5da698-f252-4430-998b-b9d17d7d91d6\") " pod="openshift-authentication/oauth-openshift-76766fc778-hgv72" Dec 06 05:32:50 crc kubenswrapper[4958]: I1206 05:32:50.998646 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/cf5da698-f252-4430-998b-b9d17d7d91d6-v4-0-config-user-template-error\") pod \"oauth-openshift-76766fc778-hgv72\" (UID: \"cf5da698-f252-4430-998b-b9d17d7d91d6\") " pod="openshift-authentication/oauth-openshift-76766fc778-hgv72" Dec 06 05:32:50 crc kubenswrapper[4958]: I1206 05:32:50.998912 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/cf5da698-f252-4430-998b-b9d17d7d91d6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-76766fc778-hgv72\" (UID: \"cf5da698-f252-4430-998b-b9d17d7d91d6\") " 
pod="openshift-authentication/oauth-openshift-76766fc778-hgv72" Dec 06 05:32:50 crc kubenswrapper[4958]: I1206 05:32:50.999199 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/cf5da698-f252-4430-998b-b9d17d7d91d6-v4-0-config-system-service-ca\") pod \"oauth-openshift-76766fc778-hgv72\" (UID: \"cf5da698-f252-4430-998b-b9d17d7d91d6\") " pod="openshift-authentication/oauth-openshift-76766fc778-hgv72" Dec 06 05:32:50 crc kubenswrapper[4958]: I1206 05:32:50.999552 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/cf5da698-f252-4430-998b-b9d17d7d91d6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-76766fc778-hgv72\" (UID: \"cf5da698-f252-4430-998b-b9d17d7d91d6\") " pod="openshift-authentication/oauth-openshift-76766fc778-hgv72" Dec 06 05:32:50 crc kubenswrapper[4958]: I1206 05:32:50.999816 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qt78m\" (UniqueName: \"kubernetes.io/projected/cf5da698-f252-4430-998b-b9d17d7d91d6-kube-api-access-qt78m\") pod \"oauth-openshift-76766fc778-hgv72\" (UID: \"cf5da698-f252-4430-998b-b9d17d7d91d6\") " pod="openshift-authentication/oauth-openshift-76766fc778-hgv72" Dec 06 05:32:51 crc kubenswrapper[4958]: I1206 05:32:51.000098 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/cf5da698-f252-4430-998b-b9d17d7d91d6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-76766fc778-hgv72\" (UID: \"cf5da698-f252-4430-998b-b9d17d7d91d6\") " pod="openshift-authentication/oauth-openshift-76766fc778-hgv72" Dec 06 05:32:51 crc kubenswrapper[4958]: I1206 05:32:51.000346 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/cf5da698-f252-4430-998b-b9d17d7d91d6-v4-0-config-system-router-certs\") pod \"oauth-openshift-76766fc778-hgv72\" (UID: \"cf5da698-f252-4430-998b-b9d17d7d91d6\") " pod="openshift-authentication/oauth-openshift-76766fc778-hgv72" Dec 06 05:32:51 crc kubenswrapper[4958]: I1206 05:32:51.001604 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/cf5da698-f252-4430-998b-b9d17d7d91d6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-76766fc778-hgv72\" (UID: \"cf5da698-f252-4430-998b-b9d17d7d91d6\") " pod="openshift-authentication/oauth-openshift-76766fc778-hgv72" Dec 06 05:32:51 crc kubenswrapper[4958]: I1206 05:32:51.001994 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/cf5da698-f252-4430-998b-b9d17d7d91d6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-76766fc778-hgv72\" (UID: \"cf5da698-f252-4430-998b-b9d17d7d91d6\") " pod="openshift-authentication/oauth-openshift-76766fc778-hgv72" Dec 06 05:32:51 crc kubenswrapper[4958]: I1206 05:32:51.002303 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/cf5da698-f252-4430-998b-b9d17d7d91d6-audit-policies\") pod \"oauth-openshift-76766fc778-hgv72\" (UID: \"cf5da698-f252-4430-998b-b9d17d7d91d6\") " pod="openshift-authentication/oauth-openshift-76766fc778-hgv72" Dec 06 05:32:51 crc kubenswrapper[4958]: I1206 05:32:51.000464 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Dec 06 05:32:51 crc kubenswrapper[4958]: I1206 05:32:51.013165 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Dec 06 05:32:51 crc kubenswrapper[4958]: I1206 05:32:51.027886 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Dec 06 05:32:51 crc kubenswrapper[4958]: I1206 05:32:51.027931 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Dec 06 05:32:51 crc kubenswrapper[4958]: I1206 05:32:51.039187 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 06 05:32:51 crc kubenswrapper[4958]: I1206 05:32:51.104002 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cf5da698-f252-4430-998b-b9d17d7d91d6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-76766fc778-hgv72\" (UID: \"cf5da698-f252-4430-998b-b9d17d7d91d6\") " pod="openshift-authentication/oauth-openshift-76766fc778-hgv72" Dec 06 05:32:51 crc kubenswrapper[4958]: I1206 05:32:51.104296 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/cf5da698-f252-4430-998b-b9d17d7d91d6-v4-0-config-user-template-error\") pod \"oauth-openshift-76766fc778-hgv72\" (UID: \"cf5da698-f252-4430-998b-b9d17d7d91d6\") " pod="openshift-authentication/oauth-openshift-76766fc778-hgv72" Dec 06 05:32:51 crc kubenswrapper[4958]: I1206 05:32:51.104436 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/cf5da698-f252-4430-998b-b9d17d7d91d6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-76766fc778-hgv72\" (UID: \"cf5da698-f252-4430-998b-b9d17d7d91d6\") " pod="openshift-authentication/oauth-openshift-76766fc778-hgv72" Dec 06 05:32:51 crc kubenswrapper[4958]: I1206 05:32:51.104606 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/cf5da698-f252-4430-998b-b9d17d7d91d6-v4-0-config-system-service-ca\") pod \"oauth-openshift-76766fc778-hgv72\" (UID: \"cf5da698-f252-4430-998b-b9d17d7d91d6\") " pod="openshift-authentication/oauth-openshift-76766fc778-hgv72" Dec 06 05:32:51 crc kubenswrapper[4958]: I1206 05:32:51.104724 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/cf5da698-f252-4430-998b-b9d17d7d91d6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-76766fc778-hgv72\" (UID: \"cf5da698-f252-4430-998b-b9d17d7d91d6\") " pod="openshift-authentication/oauth-openshift-76766fc778-hgv72" Dec 06 05:32:51 crc kubenswrapper[4958]: I1206 05:32:51.104841 4958 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-qt78m\" (UniqueName: \"kubernetes.io/projected/cf5da698-f252-4430-998b-b9d17d7d91d6-kube-api-access-qt78m\") pod \"oauth-openshift-76766fc778-hgv72\" (UID: \"cf5da698-f252-4430-998b-b9d17d7d91d6\") " pod="openshift-authentication/oauth-openshift-76766fc778-hgv72" Dec 06 05:32:51 crc kubenswrapper[4958]: I1206 05:32:51.104964 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/cf5da698-f252-4430-998b-b9d17d7d91d6-v4-0-config-system-router-certs\") pod \"oauth-openshift-76766fc778-hgv72\" (UID: \"cf5da698-f252-4430-998b-b9d17d7d91d6\") " pod="openshift-authentication/oauth-openshift-76766fc778-hgv72" Dec 06 05:32:51 crc kubenswrapper[4958]: I1206 05:32:51.105106 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/cf5da698-f252-4430-998b-b9d17d7d91d6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-76766fc778-hgv72\" (UID: \"cf5da698-f252-4430-998b-b9d17d7d91d6\") " pod="openshift-authentication/oauth-openshift-76766fc778-hgv72" Dec 06 05:32:51 crc kubenswrapper[4958]: I1206 05:32:51.105266 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/cf5da698-f252-4430-998b-b9d17d7d91d6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-76766fc778-hgv72\" (UID: \"cf5da698-f252-4430-998b-b9d17d7d91d6\") " pod="openshift-authentication/oauth-openshift-76766fc778-hgv72" Dec 06 05:32:51 crc kubenswrapper[4958]: I1206 05:32:51.105405 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/cf5da698-f252-4430-998b-b9d17d7d91d6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-76766fc778-hgv72\" (UID: \"cf5da698-f252-4430-998b-b9d17d7d91d6\") " pod="openshift-authentication/oauth-openshift-76766fc778-hgv72" Dec 06 05:32:51 crc kubenswrapper[4958]: I1206 05:32:51.105555 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/cf5da698-f252-4430-998b-b9d17d7d91d6-audit-policies\") pod \"oauth-openshift-76766fc778-hgv72\" (UID: \"cf5da698-f252-4430-998b-b9d17d7d91d6\") " pod="openshift-authentication/oauth-openshift-76766fc778-hgv72" Dec 06 05:32:51 crc kubenswrapper[4958]: I1206 05:32:51.105723 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cf5da698-f252-4430-998b-b9d17d7d91d6-audit-dir\") pod \"oauth-openshift-76766fc778-hgv72\" (UID: \"cf5da698-f252-4430-998b-b9d17d7d91d6\") " pod="openshift-authentication/oauth-openshift-76766fc778-hgv72" Dec 06 05:32:51 crc kubenswrapper[4958]: I1206 05:32:51.105875 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/cf5da698-f252-4430-998b-b9d17d7d91d6-v4-0-config-system-session\") pod \"oauth-openshift-76766fc778-hgv72\" (UID: \"cf5da698-f252-4430-998b-b9d17d7d91d6\") " pod="openshift-authentication/oauth-openshift-76766fc778-hgv72" Dec 06 05:32:51 crc kubenswrapper[4958]: I1206 05:32:51.106061 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" 
(UniqueName: \"kubernetes.io/secret/cf5da698-f252-4430-998b-b9d17d7d91d6-v4-0-config-user-template-login\") pod \"oauth-openshift-76766fc778-hgv72\" (UID: \"cf5da698-f252-4430-998b-b9d17d7d91d6\") " pod="openshift-authentication/oauth-openshift-76766fc778-hgv72" Dec 06 05:32:51 crc kubenswrapper[4958]: I1206 05:32:51.105883 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/cf5da698-f252-4430-998b-b9d17d7d91d6-v4-0-config-system-service-ca\") pod \"oauth-openshift-76766fc778-hgv72\" (UID: \"cf5da698-f252-4430-998b-b9d17d7d91d6\") " pod="openshift-authentication/oauth-openshift-76766fc778-hgv72" Dec 06 05:32:51 crc kubenswrapper[4958]: I1206 05:32:51.105865 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cf5da698-f252-4430-998b-b9d17d7d91d6-audit-dir\") pod \"oauth-openshift-76766fc778-hgv72\" (UID: \"cf5da698-f252-4430-998b-b9d17d7d91d6\") " pod="openshift-authentication/oauth-openshift-76766fc778-hgv72" Dec 06 05:32:51 crc kubenswrapper[4958]: I1206 05:32:51.106408 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cf5da698-f252-4430-998b-b9d17d7d91d6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-76766fc778-hgv72\" (UID: \"cf5da698-f252-4430-998b-b9d17d7d91d6\") " pod="openshift-authentication/oauth-openshift-76766fc778-hgv72" Dec 06 05:32:51 crc kubenswrapper[4958]: I1206 05:32:51.106660 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/cf5da698-f252-4430-998b-b9d17d7d91d6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-76766fc778-hgv72\" (UID: \"cf5da698-f252-4430-998b-b9d17d7d91d6\") " pod="openshift-authentication/oauth-openshift-76766fc778-hgv72" Dec 06 05:32:51 crc kubenswrapper[4958]: I1206 05:32:51.107125 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/cf5da698-f252-4430-998b-b9d17d7d91d6-audit-policies\") pod \"oauth-openshift-76766fc778-hgv72\" (UID: \"cf5da698-f252-4430-998b-b9d17d7d91d6\") " pod="openshift-authentication/oauth-openshift-76766fc778-hgv72" Dec 06 05:32:51 crc kubenswrapper[4958]: I1206 05:32:51.111114 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Dec 06 05:32:51 crc kubenswrapper[4958]: I1206 05:32:51.114762 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/cf5da698-f252-4430-998b-b9d17d7d91d6-v4-0-config-user-template-error\") pod \"oauth-openshift-76766fc778-hgv72\" (UID: \"cf5da698-f252-4430-998b-b9d17d7d91d6\") " pod="openshift-authentication/oauth-openshift-76766fc778-hgv72" Dec 06 05:32:51 crc kubenswrapper[4958]: I1206 05:32:51.114811 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/cf5da698-f252-4430-998b-b9d17d7d91d6-v4-0-config-user-template-login\") pod \"oauth-openshift-76766fc778-hgv72\" (UID: \"cf5da698-f252-4430-998b-b9d17d7d91d6\") " pod="openshift-authentication/oauth-openshift-76766fc778-hgv72" Dec 06 05:32:51 crc kubenswrapper[4958]: I1206 05:32:51.114822 4958 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/cf5da698-f252-4430-998b-b9d17d7d91d6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-76766fc778-hgv72\" (UID: \"cf5da698-f252-4430-998b-b9d17d7d91d6\") " pod="openshift-authentication/oauth-openshift-76766fc778-hgv72" Dec 06 05:32:51 crc kubenswrapper[4958]: I1206 05:32:51.116743 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/cf5da698-f252-4430-998b-b9d17d7d91d6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-76766fc778-hgv72\" (UID: \"cf5da698-f252-4430-998b-b9d17d7d91d6\") " pod="openshift-authentication/oauth-openshift-76766fc778-hgv72" Dec 06 05:32:51 crc kubenswrapper[4958]: I1206 05:32:51.116801 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/cf5da698-f252-4430-998b-b9d17d7d91d6-v4-0-config-system-router-certs\") pod \"oauth-openshift-76766fc778-hgv72\" (UID: \"cf5da698-f252-4430-998b-b9d17d7d91d6\") " pod="openshift-authentication/oauth-openshift-76766fc778-hgv72" Dec 06 05:32:51 crc kubenswrapper[4958]: I1206 05:32:51.116815 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/cf5da698-f252-4430-998b-b9d17d7d91d6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-76766fc778-hgv72\" (UID: \"cf5da698-f252-4430-998b-b9d17d7d91d6\") " pod="openshift-authentication/oauth-openshift-76766fc778-hgv72" Dec 06 05:32:51 crc kubenswrapper[4958]: I1206 05:32:51.117586 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/cf5da698-f252-4430-998b-b9d17d7d91d6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-76766fc778-hgv72\" (UID: \"cf5da698-f252-4430-998b-b9d17d7d91d6\") " pod="openshift-authentication/oauth-openshift-76766fc778-hgv72" Dec 06 05:32:51 crc kubenswrapper[4958]: I1206 05:32:51.121317 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/cf5da698-f252-4430-998b-b9d17d7d91d6-v4-0-config-system-session\") pod \"oauth-openshift-76766fc778-hgv72\" (UID: \"cf5da698-f252-4430-998b-b9d17d7d91d6\") " pod="openshift-authentication/oauth-openshift-76766fc778-hgv72" Dec 06 05:32:51 crc kubenswrapper[4958]: I1206 05:32:51.137407 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qt78m\" (UniqueName: \"kubernetes.io/projected/cf5da698-f252-4430-998b-b9d17d7d91d6-kube-api-access-qt78m\") pod \"oauth-openshift-76766fc778-hgv72\" (UID: \"cf5da698-f252-4430-998b-b9d17d7d91d6\") " pod="openshift-authentication/oauth-openshift-76766fc778-hgv72" Dec 06 05:32:51 crc kubenswrapper[4958]: I1206 05:32:51.149270 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Dec 06 05:32:51 crc kubenswrapper[4958]: I1206 05:32:51.190996 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Dec 06 05:32:51 crc kubenswrapper[4958]: I1206 05:32:51.205893 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-76766fc778-hgv72" Dec 06 05:32:51 crc kubenswrapper[4958]: I1206 05:32:51.250588 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Dec 06 05:32:51 crc kubenswrapper[4958]: I1206 05:32:51.263451 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Dec 06 05:32:51 crc kubenswrapper[4958]: I1206 05:32:51.267274 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Dec 06 05:32:51 crc kubenswrapper[4958]: I1206 05:32:51.290014 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Dec 06 05:32:51 crc kubenswrapper[4958]: I1206 05:32:51.406075 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Dec 06 05:32:51 crc kubenswrapper[4958]: I1206 05:32:51.462409 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Dec 06 05:32:51 crc kubenswrapper[4958]: I1206 05:32:51.539506 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Dec 06 05:32:51 crc kubenswrapper[4958]: I1206 05:32:51.628556 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Dec 06 05:32:51 crc kubenswrapper[4958]: I1206 05:32:51.641546 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Dec 06 05:32:51 crc kubenswrapper[4958]: I1206 05:32:51.643972 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Dec 06 05:32:51 crc kubenswrapper[4958]: I1206 05:32:51.660271 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Dec 06 05:32:51 crc kubenswrapper[4958]: I1206 05:32:51.698304 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Dec 06 05:32:51 crc kubenswrapper[4958]: I1206 05:32:51.740717 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Dec 06 05:32:51 crc kubenswrapper[4958]: I1206 05:32:51.752611 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 06 05:32:51 crc kubenswrapper[4958]: I1206 05:32:51.813175 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Dec 06 05:32:51 crc kubenswrapper[4958]: I1206 05:32:51.898307 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Dec 06 05:32:51 crc kubenswrapper[4958]: I1206 05:32:51.969582 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Dec 06 05:32:52 crc kubenswrapper[4958]: I1206 05:32:52.010936 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Dec 06 05:32:52 crc kubenswrapper[4958]: I1206 05:32:52.016351 4958 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-oauth-apiserver"/"audit-1" Dec 06 05:32:52 crc kubenswrapper[4958]: I1206 05:32:52.075833 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Dec 06 05:32:52 crc kubenswrapper[4958]: I1206 05:32:52.273943 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Dec 06 05:32:52 crc kubenswrapper[4958]: I1206 05:32:52.322906 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Dec 06 05:32:52 crc kubenswrapper[4958]: I1206 05:32:52.377506 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Dec 06 05:32:52 crc kubenswrapper[4958]: I1206 05:32:52.447652 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Dec 06 05:32:52 crc kubenswrapper[4958]: I1206 05:32:52.534580 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Dec 06 05:32:52 crc kubenswrapper[4958]: I1206 05:32:52.553265 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Dec 06 05:32:52 crc kubenswrapper[4958]: I1206 05:32:52.625334 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Dec 06 05:32:52 crc kubenswrapper[4958]: I1206 05:32:52.626162 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Dec 06 05:32:52 crc kubenswrapper[4958]: I1206 05:32:52.630108 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Dec 06 05:32:52 crc kubenswrapper[4958]: I1206 05:32:52.771975 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Dec 06 05:32:52 crc kubenswrapper[4958]: I1206 05:32:52.823984 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Dec 06 05:32:53 crc kubenswrapper[4958]: I1206 05:32:53.068768 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Dec 06 05:32:53 crc kubenswrapper[4958]: I1206 05:32:53.138029 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Dec 06 05:32:53 crc kubenswrapper[4958]: I1206 05:32:53.159739 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Dec 06 05:32:53 crc kubenswrapper[4958]: I1206 05:32:53.166682 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Dec 06 05:32:53 crc kubenswrapper[4958]: I1206 05:32:53.251236 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Dec 06 05:32:53 crc kubenswrapper[4958]: I1206 05:32:53.271855 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Dec 06 05:32:53 crc kubenswrapper[4958]: I1206 05:32:53.404064 4958 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Dec 06 05:32:53 crc kubenswrapper[4958]: I1206 05:32:53.476423 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Dec 06 05:32:53 crc kubenswrapper[4958]: I1206 05:32:53.491543 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Dec 06 05:32:53 crc kubenswrapper[4958]: I1206 05:32:53.542655 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Dec 06 05:32:53 crc kubenswrapper[4958]: I1206 05:32:53.592997 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Dec 06 05:32:53 crc kubenswrapper[4958]: I1206 05:32:53.659208 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Dec 06 05:32:53 crc kubenswrapper[4958]: I1206 05:32:53.736827 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Dec 06 05:32:53 crc kubenswrapper[4958]: I1206 05:32:53.743956 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Dec 06 05:32:53 crc kubenswrapper[4958]: I1206 05:32:53.811911 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Dec 06 05:32:53 crc kubenswrapper[4958]: I1206 05:32:53.881157 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Dec 06 05:32:53 crc kubenswrapper[4958]: I1206 05:32:53.888821 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Dec 06 05:32:53 crc kubenswrapper[4958]: I1206 05:32:53.947039 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Dec 06 05:32:54 crc kubenswrapper[4958]: I1206 05:32:54.007597 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 06 05:32:54 crc kubenswrapper[4958]: I1206 05:32:54.121724 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 06 05:32:54 crc kubenswrapper[4958]: I1206 05:32:54.148678 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 06 05:32:54 crc kubenswrapper[4958]: I1206 05:32:54.163036 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Dec 06 05:32:54 crc kubenswrapper[4958]: I1206 05:32:54.228356 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Dec 06 05:32:54 crc kubenswrapper[4958]: I1206 05:32:54.228634 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Dec 06 05:32:54 crc kubenswrapper[4958]: I1206 05:32:54.359930 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Dec 06 05:32:54 crc kubenswrapper[4958]: I1206 05:32:54.377243 4958 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-dns"/"dns-default-metrics-tls" Dec 06 05:32:54 crc kubenswrapper[4958]: I1206 05:32:54.502378 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 06 05:32:54 crc kubenswrapper[4958]: I1206 05:32:54.661533 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Dec 06 05:32:54 crc kubenswrapper[4958]: I1206 05:32:54.695019 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 06 05:32:55 crc kubenswrapper[4958]: I1206 05:32:55.017934 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 06 05:32:55 crc kubenswrapper[4958]: I1206 05:32:55.085236 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Dec 06 05:32:55 crc kubenswrapper[4958]: I1206 05:32:55.120768 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Dec 06 05:32:55 crc kubenswrapper[4958]: I1206 05:32:55.279274 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Dec 06 05:32:55 crc kubenswrapper[4958]: I1206 05:32:55.354126 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Dec 06 05:32:55 crc kubenswrapper[4958]: I1206 05:32:55.377746 4958 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 06 05:32:55 crc kubenswrapper[4958]: I1206 05:32:55.378181 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://c422b9bac3611b8f30ba0208a64095d9961061db5d7922b718a23b4bbc725f67" gracePeriod=5 Dec 06 05:32:55 crc kubenswrapper[4958]: I1206 05:32:55.397072 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Dec 06 05:32:55 crc kubenswrapper[4958]: I1206 05:32:55.397254 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Dec 06 05:32:55 crc kubenswrapper[4958]: I1206 05:32:55.478324 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 06 05:32:55 crc kubenswrapper[4958]: I1206 05:32:55.486035 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Dec 06 05:32:55 crc kubenswrapper[4958]: I1206 05:32:55.525410 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Dec 06 05:32:55 crc kubenswrapper[4958]: I1206 05:32:55.619182 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Dec 06 05:32:55 crc kubenswrapper[4958]: I1206 05:32:55.718148 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Dec 06 05:32:55 crc kubenswrapper[4958]: I1206 05:32:55.731233 4958 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Dec 06 05:32:55 crc kubenswrapper[4958]: I1206 05:32:55.747689 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Dec 06 05:32:55 crc kubenswrapper[4958]: I1206 05:32:55.796966 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Dec 06 05:32:55 crc kubenswrapper[4958]: I1206 05:32:55.838040 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Dec 06 05:32:55 crc kubenswrapper[4958]: I1206 05:32:55.881651 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 06 05:32:55 crc kubenswrapper[4958]: I1206 05:32:55.937827 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Dec 06 05:32:56 crc kubenswrapper[4958]: I1206 05:32:56.058069 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Dec 06 05:32:56 crc kubenswrapper[4958]: I1206 05:32:56.134441 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Dec 06 05:32:56 crc kubenswrapper[4958]: I1206 05:32:56.149948 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-76766fc778-hgv72"] Dec 06 05:32:56 crc kubenswrapper[4958]: I1206 05:32:56.154805 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Dec 06 05:32:56 crc kubenswrapper[4958]: I1206 05:32:56.245945 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 06 05:32:56 crc kubenswrapper[4958]: I1206 05:32:56.404373 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Dec 06 05:32:56 crc kubenswrapper[4958]: I1206 05:32:56.530659 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-76766fc778-hgv72"] Dec 06 05:32:56 crc kubenswrapper[4958]: I1206 05:32:56.537516 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Dec 06 05:32:56 crc kubenswrapper[4958]: I1206 05:32:56.584248 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Dec 06 05:32:56 crc kubenswrapper[4958]: I1206 05:32:56.589872 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Dec 06 05:32:56 crc kubenswrapper[4958]: I1206 05:32:56.591489 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Dec 06 05:32:56 crc kubenswrapper[4958]: I1206 05:32:56.751924 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Dec 06 05:32:56 crc kubenswrapper[4958]: I1206 05:32:56.819724 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Dec 06 05:32:57 crc kubenswrapper[4958]: I1206 05:32:57.077379 4958 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-authentication/oauth-openshift-76766fc778-hgv72" event={"ID":"cf5da698-f252-4430-998b-b9d17d7d91d6","Type":"ContainerStarted","Data":"489b985152d019430c75c131c31f07ad67e4b38a3ac43c30b1ac7c288d26f9bc"} Dec 06 05:32:57 crc kubenswrapper[4958]: I1206 05:32:57.077513 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Dec 06 05:32:57 crc kubenswrapper[4958]: I1206 05:32:57.145828 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Dec 06 05:32:57 crc kubenswrapper[4958]: I1206 05:32:57.240424 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Dec 06 05:32:57 crc kubenswrapper[4958]: I1206 05:32:57.311779 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 06 05:32:57 crc kubenswrapper[4958]: I1206 05:32:57.315131 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 06 05:32:57 crc kubenswrapper[4958]: I1206 05:32:57.498936 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Dec 06 05:32:57 crc kubenswrapper[4958]: I1206 05:32:57.622092 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Dec 06 05:32:57 crc kubenswrapper[4958]: I1206 05:32:57.723089 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Dec 06 05:32:57 crc kubenswrapper[4958]: I1206 05:32:57.758378 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Dec 06 05:32:57 crc kubenswrapper[4958]: I1206 05:32:57.927764 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Dec 06 05:32:58 crc kubenswrapper[4958]: I1206 05:32:58.087137 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-76766fc778-hgv72" event={"ID":"cf5da698-f252-4430-998b-b9d17d7d91d6","Type":"ContainerStarted","Data":"f151d25d346a30b30989734d35e80336effe1890fc96fd98270020857fa2b49b"} Dec 06 05:32:58 crc kubenswrapper[4958]: I1206 05:32:58.087642 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-76766fc778-hgv72" Dec 06 05:32:58 crc kubenswrapper[4958]: I1206 05:32:58.095975 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-76766fc778-hgv72" Dec 06 05:32:58 crc kubenswrapper[4958]: I1206 05:32:58.110567 4958 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Dec 06 05:32:58 crc kubenswrapper[4958]: I1206 05:32:58.120043 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-76766fc778-hgv72" podStartSLOduration=61.120008456 podStartE2EDuration="1m1.120008456s" podCreationTimestamp="2025-12-06 05:31:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:32:58.118603949 +0000 
UTC m=+288.652374722" watchObservedRunningTime="2025-12-06 05:32:58.120008456 +0000 UTC m=+288.653779259" Dec 06 05:32:58 crc kubenswrapper[4958]: I1206 05:32:58.138944 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Dec 06 05:32:58 crc kubenswrapper[4958]: I1206 05:32:58.201818 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Dec 06 05:32:58 crc kubenswrapper[4958]: I1206 05:32:58.466624 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Dec 06 05:32:59 crc kubenswrapper[4958]: I1206 05:32:59.183919 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Dec 06 05:32:59 crc kubenswrapper[4958]: I1206 05:32:59.310894 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Dec 06 05:32:59 crc kubenswrapper[4958]: I1206 05:32:59.403617 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Dec 06 05:32:59 crc kubenswrapper[4958]: I1206 05:32:59.895768 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Dec 06 05:33:01 crc kubenswrapper[4958]: I1206 05:33:01.104993 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 06 05:33:01 crc kubenswrapper[4958]: I1206 05:33:01.105057 4958 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="c422b9bac3611b8f30ba0208a64095d9961061db5d7922b718a23b4bbc725f67" exitCode=137 Dec 06 05:33:01 crc kubenswrapper[4958]: I1206 05:33:01.585000 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 06 05:33:01 crc kubenswrapper[4958]: I1206 05:33:01.585497 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 06 05:33:01 crc kubenswrapper[4958]: I1206 05:33:01.686245 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 06 05:33:01 crc kubenswrapper[4958]: I1206 05:33:01.686324 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 06 05:33:01 crc kubenswrapper[4958]: I1206 05:33:01.686408 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 06 05:33:01 crc kubenswrapper[4958]: I1206 05:33:01.686414 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 05:33:01 crc kubenswrapper[4958]: I1206 05:33:01.686456 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 06 05:33:01 crc kubenswrapper[4958]: I1206 05:33:01.686518 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 06 05:33:01 crc kubenswrapper[4958]: I1206 05:33:01.686522 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 05:33:01 crc kubenswrapper[4958]: I1206 05:33:01.686579 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 05:33:01 crc kubenswrapper[4958]: I1206 05:33:01.686747 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 05:33:01 crc kubenswrapper[4958]: I1206 05:33:01.686926 4958 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Dec 06 05:33:01 crc kubenswrapper[4958]: I1206 05:33:01.686952 4958 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Dec 06 05:33:01 crc kubenswrapper[4958]: I1206 05:33:01.686970 4958 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 06 05:33:01 crc kubenswrapper[4958]: I1206 05:33:01.686992 4958 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Dec 06 05:33:01 crc kubenswrapper[4958]: I1206 05:33:01.697810 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 05:33:01 crc kubenswrapper[4958]: I1206 05:33:01.774656 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Dec 06 05:33:01 crc kubenswrapper[4958]: I1206 05:33:01.775103 4958 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Dec 06 05:33:01 crc kubenswrapper[4958]: I1206 05:33:01.788467 4958 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 06 05:33:01 crc kubenswrapper[4958]: I1206 05:33:01.791095 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 06 05:33:01 crc kubenswrapper[4958]: I1206 05:33:01.791127 4958 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="29118993-02d0-4f4c-a1d3-f498b9aec24d" Dec 06 05:33:01 crc kubenswrapper[4958]: I1206 05:33:01.798157 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 06 05:33:01 crc kubenswrapper[4958]: I1206 05:33:01.798213 4958 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="29118993-02d0-4f4c-a1d3-f498b9aec24d" Dec 06 05:33:02 crc kubenswrapper[4958]: I1206 05:33:02.116338 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 06 05:33:02 crc kubenswrapper[4958]: I1206 05:33:02.116466 4958 scope.go:117] "RemoveContainer" containerID="c422b9bac3611b8f30ba0208a64095d9961061db5d7922b718a23b4bbc725f67" Dec 06 05:33:02 crc kubenswrapper[4958]: 
I1206 05:33:02.116627 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 06 05:33:18 crc kubenswrapper[4958]: I1206 05:33:18.929609 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Dec 06 05:33:19 crc kubenswrapper[4958]: I1206 05:33:19.801358 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-6lbg7"] Dec 06 05:33:19 crc kubenswrapper[4958]: I1206 05:33:19.801626 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-6lbg7" podUID="ade65bd8-3f30-4239-b717-a9912ea99316" containerName="controller-manager" containerID="cri-o://aeeecbda59305f14bfee154b02c085db0c96edeb4a9ed67e3518d4d2afd12e72" gracePeriod=30 Dec 06 05:33:19 crc kubenswrapper[4958]: I1206 05:33:19.900588 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-mshkv"] Dec 06 05:33:19 crc kubenswrapper[4958]: I1206 05:33:19.900794 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mshkv" podUID="56e39030-25e1-4a9e-97e3-d84e988ec0da" containerName="route-controller-manager" containerID="cri-o://565b4f635ce2773c778da20fad368f56cd001fd86c4733aa1c5f7f6874b07842" gracePeriod=30 Dec 06 05:33:20 crc kubenswrapper[4958]: I1206 05:33:20.702671 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-6lbg7" Dec 06 05:33:20 crc kubenswrapper[4958]: I1206 05:33:20.739162 4958 generic.go:334] "Generic (PLEG): container finished" podID="56e39030-25e1-4a9e-97e3-d84e988ec0da" containerID="565b4f635ce2773c778da20fad368f56cd001fd86c4733aa1c5f7f6874b07842" exitCode=0 Dec 06 05:33:20 crc kubenswrapper[4958]: I1206 05:33:20.739222 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mshkv" event={"ID":"56e39030-25e1-4a9e-97e3-d84e988ec0da","Type":"ContainerDied","Data":"565b4f635ce2773c778da20fad368f56cd001fd86c4733aa1c5f7f6874b07842"} Dec 06 05:33:20 crc kubenswrapper[4958]: I1206 05:33:20.741423 4958 generic.go:334] "Generic (PLEG): container finished" podID="ade65bd8-3f30-4239-b717-a9912ea99316" containerID="aeeecbda59305f14bfee154b02c085db0c96edeb4a9ed67e3518d4d2afd12e72" exitCode=0 Dec 06 05:33:20 crc kubenswrapper[4958]: I1206 05:33:20.741445 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-6lbg7" event={"ID":"ade65bd8-3f30-4239-b717-a9912ea99316","Type":"ContainerDied","Data":"aeeecbda59305f14bfee154b02c085db0c96edeb4a9ed67e3518d4d2afd12e72"} Dec 06 05:33:20 crc kubenswrapper[4958]: I1206 05:33:20.741481 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-6lbg7" event={"ID":"ade65bd8-3f30-4239-b717-a9912ea99316","Type":"ContainerDied","Data":"5abbb26e1cf9e239e674772ffbb9cb64e9d80fe698e3dd896fbc52d86e5cbbd7"} Dec 06 05:33:20 crc kubenswrapper[4958]: I1206 05:33:20.741484 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-6lbg7" Dec 06 05:33:20 crc kubenswrapper[4958]: I1206 05:33:20.741501 4958 scope.go:117] "RemoveContainer" containerID="aeeecbda59305f14bfee154b02c085db0c96edeb4a9ed67e3518d4d2afd12e72" Dec 06 05:33:20 crc kubenswrapper[4958]: I1206 05:33:20.746262 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ade65bd8-3f30-4239-b717-a9912ea99316-client-ca\") pod \"ade65bd8-3f30-4239-b717-a9912ea99316\" (UID: \"ade65bd8-3f30-4239-b717-a9912ea99316\") " Dec 06 05:33:20 crc kubenswrapper[4958]: I1206 05:33:20.746311 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ade65bd8-3f30-4239-b717-a9912ea99316-proxy-ca-bundles\") pod \"ade65bd8-3f30-4239-b717-a9912ea99316\" (UID: \"ade65bd8-3f30-4239-b717-a9912ea99316\") " Dec 06 05:33:20 crc kubenswrapper[4958]: I1206 05:33:20.746363 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ade65bd8-3f30-4239-b717-a9912ea99316-config\") pod \"ade65bd8-3f30-4239-b717-a9912ea99316\" (UID: \"ade65bd8-3f30-4239-b717-a9912ea99316\") " Dec 06 05:33:20 crc kubenswrapper[4958]: I1206 05:33:20.746392 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ade65bd8-3f30-4239-b717-a9912ea99316-serving-cert\") pod \"ade65bd8-3f30-4239-b717-a9912ea99316\" (UID: \"ade65bd8-3f30-4239-b717-a9912ea99316\") " Dec 06 05:33:20 crc kubenswrapper[4958]: I1206 05:33:20.746412 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v8phv\" (UniqueName: \"kubernetes.io/projected/ade65bd8-3f30-4239-b717-a9912ea99316-kube-api-access-v8phv\") pod \"ade65bd8-3f30-4239-b717-a9912ea99316\" (UID: \"ade65bd8-3f30-4239-b717-a9912ea99316\") " Dec 06 05:33:20 crc kubenswrapper[4958]: I1206 05:33:20.747263 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ade65bd8-3f30-4239-b717-a9912ea99316-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "ade65bd8-3f30-4239-b717-a9912ea99316" (UID: "ade65bd8-3f30-4239-b717-a9912ea99316"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:33:20 crc kubenswrapper[4958]: I1206 05:33:20.747989 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ade65bd8-3f30-4239-b717-a9912ea99316-config" (OuterVolumeSpecName: "config") pod "ade65bd8-3f30-4239-b717-a9912ea99316" (UID: "ade65bd8-3f30-4239-b717-a9912ea99316"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:33:20 crc kubenswrapper[4958]: I1206 05:33:20.750035 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ade65bd8-3f30-4239-b717-a9912ea99316-client-ca" (OuterVolumeSpecName: "client-ca") pod "ade65bd8-3f30-4239-b717-a9912ea99316" (UID: "ade65bd8-3f30-4239-b717-a9912ea99316"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:33:20 crc kubenswrapper[4958]: I1206 05:33:20.757763 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ade65bd8-3f30-4239-b717-a9912ea99316-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ade65bd8-3f30-4239-b717-a9912ea99316" (UID: "ade65bd8-3f30-4239-b717-a9912ea99316"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:33:20 crc kubenswrapper[4958]: I1206 05:33:20.757759 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ade65bd8-3f30-4239-b717-a9912ea99316-kube-api-access-v8phv" (OuterVolumeSpecName: "kube-api-access-v8phv") pod "ade65bd8-3f30-4239-b717-a9912ea99316" (UID: "ade65bd8-3f30-4239-b717-a9912ea99316"). InnerVolumeSpecName "kube-api-access-v8phv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:33:20 crc kubenswrapper[4958]: I1206 05:33:20.761367 4958 scope.go:117] "RemoveContainer" containerID="aeeecbda59305f14bfee154b02c085db0c96edeb4a9ed67e3518d4d2afd12e72" Dec 06 05:33:20 crc kubenswrapper[4958]: E1206 05:33:20.762881 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aeeecbda59305f14bfee154b02c085db0c96edeb4a9ed67e3518d4d2afd12e72\": container with ID starting with aeeecbda59305f14bfee154b02c085db0c96edeb4a9ed67e3518d4d2afd12e72 not found: ID does not exist" containerID="aeeecbda59305f14bfee154b02c085db0c96edeb4a9ed67e3518d4d2afd12e72" Dec 06 05:33:20 crc kubenswrapper[4958]: I1206 05:33:20.762922 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aeeecbda59305f14bfee154b02c085db0c96edeb4a9ed67e3518d4d2afd12e72"} err="failed to get container status \"aeeecbda59305f14bfee154b02c085db0c96edeb4a9ed67e3518d4d2afd12e72\": rpc error: code = NotFound desc = could not find container \"aeeecbda59305f14bfee154b02c085db0c96edeb4a9ed67e3518d4d2afd12e72\": container with ID starting with aeeecbda59305f14bfee154b02c085db0c96edeb4a9ed67e3518d4d2afd12e72 not found: ID does not exist" Dec 06 05:33:20 crc kubenswrapper[4958]: I1206 05:33:20.784842 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mshkv" Dec 06 05:33:20 crc kubenswrapper[4958]: I1206 05:33:20.850100 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ade65bd8-3f30-4239-b717-a9912ea99316-config\") on node \"crc\" DevicePath \"\"" Dec 06 05:33:20 crc kubenswrapper[4958]: I1206 05:33:20.850143 4958 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ade65bd8-3f30-4239-b717-a9912ea99316-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 06 05:33:20 crc kubenswrapper[4958]: I1206 05:33:20.850157 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v8phv\" (UniqueName: \"kubernetes.io/projected/ade65bd8-3f30-4239-b717-a9912ea99316-kube-api-access-v8phv\") on node \"crc\" DevicePath \"\"" Dec 06 05:33:20 crc kubenswrapper[4958]: I1206 05:33:20.850172 4958 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ade65bd8-3f30-4239-b717-a9912ea99316-client-ca\") on node \"crc\" DevicePath \"\"" Dec 06 05:33:20 crc kubenswrapper[4958]: I1206 05:33:20.850185 4958 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ade65bd8-3f30-4239-b717-a9912ea99316-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 06 05:33:20 crc kubenswrapper[4958]: I1206 05:33:20.950974 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xkkf4\" (UniqueName: \"kubernetes.io/projected/56e39030-25e1-4a9e-97e3-d84e988ec0da-kube-api-access-xkkf4\") pod \"56e39030-25e1-4a9e-97e3-d84e988ec0da\" (UID: \"56e39030-25e1-4a9e-97e3-d84e988ec0da\") " Dec 06 05:33:20 crc kubenswrapper[4958]: I1206 05:33:20.951060 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/56e39030-25e1-4a9e-97e3-d84e988ec0da-serving-cert\") pod \"56e39030-25e1-4a9e-97e3-d84e988ec0da\" (UID: \"56e39030-25e1-4a9e-97e3-d84e988ec0da\") " Dec 06 05:33:20 crc kubenswrapper[4958]: I1206 05:33:20.951101 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/56e39030-25e1-4a9e-97e3-d84e988ec0da-client-ca\") pod \"56e39030-25e1-4a9e-97e3-d84e988ec0da\" (UID: \"56e39030-25e1-4a9e-97e3-d84e988ec0da\") " Dec 06 05:33:20 crc kubenswrapper[4958]: I1206 05:33:20.951135 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56e39030-25e1-4a9e-97e3-d84e988ec0da-config\") pod \"56e39030-25e1-4a9e-97e3-d84e988ec0da\" (UID: \"56e39030-25e1-4a9e-97e3-d84e988ec0da\") " Dec 06 05:33:20 crc kubenswrapper[4958]: I1206 05:33:20.952252 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56e39030-25e1-4a9e-97e3-d84e988ec0da-client-ca" (OuterVolumeSpecName: "client-ca") pod "56e39030-25e1-4a9e-97e3-d84e988ec0da" (UID: "56e39030-25e1-4a9e-97e3-d84e988ec0da"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:33:20 crc kubenswrapper[4958]: I1206 05:33:20.952672 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56e39030-25e1-4a9e-97e3-d84e988ec0da-config" (OuterVolumeSpecName: "config") pod "56e39030-25e1-4a9e-97e3-d84e988ec0da" (UID: "56e39030-25e1-4a9e-97e3-d84e988ec0da"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:33:20 crc kubenswrapper[4958]: I1206 05:33:20.954655 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56e39030-25e1-4a9e-97e3-d84e988ec0da-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "56e39030-25e1-4a9e-97e3-d84e988ec0da" (UID: "56e39030-25e1-4a9e-97e3-d84e988ec0da"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:33:20 crc kubenswrapper[4958]: I1206 05:33:20.954787 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56e39030-25e1-4a9e-97e3-d84e988ec0da-kube-api-access-xkkf4" (OuterVolumeSpecName: "kube-api-access-xkkf4") pod "56e39030-25e1-4a9e-97e3-d84e988ec0da" (UID: "56e39030-25e1-4a9e-97e3-d84e988ec0da"). InnerVolumeSpecName "kube-api-access-xkkf4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:33:21 crc kubenswrapper[4958]: I1206 05:33:21.052943 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xkkf4\" (UniqueName: \"kubernetes.io/projected/56e39030-25e1-4a9e-97e3-d84e988ec0da-kube-api-access-xkkf4\") on node \"crc\" DevicePath \"\"" Dec 06 05:33:21 crc kubenswrapper[4958]: I1206 05:33:21.052988 4958 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/56e39030-25e1-4a9e-97e3-d84e988ec0da-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 06 05:33:21 crc kubenswrapper[4958]: I1206 05:33:21.053002 4958 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/56e39030-25e1-4a9e-97e3-d84e988ec0da-client-ca\") on node \"crc\" DevicePath \"\"" Dec 06 05:33:21 crc kubenswrapper[4958]: I1206 05:33:21.053013 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56e39030-25e1-4a9e-97e3-d84e988ec0da-config\") on node \"crc\" DevicePath \"\"" Dec 06 05:33:21 crc kubenswrapper[4958]: I1206 05:33:21.076240 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-6lbg7"] Dec 06 05:33:21 crc kubenswrapper[4958]: I1206 05:33:21.083929 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-6lbg7"] Dec 06 05:33:21 crc kubenswrapper[4958]: I1206 05:33:21.709847 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f998cd5d5-pxct4"] Dec 06 05:33:21 crc kubenswrapper[4958]: E1206 05:33:21.710356 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 06 05:33:21 crc kubenswrapper[4958]: I1206 05:33:21.710390 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 06 05:33:21 crc kubenswrapper[4958]: E1206 05:33:21.710414 4958 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="ade65bd8-3f30-4239-b717-a9912ea99316" containerName="controller-manager" Dec 06 05:33:21 crc kubenswrapper[4958]: I1206 05:33:21.710432 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="ade65bd8-3f30-4239-b717-a9912ea99316" containerName="controller-manager" Dec 06 05:33:21 crc kubenswrapper[4958]: E1206 05:33:21.710461 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56e39030-25e1-4a9e-97e3-d84e988ec0da" containerName="route-controller-manager" Dec 06 05:33:21 crc kubenswrapper[4958]: I1206 05:33:21.710517 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="56e39030-25e1-4a9e-97e3-d84e988ec0da" containerName="route-controller-manager" Dec 06 05:33:21 crc kubenswrapper[4958]: I1206 05:33:21.710879 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="56e39030-25e1-4a9e-97e3-d84e988ec0da" containerName="route-controller-manager" Dec 06 05:33:21 crc kubenswrapper[4958]: I1206 05:33:21.710918 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="ade65bd8-3f30-4239-b717-a9912ea99316" containerName="controller-manager" Dec 06 05:33:21 crc kubenswrapper[4958]: I1206 05:33:21.710941 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 06 05:33:21 crc kubenswrapper[4958]: I1206 05:33:21.711814 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7f998cd5d5-pxct4" Dec 06 05:33:21 crc kubenswrapper[4958]: I1206 05:33:21.719212 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-f46dd99d6-47f2d"] Dec 06 05:33:21 crc kubenswrapper[4958]: I1206 05:33:21.720581 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-f46dd99d6-47f2d" Dec 06 05:33:21 crc kubenswrapper[4958]: I1206 05:33:21.724350 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 06 05:33:21 crc kubenswrapper[4958]: I1206 05:33:21.725339 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 06 05:33:21 crc kubenswrapper[4958]: I1206 05:33:21.725807 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 06 05:33:21 crc kubenswrapper[4958]: I1206 05:33:21.726136 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 06 05:33:21 crc kubenswrapper[4958]: I1206 05:33:21.728148 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f998cd5d5-pxct4"] Dec 06 05:33:21 crc kubenswrapper[4958]: I1206 05:33:21.728343 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 06 05:33:21 crc kubenswrapper[4958]: I1206 05:33:21.735192 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 06 05:33:21 crc kubenswrapper[4958]: I1206 05:33:21.743195 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 06 05:33:21 crc kubenswrapper[4958]: I1206 05:33:21.745553 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-f46dd99d6-47f2d"] Dec 06 05:33:21 crc kubenswrapper[4958]: I1206 05:33:21.753345 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mshkv" event={"ID":"56e39030-25e1-4a9e-97e3-d84e988ec0da","Type":"ContainerDied","Data":"2dbf2a41315ebea94715404621c956b84bcbb704e8035963742b5200de4303b5"} Dec 06 05:33:21 crc kubenswrapper[4958]: I1206 05:33:21.753435 4958 scope.go:117] "RemoveContainer" containerID="565b4f635ce2773c778da20fad368f56cd001fd86c4733aa1c5f7f6874b07842" Dec 06 05:33:21 crc kubenswrapper[4958]: I1206 05:33:21.753656 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-mshkv" Dec 06 05:33:21 crc kubenswrapper[4958]: I1206 05:33:21.774297 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ade65bd8-3f30-4239-b717-a9912ea99316" path="/var/lib/kubelet/pods/ade65bd8-3f30-4239-b717-a9912ea99316/volumes" Dec 06 05:33:21 crc kubenswrapper[4958]: I1206 05:33:21.802570 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-mshkv"] Dec 06 05:33:21 crc kubenswrapper[4958]: I1206 05:33:21.806628 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-mshkv"] Dec 06 05:33:21 crc kubenswrapper[4958]: I1206 05:33:21.862527 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37ac085a-83c6-434c-bf62-bdf3911a38c4-config\") pod \"route-controller-manager-7f998cd5d5-pxct4\" (UID: \"37ac085a-83c6-434c-bf62-bdf3911a38c4\") " pod="openshift-route-controller-manager/route-controller-manager-7f998cd5d5-pxct4" Dec 06 05:33:21 crc kubenswrapper[4958]: I1206 05:33:21.862608 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/771b577b-8965-4bd0-b90f-2bf17739323a-client-ca\") pod \"controller-manager-f46dd99d6-47f2d\" (UID: \"771b577b-8965-4bd0-b90f-2bf17739323a\") " pod="openshift-controller-manager/controller-manager-f46dd99d6-47f2d" Dec 06 05:33:21 crc kubenswrapper[4958]: I1206 05:33:21.862646 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68fh9\" (UniqueName: \"kubernetes.io/projected/771b577b-8965-4bd0-b90f-2bf17739323a-kube-api-access-68fh9\") pod \"controller-manager-f46dd99d6-47f2d\" (UID: \"771b577b-8965-4bd0-b90f-2bf17739323a\") " pod="openshift-controller-manager/controller-manager-f46dd99d6-47f2d" Dec 06 05:33:21 crc kubenswrapper[4958]: I1206 05:33:21.862681 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/37ac085a-83c6-434c-bf62-bdf3911a38c4-client-ca\") pod \"route-controller-manager-7f998cd5d5-pxct4\" (UID: \"37ac085a-83c6-434c-bf62-bdf3911a38c4\") " pod="openshift-route-controller-manager/route-controller-manager-7f998cd5d5-pxct4" Dec 06 05:33:21 crc kubenswrapper[4958]: I1206 05:33:21.862814 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/771b577b-8965-4bd0-b90f-2bf17739323a-serving-cert\") pod \"controller-manager-f46dd99d6-47f2d\" (UID: \"771b577b-8965-4bd0-b90f-2bf17739323a\") " pod="openshift-controller-manager/controller-manager-f46dd99d6-47f2d" Dec 06 05:33:21 crc kubenswrapper[4958]: I1206 05:33:21.863046 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/771b577b-8965-4bd0-b90f-2bf17739323a-proxy-ca-bundles\") pod \"controller-manager-f46dd99d6-47f2d\" (UID: \"771b577b-8965-4bd0-b90f-2bf17739323a\") " pod="openshift-controller-manager/controller-manager-f46dd99d6-47f2d" Dec 06 05:33:21 crc kubenswrapper[4958]: I1206 05:33:21.863150 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/37ac085a-83c6-434c-bf62-bdf3911a38c4-serving-cert\") pod \"route-controller-manager-7f998cd5d5-pxct4\" (UID: \"37ac085a-83c6-434c-bf62-bdf3911a38c4\") " pod="openshift-route-controller-manager/route-controller-manager-7f998cd5d5-pxct4" Dec 06 05:33:21 crc kubenswrapper[4958]: I1206 05:33:21.863300 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kllv\" (UniqueName: \"kubernetes.io/projected/37ac085a-83c6-434c-bf62-bdf3911a38c4-kube-api-access-5kllv\") pod \"route-controller-manager-7f998cd5d5-pxct4\" (UID: \"37ac085a-83c6-434c-bf62-bdf3911a38c4\") " pod="openshift-route-controller-manager/route-controller-manager-7f998cd5d5-pxct4" Dec 06 05:33:21 crc kubenswrapper[4958]: I1206 05:33:21.863448 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/771b577b-8965-4bd0-b90f-2bf17739323a-config\") pod \"controller-manager-f46dd99d6-47f2d\" (UID: \"771b577b-8965-4bd0-b90f-2bf17739323a\") " pod="openshift-controller-manager/controller-manager-f46dd99d6-47f2d" Dec 06 05:33:21 crc kubenswrapper[4958]: I1206 05:33:21.965316 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68fh9\" (UniqueName: \"kubernetes.io/projected/771b577b-8965-4bd0-b90f-2bf17739323a-kube-api-access-68fh9\") pod \"controller-manager-f46dd99d6-47f2d\" (UID: \"771b577b-8965-4bd0-b90f-2bf17739323a\") " pod="openshift-controller-manager/controller-manager-f46dd99d6-47f2d" Dec 06 05:33:21 crc kubenswrapper[4958]: I1206 05:33:21.965560 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/37ac085a-83c6-434c-bf62-bdf3911a38c4-client-ca\") pod \"route-controller-manager-7f998cd5d5-pxct4\" (UID: \"37ac085a-83c6-434c-bf62-bdf3911a38c4\") " pod="openshift-route-controller-manager/route-controller-manager-7f998cd5d5-pxct4" Dec 06 05:33:21 crc kubenswrapper[4958]: I1206 05:33:21.965619 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/771b577b-8965-4bd0-b90f-2bf17739323a-serving-cert\") pod \"controller-manager-f46dd99d6-47f2d\" (UID: \"771b577b-8965-4bd0-b90f-2bf17739323a\") " pod="openshift-controller-manager/controller-manager-f46dd99d6-47f2d" Dec 06 05:33:21 crc kubenswrapper[4958]: I1206 05:33:21.965676 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/771b577b-8965-4bd0-b90f-2bf17739323a-proxy-ca-bundles\") pod \"controller-manager-f46dd99d6-47f2d\" (UID: \"771b577b-8965-4bd0-b90f-2bf17739323a\") " pod="openshift-controller-manager/controller-manager-f46dd99d6-47f2d" Dec 06 05:33:21 crc kubenswrapper[4958]: I1206 05:33:21.965723 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kllv\" (UniqueName: \"kubernetes.io/projected/37ac085a-83c6-434c-bf62-bdf3911a38c4-kube-api-access-5kllv\") pod \"route-controller-manager-7f998cd5d5-pxct4\" (UID: \"37ac085a-83c6-434c-bf62-bdf3911a38c4\") " pod="openshift-route-controller-manager/route-controller-manager-7f998cd5d5-pxct4" Dec 06 05:33:21 crc kubenswrapper[4958]: I1206 05:33:21.965758 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/37ac085a-83c6-434c-bf62-bdf3911a38c4-serving-cert\") pod \"route-controller-manager-7f998cd5d5-pxct4\" (UID: \"37ac085a-83c6-434c-bf62-bdf3911a38c4\") " pod="openshift-route-controller-manager/route-controller-manager-7f998cd5d5-pxct4" Dec 06 05:33:21 crc kubenswrapper[4958]: I1206 05:33:21.965815 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/771b577b-8965-4bd0-b90f-2bf17739323a-config\") pod \"controller-manager-f46dd99d6-47f2d\" (UID: \"771b577b-8965-4bd0-b90f-2bf17739323a\") " pod="openshift-controller-manager/controller-manager-f46dd99d6-47f2d" Dec 06 05:33:21 crc kubenswrapper[4958]: I1206 05:33:21.965876 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37ac085a-83c6-434c-bf62-bdf3911a38c4-config\") pod \"route-controller-manager-7f998cd5d5-pxct4\" (UID: \"37ac085a-83c6-434c-bf62-bdf3911a38c4\") " pod="openshift-route-controller-manager/route-controller-manager-7f998cd5d5-pxct4" Dec 06 05:33:21 crc kubenswrapper[4958]: I1206 05:33:21.965917 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/771b577b-8965-4bd0-b90f-2bf17739323a-client-ca\") pod \"controller-manager-f46dd99d6-47f2d\" (UID: \"771b577b-8965-4bd0-b90f-2bf17739323a\") " pod="openshift-controller-manager/controller-manager-f46dd99d6-47f2d" Dec 06 05:33:21 crc kubenswrapper[4958]: I1206 05:33:21.967412 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/771b577b-8965-4bd0-b90f-2bf17739323a-client-ca\") pod \"controller-manager-f46dd99d6-47f2d\" (UID: \"771b577b-8965-4bd0-b90f-2bf17739323a\") " pod="openshift-controller-manager/controller-manager-f46dd99d6-47f2d" Dec 06 05:33:21 crc kubenswrapper[4958]: I1206 05:33:21.969102 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/37ac085a-83c6-434c-bf62-bdf3911a38c4-client-ca\") pod \"route-controller-manager-7f998cd5d5-pxct4\" (UID: \"37ac085a-83c6-434c-bf62-bdf3911a38c4\") " pod="openshift-route-controller-manager/route-controller-manager-7f998cd5d5-pxct4" Dec 06 05:33:21 crc kubenswrapper[4958]: I1206 05:33:21.969721 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/771b577b-8965-4bd0-b90f-2bf17739323a-proxy-ca-bundles\") pod \"controller-manager-f46dd99d6-47f2d\" (UID: \"771b577b-8965-4bd0-b90f-2bf17739323a\") " pod="openshift-controller-manager/controller-manager-f46dd99d6-47f2d" Dec 06 05:33:21 crc kubenswrapper[4958]: I1206 05:33:21.969923 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/771b577b-8965-4bd0-b90f-2bf17739323a-config\") pod \"controller-manager-f46dd99d6-47f2d\" (UID: \"771b577b-8965-4bd0-b90f-2bf17739323a\") " pod="openshift-controller-manager/controller-manager-f46dd99d6-47f2d" Dec 06 05:33:21 crc kubenswrapper[4958]: I1206 05:33:21.970937 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37ac085a-83c6-434c-bf62-bdf3911a38c4-config\") pod \"route-controller-manager-7f998cd5d5-pxct4\" (UID: \"37ac085a-83c6-434c-bf62-bdf3911a38c4\") " pod="openshift-route-controller-manager/route-controller-manager-7f998cd5d5-pxct4" Dec 
06 05:33:21 crc kubenswrapper[4958]: I1206 05:33:21.988023 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/37ac085a-83c6-434c-bf62-bdf3911a38c4-serving-cert\") pod \"route-controller-manager-7f998cd5d5-pxct4\" (UID: \"37ac085a-83c6-434c-bf62-bdf3911a38c4\") " pod="openshift-route-controller-manager/route-controller-manager-7f998cd5d5-pxct4" Dec 06 05:33:21 crc kubenswrapper[4958]: I1206 05:33:21.994901 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/771b577b-8965-4bd0-b90f-2bf17739323a-serving-cert\") pod \"controller-manager-f46dd99d6-47f2d\" (UID: \"771b577b-8965-4bd0-b90f-2bf17739323a\") " pod="openshift-controller-manager/controller-manager-f46dd99d6-47f2d" Dec 06 05:33:21 crc kubenswrapper[4958]: I1206 05:33:21.997739 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68fh9\" (UniqueName: \"kubernetes.io/projected/771b577b-8965-4bd0-b90f-2bf17739323a-kube-api-access-68fh9\") pod \"controller-manager-f46dd99d6-47f2d\" (UID: \"771b577b-8965-4bd0-b90f-2bf17739323a\") " pod="openshift-controller-manager/controller-manager-f46dd99d6-47f2d" Dec 06 05:33:21 crc kubenswrapper[4958]: I1206 05:33:21.997765 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kllv\" (UniqueName: \"kubernetes.io/projected/37ac085a-83c6-434c-bf62-bdf3911a38c4-kube-api-access-5kllv\") pod \"route-controller-manager-7f998cd5d5-pxct4\" (UID: \"37ac085a-83c6-434c-bf62-bdf3911a38c4\") " pod="openshift-route-controller-manager/route-controller-manager-7f998cd5d5-pxct4" Dec 06 05:33:22 crc kubenswrapper[4958]: I1206 05:33:22.051940 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7f998cd5d5-pxct4" Dec 06 05:33:22 crc kubenswrapper[4958]: I1206 05:33:22.065007 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-f46dd99d6-47f2d" Dec 06 05:33:22 crc kubenswrapper[4958]: I1206 05:33:22.209723 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-f46dd99d6-47f2d"] Dec 06 05:33:22 crc kubenswrapper[4958]: I1206 05:33:22.225428 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f998cd5d5-pxct4"] Dec 06 05:33:22 crc kubenswrapper[4958]: I1206 05:33:22.370913 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-f46dd99d6-47f2d"] Dec 06 05:33:22 crc kubenswrapper[4958]: I1206 05:33:22.438701 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f998cd5d5-pxct4"] Dec 06 05:33:22 crc kubenswrapper[4958]: W1206 05:33:22.457244 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37ac085a_83c6_434c_bf62_bdf3911a38c4.slice/crio-06347cc02bfd4783f47e39b89fa9200d4ef6b90395308c6b8d06b336a0449e68 WatchSource:0}: Error finding container 06347cc02bfd4783f47e39b89fa9200d4ef6b90395308c6b8d06b336a0449e68: Status 404 returned error can't find the container with id 06347cc02bfd4783f47e39b89fa9200d4ef6b90395308c6b8d06b336a0449e68 Dec 06 05:33:22 crc kubenswrapper[4958]: I1206 05:33:22.765667 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f46dd99d6-47f2d" event={"ID":"771b577b-8965-4bd0-b90f-2bf17739323a","Type":"ContainerStarted","Data":"b9e9b549bb272b1c7af23f5836bfefd5c4154c87e115bee3200caa4b05168cd2"} Dec 06 05:33:22 crc kubenswrapper[4958]: I1206 05:33:22.765710 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f46dd99d6-47f2d" event={"ID":"771b577b-8965-4bd0-b90f-2bf17739323a","Type":"ContainerStarted","Data":"54a54008bfd5ad4095b78537d4bfdd88678e7882b889437136d1f060a83e08bf"} Dec 06 05:33:22 crc kubenswrapper[4958]: I1206 05:33:22.765744 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-f46dd99d6-47f2d" podUID="771b577b-8965-4bd0-b90f-2bf17739323a" containerName="controller-manager" containerID="cri-o://b9e9b549bb272b1c7af23f5836bfefd5c4154c87e115bee3200caa4b05168cd2" gracePeriod=30 Dec 06 05:33:22 crc kubenswrapper[4958]: I1206 05:33:22.765923 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-f46dd99d6-47f2d" Dec 06 05:33:22 crc kubenswrapper[4958]: I1206 05:33:22.768018 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7f998cd5d5-pxct4" event={"ID":"37ac085a-83c6-434c-bf62-bdf3911a38c4","Type":"ContainerStarted","Data":"19dfb2db6bfade31c732106517124fc380e448d2b07be985dc346e7aa71d387f"} Dec 06 05:33:22 crc kubenswrapper[4958]: I1206 05:33:22.768048 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7f998cd5d5-pxct4" event={"ID":"37ac085a-83c6-434c-bf62-bdf3911a38c4","Type":"ContainerStarted","Data":"06347cc02bfd4783f47e39b89fa9200d4ef6b90395308c6b8d06b336a0449e68"} Dec 06 05:33:22 crc kubenswrapper[4958]: I1206 05:33:22.768139 4958 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-route-controller-manager/route-controller-manager-7f998cd5d5-pxct4" podUID="37ac085a-83c6-434c-bf62-bdf3911a38c4" containerName="route-controller-manager" containerID="cri-o://19dfb2db6bfade31c732106517124fc380e448d2b07be985dc346e7aa71d387f" gracePeriod=30 Dec 06 05:33:22 crc kubenswrapper[4958]: I1206 05:33:22.768197 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7f998cd5d5-pxct4" Dec 06 05:33:22 crc kubenswrapper[4958]: I1206 05:33:22.780813 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-f46dd99d6-47f2d" Dec 06 05:33:22 crc kubenswrapper[4958]: I1206 05:33:22.793590 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-f46dd99d6-47f2d" podStartSLOduration=3.793573585 podStartE2EDuration="3.793573585s" podCreationTimestamp="2025-12-06 05:33:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:33:22.790040786 +0000 UTC m=+313.323811559" watchObservedRunningTime="2025-12-06 05:33:22.793573585 +0000 UTC m=+313.327344348" Dec 06 05:33:22 crc kubenswrapper[4958]: I1206 05:33:22.835637 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7f998cd5d5-pxct4" podStartSLOduration=3.835620835 podStartE2EDuration="3.835620835s" podCreationTimestamp="2025-12-06 05:33:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:33:22.832859853 +0000 UTC m=+313.366630626" watchObservedRunningTime="2025-12-06 05:33:22.835620835 +0000 UTC m=+313.369391598" Dec 06 05:33:22 crc kubenswrapper[4958]: I1206 05:33:22.884889 4958 patch_prober.go:28] interesting pod/route-controller-manager-7f998cd5d5-pxct4 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.57:8443/healthz\": read tcp 10.217.0.2:58040->10.217.0.57:8443: read: connection reset by peer" start-of-body= Dec 06 05:33:22 crc kubenswrapper[4958]: I1206 05:33:22.884937 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-7f998cd5d5-pxct4" podUID="37ac085a-83c6-434c-bf62-bdf3911a38c4" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.57:8443/healthz\": read tcp 10.217.0.2:58040->10.217.0.57:8443: read: connection reset by peer" Dec 06 05:33:23 crc kubenswrapper[4958]: I1206 05:33:23.075503 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-f46dd99d6-47f2d" Dec 06 05:33:23 crc kubenswrapper[4958]: I1206 05:33:23.080294 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/771b577b-8965-4bd0-b90f-2bf17739323a-client-ca\") pod \"771b577b-8965-4bd0-b90f-2bf17739323a\" (UID: \"771b577b-8965-4bd0-b90f-2bf17739323a\") " Dec 06 05:33:23 crc kubenswrapper[4958]: I1206 05:33:23.080341 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-68fh9\" (UniqueName: \"kubernetes.io/projected/771b577b-8965-4bd0-b90f-2bf17739323a-kube-api-access-68fh9\") pod \"771b577b-8965-4bd0-b90f-2bf17739323a\" (UID: \"771b577b-8965-4bd0-b90f-2bf17739323a\") " Dec 06 05:33:23 crc kubenswrapper[4958]: I1206 05:33:23.080393 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/771b577b-8965-4bd0-b90f-2bf17739323a-proxy-ca-bundles\") pod \"771b577b-8965-4bd0-b90f-2bf17739323a\" (UID: \"771b577b-8965-4bd0-b90f-2bf17739323a\") " Dec 06 05:33:23 crc kubenswrapper[4958]: I1206 05:33:23.080419 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/771b577b-8965-4bd0-b90f-2bf17739323a-serving-cert\") pod \"771b577b-8965-4bd0-b90f-2bf17739323a\" (UID: \"771b577b-8965-4bd0-b90f-2bf17739323a\") " Dec 06 05:33:23 crc kubenswrapper[4958]: I1206 05:33:23.080463 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/771b577b-8965-4bd0-b90f-2bf17739323a-config\") pod \"771b577b-8965-4bd0-b90f-2bf17739323a\" (UID: \"771b577b-8965-4bd0-b90f-2bf17739323a\") " Dec 06 05:33:23 crc kubenswrapper[4958]: I1206 05:33:23.081656 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/771b577b-8965-4bd0-b90f-2bf17739323a-config" (OuterVolumeSpecName: "config") pod "771b577b-8965-4bd0-b90f-2bf17739323a" (UID: "771b577b-8965-4bd0-b90f-2bf17739323a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:33:23 crc kubenswrapper[4958]: I1206 05:33:23.082051 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/771b577b-8965-4bd0-b90f-2bf17739323a-client-ca" (OuterVolumeSpecName: "client-ca") pod "771b577b-8965-4bd0-b90f-2bf17739323a" (UID: "771b577b-8965-4bd0-b90f-2bf17739323a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:33:23 crc kubenswrapper[4958]: I1206 05:33:23.082350 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/771b577b-8965-4bd0-b90f-2bf17739323a-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "771b577b-8965-4bd0-b90f-2bf17739323a" (UID: "771b577b-8965-4bd0-b90f-2bf17739323a"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:33:23 crc kubenswrapper[4958]: I1206 05:33:23.088077 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/771b577b-8965-4bd0-b90f-2bf17739323a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "771b577b-8965-4bd0-b90f-2bf17739323a" (UID: "771b577b-8965-4bd0-b90f-2bf17739323a"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:33:23 crc kubenswrapper[4958]: I1206 05:33:23.090629 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/771b577b-8965-4bd0-b90f-2bf17739323a-kube-api-access-68fh9" (OuterVolumeSpecName: "kube-api-access-68fh9") pod "771b577b-8965-4bd0-b90f-2bf17739323a" (UID: "771b577b-8965-4bd0-b90f-2bf17739323a"). InnerVolumeSpecName "kube-api-access-68fh9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:33:23 crc kubenswrapper[4958]: I1206 05:33:23.181717 4958 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/771b577b-8965-4bd0-b90f-2bf17739323a-client-ca\") on node \"crc\" DevicePath \"\"" Dec 06 05:33:23 crc kubenswrapper[4958]: I1206 05:33:23.182012 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-68fh9\" (UniqueName: \"kubernetes.io/projected/771b577b-8965-4bd0-b90f-2bf17739323a-kube-api-access-68fh9\") on node \"crc\" DevicePath \"\"" Dec 06 05:33:23 crc kubenswrapper[4958]: I1206 05:33:23.182110 4958 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/771b577b-8965-4bd0-b90f-2bf17739323a-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 06 05:33:23 crc kubenswrapper[4958]: I1206 05:33:23.182197 4958 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/771b577b-8965-4bd0-b90f-2bf17739323a-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 06 05:33:23 crc kubenswrapper[4958]: I1206 05:33:23.182313 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/771b577b-8965-4bd0-b90f-2bf17739323a-config\") on node \"crc\" DevicePath \"\"" Dec 06 05:33:23 crc kubenswrapper[4958]: I1206 05:33:23.210385 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-7f998cd5d5-pxct4_37ac085a-83c6-434c-bf62-bdf3911a38c4/route-controller-manager/0.log" Dec 06 05:33:23 crc kubenswrapper[4958]: I1206 05:33:23.210447 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7f998cd5d5-pxct4" Dec 06 05:33:23 crc kubenswrapper[4958]: I1206 05:33:23.282904 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5kllv\" (UniqueName: \"kubernetes.io/projected/37ac085a-83c6-434c-bf62-bdf3911a38c4-kube-api-access-5kllv\") pod \"37ac085a-83c6-434c-bf62-bdf3911a38c4\" (UID: \"37ac085a-83c6-434c-bf62-bdf3911a38c4\") " Dec 06 05:33:23 crc kubenswrapper[4958]: I1206 05:33:23.282957 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37ac085a-83c6-434c-bf62-bdf3911a38c4-config\") pod \"37ac085a-83c6-434c-bf62-bdf3911a38c4\" (UID: \"37ac085a-83c6-434c-bf62-bdf3911a38c4\") " Dec 06 05:33:23 crc kubenswrapper[4958]: I1206 05:33:23.283020 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/37ac085a-83c6-434c-bf62-bdf3911a38c4-client-ca\") pod \"37ac085a-83c6-434c-bf62-bdf3911a38c4\" (UID: \"37ac085a-83c6-434c-bf62-bdf3911a38c4\") " Dec 06 05:33:23 crc kubenswrapper[4958]: I1206 05:33:23.283081 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/37ac085a-83c6-434c-bf62-bdf3911a38c4-serving-cert\") pod \"37ac085a-83c6-434c-bf62-bdf3911a38c4\" (UID: \"37ac085a-83c6-434c-bf62-bdf3911a38c4\") " Dec 06 05:33:23 crc kubenswrapper[4958]: I1206 05:33:23.283970 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37ac085a-83c6-434c-bf62-bdf3911a38c4-config" (OuterVolumeSpecName: "config") pod "37ac085a-83c6-434c-bf62-bdf3911a38c4" (UID: "37ac085a-83c6-434c-bf62-bdf3911a38c4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:33:23 crc kubenswrapper[4958]: I1206 05:33:23.284018 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37ac085a-83c6-434c-bf62-bdf3911a38c4-client-ca" (OuterVolumeSpecName: "client-ca") pod "37ac085a-83c6-434c-bf62-bdf3911a38c4" (UID: "37ac085a-83c6-434c-bf62-bdf3911a38c4"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:33:23 crc kubenswrapper[4958]: I1206 05:33:23.285748 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37ac085a-83c6-434c-bf62-bdf3911a38c4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "37ac085a-83c6-434c-bf62-bdf3911a38c4" (UID: "37ac085a-83c6-434c-bf62-bdf3911a38c4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:33:23 crc kubenswrapper[4958]: I1206 05:33:23.285967 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37ac085a-83c6-434c-bf62-bdf3911a38c4-kube-api-access-5kllv" (OuterVolumeSpecName: "kube-api-access-5kllv") pod "37ac085a-83c6-434c-bf62-bdf3911a38c4" (UID: "37ac085a-83c6-434c-bf62-bdf3911a38c4"). InnerVolumeSpecName "kube-api-access-5kllv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:33:23 crc kubenswrapper[4958]: I1206 05:33:23.384635 4958 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/37ac085a-83c6-434c-bf62-bdf3911a38c4-client-ca\") on node \"crc\" DevicePath \"\"" Dec 06 05:33:23 crc kubenswrapper[4958]: I1206 05:33:23.384688 4958 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/37ac085a-83c6-434c-bf62-bdf3911a38c4-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 06 05:33:23 crc kubenswrapper[4958]: I1206 05:33:23.384713 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5kllv\" (UniqueName: \"kubernetes.io/projected/37ac085a-83c6-434c-bf62-bdf3911a38c4-kube-api-access-5kllv\") on node \"crc\" DevicePath \"\"" Dec 06 05:33:23 crc kubenswrapper[4958]: I1206 05:33:23.384733 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37ac085a-83c6-434c-bf62-bdf3911a38c4-config\") on node \"crc\" DevicePath \"\"" Dec 06 05:33:23 crc kubenswrapper[4958]: I1206 05:33:23.669309 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Dec 06 05:33:23 crc kubenswrapper[4958]: I1206 05:33:23.702395 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-774f84cbfb-vpfxx"] Dec 06 05:33:23 crc kubenswrapper[4958]: E1206 05:33:23.702676 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="771b577b-8965-4bd0-b90f-2bf17739323a" containerName="controller-manager" Dec 06 05:33:23 crc kubenswrapper[4958]: I1206 05:33:23.702692 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="771b577b-8965-4bd0-b90f-2bf17739323a" containerName="controller-manager" Dec 06 05:33:23 crc kubenswrapper[4958]: E1206 05:33:23.702707 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37ac085a-83c6-434c-bf62-bdf3911a38c4" containerName="route-controller-manager" Dec 06 05:33:23 crc kubenswrapper[4958]: I1206 05:33:23.702714 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="37ac085a-83c6-434c-bf62-bdf3911a38c4" containerName="route-controller-manager" Dec 06 05:33:23 crc kubenswrapper[4958]: I1206 05:33:23.702821 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="771b577b-8965-4bd0-b90f-2bf17739323a" containerName="controller-manager" Dec 06 05:33:23 crc kubenswrapper[4958]: I1206 05:33:23.702837 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="37ac085a-83c6-434c-bf62-bdf3911a38c4" containerName="route-controller-manager" Dec 06 05:33:23 crc kubenswrapper[4958]: I1206 05:33:23.703258 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-774f84cbfb-vpfxx" Dec 06 05:33:23 crc kubenswrapper[4958]: I1206 05:33:23.713588 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6799c5f44c-pwztl"] Dec 06 05:33:23 crc kubenswrapper[4958]: I1206 05:33:23.714800 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6799c5f44c-pwztl" Dec 06 05:33:23 crc kubenswrapper[4958]: I1206 05:33:23.723248 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-774f84cbfb-vpfxx"] Dec 06 05:33:23 crc kubenswrapper[4958]: I1206 05:33:23.751018 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6799c5f44c-pwztl"] Dec 06 05:33:23 crc kubenswrapper[4958]: I1206 05:33:23.776035 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56e39030-25e1-4a9e-97e3-d84e988ec0da" path="/var/lib/kubelet/pods/56e39030-25e1-4a9e-97e3-d84e988ec0da/volumes" Dec 06 05:33:23 crc kubenswrapper[4958]: I1206 05:33:23.777604 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-7f998cd5d5-pxct4_37ac085a-83c6-434c-bf62-bdf3911a38c4/route-controller-manager/0.log" Dec 06 05:33:23 crc kubenswrapper[4958]: I1206 05:33:23.777654 4958 generic.go:334] "Generic (PLEG): container finished" podID="37ac085a-83c6-434c-bf62-bdf3911a38c4" containerID="19dfb2db6bfade31c732106517124fc380e448d2b07be985dc346e7aa71d387f" exitCode=255 Dec 06 05:33:23 crc kubenswrapper[4958]: I1206 05:33:23.777725 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7f998cd5d5-pxct4" event={"ID":"37ac085a-83c6-434c-bf62-bdf3911a38c4","Type":"ContainerDied","Data":"19dfb2db6bfade31c732106517124fc380e448d2b07be985dc346e7aa71d387f"} Dec 06 05:33:23 crc kubenswrapper[4958]: I1206 05:33:23.777750 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7f998cd5d5-pxct4" event={"ID":"37ac085a-83c6-434c-bf62-bdf3911a38c4","Type":"ContainerDied","Data":"06347cc02bfd4783f47e39b89fa9200d4ef6b90395308c6b8d06b336a0449e68"} Dec 06 05:33:23 crc kubenswrapper[4958]: I1206 05:33:23.777791 4958 scope.go:117] "RemoveContainer" containerID="19dfb2db6bfade31c732106517124fc380e448d2b07be985dc346e7aa71d387f" Dec 06 05:33:23 crc kubenswrapper[4958]: I1206 05:33:23.777955 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7f998cd5d5-pxct4" Dec 06 05:33:23 crc kubenswrapper[4958]: I1206 05:33:23.781130 4958 generic.go:334] "Generic (PLEG): container finished" podID="771b577b-8965-4bd0-b90f-2bf17739323a" containerID="b9e9b549bb272b1c7af23f5836bfefd5c4154c87e115bee3200caa4b05168cd2" exitCode=0 Dec 06 05:33:23 crc kubenswrapper[4958]: I1206 05:33:23.781172 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f46dd99d6-47f2d" event={"ID":"771b577b-8965-4bd0-b90f-2bf17739323a","Type":"ContainerDied","Data":"b9e9b549bb272b1c7af23f5836bfefd5c4154c87e115bee3200caa4b05168cd2"} Dec 06 05:33:23 crc kubenswrapper[4958]: I1206 05:33:23.781198 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-f46dd99d6-47f2d" event={"ID":"771b577b-8965-4bd0-b90f-2bf17739323a","Type":"ContainerDied","Data":"54a54008bfd5ad4095b78537d4bfdd88678e7882b889437136d1f060a83e08bf"} Dec 06 05:33:23 crc kubenswrapper[4958]: I1206 05:33:23.781253 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-f46dd99d6-47f2d" Dec 06 05:33:23 crc kubenswrapper[4958]: I1206 05:33:23.790591 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a023c338-584f-4c04-9a22-dc1bf45234c2-proxy-ca-bundles\") pod \"controller-manager-6799c5f44c-pwztl\" (UID: \"a023c338-584f-4c04-9a22-dc1bf45234c2\") " pod="openshift-controller-manager/controller-manager-6799c5f44c-pwztl" Dec 06 05:33:23 crc kubenswrapper[4958]: I1206 05:33:23.790755 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee456f64-a8a7-4baf-950b-7ef43c4002cb-config\") pod \"route-controller-manager-774f84cbfb-vpfxx\" (UID: \"ee456f64-a8a7-4baf-950b-7ef43c4002cb\") " pod="openshift-route-controller-manager/route-controller-manager-774f84cbfb-vpfxx" Dec 06 05:33:23 crc kubenswrapper[4958]: I1206 05:33:23.790807 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rb8c9\" (UniqueName: \"kubernetes.io/projected/a023c338-584f-4c04-9a22-dc1bf45234c2-kube-api-access-rb8c9\") pod \"controller-manager-6799c5f44c-pwztl\" (UID: \"a023c338-584f-4c04-9a22-dc1bf45234c2\") " pod="openshift-controller-manager/controller-manager-6799c5f44c-pwztl" Dec 06 05:33:23 crc kubenswrapper[4958]: I1206 05:33:23.790871 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmtcn\" (UniqueName: \"kubernetes.io/projected/ee456f64-a8a7-4baf-950b-7ef43c4002cb-kube-api-access-zmtcn\") pod \"route-controller-manager-774f84cbfb-vpfxx\" (UID: \"ee456f64-a8a7-4baf-950b-7ef43c4002cb\") " pod="openshift-route-controller-manager/route-controller-manager-774f84cbfb-vpfxx" Dec 06 05:33:23 crc kubenswrapper[4958]: I1206 05:33:23.790961 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a023c338-584f-4c04-9a22-dc1bf45234c2-serving-cert\") pod \"controller-manager-6799c5f44c-pwztl\" (UID: \"a023c338-584f-4c04-9a22-dc1bf45234c2\") " pod="openshift-controller-manager/controller-manager-6799c5f44c-pwztl" Dec 06 05:33:23 crc kubenswrapper[4958]: I1206 05:33:23.791029 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a023c338-584f-4c04-9a22-dc1bf45234c2-config\") pod \"controller-manager-6799c5f44c-pwztl\" (UID: \"a023c338-584f-4c04-9a22-dc1bf45234c2\") " pod="openshift-controller-manager/controller-manager-6799c5f44c-pwztl" Dec 06 05:33:23 crc kubenswrapper[4958]: I1206 05:33:23.791071 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a023c338-584f-4c04-9a22-dc1bf45234c2-client-ca\") pod \"controller-manager-6799c5f44c-pwztl\" (UID: \"a023c338-584f-4c04-9a22-dc1bf45234c2\") " pod="openshift-controller-manager/controller-manager-6799c5f44c-pwztl" Dec 06 05:33:23 crc kubenswrapper[4958]: I1206 05:33:23.791129 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee456f64-a8a7-4baf-950b-7ef43c4002cb-serving-cert\") pod \"route-controller-manager-774f84cbfb-vpfxx\" (UID: 
\"ee456f64-a8a7-4baf-950b-7ef43c4002cb\") " pod="openshift-route-controller-manager/route-controller-manager-774f84cbfb-vpfxx" Dec 06 05:33:23 crc kubenswrapper[4958]: I1206 05:33:23.791171 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ee456f64-a8a7-4baf-950b-7ef43c4002cb-client-ca\") pod \"route-controller-manager-774f84cbfb-vpfxx\" (UID: \"ee456f64-a8a7-4baf-950b-7ef43c4002cb\") " pod="openshift-route-controller-manager/route-controller-manager-774f84cbfb-vpfxx" Dec 06 05:33:23 crc kubenswrapper[4958]: I1206 05:33:23.818251 4958 scope.go:117] "RemoveContainer" containerID="19dfb2db6bfade31c732106517124fc380e448d2b07be985dc346e7aa71d387f" Dec 06 05:33:23 crc kubenswrapper[4958]: E1206 05:33:23.821051 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19dfb2db6bfade31c732106517124fc380e448d2b07be985dc346e7aa71d387f\": container with ID starting with 19dfb2db6bfade31c732106517124fc380e448d2b07be985dc346e7aa71d387f not found: ID does not exist" containerID="19dfb2db6bfade31c732106517124fc380e448d2b07be985dc346e7aa71d387f" Dec 06 05:33:23 crc kubenswrapper[4958]: I1206 05:33:23.821114 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19dfb2db6bfade31c732106517124fc380e448d2b07be985dc346e7aa71d387f"} err="failed to get container status \"19dfb2db6bfade31c732106517124fc380e448d2b07be985dc346e7aa71d387f\": rpc error: code = NotFound desc = could not find container \"19dfb2db6bfade31c732106517124fc380e448d2b07be985dc346e7aa71d387f\": container with ID starting with 19dfb2db6bfade31c732106517124fc380e448d2b07be985dc346e7aa71d387f not found: ID does not exist" Dec 06 05:33:23 crc kubenswrapper[4958]: I1206 05:33:23.821153 4958 scope.go:117] "RemoveContainer" containerID="b9e9b549bb272b1c7af23f5836bfefd5c4154c87e115bee3200caa4b05168cd2" Dec 06 05:33:23 crc kubenswrapper[4958]: I1206 05:33:23.821298 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-f46dd99d6-47f2d"] Dec 06 05:33:23 crc kubenswrapper[4958]: I1206 05:33:23.823830 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-f46dd99d6-47f2d"] Dec 06 05:33:23 crc kubenswrapper[4958]: I1206 05:33:23.836162 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f998cd5d5-pxct4"] Dec 06 05:33:23 crc kubenswrapper[4958]: I1206 05:33:23.837400 4958 scope.go:117] "RemoveContainer" containerID="b9e9b549bb272b1c7af23f5836bfefd5c4154c87e115bee3200caa4b05168cd2" Dec 06 05:33:23 crc kubenswrapper[4958]: E1206 05:33:23.837863 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9e9b549bb272b1c7af23f5836bfefd5c4154c87e115bee3200caa4b05168cd2\": container with ID starting with b9e9b549bb272b1c7af23f5836bfefd5c4154c87e115bee3200caa4b05168cd2 not found: ID does not exist" containerID="b9e9b549bb272b1c7af23f5836bfefd5c4154c87e115bee3200caa4b05168cd2" Dec 06 05:33:23 crc kubenswrapper[4958]: I1206 05:33:23.837898 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9e9b549bb272b1c7af23f5836bfefd5c4154c87e115bee3200caa4b05168cd2"} err="failed to get container status \"b9e9b549bb272b1c7af23f5836bfefd5c4154c87e115bee3200caa4b05168cd2\": 
rpc error: code = NotFound desc = could not find container \"b9e9b549bb272b1c7af23f5836bfefd5c4154c87e115bee3200caa4b05168cd2\": container with ID starting with b9e9b549bb272b1c7af23f5836bfefd5c4154c87e115bee3200caa4b05168cd2 not found: ID does not exist" Dec 06 05:33:23 crc kubenswrapper[4958]: I1206 05:33:23.841750 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7f998cd5d5-pxct4"] Dec 06 05:33:23 crc kubenswrapper[4958]: I1206 05:33:23.893773 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee456f64-a8a7-4baf-950b-7ef43c4002cb-serving-cert\") pod \"route-controller-manager-774f84cbfb-vpfxx\" (UID: \"ee456f64-a8a7-4baf-950b-7ef43c4002cb\") " pod="openshift-route-controller-manager/route-controller-manager-774f84cbfb-vpfxx" Dec 06 05:33:23 crc kubenswrapper[4958]: I1206 05:33:23.893881 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ee456f64-a8a7-4baf-950b-7ef43c4002cb-client-ca\") pod \"route-controller-manager-774f84cbfb-vpfxx\" (UID: \"ee456f64-a8a7-4baf-950b-7ef43c4002cb\") " pod="openshift-route-controller-manager/route-controller-manager-774f84cbfb-vpfxx" Dec 06 05:33:23 crc kubenswrapper[4958]: I1206 05:33:23.893987 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a023c338-584f-4c04-9a22-dc1bf45234c2-proxy-ca-bundles\") pod \"controller-manager-6799c5f44c-pwztl\" (UID: \"a023c338-584f-4c04-9a22-dc1bf45234c2\") " pod="openshift-controller-manager/controller-manager-6799c5f44c-pwztl" Dec 06 05:33:23 crc kubenswrapper[4958]: I1206 05:33:23.894110 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee456f64-a8a7-4baf-950b-7ef43c4002cb-config\") pod \"route-controller-manager-774f84cbfb-vpfxx\" (UID: \"ee456f64-a8a7-4baf-950b-7ef43c4002cb\") " pod="openshift-route-controller-manager/route-controller-manager-774f84cbfb-vpfxx" Dec 06 05:33:23 crc kubenswrapper[4958]: I1206 05:33:23.894161 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rb8c9\" (UniqueName: \"kubernetes.io/projected/a023c338-584f-4c04-9a22-dc1bf45234c2-kube-api-access-rb8c9\") pod \"controller-manager-6799c5f44c-pwztl\" (UID: \"a023c338-584f-4c04-9a22-dc1bf45234c2\") " pod="openshift-controller-manager/controller-manager-6799c5f44c-pwztl" Dec 06 05:33:23 crc kubenswrapper[4958]: I1206 05:33:23.894233 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmtcn\" (UniqueName: \"kubernetes.io/projected/ee456f64-a8a7-4baf-950b-7ef43c4002cb-kube-api-access-zmtcn\") pod \"route-controller-manager-774f84cbfb-vpfxx\" (UID: \"ee456f64-a8a7-4baf-950b-7ef43c4002cb\") " pod="openshift-route-controller-manager/route-controller-manager-774f84cbfb-vpfxx" Dec 06 05:33:23 crc kubenswrapper[4958]: I1206 05:33:23.894325 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a023c338-584f-4c04-9a22-dc1bf45234c2-serving-cert\") pod \"controller-manager-6799c5f44c-pwztl\" (UID: \"a023c338-584f-4c04-9a22-dc1bf45234c2\") " pod="openshift-controller-manager/controller-manager-6799c5f44c-pwztl" Dec 06 05:33:23 crc kubenswrapper[4958]: I1206 05:33:23.894403 4958 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a023c338-584f-4c04-9a22-dc1bf45234c2-config\") pod \"controller-manager-6799c5f44c-pwztl\" (UID: \"a023c338-584f-4c04-9a22-dc1bf45234c2\") " pod="openshift-controller-manager/controller-manager-6799c5f44c-pwztl" Dec 06 05:33:23 crc kubenswrapper[4958]: I1206 05:33:23.894468 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a023c338-584f-4c04-9a22-dc1bf45234c2-client-ca\") pod \"controller-manager-6799c5f44c-pwztl\" (UID: \"a023c338-584f-4c04-9a22-dc1bf45234c2\") " pod="openshift-controller-manager/controller-manager-6799c5f44c-pwztl" Dec 06 05:33:23 crc kubenswrapper[4958]: I1206 05:33:23.895544 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ee456f64-a8a7-4baf-950b-7ef43c4002cb-client-ca\") pod \"route-controller-manager-774f84cbfb-vpfxx\" (UID: \"ee456f64-a8a7-4baf-950b-7ef43c4002cb\") " pod="openshift-route-controller-manager/route-controller-manager-774f84cbfb-vpfxx" Dec 06 05:33:23 crc kubenswrapper[4958]: I1206 05:33:23.895810 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a023c338-584f-4c04-9a22-dc1bf45234c2-proxy-ca-bundles\") pod \"controller-manager-6799c5f44c-pwztl\" (UID: \"a023c338-584f-4c04-9a22-dc1bf45234c2\") " pod="openshift-controller-manager/controller-manager-6799c5f44c-pwztl" Dec 06 05:33:23 crc kubenswrapper[4958]: I1206 05:33:23.896686 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee456f64-a8a7-4baf-950b-7ef43c4002cb-config\") pod \"route-controller-manager-774f84cbfb-vpfxx\" (UID: \"ee456f64-a8a7-4baf-950b-7ef43c4002cb\") " pod="openshift-route-controller-manager/route-controller-manager-774f84cbfb-vpfxx" Dec 06 05:33:23 crc kubenswrapper[4958]: I1206 05:33:23.899077 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a023c338-584f-4c04-9a22-dc1bf45234c2-serving-cert\") pod \"controller-manager-6799c5f44c-pwztl\" (UID: \"a023c338-584f-4c04-9a22-dc1bf45234c2\") " pod="openshift-controller-manager/controller-manager-6799c5f44c-pwztl" Dec 06 05:33:23 crc kubenswrapper[4958]: I1206 05:33:23.899286 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a023c338-584f-4c04-9a22-dc1bf45234c2-client-ca\") pod \"controller-manager-6799c5f44c-pwztl\" (UID: \"a023c338-584f-4c04-9a22-dc1bf45234c2\") " pod="openshift-controller-manager/controller-manager-6799c5f44c-pwztl" Dec 06 05:33:23 crc kubenswrapper[4958]: I1206 05:33:23.899550 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a023c338-584f-4c04-9a22-dc1bf45234c2-config\") pod \"controller-manager-6799c5f44c-pwztl\" (UID: \"a023c338-584f-4c04-9a22-dc1bf45234c2\") " pod="openshift-controller-manager/controller-manager-6799c5f44c-pwztl" Dec 06 05:33:23 crc kubenswrapper[4958]: I1206 05:33:23.901291 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee456f64-a8a7-4baf-950b-7ef43c4002cb-serving-cert\") pod \"route-controller-manager-774f84cbfb-vpfxx\" (UID: \"ee456f64-a8a7-4baf-950b-7ef43c4002cb\") 
" pod="openshift-route-controller-manager/route-controller-manager-774f84cbfb-vpfxx" Dec 06 05:33:23 crc kubenswrapper[4958]: I1206 05:33:23.916319 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmtcn\" (UniqueName: \"kubernetes.io/projected/ee456f64-a8a7-4baf-950b-7ef43c4002cb-kube-api-access-zmtcn\") pod \"route-controller-manager-774f84cbfb-vpfxx\" (UID: \"ee456f64-a8a7-4baf-950b-7ef43c4002cb\") " pod="openshift-route-controller-manager/route-controller-manager-774f84cbfb-vpfxx" Dec 06 05:33:23 crc kubenswrapper[4958]: I1206 05:33:23.923026 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rb8c9\" (UniqueName: \"kubernetes.io/projected/a023c338-584f-4c04-9a22-dc1bf45234c2-kube-api-access-rb8c9\") pod \"controller-manager-6799c5f44c-pwztl\" (UID: \"a023c338-584f-4c04-9a22-dc1bf45234c2\") " pod="openshift-controller-manager/controller-manager-6799c5f44c-pwztl" Dec 06 05:33:24 crc kubenswrapper[4958]: I1206 05:33:24.045346 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-774f84cbfb-vpfxx" Dec 06 05:33:24 crc kubenswrapper[4958]: I1206 05:33:24.069660 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6799c5f44c-pwztl" Dec 06 05:33:24 crc kubenswrapper[4958]: I1206 05:33:24.250153 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-774f84cbfb-vpfxx"] Dec 06 05:33:24 crc kubenswrapper[4958]: I1206 05:33:24.318247 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6799c5f44c-pwztl"] Dec 06 05:33:24 crc kubenswrapper[4958]: W1206 05:33:24.329616 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda023c338_584f_4c04_9a22_dc1bf45234c2.slice/crio-32b10af34a11e5d64e2b84016ddc3bfa8d56c0adba8a2a86bbcf5a190ba37ac5 WatchSource:0}: Error finding container 32b10af34a11e5d64e2b84016ddc3bfa8d56c0adba8a2a86bbcf5a190ba37ac5: Status 404 returned error can't find the container with id 32b10af34a11e5d64e2b84016ddc3bfa8d56c0adba8a2a86bbcf5a190ba37ac5 Dec 06 05:33:24 crc kubenswrapper[4958]: I1206 05:33:24.789574 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6799c5f44c-pwztl" event={"ID":"a023c338-584f-4c04-9a22-dc1bf45234c2","Type":"ContainerStarted","Data":"0ef385357edb7ff25f4a04294feb647dc14908c904e40facad8d374136a6f46e"} Dec 06 05:33:24 crc kubenswrapper[4958]: I1206 05:33:24.789918 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6799c5f44c-pwztl" event={"ID":"a023c338-584f-4c04-9a22-dc1bf45234c2","Type":"ContainerStarted","Data":"32b10af34a11e5d64e2b84016ddc3bfa8d56c0adba8a2a86bbcf5a190ba37ac5"} Dec 06 05:33:24 crc kubenswrapper[4958]: I1206 05:33:24.789938 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6799c5f44c-pwztl" Dec 06 05:33:24 crc kubenswrapper[4958]: I1206 05:33:24.792435 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-774f84cbfb-vpfxx" 
event={"ID":"ee456f64-a8a7-4baf-950b-7ef43c4002cb","Type":"ContainerStarted","Data":"79d8791edacf41a1c6bb891ba9f742c3d06c6e5ed5ca823fa03d3706a15509ce"} Dec 06 05:33:24 crc kubenswrapper[4958]: I1206 05:33:24.792526 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-774f84cbfb-vpfxx" Dec 06 05:33:24 crc kubenswrapper[4958]: I1206 05:33:24.792546 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-774f84cbfb-vpfxx" event={"ID":"ee456f64-a8a7-4baf-950b-7ef43c4002cb","Type":"ContainerStarted","Data":"8955ef74c9a028405af56b91105382f1e60363bb6ae9aaf623a782e1bfb9aae1"} Dec 06 05:33:24 crc kubenswrapper[4958]: I1206 05:33:24.798454 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-6799c5f44c-pwztl" Dec 06 05:33:24 crc kubenswrapper[4958]: I1206 05:33:24.808894 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-6799c5f44c-pwztl" podStartSLOduration=2.80887545 podStartE2EDuration="2.80887545s" podCreationTimestamp="2025-12-06 05:33:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:33:24.807308237 +0000 UTC m=+315.341079010" watchObservedRunningTime="2025-12-06 05:33:24.80887545 +0000 UTC m=+315.342646233" Dec 06 05:33:24 crc kubenswrapper[4958]: I1206 05:33:24.839352 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-774f84cbfb-vpfxx" podStartSLOduration=2.8393339109999998 podStartE2EDuration="2.839333911s" podCreationTimestamp="2025-12-06 05:33:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:33:24.835826463 +0000 UTC m=+315.369597266" watchObservedRunningTime="2025-12-06 05:33:24.839333911 +0000 UTC m=+315.373104674" Dec 06 05:33:24 crc kubenswrapper[4958]: I1206 05:33:24.859334 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-774f84cbfb-vpfxx" Dec 06 05:33:25 crc kubenswrapper[4958]: I1206 05:33:25.768819 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37ac085a-83c6-434c-bf62-bdf3911a38c4" path="/var/lib/kubelet/pods/37ac085a-83c6-434c-bf62-bdf3911a38c4/volumes" Dec 06 05:33:25 crc kubenswrapper[4958]: I1206 05:33:25.769375 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="771b577b-8965-4bd0-b90f-2bf17739323a" path="/var/lib/kubelet/pods/771b577b-8965-4bd0-b90f-2bf17739323a/volumes" Dec 06 05:33:37 crc kubenswrapper[4958]: I1206 05:33:37.379634 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Dec 06 05:33:39 crc kubenswrapper[4958]: I1206 05:33:39.814391 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-774f84cbfb-vpfxx"] Dec 06 05:33:39 crc kubenswrapper[4958]: I1206 05:33:39.814645 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-774f84cbfb-vpfxx" podUID="ee456f64-a8a7-4baf-950b-7ef43c4002cb" 
containerName="route-controller-manager" containerID="cri-o://79d8791edacf41a1c6bb891ba9f742c3d06c6e5ed5ca823fa03d3706a15509ce" gracePeriod=30 Dec 06 05:33:40 crc kubenswrapper[4958]: I1206 05:33:40.886900 4958 generic.go:334] "Generic (PLEG): container finished" podID="ee456f64-a8a7-4baf-950b-7ef43c4002cb" containerID="79d8791edacf41a1c6bb891ba9f742c3d06c6e5ed5ca823fa03d3706a15509ce" exitCode=0 Dec 06 05:33:40 crc kubenswrapper[4958]: I1206 05:33:40.887007 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-774f84cbfb-vpfxx" event={"ID":"ee456f64-a8a7-4baf-950b-7ef43c4002cb","Type":"ContainerDied","Data":"79d8791edacf41a1c6bb891ba9f742c3d06c6e5ed5ca823fa03d3706a15509ce"} Dec 06 05:33:41 crc kubenswrapper[4958]: I1206 05:33:41.223501 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-774f84cbfb-vpfxx" Dec 06 05:33:41 crc kubenswrapper[4958]: I1206 05:33:41.264149 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-659cc6f778-8fq2j"] Dec 06 05:33:41 crc kubenswrapper[4958]: E1206 05:33:41.264463 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee456f64-a8a7-4baf-950b-7ef43c4002cb" containerName="route-controller-manager" Dec 06 05:33:41 crc kubenswrapper[4958]: I1206 05:33:41.264503 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee456f64-a8a7-4baf-950b-7ef43c4002cb" containerName="route-controller-manager" Dec 06 05:33:41 crc kubenswrapper[4958]: I1206 05:33:41.264653 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee456f64-a8a7-4baf-950b-7ef43c4002cb" containerName="route-controller-manager" Dec 06 05:33:41 crc kubenswrapper[4958]: I1206 05:33:41.265161 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-659cc6f778-8fq2j" Dec 06 05:33:41 crc kubenswrapper[4958]: I1206 05:33:41.272350 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-659cc6f778-8fq2j"] Dec 06 05:33:41 crc kubenswrapper[4958]: I1206 05:33:41.345763 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee456f64-a8a7-4baf-950b-7ef43c4002cb-config\") pod \"ee456f64-a8a7-4baf-950b-7ef43c4002cb\" (UID: \"ee456f64-a8a7-4baf-950b-7ef43c4002cb\") " Dec 06 05:33:41 crc kubenswrapper[4958]: I1206 05:33:41.345853 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ee456f64-a8a7-4baf-950b-7ef43c4002cb-client-ca\") pod \"ee456f64-a8a7-4baf-950b-7ef43c4002cb\" (UID: \"ee456f64-a8a7-4baf-950b-7ef43c4002cb\") " Dec 06 05:33:41 crc kubenswrapper[4958]: I1206 05:33:41.345975 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee456f64-a8a7-4baf-950b-7ef43c4002cb-serving-cert\") pod \"ee456f64-a8a7-4baf-950b-7ef43c4002cb\" (UID: \"ee456f64-a8a7-4baf-950b-7ef43c4002cb\") " Dec 06 05:33:41 crc kubenswrapper[4958]: I1206 05:33:41.346019 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zmtcn\" (UniqueName: \"kubernetes.io/projected/ee456f64-a8a7-4baf-950b-7ef43c4002cb-kube-api-access-zmtcn\") pod \"ee456f64-a8a7-4baf-950b-7ef43c4002cb\" (UID: \"ee456f64-a8a7-4baf-950b-7ef43c4002cb\") " Dec 06 05:33:41 crc kubenswrapper[4958]: I1206 05:33:41.346900 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee456f64-a8a7-4baf-950b-7ef43c4002cb-config" (OuterVolumeSpecName: "config") pod "ee456f64-a8a7-4baf-950b-7ef43c4002cb" (UID: "ee456f64-a8a7-4baf-950b-7ef43c4002cb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:33:41 crc kubenswrapper[4958]: I1206 05:33:41.346961 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee456f64-a8a7-4baf-950b-7ef43c4002cb-client-ca" (OuterVolumeSpecName: "client-ca") pod "ee456f64-a8a7-4baf-950b-7ef43c4002cb" (UID: "ee456f64-a8a7-4baf-950b-7ef43c4002cb"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:33:41 crc kubenswrapper[4958]: I1206 05:33:41.351874 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ee456f64-a8a7-4baf-950b-7ef43c4002cb-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ee456f64-a8a7-4baf-950b-7ef43c4002cb" (UID: "ee456f64-a8a7-4baf-950b-7ef43c4002cb"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:33:41 crc kubenswrapper[4958]: I1206 05:33:41.353072 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee456f64-a8a7-4baf-950b-7ef43c4002cb-kube-api-access-zmtcn" (OuterVolumeSpecName: "kube-api-access-zmtcn") pod "ee456f64-a8a7-4baf-950b-7ef43c4002cb" (UID: "ee456f64-a8a7-4baf-950b-7ef43c4002cb"). InnerVolumeSpecName "kube-api-access-zmtcn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:33:41 crc kubenswrapper[4958]: I1206 05:33:41.447722 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dc23f15d-7c14-42e8-91d3-094430d65769-serving-cert\") pod \"route-controller-manager-659cc6f778-8fq2j\" (UID: \"dc23f15d-7c14-42e8-91d3-094430d65769\") " pod="openshift-route-controller-manager/route-controller-manager-659cc6f778-8fq2j" Dec 06 05:33:41 crc kubenswrapper[4958]: I1206 05:33:41.447770 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbrgm\" (UniqueName: \"kubernetes.io/projected/dc23f15d-7c14-42e8-91d3-094430d65769-kube-api-access-rbrgm\") pod \"route-controller-manager-659cc6f778-8fq2j\" (UID: \"dc23f15d-7c14-42e8-91d3-094430d65769\") " pod="openshift-route-controller-manager/route-controller-manager-659cc6f778-8fq2j" Dec 06 05:33:41 crc kubenswrapper[4958]: I1206 05:33:41.447883 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dc23f15d-7c14-42e8-91d3-094430d65769-client-ca\") pod \"route-controller-manager-659cc6f778-8fq2j\" (UID: \"dc23f15d-7c14-42e8-91d3-094430d65769\") " pod="openshift-route-controller-manager/route-controller-manager-659cc6f778-8fq2j" Dec 06 05:33:41 crc kubenswrapper[4958]: I1206 05:33:41.447943 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc23f15d-7c14-42e8-91d3-094430d65769-config\") pod \"route-controller-manager-659cc6f778-8fq2j\" (UID: \"dc23f15d-7c14-42e8-91d3-094430d65769\") " pod="openshift-route-controller-manager/route-controller-manager-659cc6f778-8fq2j" Dec 06 05:33:41 crc kubenswrapper[4958]: I1206 05:33:41.448000 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee456f64-a8a7-4baf-950b-7ef43c4002cb-config\") on node \"crc\" DevicePath \"\"" Dec 06 05:33:41 crc kubenswrapper[4958]: I1206 05:33:41.448014 4958 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ee456f64-a8a7-4baf-950b-7ef43c4002cb-client-ca\") on node \"crc\" DevicePath \"\"" Dec 06 05:33:41 crc kubenswrapper[4958]: I1206 05:33:41.448026 4958 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ee456f64-a8a7-4baf-950b-7ef43c4002cb-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 06 05:33:41 crc kubenswrapper[4958]: I1206 05:33:41.448038 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zmtcn\" (UniqueName: \"kubernetes.io/projected/ee456f64-a8a7-4baf-950b-7ef43c4002cb-kube-api-access-zmtcn\") on node \"crc\" DevicePath \"\"" Dec 06 05:33:41 crc kubenswrapper[4958]: I1206 05:33:41.549741 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dc23f15d-7c14-42e8-91d3-094430d65769-client-ca\") pod \"route-controller-manager-659cc6f778-8fq2j\" (UID: \"dc23f15d-7c14-42e8-91d3-094430d65769\") " pod="openshift-route-controller-manager/route-controller-manager-659cc6f778-8fq2j" Dec 06 05:33:41 crc kubenswrapper[4958]: I1206 05:33:41.549872 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/dc23f15d-7c14-42e8-91d3-094430d65769-config\") pod \"route-controller-manager-659cc6f778-8fq2j\" (UID: \"dc23f15d-7c14-42e8-91d3-094430d65769\") " pod="openshift-route-controller-manager/route-controller-manager-659cc6f778-8fq2j" Dec 06 05:33:41 crc kubenswrapper[4958]: I1206 05:33:41.549925 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dc23f15d-7c14-42e8-91d3-094430d65769-serving-cert\") pod \"route-controller-manager-659cc6f778-8fq2j\" (UID: \"dc23f15d-7c14-42e8-91d3-094430d65769\") " pod="openshift-route-controller-manager/route-controller-manager-659cc6f778-8fq2j" Dec 06 05:33:41 crc kubenswrapper[4958]: I1206 05:33:41.549957 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbrgm\" (UniqueName: \"kubernetes.io/projected/dc23f15d-7c14-42e8-91d3-094430d65769-kube-api-access-rbrgm\") pod \"route-controller-manager-659cc6f778-8fq2j\" (UID: \"dc23f15d-7c14-42e8-91d3-094430d65769\") " pod="openshift-route-controller-manager/route-controller-manager-659cc6f778-8fq2j" Dec 06 05:33:41 crc kubenswrapper[4958]: I1206 05:33:41.551127 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/dc23f15d-7c14-42e8-91d3-094430d65769-client-ca\") pod \"route-controller-manager-659cc6f778-8fq2j\" (UID: \"dc23f15d-7c14-42e8-91d3-094430d65769\") " pod="openshift-route-controller-manager/route-controller-manager-659cc6f778-8fq2j" Dec 06 05:33:41 crc kubenswrapper[4958]: I1206 05:33:41.552552 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dc23f15d-7c14-42e8-91d3-094430d65769-config\") pod \"route-controller-manager-659cc6f778-8fq2j\" (UID: \"dc23f15d-7c14-42e8-91d3-094430d65769\") " pod="openshift-route-controller-manager/route-controller-manager-659cc6f778-8fq2j" Dec 06 05:33:41 crc kubenswrapper[4958]: I1206 05:33:41.556863 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dc23f15d-7c14-42e8-91d3-094430d65769-serving-cert\") pod \"route-controller-manager-659cc6f778-8fq2j\" (UID: \"dc23f15d-7c14-42e8-91d3-094430d65769\") " pod="openshift-route-controller-manager/route-controller-manager-659cc6f778-8fq2j" Dec 06 05:33:41 crc kubenswrapper[4958]: I1206 05:33:41.577969 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbrgm\" (UniqueName: \"kubernetes.io/projected/dc23f15d-7c14-42e8-91d3-094430d65769-kube-api-access-rbrgm\") pod \"route-controller-manager-659cc6f778-8fq2j\" (UID: \"dc23f15d-7c14-42e8-91d3-094430d65769\") " pod="openshift-route-controller-manager/route-controller-manager-659cc6f778-8fq2j" Dec 06 05:33:41 crc kubenswrapper[4958]: I1206 05:33:41.580150 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-659cc6f778-8fq2j" Dec 06 05:33:41 crc kubenswrapper[4958]: I1206 05:33:41.906115 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-774f84cbfb-vpfxx" event={"ID":"ee456f64-a8a7-4baf-950b-7ef43c4002cb","Type":"ContainerDied","Data":"8955ef74c9a028405af56b91105382f1e60363bb6ae9aaf623a782e1bfb9aae1"} Dec 06 05:33:41 crc kubenswrapper[4958]: I1206 05:33:41.906712 4958 scope.go:117] "RemoveContainer" containerID="79d8791edacf41a1c6bb891ba9f742c3d06c6e5ed5ca823fa03d3706a15509ce" Dec 06 05:33:41 crc kubenswrapper[4958]: I1206 05:33:41.907560 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-774f84cbfb-vpfxx" Dec 06 05:33:41 crc kubenswrapper[4958]: I1206 05:33:41.944888 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-774f84cbfb-vpfxx"] Dec 06 05:33:41 crc kubenswrapper[4958]: I1206 05:33:41.952452 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-774f84cbfb-vpfxx"] Dec 06 05:33:42 crc kubenswrapper[4958]: I1206 05:33:42.099856 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-659cc6f778-8fq2j"] Dec 06 05:33:42 crc kubenswrapper[4958]: I1206 05:33:42.914043 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-659cc6f778-8fq2j" event={"ID":"dc23f15d-7c14-42e8-91d3-094430d65769","Type":"ContainerStarted","Data":"3cd0f61a62d7d5e61a1a613c940d61862cca698a13929714052a7bdc3d8f8c06"} Dec 06 05:33:42 crc kubenswrapper[4958]: I1206 05:33:42.914379 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-659cc6f778-8fq2j" Dec 06 05:33:42 crc kubenswrapper[4958]: I1206 05:33:42.914400 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-659cc6f778-8fq2j" event={"ID":"dc23f15d-7c14-42e8-91d3-094430d65769","Type":"ContainerStarted","Data":"8c5b669e4aa4ca56e7a469acceba47e527c115175322ee7273b2ded91455113b"} Dec 06 05:33:43 crc kubenswrapper[4958]: I1206 05:33:43.341197 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-659cc6f778-8fq2j" Dec 06 05:33:43 crc kubenswrapper[4958]: I1206 05:33:43.356673 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-659cc6f778-8fq2j" podStartSLOduration=4.356655114 podStartE2EDuration="4.356655114s" podCreationTimestamp="2025-12-06 05:33:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:33:42.942903583 +0000 UTC m=+333.476674346" watchObservedRunningTime="2025-12-06 05:33:43.356655114 +0000 UTC m=+333.890425897" Dec 06 05:33:43 crc kubenswrapper[4958]: I1206 05:33:43.769925 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee456f64-a8a7-4baf-950b-7ef43c4002cb" path="/var/lib/kubelet/pods/ee456f64-a8a7-4baf-950b-7ef43c4002cb/volumes" Dec 06 05:33:46 crc kubenswrapper[4958]: I1206 05:33:46.549110 4958 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-t5crs"] Dec 06 05:33:46 crc kubenswrapper[4958]: I1206 05:33:46.550723 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-t5crs" podUID="16ec5793-681c-4935-a298-734c214e23c8" containerName="registry-server" containerID="cri-o://e39e141f75cfcdfd873fd4204faea3ffe57d9249ce7e574d450add8a3d2a2d78" gracePeriod=2 Dec 06 05:33:47 crc kubenswrapper[4958]: I1206 05:33:47.945501 4958 generic.go:334] "Generic (PLEG): container finished" podID="16ec5793-681c-4935-a298-734c214e23c8" containerID="e39e141f75cfcdfd873fd4204faea3ffe57d9249ce7e574d450add8a3d2a2d78" exitCode=0 Dec 06 05:33:47 crc kubenswrapper[4958]: I1206 05:33:47.945565 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t5crs" event={"ID":"16ec5793-681c-4935-a298-734c214e23c8","Type":"ContainerDied","Data":"e39e141f75cfcdfd873fd4204faea3ffe57d9249ce7e574d450add8a3d2a2d78"} Dec 06 05:33:48 crc kubenswrapper[4958]: I1206 05:33:48.227182 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t5crs" Dec 06 05:33:48 crc kubenswrapper[4958]: I1206 05:33:48.344893 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16ec5793-681c-4935-a298-734c214e23c8-catalog-content\") pod \"16ec5793-681c-4935-a298-734c214e23c8\" (UID: \"16ec5793-681c-4935-a298-734c214e23c8\") " Dec 06 05:33:48 crc kubenswrapper[4958]: I1206 05:33:48.344958 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16ec5793-681c-4935-a298-734c214e23c8-utilities\") pod \"16ec5793-681c-4935-a298-734c214e23c8\" (UID: \"16ec5793-681c-4935-a298-734c214e23c8\") " Dec 06 05:33:48 crc kubenswrapper[4958]: I1206 05:33:48.345006 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bhllh\" (UniqueName: \"kubernetes.io/projected/16ec5793-681c-4935-a298-734c214e23c8-kube-api-access-bhllh\") pod \"16ec5793-681c-4935-a298-734c214e23c8\" (UID: \"16ec5793-681c-4935-a298-734c214e23c8\") " Dec 06 05:33:48 crc kubenswrapper[4958]: I1206 05:33:48.345968 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16ec5793-681c-4935-a298-734c214e23c8-utilities" (OuterVolumeSpecName: "utilities") pod "16ec5793-681c-4935-a298-734c214e23c8" (UID: "16ec5793-681c-4935-a298-734c214e23c8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 05:33:48 crc kubenswrapper[4958]: I1206 05:33:48.352070 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16ec5793-681c-4935-a298-734c214e23c8-kube-api-access-bhllh" (OuterVolumeSpecName: "kube-api-access-bhllh") pod "16ec5793-681c-4935-a298-734c214e23c8" (UID: "16ec5793-681c-4935-a298-734c214e23c8"). InnerVolumeSpecName "kube-api-access-bhllh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:33:48 crc kubenswrapper[4958]: I1206 05:33:48.364559 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16ec5793-681c-4935-a298-734c214e23c8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "16ec5793-681c-4935-a298-734c214e23c8" (UID: "16ec5793-681c-4935-a298-734c214e23c8"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 05:33:48 crc kubenswrapper[4958]: I1206 05:33:48.446018 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16ec5793-681c-4935-a298-734c214e23c8-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 05:33:48 crc kubenswrapper[4958]: I1206 05:33:48.446047 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16ec5793-681c-4935-a298-734c214e23c8-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 05:33:48 crc kubenswrapper[4958]: I1206 05:33:48.446056 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bhllh\" (UniqueName: \"kubernetes.io/projected/16ec5793-681c-4935-a298-734c214e23c8-kube-api-access-bhllh\") on node \"crc\" DevicePath \"\"" Dec 06 05:33:48 crc kubenswrapper[4958]: I1206 05:33:48.953489 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-t5crs" event={"ID":"16ec5793-681c-4935-a298-734c214e23c8","Type":"ContainerDied","Data":"9547d0c36844c903fe65685f02b5973cabb71815557953908f0325f551653146"} Dec 06 05:33:48 crc kubenswrapper[4958]: I1206 05:33:48.953628 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-t5crs" Dec 06 05:33:48 crc kubenswrapper[4958]: I1206 05:33:48.954031 4958 scope.go:117] "RemoveContainer" containerID="e39e141f75cfcdfd873fd4204faea3ffe57d9249ce7e574d450add8a3d2a2d78" Dec 06 05:33:48 crc kubenswrapper[4958]: I1206 05:33:48.979892 4958 scope.go:117] "RemoveContainer" containerID="56fcafd5104458aa7a12e82dc6e73046b5ec8b741bf50d5b65ebe3de28e6456f" Dec 06 05:33:49 crc kubenswrapper[4958]: I1206 05:33:48.999956 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-t5crs"] Dec 06 05:33:49 crc kubenswrapper[4958]: I1206 05:33:49.003588 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-t5crs"] Dec 06 05:33:49 crc kubenswrapper[4958]: I1206 05:33:49.010421 4958 scope.go:117] "RemoveContainer" containerID="c04f2b7285ec390fa09ca001b464301e0c955e9ae63426ead7da10e23673b455" Dec 06 05:33:49 crc kubenswrapper[4958]: I1206 05:33:49.672531 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gkp2b"] Dec 06 05:33:49 crc kubenswrapper[4958]: I1206 05:33:49.673013 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-gkp2b" podUID="192a9281-a2ae-4251-aaf2-8d1f67d0321c" containerName="registry-server" containerID="cri-o://2ad61f11124bd576a84f7cc68c034b5853345cba634de067a10507a4da19c98c" gracePeriod=30 Dec 06 05:33:49 crc kubenswrapper[4958]: I1206 05:33:49.678292 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qzfg8"] Dec 06 05:33:49 crc kubenswrapper[4958]: I1206 05:33:49.678733 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-qzfg8" podUID="71c8096c-9091-428a-a142-185855892fb9" containerName="registry-server" containerID="cri-o://01c875a8396c94cf11bce4405b883e848f8a7ac00be10674e4062d4cf7c88dbc" gracePeriod=30 Dec 06 05:33:49 crc kubenswrapper[4958]: I1206 05:33:49.686431 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/marketplace-operator-79b997595-w8bt8"] Dec 06 05:33:49 crc kubenswrapper[4958]: I1206 05:33:49.686987 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-w8bt8" podUID="05fda290-e73b-468e-b494-6cd912e3cbd8" containerName="marketplace-operator" containerID="cri-o://0050edb44eb49ee3b5a06ce4923bf3be16d1ca8fe82249102962b3bb1ad293bc" gracePeriod=30 Dec 06 05:33:49 crc kubenswrapper[4958]: I1206 05:33:49.691800 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ggxxn"] Dec 06 05:33:49 crc kubenswrapper[4958]: I1206 05:33:49.692070 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-ggxxn" podUID="97d8e3a5-e7e8-46c8-8346-1fddba1a0b6a" containerName="registry-server" containerID="cri-o://2f5d4e675828a5283fcfdf7b106dbdad9f5c5d3f9899941f29baf1965c196bce" gracePeriod=30 Dec 06 05:33:49 crc kubenswrapper[4958]: I1206 05:33:49.698126 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jrnjs"] Dec 06 05:33:49 crc kubenswrapper[4958]: I1206 05:33:49.698592 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jrnjs" podUID="3786f843-0226-4fdc-8511-62659463b3fb" containerName="registry-server" containerID="cri-o://babaf338e69f2cde91dcf9450dabc498aa7a6160dcfeaedbf62e2029cef885e8" gracePeriod=30 Dec 06 05:33:49 crc kubenswrapper[4958]: I1206 05:33:49.705054 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-4l8c2"] Dec 06 05:33:49 crc kubenswrapper[4958]: E1206 05:33:49.705363 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16ec5793-681c-4935-a298-734c214e23c8" containerName="extract-content" Dec 06 05:33:49 crc kubenswrapper[4958]: I1206 05:33:49.705379 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="16ec5793-681c-4935-a298-734c214e23c8" containerName="extract-content" Dec 06 05:33:49 crc kubenswrapper[4958]: E1206 05:33:49.705402 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16ec5793-681c-4935-a298-734c214e23c8" containerName="extract-utilities" Dec 06 05:33:49 crc kubenswrapper[4958]: I1206 05:33:49.705411 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="16ec5793-681c-4935-a298-734c214e23c8" containerName="extract-utilities" Dec 06 05:33:49 crc kubenswrapper[4958]: E1206 05:33:49.705435 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16ec5793-681c-4935-a298-734c214e23c8" containerName="registry-server" Dec 06 05:33:49 crc kubenswrapper[4958]: I1206 05:33:49.705443 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="16ec5793-681c-4935-a298-734c214e23c8" containerName="registry-server" Dec 06 05:33:49 crc kubenswrapper[4958]: I1206 05:33:49.705601 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="16ec5793-681c-4935-a298-734c214e23c8" containerName="registry-server" Dec 06 05:33:49 crc kubenswrapper[4958]: I1206 05:33:49.708004 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-4l8c2" Dec 06 05:33:49 crc kubenswrapper[4958]: I1206 05:33:49.708683 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-4l8c2"] Dec 06 05:33:49 crc kubenswrapper[4958]: I1206 05:33:49.769120 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16ec5793-681c-4935-a298-734c214e23c8" path="/var/lib/kubelet/pods/16ec5793-681c-4935-a298-734c214e23c8/volumes" Dec 06 05:33:49 crc kubenswrapper[4958]: E1206 05:33:49.815311 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of babaf338e69f2cde91dcf9450dabc498aa7a6160dcfeaedbf62e2029cef885e8 is running failed: container process not found" containerID="babaf338e69f2cde91dcf9450dabc498aa7a6160dcfeaedbf62e2029cef885e8" cmd=["grpc_health_probe","-addr=:50051"] Dec 06 05:33:49 crc kubenswrapper[4958]: E1206 05:33:49.815997 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of babaf338e69f2cde91dcf9450dabc498aa7a6160dcfeaedbf62e2029cef885e8 is running failed: container process not found" containerID="babaf338e69f2cde91dcf9450dabc498aa7a6160dcfeaedbf62e2029cef885e8" cmd=["grpc_health_probe","-addr=:50051"] Dec 06 05:33:49 crc kubenswrapper[4958]: E1206 05:33:49.816405 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of babaf338e69f2cde91dcf9450dabc498aa7a6160dcfeaedbf62e2029cef885e8 is running failed: container process not found" containerID="babaf338e69f2cde91dcf9450dabc498aa7a6160dcfeaedbf62e2029cef885e8" cmd=["grpc_health_probe","-addr=:50051"] Dec 06 05:33:49 crc kubenswrapper[4958]: E1206 05:33:49.816533 4958 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of babaf338e69f2cde91dcf9450dabc498aa7a6160dcfeaedbf62e2029cef885e8 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-jrnjs" podUID="3786f843-0226-4fdc-8511-62659463b3fb" containerName="registry-server" Dec 06 05:33:49 crc kubenswrapper[4958]: I1206 05:33:49.868782 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/176e1833-ad7f-40a8-8179-846a546a6fad-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-4l8c2\" (UID: \"176e1833-ad7f-40a8-8179-846a546a6fad\") " pod="openshift-marketplace/marketplace-operator-79b997595-4l8c2" Dec 06 05:33:49 crc kubenswrapper[4958]: I1206 05:33:49.868912 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/176e1833-ad7f-40a8-8179-846a546a6fad-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-4l8c2\" (UID: \"176e1833-ad7f-40a8-8179-846a546a6fad\") " pod="openshift-marketplace/marketplace-operator-79b997595-4l8c2" Dec 06 05:33:49 crc kubenswrapper[4958]: I1206 05:33:49.868952 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55dbw\" (UniqueName: \"kubernetes.io/projected/176e1833-ad7f-40a8-8179-846a546a6fad-kube-api-access-55dbw\") pod 
\"marketplace-operator-79b997595-4l8c2\" (UID: \"176e1833-ad7f-40a8-8179-846a546a6fad\") " pod="openshift-marketplace/marketplace-operator-79b997595-4l8c2" Dec 06 05:33:49 crc kubenswrapper[4958]: I1206 05:33:49.965017 4958 generic.go:334] "Generic (PLEG): container finished" podID="3786f843-0226-4fdc-8511-62659463b3fb" containerID="babaf338e69f2cde91dcf9450dabc498aa7a6160dcfeaedbf62e2029cef885e8" exitCode=0 Dec 06 05:33:49 crc kubenswrapper[4958]: I1206 05:33:49.965101 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jrnjs" event={"ID":"3786f843-0226-4fdc-8511-62659463b3fb","Type":"ContainerDied","Data":"babaf338e69f2cde91dcf9450dabc498aa7a6160dcfeaedbf62e2029cef885e8"} Dec 06 05:33:49 crc kubenswrapper[4958]: I1206 05:33:49.968866 4958 generic.go:334] "Generic (PLEG): container finished" podID="192a9281-a2ae-4251-aaf2-8d1f67d0321c" containerID="2ad61f11124bd576a84f7cc68c034b5853345cba634de067a10507a4da19c98c" exitCode=0 Dec 06 05:33:49 crc kubenswrapper[4958]: I1206 05:33:49.968933 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gkp2b" event={"ID":"192a9281-a2ae-4251-aaf2-8d1f67d0321c","Type":"ContainerDied","Data":"2ad61f11124bd576a84f7cc68c034b5853345cba634de067a10507a4da19c98c"} Dec 06 05:33:49 crc kubenswrapper[4958]: I1206 05:33:49.969951 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/176e1833-ad7f-40a8-8179-846a546a6fad-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-4l8c2\" (UID: \"176e1833-ad7f-40a8-8179-846a546a6fad\") " pod="openshift-marketplace/marketplace-operator-79b997595-4l8c2" Dec 06 05:33:49 crc kubenswrapper[4958]: I1206 05:33:49.970001 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/176e1833-ad7f-40a8-8179-846a546a6fad-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-4l8c2\" (UID: \"176e1833-ad7f-40a8-8179-846a546a6fad\") " pod="openshift-marketplace/marketplace-operator-79b997595-4l8c2" Dec 06 05:33:49 crc kubenswrapper[4958]: I1206 05:33:49.970025 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55dbw\" (UniqueName: \"kubernetes.io/projected/176e1833-ad7f-40a8-8179-846a546a6fad-kube-api-access-55dbw\") pod \"marketplace-operator-79b997595-4l8c2\" (UID: \"176e1833-ad7f-40a8-8179-846a546a6fad\") " pod="openshift-marketplace/marketplace-operator-79b997595-4l8c2" Dec 06 05:33:49 crc kubenswrapper[4958]: I1206 05:33:49.973955 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/176e1833-ad7f-40a8-8179-846a546a6fad-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-4l8c2\" (UID: \"176e1833-ad7f-40a8-8179-846a546a6fad\") " pod="openshift-marketplace/marketplace-operator-79b997595-4l8c2" Dec 06 05:33:49 crc kubenswrapper[4958]: I1206 05:33:49.974576 4958 generic.go:334] "Generic (PLEG): container finished" podID="97d8e3a5-e7e8-46c8-8346-1fddba1a0b6a" containerID="2f5d4e675828a5283fcfdf7b106dbdad9f5c5d3f9899941f29baf1965c196bce" exitCode=0 Dec 06 05:33:49 crc kubenswrapper[4958]: I1206 05:33:49.974610 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ggxxn" 
event={"ID":"97d8e3a5-e7e8-46c8-8346-1fddba1a0b6a","Type":"ContainerDied","Data":"2f5d4e675828a5283fcfdf7b106dbdad9f5c5d3f9899941f29baf1965c196bce"} Dec 06 05:33:49 crc kubenswrapper[4958]: I1206 05:33:49.978531 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/176e1833-ad7f-40a8-8179-846a546a6fad-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-4l8c2\" (UID: \"176e1833-ad7f-40a8-8179-846a546a6fad\") " pod="openshift-marketplace/marketplace-operator-79b997595-4l8c2" Dec 06 05:33:49 crc kubenswrapper[4958]: I1206 05:33:49.980883 4958 generic.go:334] "Generic (PLEG): container finished" podID="71c8096c-9091-428a-a142-185855892fb9" containerID="01c875a8396c94cf11bce4405b883e848f8a7ac00be10674e4062d4cf7c88dbc" exitCode=0 Dec 06 05:33:49 crc kubenswrapper[4958]: I1206 05:33:49.980954 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qzfg8" event={"ID":"71c8096c-9091-428a-a142-185855892fb9","Type":"ContainerDied","Data":"01c875a8396c94cf11bce4405b883e848f8a7ac00be10674e4062d4cf7c88dbc"} Dec 06 05:33:49 crc kubenswrapper[4958]: I1206 05:33:49.982775 4958 generic.go:334] "Generic (PLEG): container finished" podID="05fda290-e73b-468e-b494-6cd912e3cbd8" containerID="0050edb44eb49ee3b5a06ce4923bf3be16d1ca8fe82249102962b3bb1ad293bc" exitCode=0 Dec 06 05:33:49 crc kubenswrapper[4958]: I1206 05:33:49.982836 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-w8bt8" event={"ID":"05fda290-e73b-468e-b494-6cd912e3cbd8","Type":"ContainerDied","Data":"0050edb44eb49ee3b5a06ce4923bf3be16d1ca8fe82249102962b3bb1ad293bc"} Dec 06 05:33:49 crc kubenswrapper[4958]: I1206 05:33:49.990177 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55dbw\" (UniqueName: \"kubernetes.io/projected/176e1833-ad7f-40a8-8179-846a546a6fad-kube-api-access-55dbw\") pod \"marketplace-operator-79b997595-4l8c2\" (UID: \"176e1833-ad7f-40a8-8179-846a546a6fad\") " pod="openshift-marketplace/marketplace-operator-79b997595-4l8c2" Dec 06 05:33:50 crc kubenswrapper[4958]: I1206 05:33:50.041726 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-4l8c2" Dec 06 05:33:50 crc kubenswrapper[4958]: I1206 05:33:50.491831 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-4l8c2"] Dec 06 05:33:50 crc kubenswrapper[4958]: I1206 05:33:50.862953 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qzfg8" Dec 06 05:33:50 crc kubenswrapper[4958]: I1206 05:33:50.889292 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6qtp\" (UniqueName: \"kubernetes.io/projected/71c8096c-9091-428a-a142-185855892fb9-kube-api-access-b6qtp\") pod \"71c8096c-9091-428a-a142-185855892fb9\" (UID: \"71c8096c-9091-428a-a142-185855892fb9\") " Dec 06 05:33:50 crc kubenswrapper[4958]: I1206 05:33:50.889364 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8096c-9091-428a-a142-185855892fb9-utilities\") pod \"71c8096c-9091-428a-a142-185855892fb9\" (UID: \"71c8096c-9091-428a-a142-185855892fb9\") " Dec 06 05:33:50 crc kubenswrapper[4958]: I1206 05:33:50.889425 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c8096c-9091-428a-a142-185855892fb9-catalog-content\") pod \"71c8096c-9091-428a-a142-185855892fb9\" (UID: \"71c8096c-9091-428a-a142-185855892fb9\") " Dec 06 05:33:50 crc kubenswrapper[4958]: I1206 05:33:50.890742 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c8096c-9091-428a-a142-185855892fb9-utilities" (OuterVolumeSpecName: "utilities") pod "71c8096c-9091-428a-a142-185855892fb9" (UID: "71c8096c-9091-428a-a142-185855892fb9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 05:33:50 crc kubenswrapper[4958]: I1206 05:33:50.897016 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71c8096c-9091-428a-a142-185855892fb9-kube-api-access-b6qtp" (OuterVolumeSpecName: "kube-api-access-b6qtp") pod "71c8096c-9091-428a-a142-185855892fb9" (UID: "71c8096c-9091-428a-a142-185855892fb9"). InnerVolumeSpecName "kube-api-access-b6qtp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:33:50 crc kubenswrapper[4958]: I1206 05:33:50.942253 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c8096c-9091-428a-a142-185855892fb9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "71c8096c-9091-428a-a142-185855892fb9" (UID: "71c8096c-9091-428a-a142-185855892fb9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 05:33:50 crc kubenswrapper[4958]: I1206 05:33:50.964388 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gkp2b" Dec 06 05:33:50 crc kubenswrapper[4958]: I1206 05:33:50.967952 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ggxxn" Dec 06 05:33:50 crc kubenswrapper[4958]: I1206 05:33:50.973897 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jrnjs" Dec 06 05:33:50 crc kubenswrapper[4958]: I1206 05:33:50.976050 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-w8bt8" Dec 06 05:33:50 crc kubenswrapper[4958]: I1206 05:33:50.990812 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-ggxxn" event={"ID":"97d8e3a5-e7e8-46c8-8346-1fddba1a0b6a","Type":"ContainerDied","Data":"60a65314bd573661c2c4ba44c56bb074ab8b48e46655d8566844f2c83e008e35"} Dec 06 05:33:50 crc kubenswrapper[4958]: I1206 05:33:50.990862 4958 scope.go:117] "RemoveContainer" containerID="2f5d4e675828a5283fcfdf7b106dbdad9f5c5d3f9899941f29baf1965c196bce" Dec 06 05:33:50 crc kubenswrapper[4958]: I1206 05:33:50.990986 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-ggxxn" Dec 06 05:33:50 crc kubenswrapper[4958]: I1206 05:33:50.991010 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j6zz7\" (UniqueName: \"kubernetes.io/projected/3786f843-0226-4fdc-8511-62659463b3fb-kube-api-access-j6zz7\") pod \"3786f843-0226-4fdc-8511-62659463b3fb\" (UID: \"3786f843-0226-4fdc-8511-62659463b3fb\") " Dec 06 05:33:50 crc kubenswrapper[4958]: I1206 05:33:50.991988 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97d8e3a5-e7e8-46c8-8346-1fddba1a0b6a-utilities" (OuterVolumeSpecName: "utilities") pod "97d8e3a5-e7e8-46c8-8346-1fddba1a0b6a" (UID: "97d8e3a5-e7e8-46c8-8346-1fddba1a0b6a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 05:33:50 crc kubenswrapper[4958]: I1206 05:33:50.992258 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97d8e3a5-e7e8-46c8-8346-1fddba1a0b6a-utilities\") pod \"97d8e3a5-e7e8-46c8-8346-1fddba1a0b6a\" (UID: \"97d8e3a5-e7e8-46c8-8346-1fddba1a0b6a\") " Dec 06 05:33:50 crc kubenswrapper[4958]: I1206 05:33:50.993202 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gkp2b" event={"ID":"192a9281-a2ae-4251-aaf2-8d1f67d0321c","Type":"ContainerDied","Data":"6db3c7dba0493262550bf8db53336fb292a846532e5d70dbb484e6d236d12a64"} Dec 06 05:33:50 crc kubenswrapper[4958]: I1206 05:33:50.993234 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gkp2b" Dec 06 05:33:50 crc kubenswrapper[4958]: I1206 05:33:50.994578 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3786f843-0226-4fdc-8511-62659463b3fb-kube-api-access-j6zz7" (OuterVolumeSpecName: "kube-api-access-j6zz7") pod "3786f843-0226-4fdc-8511-62659463b3fb" (UID: "3786f843-0226-4fdc-8511-62659463b3fb"). InnerVolumeSpecName "kube-api-access-j6zz7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:33:50 crc kubenswrapper[4958]: I1206 05:33:50.996106 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qzfg8" event={"ID":"71c8096c-9091-428a-a142-185855892fb9","Type":"ContainerDied","Data":"946250caac0d46e0fdcbb0a6d7688a73211a81c4b60ae344b79ab08e0e7bdc01"} Dec 06 05:33:50 crc kubenswrapper[4958]: I1206 05:33:50.996225 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qzfg8" Dec 06 05:33:51 crc kubenswrapper[4958]: I1206 05:33:51.000955 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3786f843-0226-4fdc-8511-62659463b3fb-catalog-content\") pod \"3786f843-0226-4fdc-8511-62659463b3fb\" (UID: \"3786f843-0226-4fdc-8511-62659463b3fb\") " Dec 06 05:33:51 crc kubenswrapper[4958]: I1206 05:33:51.001044 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/05fda290-e73b-468e-b494-6cd912e3cbd8-marketplace-operator-metrics\") pod \"05fda290-e73b-468e-b494-6cd912e3cbd8\" (UID: \"05fda290-e73b-468e-b494-6cd912e3cbd8\") " Dec 06 05:33:51 crc kubenswrapper[4958]: I1206 05:33:51.001093 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3786f843-0226-4fdc-8511-62659463b3fb-utilities\") pod \"3786f843-0226-4fdc-8511-62659463b3fb\" (UID: \"3786f843-0226-4fdc-8511-62659463b3fb\") " Dec 06 05:33:51 crc kubenswrapper[4958]: I1206 05:33:51.001232 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/192a9281-a2ae-4251-aaf2-8d1f67d0321c-utilities\") pod \"192a9281-a2ae-4251-aaf2-8d1f67d0321c\" (UID: \"192a9281-a2ae-4251-aaf2-8d1f67d0321c\") " Dec 06 05:33:51 crc kubenswrapper[4958]: I1206 05:33:51.001841 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3786f843-0226-4fdc-8511-62659463b3fb-utilities" (OuterVolumeSpecName: "utilities") pod "3786f843-0226-4fdc-8511-62659463b3fb" (UID: "3786f843-0226-4fdc-8511-62659463b3fb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 05:33:51 crc kubenswrapper[4958]: I1206 05:33:51.001915 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/192a9281-a2ae-4251-aaf2-8d1f67d0321c-catalog-content\") pod \"192a9281-a2ae-4251-aaf2-8d1f67d0321c\" (UID: \"192a9281-a2ae-4251-aaf2-8d1f67d0321c\") " Dec 06 05:33:51 crc kubenswrapper[4958]: I1206 05:33:51.001940 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97d8e3a5-e7e8-46c8-8346-1fddba1a0b6a-catalog-content\") pod \"97d8e3a5-e7e8-46c8-8346-1fddba1a0b6a\" (UID: \"97d8e3a5-e7e8-46c8-8346-1fddba1a0b6a\") " Dec 06 05:33:51 crc kubenswrapper[4958]: I1206 05:33:51.003262 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/192a9281-a2ae-4251-aaf2-8d1f67d0321c-utilities" (OuterVolumeSpecName: "utilities") pod "192a9281-a2ae-4251-aaf2-8d1f67d0321c" (UID: "192a9281-a2ae-4251-aaf2-8d1f67d0321c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 05:33:51 crc kubenswrapper[4958]: I1206 05:33:51.004195 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05fda290-e73b-468e-b494-6cd912e3cbd8-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "05fda290-e73b-468e-b494-6cd912e3cbd8" (UID: "05fda290-e73b-468e-b494-6cd912e3cbd8"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:33:51 crc kubenswrapper[4958]: I1206 05:33:51.011750 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-w8bt8" Dec 06 05:33:51 crc kubenswrapper[4958]: I1206 05:33:51.011746 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-w8bt8" event={"ID":"05fda290-e73b-468e-b494-6cd912e3cbd8","Type":"ContainerDied","Data":"fc06a7b6567207c4b6186340e546615f3f272820b9f3150cb0da40e883f59c01"} Dec 06 05:33:51 crc kubenswrapper[4958]: I1206 05:33:51.015036 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-4l8c2" event={"ID":"176e1833-ad7f-40a8-8179-846a546a6fad","Type":"ContainerStarted","Data":"2fabc7d166650c6c84cee6b9a8a9314b178b87c1d83b580a9b9ed91af392d938"} Dec 06 05:33:51 crc kubenswrapper[4958]: I1206 05:33:51.015071 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-4l8c2" event={"ID":"176e1833-ad7f-40a8-8179-846a546a6fad","Type":"ContainerStarted","Data":"7cc16b7ef75df1a3391f1943b238ba1965ea499f512d877d35ec8ffa0a668896"} Dec 06 05:33:51 crc kubenswrapper[4958]: I1206 05:33:51.015232 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-4l8c2" Dec 06 05:33:51 crc kubenswrapper[4958]: I1206 05:33:51.016754 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jrnjs" event={"ID":"3786f843-0226-4fdc-8511-62659463b3fb","Type":"ContainerDied","Data":"a09d11ee85ce2eae3703fcc61e164931ac19a604e96ab873ee288b195557599f"} Dec 06 05:33:51 crc kubenswrapper[4958]: I1206 05:33:51.016773 4958 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-4l8c2 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.63:8080/healthz\": dial tcp 10.217.0.63:8080: connect: connection refused" start-of-body= Dec 06 05:33:51 crc kubenswrapper[4958]: I1206 05:33:51.016835 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jrnjs" Dec 06 05:33:51 crc kubenswrapper[4958]: I1206 05:33:51.016829 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-4l8c2" podUID="176e1833-ad7f-40a8-8179-846a546a6fad" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.63:8080/healthz\": dial tcp 10.217.0.63:8080: connect: connection refused" Dec 06 05:33:51 crc kubenswrapper[4958]: I1206 05:33:51.023593 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/05fda290-e73b-468e-b494-6cd912e3cbd8-marketplace-trusted-ca\") pod \"05fda290-e73b-468e-b494-6cd912e3cbd8\" (UID: \"05fda290-e73b-468e-b494-6cd912e3cbd8\") " Dec 06 05:33:51 crc kubenswrapper[4958]: I1206 05:33:51.023648 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gnxp8\" (UniqueName: \"kubernetes.io/projected/97d8e3a5-e7e8-46c8-8346-1fddba1a0b6a-kube-api-access-gnxp8\") pod \"97d8e3a5-e7e8-46c8-8346-1fddba1a0b6a\" (UID: \"97d8e3a5-e7e8-46c8-8346-1fddba1a0b6a\") " Dec 06 05:33:51 crc kubenswrapper[4958]: I1206 05:33:51.023701 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q7jrl\" (UniqueName: \"kubernetes.io/projected/192a9281-a2ae-4251-aaf2-8d1f67d0321c-kube-api-access-q7jrl\") pod \"192a9281-a2ae-4251-aaf2-8d1f67d0321c\" (UID: \"192a9281-a2ae-4251-aaf2-8d1f67d0321c\") " Dec 06 05:33:51 crc kubenswrapper[4958]: I1206 05:33:51.023729 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9gzqg\" (UniqueName: \"kubernetes.io/projected/05fda290-e73b-468e-b494-6cd912e3cbd8-kube-api-access-9gzqg\") pod \"05fda290-e73b-468e-b494-6cd912e3cbd8\" (UID: \"05fda290-e73b-468e-b494-6cd912e3cbd8\") " Dec 06 05:33:51 crc kubenswrapper[4958]: I1206 05:33:51.023993 4958 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/05fda290-e73b-468e-b494-6cd912e3cbd8-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 06 05:33:51 crc kubenswrapper[4958]: I1206 05:33:51.024011 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3786f843-0226-4fdc-8511-62659463b3fb-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 05:33:51 crc kubenswrapper[4958]: I1206 05:33:51.024022 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/192a9281-a2ae-4251-aaf2-8d1f67d0321c-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 05:33:51 crc kubenswrapper[4958]: I1206 05:33:51.024032 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6qtp\" (UniqueName: \"kubernetes.io/projected/71c8096c-9091-428a-a142-185855892fb9-kube-api-access-b6qtp\") on node \"crc\" DevicePath \"\"" Dec 06 05:33:51 crc kubenswrapper[4958]: I1206 05:33:51.024043 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8096c-9091-428a-a142-185855892fb9-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 05:33:51 crc kubenswrapper[4958]: I1206 05:33:51.024051 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j6zz7\" (UniqueName: \"kubernetes.io/projected/3786f843-0226-4fdc-8511-62659463b3fb-kube-api-access-j6zz7\") on node 
\"crc\" DevicePath \"\"" Dec 06 05:33:51 crc kubenswrapper[4958]: I1206 05:33:51.024059 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97d8e3a5-e7e8-46c8-8346-1fddba1a0b6a-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 05:33:51 crc kubenswrapper[4958]: I1206 05:33:51.024068 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c8096c-9091-428a-a142-185855892fb9-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 05:33:51 crc kubenswrapper[4958]: I1206 05:33:51.026139 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05fda290-e73b-468e-b494-6cd912e3cbd8-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "05fda290-e73b-468e-b494-6cd912e3cbd8" (UID: "05fda290-e73b-468e-b494-6cd912e3cbd8"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:33:51 crc kubenswrapper[4958]: I1206 05:33:51.027781 4958 scope.go:117] "RemoveContainer" containerID="bdc2a3c68e9e42612a2b32594f6e8d3c51d324234f1e245fcd57638d17e6520c" Dec 06 05:33:51 crc kubenswrapper[4958]: I1206 05:33:51.028580 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97d8e3a5-e7e8-46c8-8346-1fddba1a0b6a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "97d8e3a5-e7e8-46c8-8346-1fddba1a0b6a" (UID: "97d8e3a5-e7e8-46c8-8346-1fddba1a0b6a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 05:33:51 crc kubenswrapper[4958]: I1206 05:33:51.029406 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97d8e3a5-e7e8-46c8-8346-1fddba1a0b6a-kube-api-access-gnxp8" (OuterVolumeSpecName: "kube-api-access-gnxp8") pod "97d8e3a5-e7e8-46c8-8346-1fddba1a0b6a" (UID: "97d8e3a5-e7e8-46c8-8346-1fddba1a0b6a"). InnerVolumeSpecName "kube-api-access-gnxp8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:33:51 crc kubenswrapper[4958]: I1206 05:33:51.032209 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/192a9281-a2ae-4251-aaf2-8d1f67d0321c-kube-api-access-q7jrl" (OuterVolumeSpecName: "kube-api-access-q7jrl") pod "192a9281-a2ae-4251-aaf2-8d1f67d0321c" (UID: "192a9281-a2ae-4251-aaf2-8d1f67d0321c"). InnerVolumeSpecName "kube-api-access-q7jrl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:33:51 crc kubenswrapper[4958]: I1206 05:33:51.034085 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05fda290-e73b-468e-b494-6cd912e3cbd8-kube-api-access-9gzqg" (OuterVolumeSpecName: "kube-api-access-9gzqg") pod "05fda290-e73b-468e-b494-6cd912e3cbd8" (UID: "05fda290-e73b-468e-b494-6cd912e3cbd8"). InnerVolumeSpecName "kube-api-access-9gzqg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:33:51 crc kubenswrapper[4958]: I1206 05:33:51.059943 4958 scope.go:117] "RemoveContainer" containerID="632266ab8baacf57971acf5250bcb24dd2c7b86daef3d52b02f171d85a45fbcf" Dec 06 05:33:51 crc kubenswrapper[4958]: I1206 05:33:51.085921 4958 scope.go:117] "RemoveContainer" containerID="2ad61f11124bd576a84f7cc68c034b5853345cba634de067a10507a4da19c98c" Dec 06 05:33:51 crc kubenswrapper[4958]: I1206 05:33:51.100993 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-4l8c2" podStartSLOduration=2.100973984 podStartE2EDuration="2.100973984s" podCreationTimestamp="2025-12-06 05:33:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:33:51.085598203 +0000 UTC m=+341.619368976" watchObservedRunningTime="2025-12-06 05:33:51.100973984 +0000 UTC m=+341.634744747" Dec 06 05:33:51 crc kubenswrapper[4958]: I1206 05:33:51.107034 4958 scope.go:117] "RemoveContainer" containerID="41c165dfe26dc276aeba2063a1d281c485cee7248f0ee0a39398258fe4458dde" Dec 06 05:33:51 crc kubenswrapper[4958]: I1206 05:33:51.112062 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qzfg8"] Dec 06 05:33:51 crc kubenswrapper[4958]: I1206 05:33:51.122344 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-qzfg8"] Dec 06 05:33:51 crc kubenswrapper[4958]: I1206 05:33:51.123964 4958 scope.go:117] "RemoveContainer" containerID="926d856043a7f9a145c6634917017def3d9089bd665840dcafd43a5dd388dc93" Dec 06 05:33:51 crc kubenswrapper[4958]: I1206 05:33:51.124914 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97d8e3a5-e7e8-46c8-8346-1fddba1a0b6a-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 05:33:51 crc kubenswrapper[4958]: I1206 05:33:51.124940 4958 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/05fda290-e73b-468e-b494-6cd912e3cbd8-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 06 05:33:51 crc kubenswrapper[4958]: I1206 05:33:51.124954 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gnxp8\" (UniqueName: \"kubernetes.io/projected/97d8e3a5-e7e8-46c8-8346-1fddba1a0b6a-kube-api-access-gnxp8\") on node \"crc\" DevicePath \"\"" Dec 06 05:33:51 crc kubenswrapper[4958]: I1206 05:33:51.124966 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q7jrl\" (UniqueName: \"kubernetes.io/projected/192a9281-a2ae-4251-aaf2-8d1f67d0321c-kube-api-access-q7jrl\") on node \"crc\" DevicePath \"\"" Dec 06 05:33:51 crc kubenswrapper[4958]: I1206 05:33:51.124979 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9gzqg\" (UniqueName: \"kubernetes.io/projected/05fda290-e73b-468e-b494-6cd912e3cbd8-kube-api-access-9gzqg\") on node \"crc\" DevicePath \"\"" Dec 06 05:33:51 crc kubenswrapper[4958]: I1206 05:33:51.143322 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3786f843-0226-4fdc-8511-62659463b3fb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3786f843-0226-4fdc-8511-62659463b3fb" (UID: "3786f843-0226-4fdc-8511-62659463b3fb"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 05:33:51 crc kubenswrapper[4958]: I1206 05:33:51.145697 4958 scope.go:117] "RemoveContainer" containerID="01c875a8396c94cf11bce4405b883e848f8a7ac00be10674e4062d4cf7c88dbc" Dec 06 05:33:51 crc kubenswrapper[4958]: I1206 05:33:51.150050 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/192a9281-a2ae-4251-aaf2-8d1f67d0321c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "192a9281-a2ae-4251-aaf2-8d1f67d0321c" (UID: "192a9281-a2ae-4251-aaf2-8d1f67d0321c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 05:33:51 crc kubenswrapper[4958]: I1206 05:33:51.159218 4958 scope.go:117] "RemoveContainer" containerID="18582cbf45603c0dcbb14fb6fc11347888f3924c6e9d53e267bf0be19a6ed306" Dec 06 05:33:51 crc kubenswrapper[4958]: I1206 05:33:51.174009 4958 scope.go:117] "RemoveContainer" containerID="ef1f4a3f095b158659e5795d961e45acda517a1d97c59be068ab195640db00a0" Dec 06 05:33:51 crc kubenswrapper[4958]: I1206 05:33:51.190421 4958 scope.go:117] "RemoveContainer" containerID="0050edb44eb49ee3b5a06ce4923bf3be16d1ca8fe82249102962b3bb1ad293bc" Dec 06 05:33:51 crc kubenswrapper[4958]: I1206 05:33:51.212594 4958 scope.go:117] "RemoveContainer" containerID="babaf338e69f2cde91dcf9450dabc498aa7a6160dcfeaedbf62e2029cef885e8" Dec 06 05:33:51 crc kubenswrapper[4958]: I1206 05:33:51.226024 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3786f843-0226-4fdc-8511-62659463b3fb-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 05:33:51 crc kubenswrapper[4958]: I1206 05:33:51.226046 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/192a9281-a2ae-4251-aaf2-8d1f67d0321c-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 05:33:51 crc kubenswrapper[4958]: I1206 05:33:51.227080 4958 scope.go:117] "RemoveContainer" containerID="3853734440ebc0b4f90cee5e281a956fa4e49049214deea5dce4c42381e1666d" Dec 06 05:33:51 crc kubenswrapper[4958]: I1206 05:33:51.245579 4958 scope.go:117] "RemoveContainer" containerID="d7a1f3dd79b691266ada54ad3285279e438da7ffb866f435a0d3f8a463f98d1a" Dec 06 05:33:51 crc kubenswrapper[4958]: I1206 05:33:51.322906 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gkp2b"] Dec 06 05:33:51 crc kubenswrapper[4958]: I1206 05:33:51.328447 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-gkp2b"] Dec 06 05:33:51 crc kubenswrapper[4958]: I1206 05:33:51.333323 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-ggxxn"] Dec 06 05:33:51 crc kubenswrapper[4958]: I1206 05:33:51.337411 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-ggxxn"] Dec 06 05:33:51 crc kubenswrapper[4958]: I1206 05:33:51.342707 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-w8bt8"] Dec 06 05:33:51 crc kubenswrapper[4958]: I1206 05:33:51.357361 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-w8bt8"] Dec 06 05:33:51 crc kubenswrapper[4958]: I1206 05:33:51.358045 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jrnjs"] Dec 06 05:33:51 crc 
kubenswrapper[4958]: I1206 05:33:51.365977 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jrnjs"] Dec 06 05:33:51 crc kubenswrapper[4958]: I1206 05:33:51.773091 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05fda290-e73b-468e-b494-6cd912e3cbd8" path="/var/lib/kubelet/pods/05fda290-e73b-468e-b494-6cd912e3cbd8/volumes" Dec 06 05:33:51 crc kubenswrapper[4958]: I1206 05:33:51.774122 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="192a9281-a2ae-4251-aaf2-8d1f67d0321c" path="/var/lib/kubelet/pods/192a9281-a2ae-4251-aaf2-8d1f67d0321c/volumes" Dec 06 05:33:51 crc kubenswrapper[4958]: I1206 05:33:51.775796 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3786f843-0226-4fdc-8511-62659463b3fb" path="/var/lib/kubelet/pods/3786f843-0226-4fdc-8511-62659463b3fb/volumes" Dec 06 05:33:51 crc kubenswrapper[4958]: I1206 05:33:51.778549 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71c8096c-9091-428a-a142-185855892fb9" path="/var/lib/kubelet/pods/71c8096c-9091-428a-a142-185855892fb9/volumes" Dec 06 05:33:51 crc kubenswrapper[4958]: I1206 05:33:51.780194 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97d8e3a5-e7e8-46c8-8346-1fddba1a0b6a" path="/var/lib/kubelet/pods/97d8e3a5-e7e8-46c8-8346-1fddba1a0b6a/volumes" Dec 06 05:33:52 crc kubenswrapper[4958]: I1206 05:33:52.036097 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-4l8c2" Dec 06 05:33:52 crc kubenswrapper[4958]: I1206 05:33:52.761882 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-cmpmq"] Dec 06 05:33:52 crc kubenswrapper[4958]: E1206 05:33:52.762126 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="192a9281-a2ae-4251-aaf2-8d1f67d0321c" containerName="extract-content" Dec 06 05:33:52 crc kubenswrapper[4958]: I1206 05:33:52.762140 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="192a9281-a2ae-4251-aaf2-8d1f67d0321c" containerName="extract-content" Dec 06 05:33:52 crc kubenswrapper[4958]: E1206 05:33:52.762155 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3786f843-0226-4fdc-8511-62659463b3fb" containerName="extract-utilities" Dec 06 05:33:52 crc kubenswrapper[4958]: I1206 05:33:52.762164 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="3786f843-0226-4fdc-8511-62659463b3fb" containerName="extract-utilities" Dec 06 05:33:52 crc kubenswrapper[4958]: E1206 05:33:52.762174 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97d8e3a5-e7e8-46c8-8346-1fddba1a0b6a" containerName="registry-server" Dec 06 05:33:52 crc kubenswrapper[4958]: I1206 05:33:52.762183 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="97d8e3a5-e7e8-46c8-8346-1fddba1a0b6a" containerName="registry-server" Dec 06 05:33:52 crc kubenswrapper[4958]: E1206 05:33:52.762193 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71c8096c-9091-428a-a142-185855892fb9" containerName="extract-utilities" Dec 06 05:33:52 crc kubenswrapper[4958]: I1206 05:33:52.762201 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="71c8096c-9091-428a-a142-185855892fb9" containerName="extract-utilities" Dec 06 05:33:52 crc kubenswrapper[4958]: E1206 05:33:52.762214 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05fda290-e73b-468e-b494-6cd912e3cbd8" 
containerName="marketplace-operator" Dec 06 05:33:52 crc kubenswrapper[4958]: I1206 05:33:52.762222 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="05fda290-e73b-468e-b494-6cd912e3cbd8" containerName="marketplace-operator" Dec 06 05:33:52 crc kubenswrapper[4958]: E1206 05:33:52.762232 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97d8e3a5-e7e8-46c8-8346-1fddba1a0b6a" containerName="extract-utilities" Dec 06 05:33:52 crc kubenswrapper[4958]: I1206 05:33:52.762240 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="97d8e3a5-e7e8-46c8-8346-1fddba1a0b6a" containerName="extract-utilities" Dec 06 05:33:52 crc kubenswrapper[4958]: E1206 05:33:52.762252 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="192a9281-a2ae-4251-aaf2-8d1f67d0321c" containerName="extract-utilities" Dec 06 05:33:52 crc kubenswrapper[4958]: I1206 05:33:52.762260 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="192a9281-a2ae-4251-aaf2-8d1f67d0321c" containerName="extract-utilities" Dec 06 05:33:52 crc kubenswrapper[4958]: E1206 05:33:52.762274 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3786f843-0226-4fdc-8511-62659463b3fb" containerName="extract-content" Dec 06 05:33:52 crc kubenswrapper[4958]: I1206 05:33:52.762282 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="3786f843-0226-4fdc-8511-62659463b3fb" containerName="extract-content" Dec 06 05:33:52 crc kubenswrapper[4958]: E1206 05:33:52.762292 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71c8096c-9091-428a-a142-185855892fb9" containerName="extract-content" Dec 06 05:33:52 crc kubenswrapper[4958]: I1206 05:33:52.762300 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="71c8096c-9091-428a-a142-185855892fb9" containerName="extract-content" Dec 06 05:33:52 crc kubenswrapper[4958]: E1206 05:33:52.762313 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="71c8096c-9091-428a-a142-185855892fb9" containerName="registry-server" Dec 06 05:33:52 crc kubenswrapper[4958]: I1206 05:33:52.762320 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="71c8096c-9091-428a-a142-185855892fb9" containerName="registry-server" Dec 06 05:33:52 crc kubenswrapper[4958]: E1206 05:33:52.762332 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="192a9281-a2ae-4251-aaf2-8d1f67d0321c" containerName="registry-server" Dec 06 05:33:52 crc kubenswrapper[4958]: I1206 05:33:52.762339 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="192a9281-a2ae-4251-aaf2-8d1f67d0321c" containerName="registry-server" Dec 06 05:33:52 crc kubenswrapper[4958]: E1206 05:33:52.762352 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97d8e3a5-e7e8-46c8-8346-1fddba1a0b6a" containerName="extract-content" Dec 06 05:33:52 crc kubenswrapper[4958]: I1206 05:33:52.762360 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="97d8e3a5-e7e8-46c8-8346-1fddba1a0b6a" containerName="extract-content" Dec 06 05:33:52 crc kubenswrapper[4958]: E1206 05:33:52.762591 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3786f843-0226-4fdc-8511-62659463b3fb" containerName="registry-server" Dec 06 05:33:52 crc kubenswrapper[4958]: I1206 05:33:52.762604 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="3786f843-0226-4fdc-8511-62659463b3fb" containerName="registry-server" Dec 06 05:33:52 crc kubenswrapper[4958]: I1206 05:33:52.762715 4958 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="97d8e3a5-e7e8-46c8-8346-1fddba1a0b6a" containerName="registry-server" Dec 06 05:33:52 crc kubenswrapper[4958]: I1206 05:33:52.762726 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="3786f843-0226-4fdc-8511-62659463b3fb" containerName="registry-server" Dec 06 05:33:52 crc kubenswrapper[4958]: I1206 05:33:52.762738 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="05fda290-e73b-468e-b494-6cd912e3cbd8" containerName="marketplace-operator" Dec 06 05:33:52 crc kubenswrapper[4958]: I1206 05:33:52.762750 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="192a9281-a2ae-4251-aaf2-8d1f67d0321c" containerName="registry-server" Dec 06 05:33:52 crc kubenswrapper[4958]: I1206 05:33:52.762769 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="71c8096c-9091-428a-a142-185855892fb9" containerName="registry-server" Dec 06 05:33:52 crc kubenswrapper[4958]: I1206 05:33:52.763592 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cmpmq" Dec 06 05:33:52 crc kubenswrapper[4958]: I1206 05:33:52.766670 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 06 05:33:52 crc kubenswrapper[4958]: I1206 05:33:52.770458 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cmpmq"] Dec 06 05:33:52 crc kubenswrapper[4958]: I1206 05:33:52.844953 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdrls\" (UniqueName: \"kubernetes.io/projected/282841fb-36ec-47cf-a371-eeef4e081b4c-kube-api-access-qdrls\") pod \"certified-operators-cmpmq\" (UID: \"282841fb-36ec-47cf-a371-eeef4e081b4c\") " pod="openshift-marketplace/certified-operators-cmpmq" Dec 06 05:33:52 crc kubenswrapper[4958]: I1206 05:33:52.845008 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/282841fb-36ec-47cf-a371-eeef4e081b4c-catalog-content\") pod \"certified-operators-cmpmq\" (UID: \"282841fb-36ec-47cf-a371-eeef4e081b4c\") " pod="openshift-marketplace/certified-operators-cmpmq" Dec 06 05:33:52 crc kubenswrapper[4958]: I1206 05:33:52.845069 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/282841fb-36ec-47cf-a371-eeef4e081b4c-utilities\") pod \"certified-operators-cmpmq\" (UID: \"282841fb-36ec-47cf-a371-eeef4e081b4c\") " pod="openshift-marketplace/certified-operators-cmpmq" Dec 06 05:33:52 crc kubenswrapper[4958]: I1206 05:33:52.945866 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/282841fb-36ec-47cf-a371-eeef4e081b4c-utilities\") pod \"certified-operators-cmpmq\" (UID: \"282841fb-36ec-47cf-a371-eeef4e081b4c\") " pod="openshift-marketplace/certified-operators-cmpmq" Dec 06 05:33:52 crc kubenswrapper[4958]: I1206 05:33:52.945953 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdrls\" (UniqueName: \"kubernetes.io/projected/282841fb-36ec-47cf-a371-eeef4e081b4c-kube-api-access-qdrls\") pod \"certified-operators-cmpmq\" (UID: \"282841fb-36ec-47cf-a371-eeef4e081b4c\") " pod="openshift-marketplace/certified-operators-cmpmq" Dec 06 05:33:52 crc kubenswrapper[4958]: I1206 
05:33:52.945979 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/282841fb-36ec-47cf-a371-eeef4e081b4c-catalog-content\") pod \"certified-operators-cmpmq\" (UID: \"282841fb-36ec-47cf-a371-eeef4e081b4c\") " pod="openshift-marketplace/certified-operators-cmpmq" Dec 06 05:33:52 crc kubenswrapper[4958]: I1206 05:33:52.946349 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/282841fb-36ec-47cf-a371-eeef4e081b4c-catalog-content\") pod \"certified-operators-cmpmq\" (UID: \"282841fb-36ec-47cf-a371-eeef4e081b4c\") " pod="openshift-marketplace/certified-operators-cmpmq" Dec 06 05:33:52 crc kubenswrapper[4958]: I1206 05:33:52.946598 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/282841fb-36ec-47cf-a371-eeef4e081b4c-utilities\") pod \"certified-operators-cmpmq\" (UID: \"282841fb-36ec-47cf-a371-eeef4e081b4c\") " pod="openshift-marketplace/certified-operators-cmpmq" Dec 06 05:33:52 crc kubenswrapper[4958]: I1206 05:33:52.949786 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6rr8p"] Dec 06 05:33:52 crc kubenswrapper[4958]: I1206 05:33:52.951254 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6rr8p" Dec 06 05:33:52 crc kubenswrapper[4958]: I1206 05:33:52.955432 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 06 05:33:52 crc kubenswrapper[4958]: I1206 05:33:52.958434 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6rr8p"] Dec 06 05:33:52 crc kubenswrapper[4958]: I1206 05:33:52.969288 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdrls\" (UniqueName: \"kubernetes.io/projected/282841fb-36ec-47cf-a371-eeef4e081b4c-kube-api-access-qdrls\") pod \"certified-operators-cmpmq\" (UID: \"282841fb-36ec-47cf-a371-eeef4e081b4c\") " pod="openshift-marketplace/certified-operators-cmpmq" Dec 06 05:33:53 crc kubenswrapper[4958]: I1206 05:33:53.047296 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c35562f3-9b8d-407a-9e1f-d74a17561858-catalog-content\") pod \"community-operators-6rr8p\" (UID: \"c35562f3-9b8d-407a-9e1f-d74a17561858\") " pod="openshift-marketplace/community-operators-6rr8p" Dec 06 05:33:53 crc kubenswrapper[4958]: I1206 05:33:53.047348 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8s25\" (UniqueName: \"kubernetes.io/projected/c35562f3-9b8d-407a-9e1f-d74a17561858-kube-api-access-c8s25\") pod \"community-operators-6rr8p\" (UID: \"c35562f3-9b8d-407a-9e1f-d74a17561858\") " pod="openshift-marketplace/community-operators-6rr8p" Dec 06 05:33:53 crc kubenswrapper[4958]: I1206 05:33:53.047391 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c35562f3-9b8d-407a-9e1f-d74a17561858-utilities\") pod \"community-operators-6rr8p\" (UID: \"c35562f3-9b8d-407a-9e1f-d74a17561858\") " pod="openshift-marketplace/community-operators-6rr8p" Dec 06 05:33:53 crc kubenswrapper[4958]: I1206 
05:33:53.093399 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cmpmq" Dec 06 05:33:53 crc kubenswrapper[4958]: I1206 05:33:53.148528 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8s25\" (UniqueName: \"kubernetes.io/projected/c35562f3-9b8d-407a-9e1f-d74a17561858-kube-api-access-c8s25\") pod \"community-operators-6rr8p\" (UID: \"c35562f3-9b8d-407a-9e1f-d74a17561858\") " pod="openshift-marketplace/community-operators-6rr8p" Dec 06 05:33:53 crc kubenswrapper[4958]: I1206 05:33:53.149306 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c35562f3-9b8d-407a-9e1f-d74a17561858-utilities\") pod \"community-operators-6rr8p\" (UID: \"c35562f3-9b8d-407a-9e1f-d74a17561858\") " pod="openshift-marketplace/community-operators-6rr8p" Dec 06 05:33:53 crc kubenswrapper[4958]: I1206 05:33:53.149421 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c35562f3-9b8d-407a-9e1f-d74a17561858-catalog-content\") pod \"community-operators-6rr8p\" (UID: \"c35562f3-9b8d-407a-9e1f-d74a17561858\") " pod="openshift-marketplace/community-operators-6rr8p" Dec 06 05:33:53 crc kubenswrapper[4958]: I1206 05:33:53.149831 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c35562f3-9b8d-407a-9e1f-d74a17561858-catalog-content\") pod \"community-operators-6rr8p\" (UID: \"c35562f3-9b8d-407a-9e1f-d74a17561858\") " pod="openshift-marketplace/community-operators-6rr8p" Dec 06 05:33:53 crc kubenswrapper[4958]: I1206 05:33:53.150031 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c35562f3-9b8d-407a-9e1f-d74a17561858-utilities\") pod \"community-operators-6rr8p\" (UID: \"c35562f3-9b8d-407a-9e1f-d74a17561858\") " pod="openshift-marketplace/community-operators-6rr8p" Dec 06 05:33:53 crc kubenswrapper[4958]: I1206 05:33:53.171222 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8s25\" (UniqueName: \"kubernetes.io/projected/c35562f3-9b8d-407a-9e1f-d74a17561858-kube-api-access-c8s25\") pod \"community-operators-6rr8p\" (UID: \"c35562f3-9b8d-407a-9e1f-d74a17561858\") " pod="openshift-marketplace/community-operators-6rr8p" Dec 06 05:33:53 crc kubenswrapper[4958]: I1206 05:33:53.309440 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6rr8p" Dec 06 05:33:53 crc kubenswrapper[4958]: I1206 05:33:53.577899 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cmpmq"] Dec 06 05:33:53 crc kubenswrapper[4958]: I1206 05:33:53.717712 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6rr8p"] Dec 06 05:33:53 crc kubenswrapper[4958]: W1206 05:33:53.782854 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc35562f3_9b8d_407a_9e1f_d74a17561858.slice/crio-b3c4527cab9fdeda3594cfa55014678bc4068c5f932d6f66ef310ffcdfc83d88 WatchSource:0}: Error finding container b3c4527cab9fdeda3594cfa55014678bc4068c5f932d6f66ef310ffcdfc83d88: Status 404 returned error can't find the container with id b3c4527cab9fdeda3594cfa55014678bc4068c5f932d6f66ef310ffcdfc83d88 Dec 06 05:33:54 crc kubenswrapper[4958]: I1206 05:33:54.044192 4958 generic.go:334] "Generic (PLEG): container finished" podID="282841fb-36ec-47cf-a371-eeef4e081b4c" containerID="596dbb004cbf8402ed7fa9c7e9d1da4579e35fbcb93237b981537fb7d7059d4e" exitCode=0 Dec 06 05:33:54 crc kubenswrapper[4958]: I1206 05:33:54.044313 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cmpmq" event={"ID":"282841fb-36ec-47cf-a371-eeef4e081b4c","Type":"ContainerDied","Data":"596dbb004cbf8402ed7fa9c7e9d1da4579e35fbcb93237b981537fb7d7059d4e"} Dec 06 05:33:54 crc kubenswrapper[4958]: I1206 05:33:54.044346 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cmpmq" event={"ID":"282841fb-36ec-47cf-a371-eeef4e081b4c","Type":"ContainerStarted","Data":"d5a0ff0db92b8ae444fedaaf14fdda09d533243638fe30d341995ed124e123da"} Dec 06 05:33:54 crc kubenswrapper[4958]: I1206 05:33:54.046075 4958 generic.go:334] "Generic (PLEG): container finished" podID="c35562f3-9b8d-407a-9e1f-d74a17561858" containerID="47add315a20787fd20003f5b9636a14abbe4d343a2924183c1480a39f5b46679" exitCode=0 Dec 06 05:33:54 crc kubenswrapper[4958]: I1206 05:33:54.046453 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6rr8p" event={"ID":"c35562f3-9b8d-407a-9e1f-d74a17561858","Type":"ContainerDied","Data":"47add315a20787fd20003f5b9636a14abbe4d343a2924183c1480a39f5b46679"} Dec 06 05:33:54 crc kubenswrapper[4958]: I1206 05:33:54.046503 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6rr8p" event={"ID":"c35562f3-9b8d-407a-9e1f-d74a17561858","Type":"ContainerStarted","Data":"b3c4527cab9fdeda3594cfa55014678bc4068c5f932d6f66ef310ffcdfc83d88"} Dec 06 05:33:55 crc kubenswrapper[4958]: I1206 05:33:55.053129 4958 generic.go:334] "Generic (PLEG): container finished" podID="c35562f3-9b8d-407a-9e1f-d74a17561858" containerID="2d5fbb13c46e3edb8cbf4606d74a8d1d9c37962d583145b6f71c2048fa64d6c1" exitCode=0 Dec 06 05:33:55 crc kubenswrapper[4958]: I1206 05:33:55.053190 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6rr8p" event={"ID":"c35562f3-9b8d-407a-9e1f-d74a17561858","Type":"ContainerDied","Data":"2d5fbb13c46e3edb8cbf4606d74a8d1d9c37962d583145b6f71c2048fa64d6c1"} Dec 06 05:33:55 crc kubenswrapper[4958]: I1206 05:33:55.055079 4958 generic.go:334] "Generic (PLEG): container finished" podID="282841fb-36ec-47cf-a371-eeef4e081b4c" 
containerID="932d625d90d81751a9e0ce72126db56df715366c2296dd8586ea3344b1545aaa" exitCode=0 Dec 06 05:33:55 crc kubenswrapper[4958]: I1206 05:33:55.055115 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cmpmq" event={"ID":"282841fb-36ec-47cf-a371-eeef4e081b4c","Type":"ContainerDied","Data":"932d625d90d81751a9e0ce72126db56df715366c2296dd8586ea3344b1545aaa"} Dec 06 05:33:55 crc kubenswrapper[4958]: I1206 05:33:55.151312 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-q2qdm"] Dec 06 05:33:55 crc kubenswrapper[4958]: I1206 05:33:55.152683 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-q2qdm" Dec 06 05:33:55 crc kubenswrapper[4958]: I1206 05:33:55.154092 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 06 05:33:55 crc kubenswrapper[4958]: I1206 05:33:55.163801 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-q2qdm"] Dec 06 05:33:55 crc kubenswrapper[4958]: I1206 05:33:55.277183 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpjcz\" (UniqueName: \"kubernetes.io/projected/17574228-062b-4c78-a16f-3e8a616e9a37-kube-api-access-qpjcz\") pod \"redhat-operators-q2qdm\" (UID: \"17574228-062b-4c78-a16f-3e8a616e9a37\") " pod="openshift-marketplace/redhat-operators-q2qdm" Dec 06 05:33:55 crc kubenswrapper[4958]: I1206 05:33:55.277246 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17574228-062b-4c78-a16f-3e8a616e9a37-catalog-content\") pod \"redhat-operators-q2qdm\" (UID: \"17574228-062b-4c78-a16f-3e8a616e9a37\") " pod="openshift-marketplace/redhat-operators-q2qdm" Dec 06 05:33:55 crc kubenswrapper[4958]: I1206 05:33:55.277567 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17574228-062b-4c78-a16f-3e8a616e9a37-utilities\") pod \"redhat-operators-q2qdm\" (UID: \"17574228-062b-4c78-a16f-3e8a616e9a37\") " pod="openshift-marketplace/redhat-operators-q2qdm" Dec 06 05:33:55 crc kubenswrapper[4958]: I1206 05:33:55.349655 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2997j"] Dec 06 05:33:55 crc kubenswrapper[4958]: I1206 05:33:55.350683 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2997j" Dec 06 05:33:55 crc kubenswrapper[4958]: I1206 05:33:55.353363 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 06 05:33:55 crc kubenswrapper[4958]: I1206 05:33:55.361732 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2997j"] Dec 06 05:33:55 crc kubenswrapper[4958]: I1206 05:33:55.378623 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17574228-062b-4c78-a16f-3e8a616e9a37-utilities\") pod \"redhat-operators-q2qdm\" (UID: \"17574228-062b-4c78-a16f-3e8a616e9a37\") " pod="openshift-marketplace/redhat-operators-q2qdm" Dec 06 05:33:55 crc kubenswrapper[4958]: I1206 05:33:55.378699 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpjcz\" (UniqueName: \"kubernetes.io/projected/17574228-062b-4c78-a16f-3e8a616e9a37-kube-api-access-qpjcz\") pod \"redhat-operators-q2qdm\" (UID: \"17574228-062b-4c78-a16f-3e8a616e9a37\") " pod="openshift-marketplace/redhat-operators-q2qdm" Dec 06 05:33:55 crc kubenswrapper[4958]: I1206 05:33:55.378751 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17574228-062b-4c78-a16f-3e8a616e9a37-catalog-content\") pod \"redhat-operators-q2qdm\" (UID: \"17574228-062b-4c78-a16f-3e8a616e9a37\") " pod="openshift-marketplace/redhat-operators-q2qdm" Dec 06 05:33:55 crc kubenswrapper[4958]: I1206 05:33:55.379158 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17574228-062b-4c78-a16f-3e8a616e9a37-utilities\") pod \"redhat-operators-q2qdm\" (UID: \"17574228-062b-4c78-a16f-3e8a616e9a37\") " pod="openshift-marketplace/redhat-operators-q2qdm" Dec 06 05:33:55 crc kubenswrapper[4958]: I1206 05:33:55.379168 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17574228-062b-4c78-a16f-3e8a616e9a37-catalog-content\") pod \"redhat-operators-q2qdm\" (UID: \"17574228-062b-4c78-a16f-3e8a616e9a37\") " pod="openshift-marketplace/redhat-operators-q2qdm" Dec 06 05:33:55 crc kubenswrapper[4958]: I1206 05:33:55.399830 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpjcz\" (UniqueName: \"kubernetes.io/projected/17574228-062b-4c78-a16f-3e8a616e9a37-kube-api-access-qpjcz\") pod \"redhat-operators-q2qdm\" (UID: \"17574228-062b-4c78-a16f-3e8a616e9a37\") " pod="openshift-marketplace/redhat-operators-q2qdm" Dec 06 05:33:55 crc kubenswrapper[4958]: I1206 05:33:55.471952 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-q2qdm" Dec 06 05:33:55 crc kubenswrapper[4958]: I1206 05:33:55.480614 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26b187af-f0da-435e-9b8e-89086658d5b1-utilities\") pod \"redhat-marketplace-2997j\" (UID: \"26b187af-f0da-435e-9b8e-89086658d5b1\") " pod="openshift-marketplace/redhat-marketplace-2997j" Dec 06 05:33:55 crc kubenswrapper[4958]: I1206 05:33:55.480695 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26b187af-f0da-435e-9b8e-89086658d5b1-catalog-content\") pod \"redhat-marketplace-2997j\" (UID: \"26b187af-f0da-435e-9b8e-89086658d5b1\") " pod="openshift-marketplace/redhat-marketplace-2997j" Dec 06 05:33:55 crc kubenswrapper[4958]: I1206 05:33:55.480865 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kw4qv\" (UniqueName: \"kubernetes.io/projected/26b187af-f0da-435e-9b8e-89086658d5b1-kube-api-access-kw4qv\") pod \"redhat-marketplace-2997j\" (UID: \"26b187af-f0da-435e-9b8e-89086658d5b1\") " pod="openshift-marketplace/redhat-marketplace-2997j" Dec 06 05:33:55 crc kubenswrapper[4958]: I1206 05:33:55.592710 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26b187af-f0da-435e-9b8e-89086658d5b1-catalog-content\") pod \"redhat-marketplace-2997j\" (UID: \"26b187af-f0da-435e-9b8e-89086658d5b1\") " pod="openshift-marketplace/redhat-marketplace-2997j" Dec 06 05:33:55 crc kubenswrapper[4958]: I1206 05:33:55.593095 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kw4qv\" (UniqueName: \"kubernetes.io/projected/26b187af-f0da-435e-9b8e-89086658d5b1-kube-api-access-kw4qv\") pod \"redhat-marketplace-2997j\" (UID: \"26b187af-f0da-435e-9b8e-89086658d5b1\") " pod="openshift-marketplace/redhat-marketplace-2997j" Dec 06 05:33:55 crc kubenswrapper[4958]: I1206 05:33:55.593160 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26b187af-f0da-435e-9b8e-89086658d5b1-utilities\") pod \"redhat-marketplace-2997j\" (UID: \"26b187af-f0da-435e-9b8e-89086658d5b1\") " pod="openshift-marketplace/redhat-marketplace-2997j" Dec 06 05:33:55 crc kubenswrapper[4958]: I1206 05:33:55.593728 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26b187af-f0da-435e-9b8e-89086658d5b1-utilities\") pod \"redhat-marketplace-2997j\" (UID: \"26b187af-f0da-435e-9b8e-89086658d5b1\") " pod="openshift-marketplace/redhat-marketplace-2997j" Dec 06 05:33:55 crc kubenswrapper[4958]: I1206 05:33:55.594245 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26b187af-f0da-435e-9b8e-89086658d5b1-catalog-content\") pod \"redhat-marketplace-2997j\" (UID: \"26b187af-f0da-435e-9b8e-89086658d5b1\") " pod="openshift-marketplace/redhat-marketplace-2997j" Dec 06 05:33:55 crc kubenswrapper[4958]: I1206 05:33:55.614088 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kw4qv\" (UniqueName: \"kubernetes.io/projected/26b187af-f0da-435e-9b8e-89086658d5b1-kube-api-access-kw4qv\") pod 
\"redhat-marketplace-2997j\" (UID: \"26b187af-f0da-435e-9b8e-89086658d5b1\") " pod="openshift-marketplace/redhat-marketplace-2997j" Dec 06 05:33:55 crc kubenswrapper[4958]: I1206 05:33:55.680446 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2997j" Dec 06 05:33:55 crc kubenswrapper[4958]: I1206 05:33:55.893732 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-q2qdm"] Dec 06 05:33:56 crc kubenswrapper[4958]: I1206 05:33:56.062681 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cmpmq" event={"ID":"282841fb-36ec-47cf-a371-eeef4e081b4c","Type":"ContainerStarted","Data":"2aac1c9125d944ed8a33154ede921d66e68daf809e990e27176f483eb75cae20"} Dec 06 05:33:56 crc kubenswrapper[4958]: I1206 05:33:56.068224 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6rr8p" event={"ID":"c35562f3-9b8d-407a-9e1f-d74a17561858","Type":"ContainerStarted","Data":"7fdffaea75e159fc72cf54d4df5d2f195817b6d725dede699b8c3e70b5f08919"} Dec 06 05:33:56 crc kubenswrapper[4958]: I1206 05:33:56.069987 4958 generic.go:334] "Generic (PLEG): container finished" podID="17574228-062b-4c78-a16f-3e8a616e9a37" containerID="4c5e4699e10587143850e321eb64306e828bac64a3b23ee27ee21f2bf6a4737c" exitCode=0 Dec 06 05:33:56 crc kubenswrapper[4958]: I1206 05:33:56.070037 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q2qdm" event={"ID":"17574228-062b-4c78-a16f-3e8a616e9a37","Type":"ContainerDied","Data":"4c5e4699e10587143850e321eb64306e828bac64a3b23ee27ee21f2bf6a4737c"} Dec 06 05:33:56 crc kubenswrapper[4958]: I1206 05:33:56.070063 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q2qdm" event={"ID":"17574228-062b-4c78-a16f-3e8a616e9a37","Type":"ContainerStarted","Data":"d81ce8dfe5b72047210cb2929833b463387781bdf557d9d3a6e932bca610cd00"} Dec 06 05:33:56 crc kubenswrapper[4958]: I1206 05:33:56.084169 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-cmpmq" podStartSLOduration=2.49816969 podStartE2EDuration="4.084150404s" podCreationTimestamp="2025-12-06 05:33:52 +0000 UTC" firstStartedPulling="2025-12-06 05:33:54.046652074 +0000 UTC m=+344.580422837" lastFinishedPulling="2025-12-06 05:33:55.632632788 +0000 UTC m=+346.166403551" observedRunningTime="2025-12-06 05:33:56.082962799 +0000 UTC m=+346.616733582" watchObservedRunningTime="2025-12-06 05:33:56.084150404 +0000 UTC m=+346.617921157" Dec 06 05:33:56 crc kubenswrapper[4958]: I1206 05:33:56.092443 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2997j"] Dec 06 05:33:56 crc kubenswrapper[4958]: W1206 05:33:56.100562 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod26b187af_f0da_435e_9b8e_89086658d5b1.slice/crio-07b851065dc99b10ece293123450ba67860fd7bcf4715d931676b19f42e81bbb WatchSource:0}: Error finding container 07b851065dc99b10ece293123450ba67860fd7bcf4715d931676b19f42e81bbb: Status 404 returned error can't find the container with id 07b851065dc99b10ece293123450ba67860fd7bcf4715d931676b19f42e81bbb Dec 06 05:33:56 crc kubenswrapper[4958]: I1206 05:33:56.100797 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/community-operators-6rr8p" podStartSLOduration=2.684299844 podStartE2EDuration="4.100782931s" podCreationTimestamp="2025-12-06 05:33:52 +0000 UTC" firstStartedPulling="2025-12-06 05:33:54.048035324 +0000 UTC m=+344.581806107" lastFinishedPulling="2025-12-06 05:33:55.464518431 +0000 UTC m=+345.998289194" observedRunningTime="2025-12-06 05:33:56.09969323 +0000 UTC m=+346.633464003" watchObservedRunningTime="2025-12-06 05:33:56.100782931 +0000 UTC m=+346.634553704" Dec 06 05:33:57 crc kubenswrapper[4958]: I1206 05:33:57.076854 4958 generic.go:334] "Generic (PLEG): container finished" podID="26b187af-f0da-435e-9b8e-89086658d5b1" containerID="5c1a51d1a4745d924329cbe9adf744a91196a57d3755fc44e6d0d4bf5b03ee69" exitCode=0 Dec 06 05:33:57 crc kubenswrapper[4958]: I1206 05:33:57.076913 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2997j" event={"ID":"26b187af-f0da-435e-9b8e-89086658d5b1","Type":"ContainerDied","Data":"5c1a51d1a4745d924329cbe9adf744a91196a57d3755fc44e6d0d4bf5b03ee69"} Dec 06 05:33:57 crc kubenswrapper[4958]: I1206 05:33:57.077274 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2997j" event={"ID":"26b187af-f0da-435e-9b8e-89086658d5b1","Type":"ContainerStarted","Data":"07b851065dc99b10ece293123450ba67860fd7bcf4715d931676b19f42e81bbb"} Dec 06 05:33:57 crc kubenswrapper[4958]: I1206 05:33:57.079668 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q2qdm" event={"ID":"17574228-062b-4c78-a16f-3e8a616e9a37","Type":"ContainerStarted","Data":"6e8b9494c41e3cac3009e3cc0a8771460153d6bdb4ab5dbe64fbbc6ca2d88f3b"} Dec 06 05:33:58 crc kubenswrapper[4958]: I1206 05:33:58.086108 4958 generic.go:334] "Generic (PLEG): container finished" podID="26b187af-f0da-435e-9b8e-89086658d5b1" containerID="c06100eae0775e2d00b9fa58df1be9fcb66bb99d8c57552e510ce18864ccac8c" exitCode=0 Dec 06 05:33:58 crc kubenswrapper[4958]: I1206 05:33:58.086181 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2997j" event={"ID":"26b187af-f0da-435e-9b8e-89086658d5b1","Type":"ContainerDied","Data":"c06100eae0775e2d00b9fa58df1be9fcb66bb99d8c57552e510ce18864ccac8c"} Dec 06 05:33:58 crc kubenswrapper[4958]: I1206 05:33:58.087950 4958 generic.go:334] "Generic (PLEG): container finished" podID="17574228-062b-4c78-a16f-3e8a616e9a37" containerID="6e8b9494c41e3cac3009e3cc0a8771460153d6bdb4ab5dbe64fbbc6ca2d88f3b" exitCode=0 Dec 06 05:33:58 crc kubenswrapper[4958]: I1206 05:33:58.087976 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q2qdm" event={"ID":"17574228-062b-4c78-a16f-3e8a616e9a37","Type":"ContainerDied","Data":"6e8b9494c41e3cac3009e3cc0a8771460153d6bdb4ab5dbe64fbbc6ca2d88f3b"} Dec 06 05:33:59 crc kubenswrapper[4958]: I1206 05:33:59.096293 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q2qdm" event={"ID":"17574228-062b-4c78-a16f-3e8a616e9a37","Type":"ContainerStarted","Data":"6a4faef06687916309c8f7c808e96a5114f9e19373ff3cefb914dd1085ddfd3f"} Dec 06 05:33:59 crc kubenswrapper[4958]: I1206 05:33:59.103787 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2997j" event={"ID":"26b187af-f0da-435e-9b8e-89086658d5b1","Type":"ContainerStarted","Data":"059eae096fa1aab6a11b11130f3e4e6724810e4464a6f962abfd26d98b45b36b"} Dec 06 05:33:59 crc kubenswrapper[4958]: I1206 
05:33:59.117727 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-q2qdm" podStartSLOduration=1.7133755229999998 podStartE2EDuration="4.117705707s" podCreationTimestamp="2025-12-06 05:33:55 +0000 UTC" firstStartedPulling="2025-12-06 05:33:56.071513831 +0000 UTC m=+346.605284594" lastFinishedPulling="2025-12-06 05:33:58.475844005 +0000 UTC m=+349.009614778" observedRunningTime="2025-12-06 05:33:59.113333802 +0000 UTC m=+349.647104565" watchObservedRunningTime="2025-12-06 05:33:59.117705707 +0000 UTC m=+349.651476480" Dec 06 05:33:59 crc kubenswrapper[4958]: I1206 05:33:59.130937 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2997j" podStartSLOduration=2.661821728 podStartE2EDuration="4.130918156s" podCreationTimestamp="2025-12-06 05:33:55 +0000 UTC" firstStartedPulling="2025-12-06 05:33:57.078495688 +0000 UTC m=+347.612266451" lastFinishedPulling="2025-12-06 05:33:58.547592116 +0000 UTC m=+349.081362879" observedRunningTime="2025-12-06 05:33:59.130416982 +0000 UTC m=+349.664187745" watchObservedRunningTime="2025-12-06 05:33:59.130918156 +0000 UTC m=+349.664688919" Dec 06 05:33:59 crc kubenswrapper[4958]: I1206 05:33:59.797463 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6799c5f44c-pwztl"] Dec 06 05:33:59 crc kubenswrapper[4958]: I1206 05:33:59.798017 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6799c5f44c-pwztl" podUID="a023c338-584f-4c04-9a22-dc1bf45234c2" containerName="controller-manager" containerID="cri-o://0ef385357edb7ff25f4a04294feb647dc14908c904e40facad8d374136a6f46e" gracePeriod=30 Dec 06 05:34:00 crc kubenswrapper[4958]: I1206 05:34:00.707350 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6799c5f44c-pwztl" Dec 06 05:34:00 crc kubenswrapper[4958]: I1206 05:34:00.870678 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rb8c9\" (UniqueName: \"kubernetes.io/projected/a023c338-584f-4c04-9a22-dc1bf45234c2-kube-api-access-rb8c9\") pod \"a023c338-584f-4c04-9a22-dc1bf45234c2\" (UID: \"a023c338-584f-4c04-9a22-dc1bf45234c2\") " Dec 06 05:34:00 crc kubenswrapper[4958]: I1206 05:34:00.870786 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a023c338-584f-4c04-9a22-dc1bf45234c2-proxy-ca-bundles\") pod \"a023c338-584f-4c04-9a22-dc1bf45234c2\" (UID: \"a023c338-584f-4c04-9a22-dc1bf45234c2\") " Dec 06 05:34:00 crc kubenswrapper[4958]: I1206 05:34:00.870827 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a023c338-584f-4c04-9a22-dc1bf45234c2-client-ca\") pod \"a023c338-584f-4c04-9a22-dc1bf45234c2\" (UID: \"a023c338-584f-4c04-9a22-dc1bf45234c2\") " Dec 06 05:34:00 crc kubenswrapper[4958]: I1206 05:34:00.870855 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a023c338-584f-4c04-9a22-dc1bf45234c2-serving-cert\") pod \"a023c338-584f-4c04-9a22-dc1bf45234c2\" (UID: \"a023c338-584f-4c04-9a22-dc1bf45234c2\") " Dec 06 05:34:00 crc kubenswrapper[4958]: I1206 05:34:00.870899 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a023c338-584f-4c04-9a22-dc1bf45234c2-config\") pod \"a023c338-584f-4c04-9a22-dc1bf45234c2\" (UID: \"a023c338-584f-4c04-9a22-dc1bf45234c2\") " Dec 06 05:34:00 crc kubenswrapper[4958]: I1206 05:34:00.871684 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a023c338-584f-4c04-9a22-dc1bf45234c2-client-ca" (OuterVolumeSpecName: "client-ca") pod "a023c338-584f-4c04-9a22-dc1bf45234c2" (UID: "a023c338-584f-4c04-9a22-dc1bf45234c2"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:34:00 crc kubenswrapper[4958]: I1206 05:34:00.871785 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a023c338-584f-4c04-9a22-dc1bf45234c2-config" (OuterVolumeSpecName: "config") pod "a023c338-584f-4c04-9a22-dc1bf45234c2" (UID: "a023c338-584f-4c04-9a22-dc1bf45234c2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:34:00 crc kubenswrapper[4958]: I1206 05:34:00.872142 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a023c338-584f-4c04-9a22-dc1bf45234c2-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a023c338-584f-4c04-9a22-dc1bf45234c2" (UID: "a023c338-584f-4c04-9a22-dc1bf45234c2"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:34:00 crc kubenswrapper[4958]: I1206 05:34:00.887809 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a023c338-584f-4c04-9a22-dc1bf45234c2-kube-api-access-rb8c9" (OuterVolumeSpecName: "kube-api-access-rb8c9") pod "a023c338-584f-4c04-9a22-dc1bf45234c2" (UID: "a023c338-584f-4c04-9a22-dc1bf45234c2"). InnerVolumeSpecName "kube-api-access-rb8c9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:34:00 crc kubenswrapper[4958]: I1206 05:34:00.888457 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a023c338-584f-4c04-9a22-dc1bf45234c2-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a023c338-584f-4c04-9a22-dc1bf45234c2" (UID: "a023c338-584f-4c04-9a22-dc1bf45234c2"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:34:00 crc kubenswrapper[4958]: I1206 05:34:00.972156 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a023c338-584f-4c04-9a22-dc1bf45234c2-config\") on node \"crc\" DevicePath \"\"" Dec 06 05:34:00 crc kubenswrapper[4958]: I1206 05:34:00.972194 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rb8c9\" (UniqueName: \"kubernetes.io/projected/a023c338-584f-4c04-9a22-dc1bf45234c2-kube-api-access-rb8c9\") on node \"crc\" DevicePath \"\"" Dec 06 05:34:00 crc kubenswrapper[4958]: I1206 05:34:00.972205 4958 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a023c338-584f-4c04-9a22-dc1bf45234c2-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 06 05:34:00 crc kubenswrapper[4958]: I1206 05:34:00.972213 4958 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a023c338-584f-4c04-9a22-dc1bf45234c2-client-ca\") on node \"crc\" DevicePath \"\"" Dec 06 05:34:00 crc kubenswrapper[4958]: I1206 05:34:00.972222 4958 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a023c338-584f-4c04-9a22-dc1bf45234c2-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 06 05:34:01 crc kubenswrapper[4958]: I1206 05:34:01.060111 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-48mm6"] Dec 06 05:34:01 crc kubenswrapper[4958]: E1206 05:34:01.060374 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a023c338-584f-4c04-9a22-dc1bf45234c2" containerName="controller-manager" Dec 06 05:34:01 crc kubenswrapper[4958]: I1206 05:34:01.060397 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="a023c338-584f-4c04-9a22-dc1bf45234c2" containerName="controller-manager" Dec 06 05:34:01 crc kubenswrapper[4958]: I1206 05:34:01.060552 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="a023c338-584f-4c04-9a22-dc1bf45234c2" containerName="controller-manager" Dec 06 05:34:01 crc kubenswrapper[4958]: I1206 05:34:01.061026 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-48mm6" Dec 06 05:34:01 crc kubenswrapper[4958]: I1206 05:34:01.082693 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-48mm6"] Dec 06 05:34:01 crc kubenswrapper[4958]: I1206 05:34:01.114072 4958 generic.go:334] "Generic (PLEG): container finished" podID="a023c338-584f-4c04-9a22-dc1bf45234c2" containerID="0ef385357edb7ff25f4a04294feb647dc14908c904e40facad8d374136a6f46e" exitCode=0 Dec 06 05:34:01 crc kubenswrapper[4958]: I1206 05:34:01.114139 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6799c5f44c-pwztl" Dec 06 05:34:01 crc kubenswrapper[4958]: I1206 05:34:01.114133 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6799c5f44c-pwztl" event={"ID":"a023c338-584f-4c04-9a22-dc1bf45234c2","Type":"ContainerDied","Data":"0ef385357edb7ff25f4a04294feb647dc14908c904e40facad8d374136a6f46e"} Dec 06 05:34:01 crc kubenswrapper[4958]: I1206 05:34:01.114210 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6799c5f44c-pwztl" event={"ID":"a023c338-584f-4c04-9a22-dc1bf45234c2","Type":"ContainerDied","Data":"32b10af34a11e5d64e2b84016ddc3bfa8d56c0adba8a2a86bbcf5a190ba37ac5"} Dec 06 05:34:01 crc kubenswrapper[4958]: I1206 05:34:01.114235 4958 scope.go:117] "RemoveContainer" containerID="0ef385357edb7ff25f4a04294feb647dc14908c904e40facad8d374136a6f46e" Dec 06 05:34:01 crc kubenswrapper[4958]: I1206 05:34:01.131859 4958 scope.go:117] "RemoveContainer" containerID="0ef385357edb7ff25f4a04294feb647dc14908c904e40facad8d374136a6f46e" Dec 06 05:34:01 crc kubenswrapper[4958]: E1206 05:34:01.132500 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ef385357edb7ff25f4a04294feb647dc14908c904e40facad8d374136a6f46e\": container with ID starting with 0ef385357edb7ff25f4a04294feb647dc14908c904e40facad8d374136a6f46e not found: ID does not exist" containerID="0ef385357edb7ff25f4a04294feb647dc14908c904e40facad8d374136a6f46e" Dec 06 05:34:01 crc kubenswrapper[4958]: I1206 05:34:01.132524 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ef385357edb7ff25f4a04294feb647dc14908c904e40facad8d374136a6f46e"} err="failed to get container status \"0ef385357edb7ff25f4a04294feb647dc14908c904e40facad8d374136a6f46e\": rpc error: code = NotFound desc = could not find container \"0ef385357edb7ff25f4a04294feb647dc14908c904e40facad8d374136a6f46e\": container with ID starting with 0ef385357edb7ff25f4a04294feb647dc14908c904e40facad8d374136a6f46e not found: ID does not exist" Dec 06 05:34:01 crc kubenswrapper[4958]: I1206 05:34:01.144332 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6799c5f44c-pwztl"] Dec 06 05:34:01 crc kubenswrapper[4958]: I1206 05:34:01.147426 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6799c5f44c-pwztl"] Dec 06 05:34:01 crc kubenswrapper[4958]: I1206 05:34:01.174371 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f0def05c-b821-4b02-b11a-05934c06c36f-trusted-ca\") pod \"image-registry-66df7c8f76-48mm6\" (UID: \"f0def05c-b821-4b02-b11a-05934c06c36f\") " pod="openshift-image-registry/image-registry-66df7c8f76-48mm6" Dec 06 05:34:01 crc kubenswrapper[4958]: I1206 05:34:01.174434 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f0def05c-b821-4b02-b11a-05934c06c36f-registry-certificates\") pod \"image-registry-66df7c8f76-48mm6\" (UID: \"f0def05c-b821-4b02-b11a-05934c06c36f\") " pod="openshift-image-registry/image-registry-66df7c8f76-48mm6" Dec 06 05:34:01 crc kubenswrapper[4958]: I1206 05:34:01.174461 4958 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f0def05c-b821-4b02-b11a-05934c06c36f-bound-sa-token\") pod \"image-registry-66df7c8f76-48mm6\" (UID: \"f0def05c-b821-4b02-b11a-05934c06c36f\") " pod="openshift-image-registry/image-registry-66df7c8f76-48mm6" Dec 06 05:34:01 crc kubenswrapper[4958]: I1206 05:34:01.174518 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-48mm6\" (UID: \"f0def05c-b821-4b02-b11a-05934c06c36f\") " pod="openshift-image-registry/image-registry-66df7c8f76-48mm6" Dec 06 05:34:01 crc kubenswrapper[4958]: I1206 05:34:01.174582 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f0def05c-b821-4b02-b11a-05934c06c36f-ca-trust-extracted\") pod \"image-registry-66df7c8f76-48mm6\" (UID: \"f0def05c-b821-4b02-b11a-05934c06c36f\") " pod="openshift-image-registry/image-registry-66df7c8f76-48mm6" Dec 06 05:34:01 crc kubenswrapper[4958]: I1206 05:34:01.174605 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f0def05c-b821-4b02-b11a-05934c06c36f-installation-pull-secrets\") pod \"image-registry-66df7c8f76-48mm6\" (UID: \"f0def05c-b821-4b02-b11a-05934c06c36f\") " pod="openshift-image-registry/image-registry-66df7c8f76-48mm6" Dec 06 05:34:01 crc kubenswrapper[4958]: I1206 05:34:01.174654 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8h95\" (UniqueName: \"kubernetes.io/projected/f0def05c-b821-4b02-b11a-05934c06c36f-kube-api-access-j8h95\") pod \"image-registry-66df7c8f76-48mm6\" (UID: \"f0def05c-b821-4b02-b11a-05934c06c36f\") " pod="openshift-image-registry/image-registry-66df7c8f76-48mm6" Dec 06 05:34:01 crc kubenswrapper[4958]: I1206 05:34:01.174895 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f0def05c-b821-4b02-b11a-05934c06c36f-registry-tls\") pod \"image-registry-66df7c8f76-48mm6\" (UID: \"f0def05c-b821-4b02-b11a-05934c06c36f\") " pod="openshift-image-registry/image-registry-66df7c8f76-48mm6" Dec 06 05:34:01 crc kubenswrapper[4958]: I1206 05:34:01.195280 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-48mm6\" (UID: \"f0def05c-b821-4b02-b11a-05934c06c36f\") " pod="openshift-image-registry/image-registry-66df7c8f76-48mm6" Dec 06 05:34:01 crc kubenswrapper[4958]: I1206 05:34:01.275654 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f0def05c-b821-4b02-b11a-05934c06c36f-registry-tls\") pod \"image-registry-66df7c8f76-48mm6\" (UID: \"f0def05c-b821-4b02-b11a-05934c06c36f\") " pod="openshift-image-registry/image-registry-66df7c8f76-48mm6" Dec 06 05:34:01 crc kubenswrapper[4958]: I1206 05:34:01.275721 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" 
(UniqueName: \"kubernetes.io/configmap/f0def05c-b821-4b02-b11a-05934c06c36f-trusted-ca\") pod \"image-registry-66df7c8f76-48mm6\" (UID: \"f0def05c-b821-4b02-b11a-05934c06c36f\") " pod="openshift-image-registry/image-registry-66df7c8f76-48mm6" Dec 06 05:34:01 crc kubenswrapper[4958]: I1206 05:34:01.275752 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f0def05c-b821-4b02-b11a-05934c06c36f-registry-certificates\") pod \"image-registry-66df7c8f76-48mm6\" (UID: \"f0def05c-b821-4b02-b11a-05934c06c36f\") " pod="openshift-image-registry/image-registry-66df7c8f76-48mm6" Dec 06 05:34:01 crc kubenswrapper[4958]: I1206 05:34:01.275778 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f0def05c-b821-4b02-b11a-05934c06c36f-bound-sa-token\") pod \"image-registry-66df7c8f76-48mm6\" (UID: \"f0def05c-b821-4b02-b11a-05934c06c36f\") " pod="openshift-image-registry/image-registry-66df7c8f76-48mm6" Dec 06 05:34:01 crc kubenswrapper[4958]: I1206 05:34:01.275836 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f0def05c-b821-4b02-b11a-05934c06c36f-ca-trust-extracted\") pod \"image-registry-66df7c8f76-48mm6\" (UID: \"f0def05c-b821-4b02-b11a-05934c06c36f\") " pod="openshift-image-registry/image-registry-66df7c8f76-48mm6" Dec 06 05:34:01 crc kubenswrapper[4958]: I1206 05:34:01.275862 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f0def05c-b821-4b02-b11a-05934c06c36f-installation-pull-secrets\") pod \"image-registry-66df7c8f76-48mm6\" (UID: \"f0def05c-b821-4b02-b11a-05934c06c36f\") " pod="openshift-image-registry/image-registry-66df7c8f76-48mm6" Dec 06 05:34:01 crc kubenswrapper[4958]: I1206 05:34:01.275887 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8h95\" (UniqueName: \"kubernetes.io/projected/f0def05c-b821-4b02-b11a-05934c06c36f-kube-api-access-j8h95\") pod \"image-registry-66df7c8f76-48mm6\" (UID: \"f0def05c-b821-4b02-b11a-05934c06c36f\") " pod="openshift-image-registry/image-registry-66df7c8f76-48mm6" Dec 06 05:34:01 crc kubenswrapper[4958]: I1206 05:34:01.276991 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/f0def05c-b821-4b02-b11a-05934c06c36f-ca-trust-extracted\") pod \"image-registry-66df7c8f76-48mm6\" (UID: \"f0def05c-b821-4b02-b11a-05934c06c36f\") " pod="openshift-image-registry/image-registry-66df7c8f76-48mm6" Dec 06 05:34:01 crc kubenswrapper[4958]: I1206 05:34:01.277550 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/f0def05c-b821-4b02-b11a-05934c06c36f-registry-certificates\") pod \"image-registry-66df7c8f76-48mm6\" (UID: \"f0def05c-b821-4b02-b11a-05934c06c36f\") " pod="openshift-image-registry/image-registry-66df7c8f76-48mm6" Dec 06 05:34:01 crc kubenswrapper[4958]: I1206 05:34:01.277771 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f0def05c-b821-4b02-b11a-05934c06c36f-trusted-ca\") pod \"image-registry-66df7c8f76-48mm6\" (UID: \"f0def05c-b821-4b02-b11a-05934c06c36f\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-48mm6" Dec 06 05:34:01 crc kubenswrapper[4958]: I1206 05:34:01.280708 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/f0def05c-b821-4b02-b11a-05934c06c36f-installation-pull-secrets\") pod \"image-registry-66df7c8f76-48mm6\" (UID: \"f0def05c-b821-4b02-b11a-05934c06c36f\") " pod="openshift-image-registry/image-registry-66df7c8f76-48mm6" Dec 06 05:34:01 crc kubenswrapper[4958]: I1206 05:34:01.284279 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/f0def05c-b821-4b02-b11a-05934c06c36f-registry-tls\") pod \"image-registry-66df7c8f76-48mm6\" (UID: \"f0def05c-b821-4b02-b11a-05934c06c36f\") " pod="openshift-image-registry/image-registry-66df7c8f76-48mm6" Dec 06 05:34:01 crc kubenswrapper[4958]: I1206 05:34:01.299944 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8h95\" (UniqueName: \"kubernetes.io/projected/f0def05c-b821-4b02-b11a-05934c06c36f-kube-api-access-j8h95\") pod \"image-registry-66df7c8f76-48mm6\" (UID: \"f0def05c-b821-4b02-b11a-05934c06c36f\") " pod="openshift-image-registry/image-registry-66df7c8f76-48mm6" Dec 06 05:34:01 crc kubenswrapper[4958]: I1206 05:34:01.311881 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f0def05c-b821-4b02-b11a-05934c06c36f-bound-sa-token\") pod \"image-registry-66df7c8f76-48mm6\" (UID: \"f0def05c-b821-4b02-b11a-05934c06c36f\") " pod="openshift-image-registry/image-registry-66df7c8f76-48mm6" Dec 06 05:34:01 crc kubenswrapper[4958]: I1206 05:34:01.376918 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-48mm6" Dec 06 05:34:01 crc kubenswrapper[4958]: I1206 05:34:01.571070 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-48mm6"] Dec 06 05:34:01 crc kubenswrapper[4958]: W1206 05:34:01.583808 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf0def05c_b821_4b02_b11a_05934c06c36f.slice/crio-207b267b7285d072c88edd6292f9af942e97184d731798581deadc81e7ba9394 WatchSource:0}: Error finding container 207b267b7285d072c88edd6292f9af942e97184d731798581deadc81e7ba9394: Status 404 returned error can't find the container with id 207b267b7285d072c88edd6292f9af942e97184d731798581deadc81e7ba9394 Dec 06 05:34:01 crc kubenswrapper[4958]: I1206 05:34:01.743097 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-9b476f687-ggsv6"] Dec 06 05:34:01 crc kubenswrapper[4958]: I1206 05:34:01.744053 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-9b476f687-ggsv6" Dec 06 05:34:01 crc kubenswrapper[4958]: I1206 05:34:01.749149 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 06 05:34:01 crc kubenswrapper[4958]: I1206 05:34:01.749220 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 06 05:34:01 crc kubenswrapper[4958]: I1206 05:34:01.749275 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 06 05:34:01 crc kubenswrapper[4958]: I1206 05:34:01.749304 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 06 05:34:01 crc kubenswrapper[4958]: I1206 05:34:01.749327 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 06 05:34:01 crc kubenswrapper[4958]: I1206 05:34:01.749565 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 06 05:34:01 crc kubenswrapper[4958]: I1206 05:34:01.753351 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-9b476f687-ggsv6"] Dec 06 05:34:01 crc kubenswrapper[4958]: I1206 05:34:01.758162 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 06 05:34:01 crc kubenswrapper[4958]: I1206 05:34:01.773293 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a023c338-584f-4c04-9a22-dc1bf45234c2" path="/var/lib/kubelet/pods/a023c338-584f-4c04-9a22-dc1bf45234c2/volumes" Dec 06 05:34:01 crc kubenswrapper[4958]: I1206 05:34:01.883533 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/17378f33-3d7d-4da4-9ba3-98416797972c-proxy-ca-bundles\") pod \"controller-manager-9b476f687-ggsv6\" (UID: \"17378f33-3d7d-4da4-9ba3-98416797972c\") " pod="openshift-controller-manager/controller-manager-9b476f687-ggsv6" Dec 06 05:34:01 crc kubenswrapper[4958]: I1206 05:34:01.883630 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksq7l\" (UniqueName: \"kubernetes.io/projected/17378f33-3d7d-4da4-9ba3-98416797972c-kube-api-access-ksq7l\") pod \"controller-manager-9b476f687-ggsv6\" (UID: \"17378f33-3d7d-4da4-9ba3-98416797972c\") " pod="openshift-controller-manager/controller-manager-9b476f687-ggsv6" Dec 06 05:34:01 crc kubenswrapper[4958]: I1206 05:34:01.883680 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17378f33-3d7d-4da4-9ba3-98416797972c-serving-cert\") pod \"controller-manager-9b476f687-ggsv6\" (UID: \"17378f33-3d7d-4da4-9ba3-98416797972c\") " pod="openshift-controller-manager/controller-manager-9b476f687-ggsv6" Dec 06 05:34:01 crc kubenswrapper[4958]: I1206 05:34:01.883735 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/17378f33-3d7d-4da4-9ba3-98416797972c-client-ca\") pod \"controller-manager-9b476f687-ggsv6\" (UID: \"17378f33-3d7d-4da4-9ba3-98416797972c\") " 
pod="openshift-controller-manager/controller-manager-9b476f687-ggsv6" Dec 06 05:34:01 crc kubenswrapper[4958]: I1206 05:34:01.883771 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17378f33-3d7d-4da4-9ba3-98416797972c-config\") pod \"controller-manager-9b476f687-ggsv6\" (UID: \"17378f33-3d7d-4da4-9ba3-98416797972c\") " pod="openshift-controller-manager/controller-manager-9b476f687-ggsv6" Dec 06 05:34:01 crc kubenswrapper[4958]: I1206 05:34:01.985038 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksq7l\" (UniqueName: \"kubernetes.io/projected/17378f33-3d7d-4da4-9ba3-98416797972c-kube-api-access-ksq7l\") pod \"controller-manager-9b476f687-ggsv6\" (UID: \"17378f33-3d7d-4da4-9ba3-98416797972c\") " pod="openshift-controller-manager/controller-manager-9b476f687-ggsv6" Dec 06 05:34:01 crc kubenswrapper[4958]: I1206 05:34:01.985109 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17378f33-3d7d-4da4-9ba3-98416797972c-serving-cert\") pod \"controller-manager-9b476f687-ggsv6\" (UID: \"17378f33-3d7d-4da4-9ba3-98416797972c\") " pod="openshift-controller-manager/controller-manager-9b476f687-ggsv6" Dec 06 05:34:01 crc kubenswrapper[4958]: I1206 05:34:01.985139 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/17378f33-3d7d-4da4-9ba3-98416797972c-client-ca\") pod \"controller-manager-9b476f687-ggsv6\" (UID: \"17378f33-3d7d-4da4-9ba3-98416797972c\") " pod="openshift-controller-manager/controller-manager-9b476f687-ggsv6" Dec 06 05:34:01 crc kubenswrapper[4958]: I1206 05:34:01.985170 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17378f33-3d7d-4da4-9ba3-98416797972c-config\") pod \"controller-manager-9b476f687-ggsv6\" (UID: \"17378f33-3d7d-4da4-9ba3-98416797972c\") " pod="openshift-controller-manager/controller-manager-9b476f687-ggsv6" Dec 06 05:34:01 crc kubenswrapper[4958]: I1206 05:34:01.985204 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/17378f33-3d7d-4da4-9ba3-98416797972c-proxy-ca-bundles\") pod \"controller-manager-9b476f687-ggsv6\" (UID: \"17378f33-3d7d-4da4-9ba3-98416797972c\") " pod="openshift-controller-manager/controller-manager-9b476f687-ggsv6" Dec 06 05:34:01 crc kubenswrapper[4958]: I1206 05:34:01.986604 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/17378f33-3d7d-4da4-9ba3-98416797972c-client-ca\") pod \"controller-manager-9b476f687-ggsv6\" (UID: \"17378f33-3d7d-4da4-9ba3-98416797972c\") " pod="openshift-controller-manager/controller-manager-9b476f687-ggsv6" Dec 06 05:34:01 crc kubenswrapper[4958]: I1206 05:34:01.986654 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/17378f33-3d7d-4da4-9ba3-98416797972c-proxy-ca-bundles\") pod \"controller-manager-9b476f687-ggsv6\" (UID: \"17378f33-3d7d-4da4-9ba3-98416797972c\") " pod="openshift-controller-manager/controller-manager-9b476f687-ggsv6" Dec 06 05:34:01 crc kubenswrapper[4958]: I1206 05:34:01.986791 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/17378f33-3d7d-4da4-9ba3-98416797972c-config\") pod \"controller-manager-9b476f687-ggsv6\" (UID: \"17378f33-3d7d-4da4-9ba3-98416797972c\") " pod="openshift-controller-manager/controller-manager-9b476f687-ggsv6" Dec 06 05:34:01 crc kubenswrapper[4958]: I1206 05:34:01.993416 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17378f33-3d7d-4da4-9ba3-98416797972c-serving-cert\") pod \"controller-manager-9b476f687-ggsv6\" (UID: \"17378f33-3d7d-4da4-9ba3-98416797972c\") " pod="openshift-controller-manager/controller-manager-9b476f687-ggsv6" Dec 06 05:34:01 crc kubenswrapper[4958]: I1206 05:34:01.999440 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksq7l\" (UniqueName: \"kubernetes.io/projected/17378f33-3d7d-4da4-9ba3-98416797972c-kube-api-access-ksq7l\") pod \"controller-manager-9b476f687-ggsv6\" (UID: \"17378f33-3d7d-4da4-9ba3-98416797972c\") " pod="openshift-controller-manager/controller-manager-9b476f687-ggsv6" Dec 06 05:34:02 crc kubenswrapper[4958]: I1206 05:34:02.065535 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-9b476f687-ggsv6" Dec 06 05:34:02 crc kubenswrapper[4958]: I1206 05:34:02.126226 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-48mm6" event={"ID":"f0def05c-b821-4b02-b11a-05934c06c36f","Type":"ContainerStarted","Data":"074d4d2f9c10dbd5679d17cff3041a0a22a0d84fd7e55d16cd4c05e78143cad6"} Dec 06 05:34:02 crc kubenswrapper[4958]: I1206 05:34:02.126277 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-48mm6" event={"ID":"f0def05c-b821-4b02-b11a-05934c06c36f","Type":"ContainerStarted","Data":"207b267b7285d072c88edd6292f9af942e97184d731798581deadc81e7ba9394"} Dec 06 05:34:02 crc kubenswrapper[4958]: I1206 05:34:02.127224 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-48mm6" Dec 06 05:34:02 crc kubenswrapper[4958]: I1206 05:34:02.154506 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-48mm6" podStartSLOduration=1.154455823 podStartE2EDuration="1.154455823s" podCreationTimestamp="2025-12-06 05:34:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:34:02.153645529 +0000 UTC m=+352.687416312" watchObservedRunningTime="2025-12-06 05:34:02.154455823 +0000 UTC m=+352.688226586" Dec 06 05:34:02 crc kubenswrapper[4958]: I1206 05:34:02.264571 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-9b476f687-ggsv6"] Dec 06 05:34:02 crc kubenswrapper[4958]: W1206 05:34:02.272425 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod17378f33_3d7d_4da4_9ba3_98416797972c.slice/crio-43128f582c51d87d5b45e6e16a5ccad192b2a852c1aa303380d63a209525ddf5 WatchSource:0}: Error finding container 43128f582c51d87d5b45e6e16a5ccad192b2a852c1aa303380d63a209525ddf5: Status 404 returned error can't find the container with id 43128f582c51d87d5b45e6e16a5ccad192b2a852c1aa303380d63a209525ddf5 Dec 06 05:34:03 crc kubenswrapper[4958]: I1206 05:34:03.094146 4958 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-cmpmq" Dec 06 05:34:03 crc kubenswrapper[4958]: I1206 05:34:03.095012 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-cmpmq" Dec 06 05:34:03 crc kubenswrapper[4958]: I1206 05:34:03.146862 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-9b476f687-ggsv6" event={"ID":"17378f33-3d7d-4da4-9ba3-98416797972c","Type":"ContainerStarted","Data":"1e45bf60a47153151f5d992a8dd68d6c113d7ed4778752e021e0556373ce31cf"} Dec 06 05:34:03 crc kubenswrapper[4958]: I1206 05:34:03.146934 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-9b476f687-ggsv6" event={"ID":"17378f33-3d7d-4da4-9ba3-98416797972c","Type":"ContainerStarted","Data":"43128f582c51d87d5b45e6e16a5ccad192b2a852c1aa303380d63a209525ddf5"} Dec 06 05:34:03 crc kubenswrapper[4958]: I1206 05:34:03.150889 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-cmpmq" Dec 06 05:34:03 crc kubenswrapper[4958]: I1206 05:34:03.171430 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-9b476f687-ggsv6" podStartSLOduration=4.171414216 podStartE2EDuration="4.171414216s" podCreationTimestamp="2025-12-06 05:33:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:34:03.169277264 +0000 UTC m=+353.703048027" watchObservedRunningTime="2025-12-06 05:34:03.171414216 +0000 UTC m=+353.705184979" Dec 06 05:34:03 crc kubenswrapper[4958]: I1206 05:34:03.198325 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-cmpmq" Dec 06 05:34:03 crc kubenswrapper[4958]: I1206 05:34:03.310439 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6rr8p" Dec 06 05:34:03 crc kubenswrapper[4958]: I1206 05:34:03.311418 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6rr8p" Dec 06 05:34:03 crc kubenswrapper[4958]: I1206 05:34:03.379975 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6rr8p" Dec 06 05:34:04 crc kubenswrapper[4958]: I1206 05:34:04.152465 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-9b476f687-ggsv6" Dec 06 05:34:04 crc kubenswrapper[4958]: I1206 05:34:04.156416 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-9b476f687-ggsv6" Dec 06 05:34:04 crc kubenswrapper[4958]: I1206 05:34:04.212085 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6rr8p" Dec 06 05:34:05 crc kubenswrapper[4958]: I1206 05:34:05.473333 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-q2qdm" Dec 06 05:34:05 crc kubenswrapper[4958]: I1206 05:34:05.473777 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-q2qdm" Dec 06 05:34:05 crc kubenswrapper[4958]: I1206 05:34:05.515841 
4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-q2qdm" Dec 06 05:34:05 crc kubenswrapper[4958]: I1206 05:34:05.681461 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2997j" Dec 06 05:34:05 crc kubenswrapper[4958]: I1206 05:34:05.681588 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2997j" Dec 06 05:34:05 crc kubenswrapper[4958]: I1206 05:34:05.727401 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2997j" Dec 06 05:34:06 crc kubenswrapper[4958]: I1206 05:34:06.200213 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-q2qdm" Dec 06 05:34:06 crc kubenswrapper[4958]: I1206 05:34:06.223245 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2997j" Dec 06 05:34:21 crc kubenswrapper[4958]: I1206 05:34:21.384465 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-48mm6" Dec 06 05:34:21 crc kubenswrapper[4958]: I1206 05:34:21.451096 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-rzshs"] Dec 06 05:34:39 crc kubenswrapper[4958]: I1206 05:34:39.866408 4958 patch_prober.go:28] interesting pod/machine-config-daemon-5ktnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 05:34:39 crc kubenswrapper[4958]: I1206 05:34:39.866993 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 05:34:46 crc kubenswrapper[4958]: I1206 05:34:46.513029 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-rzshs" podUID="a4c40d5e-9b15-4c80-9a23-5047a6dc887c" containerName="registry" containerID="cri-o://84e37542a147a3265bcd503b56b5b435185222ff4c5857a191ef9943108a8842" gracePeriod=30 Dec 06 05:34:47 crc kubenswrapper[4958]: I1206 05:34:47.004789 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-rzshs" Dec 06 05:34:47 crc kubenswrapper[4958]: I1206 05:34:47.115089 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b8d6j\" (UniqueName: \"kubernetes.io/projected/a4c40d5e-9b15-4c80-9a23-5047a6dc887c-kube-api-access-b8d6j\") pod \"a4c40d5e-9b15-4c80-9a23-5047a6dc887c\" (UID: \"a4c40d5e-9b15-4c80-9a23-5047a6dc887c\") " Dec 06 05:34:47 crc kubenswrapper[4958]: I1206 05:34:47.115144 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a4c40d5e-9b15-4c80-9a23-5047a6dc887c-registry-certificates\") pod \"a4c40d5e-9b15-4c80-9a23-5047a6dc887c\" (UID: \"a4c40d5e-9b15-4c80-9a23-5047a6dc887c\") " Dec 06 05:34:47 crc kubenswrapper[4958]: I1206 05:34:47.115175 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a4c40d5e-9b15-4c80-9a23-5047a6dc887c-registry-tls\") pod \"a4c40d5e-9b15-4c80-9a23-5047a6dc887c\" (UID: \"a4c40d5e-9b15-4c80-9a23-5047a6dc887c\") " Dec 06 05:34:47 crc kubenswrapper[4958]: I1206 05:34:47.115207 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a4c40d5e-9b15-4c80-9a23-5047a6dc887c-ca-trust-extracted\") pod \"a4c40d5e-9b15-4c80-9a23-5047a6dc887c\" (UID: \"a4c40d5e-9b15-4c80-9a23-5047a6dc887c\") " Dec 06 05:34:47 crc kubenswrapper[4958]: I1206 05:34:47.115383 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"a4c40d5e-9b15-4c80-9a23-5047a6dc887c\" (UID: \"a4c40d5e-9b15-4c80-9a23-5047a6dc887c\") " Dec 06 05:34:47 crc kubenswrapper[4958]: I1206 05:34:47.115429 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a4c40d5e-9b15-4c80-9a23-5047a6dc887c-bound-sa-token\") pod \"a4c40d5e-9b15-4c80-9a23-5047a6dc887c\" (UID: \"a4c40d5e-9b15-4c80-9a23-5047a6dc887c\") " Dec 06 05:34:47 crc kubenswrapper[4958]: I1206 05:34:47.115498 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a4c40d5e-9b15-4c80-9a23-5047a6dc887c-trusted-ca\") pod \"a4c40d5e-9b15-4c80-9a23-5047a6dc887c\" (UID: \"a4c40d5e-9b15-4c80-9a23-5047a6dc887c\") " Dec 06 05:34:47 crc kubenswrapper[4958]: I1206 05:34:47.115538 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a4c40d5e-9b15-4c80-9a23-5047a6dc887c-installation-pull-secrets\") pod \"a4c40d5e-9b15-4c80-9a23-5047a6dc887c\" (UID: \"a4c40d5e-9b15-4c80-9a23-5047a6dc887c\") " Dec 06 05:34:47 crc kubenswrapper[4958]: I1206 05:34:47.116223 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4c40d5e-9b15-4c80-9a23-5047a6dc887c-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a4c40d5e-9b15-4c80-9a23-5047a6dc887c" (UID: "a4c40d5e-9b15-4c80-9a23-5047a6dc887c"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:34:47 crc kubenswrapper[4958]: I1206 05:34:47.119457 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4c40d5e-9b15-4c80-9a23-5047a6dc887c-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "a4c40d5e-9b15-4c80-9a23-5047a6dc887c" (UID: "a4c40d5e-9b15-4c80-9a23-5047a6dc887c"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:34:47 crc kubenswrapper[4958]: I1206 05:34:47.124366 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4c40d5e-9b15-4c80-9a23-5047a6dc887c-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a4c40d5e-9b15-4c80-9a23-5047a6dc887c" (UID: "a4c40d5e-9b15-4c80-9a23-5047a6dc887c"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:34:47 crc kubenswrapper[4958]: I1206 05:34:47.124777 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4c40d5e-9b15-4c80-9a23-5047a6dc887c-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "a4c40d5e-9b15-4c80-9a23-5047a6dc887c" (UID: "a4c40d5e-9b15-4c80-9a23-5047a6dc887c"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:34:47 crc kubenswrapper[4958]: I1206 05:34:47.125237 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4c40d5e-9b15-4c80-9a23-5047a6dc887c-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "a4c40d5e-9b15-4c80-9a23-5047a6dc887c" (UID: "a4c40d5e-9b15-4c80-9a23-5047a6dc887c"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:34:47 crc kubenswrapper[4958]: I1206 05:34:47.125517 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4c40d5e-9b15-4c80-9a23-5047a6dc887c-kube-api-access-b8d6j" (OuterVolumeSpecName: "kube-api-access-b8d6j") pod "a4c40d5e-9b15-4c80-9a23-5047a6dc887c" (UID: "a4c40d5e-9b15-4c80-9a23-5047a6dc887c"). InnerVolumeSpecName "kube-api-access-b8d6j". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:34:47 crc kubenswrapper[4958]: I1206 05:34:47.128908 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "a4c40d5e-9b15-4c80-9a23-5047a6dc887c" (UID: "a4c40d5e-9b15-4c80-9a23-5047a6dc887c"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 06 05:34:47 crc kubenswrapper[4958]: I1206 05:34:47.132852 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4c40d5e-9b15-4c80-9a23-5047a6dc887c-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "a4c40d5e-9b15-4c80-9a23-5047a6dc887c" (UID: "a4c40d5e-9b15-4c80-9a23-5047a6dc887c"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 05:34:47 crc kubenswrapper[4958]: I1206 05:34:47.216605 4958 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a4c40d5e-9b15-4c80-9a23-5047a6dc887c-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 06 05:34:47 crc kubenswrapper[4958]: I1206 05:34:47.216642 4958 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a4c40d5e-9b15-4c80-9a23-5047a6dc887c-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 06 05:34:47 crc kubenswrapper[4958]: I1206 05:34:47.216653 4958 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/a4c40d5e-9b15-4c80-9a23-5047a6dc887c-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 06 05:34:47 crc kubenswrapper[4958]: I1206 05:34:47.216664 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b8d6j\" (UniqueName: \"kubernetes.io/projected/a4c40d5e-9b15-4c80-9a23-5047a6dc887c-kube-api-access-b8d6j\") on node \"crc\" DevicePath \"\"" Dec 06 05:34:47 crc kubenswrapper[4958]: I1206 05:34:47.216673 4958 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/a4c40d5e-9b15-4c80-9a23-5047a6dc887c-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 06 05:34:47 crc kubenswrapper[4958]: I1206 05:34:47.216682 4958 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/a4c40d5e-9b15-4c80-9a23-5047a6dc887c-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 06 05:34:47 crc kubenswrapper[4958]: I1206 05:34:47.216690 4958 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/a4c40d5e-9b15-4c80-9a23-5047a6dc887c-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 06 05:34:47 crc kubenswrapper[4958]: I1206 05:34:47.398923 4958 generic.go:334] "Generic (PLEG): container finished" podID="a4c40d5e-9b15-4c80-9a23-5047a6dc887c" containerID="84e37542a147a3265bcd503b56b5b435185222ff4c5857a191ef9943108a8842" exitCode=0 Dec 06 05:34:47 crc kubenswrapper[4958]: I1206 05:34:47.398968 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-rzshs" event={"ID":"a4c40d5e-9b15-4c80-9a23-5047a6dc887c","Type":"ContainerDied","Data":"84e37542a147a3265bcd503b56b5b435185222ff4c5857a191ef9943108a8842"} Dec 06 05:34:47 crc kubenswrapper[4958]: I1206 05:34:47.399004 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-rzshs" event={"ID":"a4c40d5e-9b15-4c80-9a23-5047a6dc887c","Type":"ContainerDied","Data":"52202e4f6b1a9509319195c8a2d0b85b9813e86a5c7df2b31aeea719ac27dabf"} Dec 06 05:34:47 crc kubenswrapper[4958]: I1206 05:34:47.399028 4958 scope.go:117] "RemoveContainer" containerID="84e37542a147a3265bcd503b56b5b435185222ff4c5857a191ef9943108a8842" Dec 06 05:34:47 crc kubenswrapper[4958]: I1206 05:34:47.399149 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-rzshs" Dec 06 05:34:47 crc kubenswrapper[4958]: I1206 05:34:47.428342 4958 scope.go:117] "RemoveContainer" containerID="84e37542a147a3265bcd503b56b5b435185222ff4c5857a191ef9943108a8842" Dec 06 05:34:47 crc kubenswrapper[4958]: E1206 05:34:47.428850 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84e37542a147a3265bcd503b56b5b435185222ff4c5857a191ef9943108a8842\": container with ID starting with 84e37542a147a3265bcd503b56b5b435185222ff4c5857a191ef9943108a8842 not found: ID does not exist" containerID="84e37542a147a3265bcd503b56b5b435185222ff4c5857a191ef9943108a8842" Dec 06 05:34:47 crc kubenswrapper[4958]: I1206 05:34:47.428971 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84e37542a147a3265bcd503b56b5b435185222ff4c5857a191ef9943108a8842"} err="failed to get container status \"84e37542a147a3265bcd503b56b5b435185222ff4c5857a191ef9943108a8842\": rpc error: code = NotFound desc = could not find container \"84e37542a147a3265bcd503b56b5b435185222ff4c5857a191ef9943108a8842\": container with ID starting with 84e37542a147a3265bcd503b56b5b435185222ff4c5857a191ef9943108a8842 not found: ID does not exist" Dec 06 05:34:47 crc kubenswrapper[4958]: I1206 05:34:47.451141 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-rzshs"] Dec 06 05:34:47 crc kubenswrapper[4958]: I1206 05:34:47.456774 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-rzshs"] Dec 06 05:34:47 crc kubenswrapper[4958]: I1206 05:34:47.771436 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4c40d5e-9b15-4c80-9a23-5047a6dc887c" path="/var/lib/kubelet/pods/a4c40d5e-9b15-4c80-9a23-5047a6dc887c/volumes" Dec 06 05:35:09 crc kubenswrapper[4958]: I1206 05:35:09.865681 4958 patch_prober.go:28] interesting pod/machine-config-daemon-5ktnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 05:35:09 crc kubenswrapper[4958]: I1206 05:35:09.866138 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 05:35:39 crc kubenswrapper[4958]: I1206 05:35:39.865910 4958 patch_prober.go:28] interesting pod/machine-config-daemon-5ktnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 05:35:39 crc kubenswrapper[4958]: I1206 05:35:39.866419 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 05:35:39 crc kubenswrapper[4958]: I1206 05:35:39.866533 4958 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" Dec 06 05:35:39 crc kubenswrapper[4958]: I1206 05:35:39.867279 4958 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c13fab5817b994a3289049eb9239d18e450fee0a186ac2014829ccee71c35589"} pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 06 05:35:39 crc kubenswrapper[4958]: I1206 05:35:39.867348 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" containerName="machine-config-daemon" containerID="cri-o://c13fab5817b994a3289049eb9239d18e450fee0a186ac2014829ccee71c35589" gracePeriod=600 Dec 06 05:35:40 crc kubenswrapper[4958]: I1206 05:35:40.865730 4958 generic.go:334] "Generic (PLEG): container finished" podID="c13528c0-da5d-4d55-9155-2c29c33edfc4" containerID="c13fab5817b994a3289049eb9239d18e450fee0a186ac2014829ccee71c35589" exitCode=0 Dec 06 05:35:40 crc kubenswrapper[4958]: I1206 05:35:40.865967 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" event={"ID":"c13528c0-da5d-4d55-9155-2c29c33edfc4","Type":"ContainerDied","Data":"c13fab5817b994a3289049eb9239d18e450fee0a186ac2014829ccee71c35589"} Dec 06 05:35:40 crc kubenswrapper[4958]: I1206 05:35:40.866389 4958 scope.go:117] "RemoveContainer" containerID="fd2256c5c47e4b4d5c0d27e0a687c531d3d4e2162c3439a57acb90784599b441" Dec 06 05:35:41 crc kubenswrapper[4958]: I1206 05:35:41.874755 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" event={"ID":"c13528c0-da5d-4d55-9155-2c29c33edfc4","Type":"ContainerStarted","Data":"1236d18f68c8851c320373ee86fe651a1f296f33f07d8ecc4e000f5e2ab900bc"} Dec 06 05:38:09 crc kubenswrapper[4958]: I1206 05:38:09.866921 4958 patch_prober.go:28] interesting pod/machine-config-daemon-5ktnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 05:38:09 crc kubenswrapper[4958]: I1206 05:38:09.867823 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 05:38:39 crc kubenswrapper[4958]: I1206 05:38:39.866593 4958 patch_prober.go:28] interesting pod/machine-config-daemon-5ktnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 05:38:39 crc kubenswrapper[4958]: I1206 05:38:39.868300 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 05:39:09 crc 
Dec 06 05:39:09 crc kubenswrapper[4958]: I1206 05:39:09.866343 4958 patch_prober.go:28] interesting pod/machine-config-daemon-5ktnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 06 05:39:09 crc kubenswrapper[4958]: I1206 05:39:09.867276 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 06 05:39:09 crc kubenswrapper[4958]: I1206 05:39:09.867349 4958 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh"
Dec 06 05:39:09 crc kubenswrapper[4958]: I1206 05:39:09.868257 4958 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1236d18f68c8851c320373ee86fe651a1f296f33f07d8ecc4e000f5e2ab900bc"} pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 06 05:39:09 crc kubenswrapper[4958]: I1206 05:39:09.868328 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" containerName="machine-config-daemon" containerID="cri-o://1236d18f68c8851c320373ee86fe651a1f296f33f07d8ecc4e000f5e2ab900bc" gracePeriod=600
Dec 06 05:39:10 crc kubenswrapper[4958]: I1206 05:39:10.257539 4958 generic.go:334] "Generic (PLEG): container finished" podID="c13528c0-da5d-4d55-9155-2c29c33edfc4" containerID="1236d18f68c8851c320373ee86fe651a1f296f33f07d8ecc4e000f5e2ab900bc" exitCode=0
Dec 06 05:39:10 crc kubenswrapper[4958]: I1206 05:39:10.257619 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" event={"ID":"c13528c0-da5d-4d55-9155-2c29c33edfc4","Type":"ContainerDied","Data":"1236d18f68c8851c320373ee86fe651a1f296f33f07d8ecc4e000f5e2ab900bc"}
Dec 06 05:39:10 crc kubenswrapper[4958]: I1206 05:39:10.258195 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" event={"ID":"c13528c0-da5d-4d55-9155-2c29c33edfc4","Type":"ContainerStarted","Data":"54b4894bbf7e81e569496756397053ebc513bca9497efd2cd9161604c907d3ec"}
Dec 06 05:39:10 crc kubenswrapper[4958]: I1206 05:39:10.258223 4958 scope.go:117] "RemoveContainer" containerID="c13fab5817b994a3289049eb9239d18e450fee0a186ac2014829ccee71c35589"
Dec 06 05:40:43 crc kubenswrapper[4958]: I1206 05:40:43.361091 4958 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Dec 06 05:41:35 crc kubenswrapper[4958]: I1206 05:41:35.836594 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-gh5kc"]
Dec 06 05:41:35 crc kubenswrapper[4958]: E1206 05:41:35.837335 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4c40d5e-9b15-4c80-9a23-5047a6dc887c" containerName="registry"
Dec 06 05:41:35 crc kubenswrapper[4958]: I1206 05:41:35.837351 4958 state_mem.go:107] "Deleted CPUSet assignment"
podUID="a4c40d5e-9b15-4c80-9a23-5047a6dc887c" containerName="registry" Dec 06 05:41:35 crc kubenswrapper[4958]: I1206 05:41:35.837494 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4c40d5e-9b15-4c80-9a23-5047a6dc887c" containerName="registry" Dec 06 05:41:35 crc kubenswrapper[4958]: I1206 05:41:35.837934 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-gh5kc" Dec 06 05:41:35 crc kubenswrapper[4958]: I1206 05:41:35.843959 4958 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-ng7fs" Dec 06 05:41:35 crc kubenswrapper[4958]: I1206 05:41:35.844166 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Dec 06 05:41:35 crc kubenswrapper[4958]: I1206 05:41:35.844620 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Dec 06 05:41:35 crc kubenswrapper[4958]: I1206 05:41:35.845033 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-5b446d88c5-d2clt"] Dec 06 05:41:35 crc kubenswrapper[4958]: I1206 05:41:35.845675 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-d2clt" Dec 06 05:41:35 crc kubenswrapper[4958]: I1206 05:41:35.848732 4958 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-6hlpb" Dec 06 05:41:35 crc kubenswrapper[4958]: I1206 05:41:35.849435 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-gh5kc"] Dec 06 05:41:35 crc kubenswrapper[4958]: I1206 05:41:35.857664 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-d2clt"] Dec 06 05:41:35 crc kubenswrapper[4958]: I1206 05:41:35.873890 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-2wj2z"] Dec 06 05:41:35 crc kubenswrapper[4958]: I1206 05:41:35.874669 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-2wj2z" Dec 06 05:41:35 crc kubenswrapper[4958]: I1206 05:41:35.882522 4958 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-5xbb5" Dec 06 05:41:35 crc kubenswrapper[4958]: I1206 05:41:35.892952 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-2wj2z"] Dec 06 05:41:35 crc kubenswrapper[4958]: I1206 05:41:35.956978 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xws2w\" (UniqueName: \"kubernetes.io/projected/101b4063-65a7-47e2-8cda-8ed8bd230ae9-kube-api-access-xws2w\") pod \"cert-manager-5b446d88c5-d2clt\" (UID: \"101b4063-65a7-47e2-8cda-8ed8bd230ae9\") " pod="cert-manager/cert-manager-5b446d88c5-d2clt" Dec 06 05:41:35 crc kubenswrapper[4958]: I1206 05:41:35.957239 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f58gl\" (UniqueName: \"kubernetes.io/projected/8f4fa77f-6eb8-4d40-bd13-e45e924b22b5-kube-api-access-f58gl\") pod \"cert-manager-cainjector-7f985d654d-gh5kc\" (UID: \"8f4fa77f-6eb8-4d40-bd13-e45e924b22b5\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-gh5kc" Dec 06 05:41:36 crc kubenswrapper[4958]: I1206 05:41:36.058699 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f58gl\" (UniqueName: \"kubernetes.io/projected/8f4fa77f-6eb8-4d40-bd13-e45e924b22b5-kube-api-access-f58gl\") pod \"cert-manager-cainjector-7f985d654d-gh5kc\" (UID: \"8f4fa77f-6eb8-4d40-bd13-e45e924b22b5\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-gh5kc" Dec 06 05:41:36 crc kubenswrapper[4958]: I1206 05:41:36.059205 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xws2w\" (UniqueName: \"kubernetes.io/projected/101b4063-65a7-47e2-8cda-8ed8bd230ae9-kube-api-access-xws2w\") pod \"cert-manager-5b446d88c5-d2clt\" (UID: \"101b4063-65a7-47e2-8cda-8ed8bd230ae9\") " pod="cert-manager/cert-manager-5b446d88c5-d2clt" Dec 06 05:41:36 crc kubenswrapper[4958]: I1206 05:41:36.059503 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2c26d\" (UniqueName: \"kubernetes.io/projected/466b443c-4255-40b9-9e46-1e7e6b1a526b-kube-api-access-2c26d\") pod \"cert-manager-webhook-5655c58dd6-2wj2z\" (UID: \"466b443c-4255-40b9-9e46-1e7e6b1a526b\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-2wj2z" Dec 06 05:41:36 crc kubenswrapper[4958]: I1206 05:41:36.076781 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f58gl\" (UniqueName: \"kubernetes.io/projected/8f4fa77f-6eb8-4d40-bd13-e45e924b22b5-kube-api-access-f58gl\") pod \"cert-manager-cainjector-7f985d654d-gh5kc\" (UID: \"8f4fa77f-6eb8-4d40-bd13-e45e924b22b5\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-gh5kc" Dec 06 05:41:36 crc kubenswrapper[4958]: I1206 05:41:36.077623 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xws2w\" (UniqueName: \"kubernetes.io/projected/101b4063-65a7-47e2-8cda-8ed8bd230ae9-kube-api-access-xws2w\") pod \"cert-manager-5b446d88c5-d2clt\" (UID: \"101b4063-65a7-47e2-8cda-8ed8bd230ae9\") " pod="cert-manager/cert-manager-5b446d88c5-d2clt" Dec 06 05:41:36 crc kubenswrapper[4958]: I1206 05:41:36.160650 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-gh5kc" Dec 06 05:41:36 crc kubenswrapper[4958]: I1206 05:41:36.160966 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2c26d\" (UniqueName: \"kubernetes.io/projected/466b443c-4255-40b9-9e46-1e7e6b1a526b-kube-api-access-2c26d\") pod \"cert-manager-webhook-5655c58dd6-2wj2z\" (UID: \"466b443c-4255-40b9-9e46-1e7e6b1a526b\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-2wj2z" Dec 06 05:41:36 crc kubenswrapper[4958]: I1206 05:41:36.168224 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-d2clt" Dec 06 05:41:36 crc kubenswrapper[4958]: I1206 05:41:36.182958 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2c26d\" (UniqueName: \"kubernetes.io/projected/466b443c-4255-40b9-9e46-1e7e6b1a526b-kube-api-access-2c26d\") pod \"cert-manager-webhook-5655c58dd6-2wj2z\" (UID: \"466b443c-4255-40b9-9e46-1e7e6b1a526b\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-2wj2z" Dec 06 05:41:36 crc kubenswrapper[4958]: I1206 05:41:36.187867 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-2wj2z" Dec 06 05:41:36 crc kubenswrapper[4958]: I1206 05:41:36.396386 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-d2clt"] Dec 06 05:41:36 crc kubenswrapper[4958]: I1206 05:41:36.405352 4958 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 06 05:41:36 crc kubenswrapper[4958]: I1206 05:41:36.431313 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-2wj2z"] Dec 06 05:41:36 crc kubenswrapper[4958]: W1206 05:41:36.435603 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod466b443c_4255_40b9_9e46_1e7e6b1a526b.slice/crio-f670430da2358d3f30723fc6d3ff0200c4e594cf501deb18d652f7c53e3f7282 WatchSource:0}: Error finding container f670430da2358d3f30723fc6d3ff0200c4e594cf501deb18d652f7c53e3f7282: Status 404 returned error can't find the container with id f670430da2358d3f30723fc6d3ff0200c4e594cf501deb18d652f7c53e3f7282 Dec 06 05:41:36 crc kubenswrapper[4958]: W1206 05:41:36.565544 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8f4fa77f_6eb8_4d40_bd13_e45e924b22b5.slice/crio-f21b373ae7cb02db79238bd0787382cdc3307ea5bee1660951e030e5b5bab1c4 WatchSource:0}: Error finding container f21b373ae7cb02db79238bd0787382cdc3307ea5bee1660951e030e5b5bab1c4: Status 404 returned error can't find the container with id f21b373ae7cb02db79238bd0787382cdc3307ea5bee1660951e030e5b5bab1c4 Dec 06 05:41:36 crc kubenswrapper[4958]: I1206 05:41:36.567271 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-gh5kc"] Dec 06 05:41:37 crc kubenswrapper[4958]: I1206 05:41:37.130050 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-d2clt" event={"ID":"101b4063-65a7-47e2-8cda-8ed8bd230ae9","Type":"ContainerStarted","Data":"d65ae3a9ffa6183c432135a8574901d969aedd417a8f2a26801c927149edc725"} Dec 06 05:41:37 crc kubenswrapper[4958]: I1206 05:41:37.132238 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="cert-manager/cert-manager-cainjector-7f985d654d-gh5kc" event={"ID":"8f4fa77f-6eb8-4d40-bd13-e45e924b22b5","Type":"ContainerStarted","Data":"f21b373ae7cb02db79238bd0787382cdc3307ea5bee1660951e030e5b5bab1c4"} Dec 06 05:41:37 crc kubenswrapper[4958]: I1206 05:41:37.139249 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-2wj2z" event={"ID":"466b443c-4255-40b9-9e46-1e7e6b1a526b","Type":"ContainerStarted","Data":"f670430da2358d3f30723fc6d3ff0200c4e594cf501deb18d652f7c53e3f7282"} Dec 06 05:41:39 crc kubenswrapper[4958]: I1206 05:41:39.150368 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-gh5kc" event={"ID":"8f4fa77f-6eb8-4d40-bd13-e45e924b22b5","Type":"ContainerStarted","Data":"2c352fb76c653c89031e4661158b03504d38c72819ecd9b10c894cfec077946b"} Dec 06 05:41:39 crc kubenswrapper[4958]: I1206 05:41:39.166864 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7f985d654d-gh5kc" podStartSLOduration=1.901122534 podStartE2EDuration="4.166847837s" podCreationTimestamp="2025-12-06 05:41:35 +0000 UTC" firstStartedPulling="2025-12-06 05:41:36.572212462 +0000 UTC m=+807.105983225" lastFinishedPulling="2025-12-06 05:41:38.837937765 +0000 UTC m=+809.371708528" observedRunningTime="2025-12-06 05:41:39.166291153 +0000 UTC m=+809.700061916" watchObservedRunningTime="2025-12-06 05:41:39.166847837 +0000 UTC m=+809.700618600" Dec 06 05:41:39 crc kubenswrapper[4958]: I1206 05:41:39.866525 4958 patch_prober.go:28] interesting pod/machine-config-daemon-5ktnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 05:41:39 crc kubenswrapper[4958]: I1206 05:41:39.866576 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 05:41:41 crc kubenswrapper[4958]: I1206 05:41:41.161598 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-d2clt" event={"ID":"101b4063-65a7-47e2-8cda-8ed8bd230ae9","Type":"ContainerStarted","Data":"aff9b3da4ae8a18e15e6bdbd388d2202ff985f3eedc8c56ebcf9c711fc0c5b5d"} Dec 06 05:41:41 crc kubenswrapper[4958]: I1206 05:41:41.163211 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-2wj2z" event={"ID":"466b443c-4255-40b9-9e46-1e7e6b1a526b","Type":"ContainerStarted","Data":"f3c72590390884cf385104c42669b17e5591a4ee6a0ef871b9b6154284fc04e5"} Dec 06 05:41:41 crc kubenswrapper[4958]: I1206 05:41:41.163368 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-5655c58dd6-2wj2z" Dec 06 05:41:41 crc kubenswrapper[4958]: I1206 05:41:41.181608 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-5b446d88c5-d2clt" podStartSLOduration=1.962002663 podStartE2EDuration="6.181579205s" podCreationTimestamp="2025-12-06 05:41:35 +0000 UTC" firstStartedPulling="2025-12-06 05:41:36.405154392 +0000 UTC m=+806.938925155" lastFinishedPulling="2025-12-06 05:41:40.624730934 +0000 UTC m=+811.158501697" 
observedRunningTime="2025-12-06 05:41:41.179021088 +0000 UTC m=+811.712791861" watchObservedRunningTime="2025-12-06 05:41:41.181579205 +0000 UTC m=+811.715349998" Dec 06 05:41:41 crc kubenswrapper[4958]: I1206 05:41:41.201353 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-5655c58dd6-2wj2z" podStartSLOduration=2.019494975 podStartE2EDuration="6.201331564s" podCreationTimestamp="2025-12-06 05:41:35 +0000 UTC" firstStartedPulling="2025-12-06 05:41:36.437597055 +0000 UTC m=+806.971367818" lastFinishedPulling="2025-12-06 05:41:40.619433644 +0000 UTC m=+811.153204407" observedRunningTime="2025-12-06 05:41:41.193822757 +0000 UTC m=+811.727593560" watchObservedRunningTime="2025-12-06 05:41:41.201331564 +0000 UTC m=+811.735102327" Dec 06 05:41:46 crc kubenswrapper[4958]: I1206 05:41:46.190292 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-5655c58dd6-2wj2z" Dec 06 05:41:46 crc kubenswrapper[4958]: I1206 05:41:46.436859 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-f4swt"] Dec 06 05:41:46 crc kubenswrapper[4958]: I1206 05:41:46.437296 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" podUID="4c75c3b8-96d9-442e-b3c4-92d10ad33929" containerName="ovn-controller" containerID="cri-o://1ac1117aba254b7e048d9dd3feff2587d218f6d7a6f4ccbe455eb2b19e534aa7" gracePeriod=30 Dec 06 05:41:46 crc kubenswrapper[4958]: I1206 05:41:46.437340 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" podUID="4c75c3b8-96d9-442e-b3c4-92d10ad33929" containerName="sbdb" containerID="cri-o://90354a506bfb2c066060aaa6c9c48c8fe36acc7b3fb7f76905c7aa2622c8c93e" gracePeriod=30 Dec 06 05:41:46 crc kubenswrapper[4958]: I1206 05:41:46.437427 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" podUID="4c75c3b8-96d9-442e-b3c4-92d10ad33929" containerName="kube-rbac-proxy-node" containerID="cri-o://9c7cf12b8a7e5da199c6d8c5bdeb41243c911cef6cab0bac81e407502a8ff144" gracePeriod=30 Dec 06 05:41:46 crc kubenswrapper[4958]: I1206 05:41:46.437454 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" podUID="4c75c3b8-96d9-442e-b3c4-92d10ad33929" containerName="nbdb" containerID="cri-o://f87617d1f4b9c7ad893a539ddf123f3e213d38e44c084441f565d6535078c7f0" gracePeriod=30 Dec 06 05:41:46 crc kubenswrapper[4958]: I1206 05:41:46.437420 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" podUID="4c75c3b8-96d9-442e-b3c4-92d10ad33929" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://e6e7454c626f79f7ef231e33e2917e2a65265fc21ccf0bb078c9b7f5b3cec240" gracePeriod=30 Dec 06 05:41:46 crc kubenswrapper[4958]: I1206 05:41:46.437534 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" podUID="4c75c3b8-96d9-442e-b3c4-92d10ad33929" containerName="ovn-acl-logging" containerID="cri-o://473d669d19d2507144f95718dc331ea994a36254f7f7b6e2b262a0e9cb46bfc2" gracePeriod=30 Dec 06 05:41:46 crc kubenswrapper[4958]: I1206 05:41:46.437543 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" 
podUID="4c75c3b8-96d9-442e-b3c4-92d10ad33929" containerName="northd" containerID="cri-o://08f6d7aa9cdfe8211f420e0a25851c58e125bd09b9d90ad4724703cc1112e45b" gracePeriod=30 Dec 06 05:41:46 crc kubenswrapper[4958]: I1206 05:41:46.474543 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" podUID="4c75c3b8-96d9-442e-b3c4-92d10ad33929" containerName="ovnkube-controller" containerID="cri-o://73bec7ebc476d53009c9e97d60b92a1f63469eb422461d79727d6b5234b5ce4d" gracePeriod=30 Dec 06 05:41:47 crc kubenswrapper[4958]: I1206 05:41:47.202849 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-f4swt_4c75c3b8-96d9-442e-b3c4-92d10ad33929/ovnkube-controller/3.log" Dec 06 05:41:47 crc kubenswrapper[4958]: I1206 05:41:47.205182 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-f4swt_4c75c3b8-96d9-442e-b3c4-92d10ad33929/ovn-acl-logging/0.log" Dec 06 05:41:47 crc kubenswrapper[4958]: I1206 05:41:47.205958 4958 generic.go:334] "Generic (PLEG): container finished" podID="4c75c3b8-96d9-442e-b3c4-92d10ad33929" containerID="473d669d19d2507144f95718dc331ea994a36254f7f7b6e2b262a0e9cb46bfc2" exitCode=143 Dec 06 05:41:47 crc kubenswrapper[4958]: I1206 05:41:47.206000 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" event={"ID":"4c75c3b8-96d9-442e-b3c4-92d10ad33929","Type":"ContainerDied","Data":"473d669d19d2507144f95718dc331ea994a36254f7f7b6e2b262a0e9cb46bfc2"} Dec 06 05:41:48 crc kubenswrapper[4958]: I1206 05:41:48.215676 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-wr7h5_fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7/kube-multus/2.log" Dec 06 05:41:48 crc kubenswrapper[4958]: I1206 05:41:48.216785 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-wr7h5_fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7/kube-multus/1.log" Dec 06 05:41:48 crc kubenswrapper[4958]: I1206 05:41:48.216860 4958 generic.go:334] "Generic (PLEG): container finished" podID="fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7" containerID="1ab05298aa4f4178d78e78302f50c4335c0cbebb3325d23082e9d5818d092121" exitCode=2 Dec 06 05:41:48 crc kubenswrapper[4958]: I1206 05:41:48.216949 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-wr7h5" event={"ID":"fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7","Type":"ContainerDied","Data":"1ab05298aa4f4178d78e78302f50c4335c0cbebb3325d23082e9d5818d092121"} Dec 06 05:41:48 crc kubenswrapper[4958]: I1206 05:41:48.217004 4958 scope.go:117] "RemoveContainer" containerID="5ecb41e730cf6bf34bee3ac4577e76690a314f9466e2bd69d896f99091c3657c" Dec 06 05:41:48 crc kubenswrapper[4958]: I1206 05:41:48.218296 4958 scope.go:117] "RemoveContainer" containerID="1ab05298aa4f4178d78e78302f50c4335c0cbebb3325d23082e9d5818d092121" Dec 06 05:41:48 crc kubenswrapper[4958]: I1206 05:41:48.219435 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-f4swt_4c75c3b8-96d9-442e-b3c4-92d10ad33929/ovnkube-controller/3.log" Dec 06 05:41:48 crc kubenswrapper[4958]: I1206 05:41:48.223983 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-f4swt_4c75c3b8-96d9-442e-b3c4-92d10ad33929/ovn-acl-logging/0.log" Dec 06 05:41:48 crc kubenswrapper[4958]: I1206 05:41:48.224792 4958 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-f4swt_4c75c3b8-96d9-442e-b3c4-92d10ad33929/ovn-controller/0.log" Dec 06 05:41:48 crc kubenswrapper[4958]: I1206 05:41:48.225351 4958 generic.go:334] "Generic (PLEG): container finished" podID="4c75c3b8-96d9-442e-b3c4-92d10ad33929" containerID="73bec7ebc476d53009c9e97d60b92a1f63469eb422461d79727d6b5234b5ce4d" exitCode=0 Dec 06 05:41:48 crc kubenswrapper[4958]: I1206 05:41:48.225397 4958 generic.go:334] "Generic (PLEG): container finished" podID="4c75c3b8-96d9-442e-b3c4-92d10ad33929" containerID="90354a506bfb2c066060aaa6c9c48c8fe36acc7b3fb7f76905c7aa2622c8c93e" exitCode=0 Dec 06 05:41:48 crc kubenswrapper[4958]: I1206 05:41:48.225423 4958 generic.go:334] "Generic (PLEG): container finished" podID="4c75c3b8-96d9-442e-b3c4-92d10ad33929" containerID="f87617d1f4b9c7ad893a539ddf123f3e213d38e44c084441f565d6535078c7f0" exitCode=0 Dec 06 05:41:48 crc kubenswrapper[4958]: I1206 05:41:48.225444 4958 generic.go:334] "Generic (PLEG): container finished" podID="4c75c3b8-96d9-442e-b3c4-92d10ad33929" containerID="08f6d7aa9cdfe8211f420e0a25851c58e125bd09b9d90ad4724703cc1112e45b" exitCode=0 Dec 06 05:41:48 crc kubenswrapper[4958]: I1206 05:41:48.225462 4958 generic.go:334] "Generic (PLEG): container finished" podID="4c75c3b8-96d9-442e-b3c4-92d10ad33929" containerID="e6e7454c626f79f7ef231e33e2917e2a65265fc21ccf0bb078c9b7f5b3cec240" exitCode=0 Dec 06 05:41:48 crc kubenswrapper[4958]: I1206 05:41:48.225518 4958 generic.go:334] "Generic (PLEG): container finished" podID="4c75c3b8-96d9-442e-b3c4-92d10ad33929" containerID="9c7cf12b8a7e5da199c6d8c5bdeb41243c911cef6cab0bac81e407502a8ff144" exitCode=0 Dec 06 05:41:48 crc kubenswrapper[4958]: I1206 05:41:48.225535 4958 generic.go:334] "Generic (PLEG): container finished" podID="4c75c3b8-96d9-442e-b3c4-92d10ad33929" containerID="1ac1117aba254b7e048d9dd3feff2587d218f6d7a6f4ccbe455eb2b19e534aa7" exitCode=143 Dec 06 05:41:48 crc kubenswrapper[4958]: I1206 05:41:48.225464 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" event={"ID":"4c75c3b8-96d9-442e-b3c4-92d10ad33929","Type":"ContainerDied","Data":"73bec7ebc476d53009c9e97d60b92a1f63469eb422461d79727d6b5234b5ce4d"} Dec 06 05:41:48 crc kubenswrapper[4958]: I1206 05:41:48.225596 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" event={"ID":"4c75c3b8-96d9-442e-b3c4-92d10ad33929","Type":"ContainerDied","Data":"90354a506bfb2c066060aaa6c9c48c8fe36acc7b3fb7f76905c7aa2622c8c93e"} Dec 06 05:41:48 crc kubenswrapper[4958]: I1206 05:41:48.225628 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" event={"ID":"4c75c3b8-96d9-442e-b3c4-92d10ad33929","Type":"ContainerDied","Data":"f87617d1f4b9c7ad893a539ddf123f3e213d38e44c084441f565d6535078c7f0"} Dec 06 05:41:48 crc kubenswrapper[4958]: I1206 05:41:48.225655 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" event={"ID":"4c75c3b8-96d9-442e-b3c4-92d10ad33929","Type":"ContainerDied","Data":"08f6d7aa9cdfe8211f420e0a25851c58e125bd09b9d90ad4724703cc1112e45b"} Dec 06 05:41:48 crc kubenswrapper[4958]: I1206 05:41:48.225680 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" event={"ID":"4c75c3b8-96d9-442e-b3c4-92d10ad33929","Type":"ContainerDied","Data":"e6e7454c626f79f7ef231e33e2917e2a65265fc21ccf0bb078c9b7f5b3cec240"} Dec 06 05:41:48 crc kubenswrapper[4958]: I1206 
05:41:48.225706 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" event={"ID":"4c75c3b8-96d9-442e-b3c4-92d10ad33929","Type":"ContainerDied","Data":"9c7cf12b8a7e5da199c6d8c5bdeb41243c911cef6cab0bac81e407502a8ff144"} Dec 06 05:41:48 crc kubenswrapper[4958]: I1206 05:41:48.225730 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" event={"ID":"4c75c3b8-96d9-442e-b3c4-92d10ad33929","Type":"ContainerDied","Data":"1ac1117aba254b7e048d9dd3feff2587d218f6d7a6f4ccbe455eb2b19e534aa7"} Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.168795 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-f4swt_4c75c3b8-96d9-442e-b3c4-92d10ad33929/ovnkube-controller/3.log" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.172036 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-f4swt_4c75c3b8-96d9-442e-b3c4-92d10ad33929/ovn-acl-logging/0.log" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.172868 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-f4swt_4c75c3b8-96d9-442e-b3c4-92d10ad33929/ovn-controller/0.log" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.173577 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.223984 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-ttdnm"] Dec 06 05:41:49 crc kubenswrapper[4958]: E1206 05:41:49.224193 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c75c3b8-96d9-442e-b3c4-92d10ad33929" containerName="ovnkube-controller" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.224206 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c75c3b8-96d9-442e-b3c4-92d10ad33929" containerName="ovnkube-controller" Dec 06 05:41:49 crc kubenswrapper[4958]: E1206 05:41:49.224220 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c75c3b8-96d9-442e-b3c4-92d10ad33929" containerName="kubecfg-setup" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.224225 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c75c3b8-96d9-442e-b3c4-92d10ad33929" containerName="kubecfg-setup" Dec 06 05:41:49 crc kubenswrapper[4958]: E1206 05:41:49.224233 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c75c3b8-96d9-442e-b3c4-92d10ad33929" containerName="ovnkube-controller" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.224239 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c75c3b8-96d9-442e-b3c4-92d10ad33929" containerName="ovnkube-controller" Dec 06 05:41:49 crc kubenswrapper[4958]: E1206 05:41:49.224248 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c75c3b8-96d9-442e-b3c4-92d10ad33929" containerName="ovnkube-controller" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.224254 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c75c3b8-96d9-442e-b3c4-92d10ad33929" containerName="ovnkube-controller" Dec 06 05:41:49 crc kubenswrapper[4958]: E1206 05:41:49.224263 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c75c3b8-96d9-442e-b3c4-92d10ad33929" containerName="ovn-controller" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.224269 4958 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="4c75c3b8-96d9-442e-b3c4-92d10ad33929" containerName="ovn-controller" Dec 06 05:41:49 crc kubenswrapper[4958]: E1206 05:41:49.224275 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c75c3b8-96d9-442e-b3c4-92d10ad33929" containerName="sbdb" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.224280 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c75c3b8-96d9-442e-b3c4-92d10ad33929" containerName="sbdb" Dec 06 05:41:49 crc kubenswrapper[4958]: E1206 05:41:49.224287 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c75c3b8-96d9-442e-b3c4-92d10ad33929" containerName="nbdb" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.224293 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c75c3b8-96d9-442e-b3c4-92d10ad33929" containerName="nbdb" Dec 06 05:41:49 crc kubenswrapper[4958]: E1206 05:41:49.224299 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c75c3b8-96d9-442e-b3c4-92d10ad33929" containerName="ovnkube-controller" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.224304 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c75c3b8-96d9-442e-b3c4-92d10ad33929" containerName="ovnkube-controller" Dec 06 05:41:49 crc kubenswrapper[4958]: E1206 05:41:49.224310 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c75c3b8-96d9-442e-b3c4-92d10ad33929" containerName="northd" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.224315 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c75c3b8-96d9-442e-b3c4-92d10ad33929" containerName="northd" Dec 06 05:41:49 crc kubenswrapper[4958]: E1206 05:41:49.224325 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c75c3b8-96d9-442e-b3c4-92d10ad33929" containerName="ovn-acl-logging" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.224331 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c75c3b8-96d9-442e-b3c4-92d10ad33929" containerName="ovn-acl-logging" Dec 06 05:41:49 crc kubenswrapper[4958]: E1206 05:41:49.224340 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c75c3b8-96d9-442e-b3c4-92d10ad33929" containerName="kube-rbac-proxy-node" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.224346 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c75c3b8-96d9-442e-b3c4-92d10ad33929" containerName="kube-rbac-proxy-node" Dec 06 05:41:49 crc kubenswrapper[4958]: E1206 05:41:49.224356 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c75c3b8-96d9-442e-b3c4-92d10ad33929" containerName="kube-rbac-proxy-ovn-metrics" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.224362 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c75c3b8-96d9-442e-b3c4-92d10ad33929" containerName="kube-rbac-proxy-ovn-metrics" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.224444 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c75c3b8-96d9-442e-b3c4-92d10ad33929" containerName="kube-rbac-proxy-node" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.224455 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c75c3b8-96d9-442e-b3c4-92d10ad33929" containerName="ovnkube-controller" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.224462 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c75c3b8-96d9-442e-b3c4-92d10ad33929" containerName="ovnkube-controller" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.224485 4958 
memory_manager.go:354] "RemoveStaleState removing state" podUID="4c75c3b8-96d9-442e-b3c4-92d10ad33929" containerName="ovn-acl-logging" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.224492 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c75c3b8-96d9-442e-b3c4-92d10ad33929" containerName="nbdb" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.224499 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c75c3b8-96d9-442e-b3c4-92d10ad33929" containerName="ovnkube-controller" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.224507 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c75c3b8-96d9-442e-b3c4-92d10ad33929" containerName="sbdb" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.224515 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c75c3b8-96d9-442e-b3c4-92d10ad33929" containerName="ovn-controller" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.224522 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c75c3b8-96d9-442e-b3c4-92d10ad33929" containerName="kube-rbac-proxy-ovn-metrics" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.224527 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c75c3b8-96d9-442e-b3c4-92d10ad33929" containerName="ovnkube-controller" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.224534 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c75c3b8-96d9-442e-b3c4-92d10ad33929" containerName="northd" Dec 06 05:41:49 crc kubenswrapper[4958]: E1206 05:41:49.224623 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c75c3b8-96d9-442e-b3c4-92d10ad33929" containerName="ovnkube-controller" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.224629 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c75c3b8-96d9-442e-b3c4-92d10ad33929" containerName="ovnkube-controller" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.224701 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c75c3b8-96d9-442e-b3c4-92d10ad33929" containerName="ovnkube-controller" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.226333 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-ttdnm" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.236804 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-f4swt_4c75c3b8-96d9-442e-b3c4-92d10ad33929/ovnkube-controller/3.log" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.237582 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4c75c3b8-96d9-442e-b3c4-92d10ad33929-host-slash\") pod \"4c75c3b8-96d9-442e-b3c4-92d10ad33929\" (UID: \"4c75c3b8-96d9-442e-b3c4-92d10ad33929\") " Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.237623 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4c75c3b8-96d9-442e-b3c4-92d10ad33929-run-systemd\") pod \"4c75c3b8-96d9-442e-b3c4-92d10ad33929\" (UID: \"4c75c3b8-96d9-442e-b3c4-92d10ad33929\") " Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.237659 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4c75c3b8-96d9-442e-b3c4-92d10ad33929-host-var-lib-cni-networks-ovn-kubernetes\") pod \"4c75c3b8-96d9-442e-b3c4-92d10ad33929\" (UID: \"4c75c3b8-96d9-442e-b3c4-92d10ad33929\") " Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.237662 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c75c3b8-96d9-442e-b3c4-92d10ad33929-host-slash" (OuterVolumeSpecName: "host-slash") pod "4c75c3b8-96d9-442e-b3c4-92d10ad33929" (UID: "4c75c3b8-96d9-442e-b3c4-92d10ad33929"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.237687 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4c75c3b8-96d9-442e-b3c4-92d10ad33929-run-openvswitch\") pod \"4c75c3b8-96d9-442e-b3c4-92d10ad33929\" (UID: \"4c75c3b8-96d9-442e-b3c4-92d10ad33929\") " Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.237728 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4c75c3b8-96d9-442e-b3c4-92d10ad33929-etc-openvswitch\") pod \"4c75c3b8-96d9-442e-b3c4-92d10ad33929\" (UID: \"4c75c3b8-96d9-442e-b3c4-92d10ad33929\") " Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.237775 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/4c75c3b8-96d9-442e-b3c4-92d10ad33929-ovnkube-script-lib\") pod \"4c75c3b8-96d9-442e-b3c4-92d10ad33929\" (UID: \"4c75c3b8-96d9-442e-b3c4-92d10ad33929\") " Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.237800 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4c75c3b8-96d9-442e-b3c4-92d10ad33929-host-run-netns\") pod \"4c75c3b8-96d9-442e-b3c4-92d10ad33929\" (UID: \"4c75c3b8-96d9-442e-b3c4-92d10ad33929\") " Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.237822 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4c75c3b8-96d9-442e-b3c4-92d10ad33929-host-cni-netd\") pod \"4c75c3b8-96d9-442e-b3c4-92d10ad33929\" (UID: \"4c75c3b8-96d9-442e-b3c4-92d10ad33929\") " Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.237843 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4c75c3b8-96d9-442e-b3c4-92d10ad33929-var-lib-openvswitch\") pod \"4c75c3b8-96d9-442e-b3c4-92d10ad33929\" (UID: \"4c75c3b8-96d9-442e-b3c4-92d10ad33929\") " Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.237878 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c75c3b8-96d9-442e-b3c4-92d10ad33929-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "4c75c3b8-96d9-442e-b3c4-92d10ad33929" (UID: "4c75c3b8-96d9-442e-b3c4-92d10ad33929"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.237917 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c75c3b8-96d9-442e-b3c4-92d10ad33929-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "4c75c3b8-96d9-442e-b3c4-92d10ad33929" (UID: "4c75c3b8-96d9-442e-b3c4-92d10ad33929"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.237920 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4c75c3b8-96d9-442e-b3c4-92d10ad33929-node-log\") pod \"4c75c3b8-96d9-442e-b3c4-92d10ad33929\" (UID: \"4c75c3b8-96d9-442e-b3c4-92d10ad33929\") " Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.237959 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4c75c3b8-96d9-442e-b3c4-92d10ad33929-systemd-units\") pod \"4c75c3b8-96d9-442e-b3c4-92d10ad33929\" (UID: \"4c75c3b8-96d9-442e-b3c4-92d10ad33929\") " Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.237985 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4c75c3b8-96d9-442e-b3c4-92d10ad33929-env-overrides\") pod \"4c75c3b8-96d9-442e-b3c4-92d10ad33929\" (UID: \"4c75c3b8-96d9-442e-b3c4-92d10ad33929\") " Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.237929 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c75c3b8-96d9-442e-b3c4-92d10ad33929-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "4c75c3b8-96d9-442e-b3c4-92d10ad33929" (UID: "4c75c3b8-96d9-442e-b3c4-92d10ad33929"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.238015 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4c75c3b8-96d9-442e-b3c4-92d10ad33929-host-kubelet\") pod \"4c75c3b8-96d9-442e-b3c4-92d10ad33929\" (UID: \"4c75c3b8-96d9-442e-b3c4-92d10ad33929\") " Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.238051 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4c75c3b8-96d9-442e-b3c4-92d10ad33929-log-socket\") pod \"4c75c3b8-96d9-442e-b3c4-92d10ad33929\" (UID: \"4c75c3b8-96d9-442e-b3c4-92d10ad33929\") " Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.238081 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4c75c3b8-96d9-442e-b3c4-92d10ad33929-run-ovn\") pod \"4c75c3b8-96d9-442e-b3c4-92d10ad33929\" (UID: \"4c75c3b8-96d9-442e-b3c4-92d10ad33929\") " Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.238107 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k5mzl\" (UniqueName: \"kubernetes.io/projected/4c75c3b8-96d9-442e-b3c4-92d10ad33929-kube-api-access-k5mzl\") pod \"4c75c3b8-96d9-442e-b3c4-92d10ad33929\" (UID: \"4c75c3b8-96d9-442e-b3c4-92d10ad33929\") " Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.238138 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4c75c3b8-96d9-442e-b3c4-92d10ad33929-ovnkube-config\") pod \"4c75c3b8-96d9-442e-b3c4-92d10ad33929\" (UID: \"4c75c3b8-96d9-442e-b3c4-92d10ad33929\") " Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.238161 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/4c75c3b8-96d9-442e-b3c4-92d10ad33929-ovn-node-metrics-cert\") pod \"4c75c3b8-96d9-442e-b3c4-92d10ad33929\" (UID: \"4c75c3b8-96d9-442e-b3c4-92d10ad33929\") " Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.238182 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4c75c3b8-96d9-442e-b3c4-92d10ad33929-host-run-ovn-kubernetes\") pod \"4c75c3b8-96d9-442e-b3c4-92d10ad33929\" (UID: \"4c75c3b8-96d9-442e-b3c4-92d10ad33929\") " Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.238202 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4c75c3b8-96d9-442e-b3c4-92d10ad33929-host-cni-bin\") pod \"4c75c3b8-96d9-442e-b3c4-92d10ad33929\" (UID: \"4c75c3b8-96d9-442e-b3c4-92d10ad33929\") " Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.237954 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c75c3b8-96d9-442e-b3c4-92d10ad33929-node-log" (OuterVolumeSpecName: "node-log") pod "4c75c3b8-96d9-442e-b3c4-92d10ad33929" (UID: "4c75c3b8-96d9-442e-b3c4-92d10ad33929"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.237979 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c75c3b8-96d9-442e-b3c4-92d10ad33929-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "4c75c3b8-96d9-442e-b3c4-92d10ad33929" (UID: "4c75c3b8-96d9-442e-b3c4-92d10ad33929"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.237999 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c75c3b8-96d9-442e-b3c4-92d10ad33929-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "4c75c3b8-96d9-442e-b3c4-92d10ad33929" (UID: "4c75c3b8-96d9-442e-b3c4-92d10ad33929"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.238038 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c75c3b8-96d9-442e-b3c4-92d10ad33929-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "4c75c3b8-96d9-442e-b3c4-92d10ad33929" (UID: "4c75c3b8-96d9-442e-b3c4-92d10ad33929"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.238243 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c75c3b8-96d9-442e-b3c4-92d10ad33929-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "4c75c3b8-96d9-442e-b3c4-92d10ad33929" (UID: "4c75c3b8-96d9-442e-b3c4-92d10ad33929"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.238267 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c75c3b8-96d9-442e-b3c4-92d10ad33929-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "4c75c3b8-96d9-442e-b3c4-92d10ad33929" (UID: "4c75c3b8-96d9-442e-b3c4-92d10ad33929"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.238304 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c75c3b8-96d9-442e-b3c4-92d10ad33929-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "4c75c3b8-96d9-442e-b3c4-92d10ad33929" (UID: "4c75c3b8-96d9-442e-b3c4-92d10ad33929"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.238354 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c75c3b8-96d9-442e-b3c4-92d10ad33929-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "4c75c3b8-96d9-442e-b3c4-92d10ad33929" (UID: "4c75c3b8-96d9-442e-b3c4-92d10ad33929"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.238319 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c75c3b8-96d9-442e-b3c4-92d10ad33929-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "4c75c3b8-96d9-442e-b3c4-92d10ad33929" (UID: "4c75c3b8-96d9-442e-b3c4-92d10ad33929"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.238372 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c75c3b8-96d9-442e-b3c4-92d10ad33929-log-socket" (OuterVolumeSpecName: "log-socket") pod "4c75c3b8-96d9-442e-b3c4-92d10ad33929" (UID: "4c75c3b8-96d9-442e-b3c4-92d10ad33929"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.238342 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rckv\" (UniqueName: \"kubernetes.io/projected/6cc73aad-5b1d-4274-8a12-c44571425c7c-kube-api-access-7rckv\") pod \"ovnkube-node-ttdnm\" (UID: \"6cc73aad-5b1d-4274-8a12-c44571425c7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ttdnm" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.238408 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6cc73aad-5b1d-4274-8a12-c44571425c7c-var-lib-openvswitch\") pod \"ovnkube-node-ttdnm\" (UID: \"6cc73aad-5b1d-4274-8a12-c44571425c7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ttdnm" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.238429 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6cc73aad-5b1d-4274-8a12-c44571425c7c-host-kubelet\") pod \"ovnkube-node-ttdnm\" (UID: \"6cc73aad-5b1d-4274-8a12-c44571425c7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ttdnm" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.238486 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6cc73aad-5b1d-4274-8a12-c44571425c7c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-ttdnm\" (UID: \"6cc73aad-5b1d-4274-8a12-c44571425c7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ttdnm" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.238517 4958 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6cc73aad-5b1d-4274-8a12-c44571425c7c-host-slash\") pod \"ovnkube-node-ttdnm\" (UID: \"6cc73aad-5b1d-4274-8a12-c44571425c7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ttdnm" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.238533 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6cc73aad-5b1d-4274-8a12-c44571425c7c-log-socket\") pod \"ovnkube-node-ttdnm\" (UID: \"6cc73aad-5b1d-4274-8a12-c44571425c7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ttdnm" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.238554 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6cc73aad-5b1d-4274-8a12-c44571425c7c-ovnkube-config\") pod \"ovnkube-node-ttdnm\" (UID: \"6cc73aad-5b1d-4274-8a12-c44571425c7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ttdnm" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.238572 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6cc73aad-5b1d-4274-8a12-c44571425c7c-systemd-units\") pod \"ovnkube-node-ttdnm\" (UID: \"6cc73aad-5b1d-4274-8a12-c44571425c7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ttdnm" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.238591 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6cc73aad-5b1d-4274-8a12-c44571425c7c-host-run-ovn-kubernetes\") pod \"ovnkube-node-ttdnm\" (UID: \"6cc73aad-5b1d-4274-8a12-c44571425c7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ttdnm" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.238613 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6cc73aad-5b1d-4274-8a12-c44571425c7c-host-run-netns\") pod \"ovnkube-node-ttdnm\" (UID: \"6cc73aad-5b1d-4274-8a12-c44571425c7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ttdnm" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.238647 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6cc73aad-5b1d-4274-8a12-c44571425c7c-host-cni-netd\") pod \"ovnkube-node-ttdnm\" (UID: \"6cc73aad-5b1d-4274-8a12-c44571425c7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ttdnm" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.238665 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6cc73aad-5b1d-4274-8a12-c44571425c7c-ovn-node-metrics-cert\") pod \"ovnkube-node-ttdnm\" (UID: \"6cc73aad-5b1d-4274-8a12-c44571425c7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ttdnm" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.238684 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6cc73aad-5b1d-4274-8a12-c44571425c7c-etc-openvswitch\") pod \"ovnkube-node-ttdnm\" (UID: \"6cc73aad-5b1d-4274-8a12-c44571425c7c\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-ttdnm" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.238704 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6cc73aad-5b1d-4274-8a12-c44571425c7c-node-log\") pod \"ovnkube-node-ttdnm\" (UID: \"6cc73aad-5b1d-4274-8a12-c44571425c7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ttdnm" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.238731 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6cc73aad-5b1d-4274-8a12-c44571425c7c-env-overrides\") pod \"ovnkube-node-ttdnm\" (UID: \"6cc73aad-5b1d-4274-8a12-c44571425c7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ttdnm" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.238755 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6cc73aad-5b1d-4274-8a12-c44571425c7c-ovnkube-script-lib\") pod \"ovnkube-node-ttdnm\" (UID: \"6cc73aad-5b1d-4274-8a12-c44571425c7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ttdnm" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.238774 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6cc73aad-5b1d-4274-8a12-c44571425c7c-run-ovn\") pod \"ovnkube-node-ttdnm\" (UID: \"6cc73aad-5b1d-4274-8a12-c44571425c7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ttdnm" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.238794 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6cc73aad-5b1d-4274-8a12-c44571425c7c-run-systemd\") pod \"ovnkube-node-ttdnm\" (UID: \"6cc73aad-5b1d-4274-8a12-c44571425c7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ttdnm" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.238824 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6cc73aad-5b1d-4274-8a12-c44571425c7c-run-openvswitch\") pod \"ovnkube-node-ttdnm\" (UID: \"6cc73aad-5b1d-4274-8a12-c44571425c7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ttdnm" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.238844 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6cc73aad-5b1d-4274-8a12-c44571425c7c-host-cni-bin\") pod \"ovnkube-node-ttdnm\" (UID: \"6cc73aad-5b1d-4274-8a12-c44571425c7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ttdnm" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.238879 4958 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4c75c3b8-96d9-442e-b3c4-92d10ad33929-run-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.238893 4958 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4c75c3b8-96d9-442e-b3c4-92d10ad33929-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.238905 4958 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/4c75c3b8-96d9-442e-b3c4-92d10ad33929-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.238917 4958 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/4c75c3b8-96d9-442e-b3c4-92d10ad33929-host-run-netns\") on node \"crc\" DevicePath \"\"" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.238929 4958 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4c75c3b8-96d9-442e-b3c4-92d10ad33929-host-cni-netd\") on node \"crc\" DevicePath \"\"" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.238940 4958 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/4c75c3b8-96d9-442e-b3c4-92d10ad33929-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.238952 4958 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/4c75c3b8-96d9-442e-b3c4-92d10ad33929-node-log\") on node \"crc\" DevicePath \"\"" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.238963 4958 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/4c75c3b8-96d9-442e-b3c4-92d10ad33929-systemd-units\") on node \"crc\" DevicePath \"\"" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.238974 4958 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/4c75c3b8-96d9-442e-b3c4-92d10ad33929-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.238986 4958 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/4c75c3b8-96d9-442e-b3c4-92d10ad33929-host-kubelet\") on node \"crc\" DevicePath \"\"" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.238997 4958 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/4c75c3b8-96d9-442e-b3c4-92d10ad33929-log-socket\") on node \"crc\" DevicePath \"\"" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.239007 4958 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/4c75c3b8-96d9-442e-b3c4-92d10ad33929-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.239018 4958 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/4c75c3b8-96d9-442e-b3c4-92d10ad33929-host-slash\") on node \"crc\" DevicePath \"\"" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.239030 4958 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4c75c3b8-96d9-442e-b3c4-92d10ad33929-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.239536 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c75c3b8-96d9-442e-b3c4-92d10ad33929-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "4c75c3b8-96d9-442e-b3c4-92d10ad33929" (UID: "4c75c3b8-96d9-442e-b3c4-92d10ad33929"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.239573 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c75c3b8-96d9-442e-b3c4-92d10ad33929-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "4c75c3b8-96d9-442e-b3c4-92d10ad33929" (UID: "4c75c3b8-96d9-442e-b3c4-92d10ad33929"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.239598 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c75c3b8-96d9-442e-b3c4-92d10ad33929-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "4c75c3b8-96d9-442e-b3c4-92d10ad33929" (UID: "4c75c3b8-96d9-442e-b3c4-92d10ad33929"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.244126 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c75c3b8-96d9-442e-b3c4-92d10ad33929-kube-api-access-k5mzl" (OuterVolumeSpecName: "kube-api-access-k5mzl") pod "4c75c3b8-96d9-442e-b3c4-92d10ad33929" (UID: "4c75c3b8-96d9-442e-b3c4-92d10ad33929"). InnerVolumeSpecName "kube-api-access-k5mzl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.244723 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c75c3b8-96d9-442e-b3c4-92d10ad33929-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "4c75c3b8-96d9-442e-b3c4-92d10ad33929" (UID: "4c75c3b8-96d9-442e-b3c4-92d10ad33929"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.248844 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-f4swt_4c75c3b8-96d9-442e-b3c4-92d10ad33929/ovn-acl-logging/0.log" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.249554 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-f4swt_4c75c3b8-96d9-442e-b3c4-92d10ad33929/ovn-controller/0.log" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.249965 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" event={"ID":"4c75c3b8-96d9-442e-b3c4-92d10ad33929","Type":"ContainerDied","Data":"d14ef3a0c7a1099f2b0390fd834cc2f2a26e2fa3879638b38a5750eb3a9f184d"} Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.250068 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-f4swt" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.256057 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c75c3b8-96d9-442e-b3c4-92d10ad33929-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "4c75c3b8-96d9-442e-b3c4-92d10ad33929" (UID: "4c75c3b8-96d9-442e-b3c4-92d10ad33929"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.340312 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6cc73aad-5b1d-4274-8a12-c44571425c7c-host-cni-netd\") pod \"ovnkube-node-ttdnm\" (UID: \"6cc73aad-5b1d-4274-8a12-c44571425c7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ttdnm" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.340358 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6cc73aad-5b1d-4274-8a12-c44571425c7c-ovn-node-metrics-cert\") pod \"ovnkube-node-ttdnm\" (UID: \"6cc73aad-5b1d-4274-8a12-c44571425c7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ttdnm" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.340377 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6cc73aad-5b1d-4274-8a12-c44571425c7c-etc-openvswitch\") pod \"ovnkube-node-ttdnm\" (UID: \"6cc73aad-5b1d-4274-8a12-c44571425c7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ttdnm" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.340392 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6cc73aad-5b1d-4274-8a12-c44571425c7c-node-log\") pod \"ovnkube-node-ttdnm\" (UID: \"6cc73aad-5b1d-4274-8a12-c44571425c7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ttdnm" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.340414 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6cc73aad-5b1d-4274-8a12-c44571425c7c-env-overrides\") pod \"ovnkube-node-ttdnm\" (UID: \"6cc73aad-5b1d-4274-8a12-c44571425c7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ttdnm" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.340438 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6cc73aad-5b1d-4274-8a12-c44571425c7c-ovnkube-script-lib\") pod \"ovnkube-node-ttdnm\" (UID: \"6cc73aad-5b1d-4274-8a12-c44571425c7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ttdnm" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.340455 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6cc73aad-5b1d-4274-8a12-c44571425c7c-run-ovn\") pod \"ovnkube-node-ttdnm\" (UID: \"6cc73aad-5b1d-4274-8a12-c44571425c7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ttdnm" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.340486 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6cc73aad-5b1d-4274-8a12-c44571425c7c-run-systemd\") pod \"ovnkube-node-ttdnm\" (UID: \"6cc73aad-5b1d-4274-8a12-c44571425c7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ttdnm" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.340506 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6cc73aad-5b1d-4274-8a12-c44571425c7c-run-openvswitch\") pod \"ovnkube-node-ttdnm\" (UID: \"6cc73aad-5b1d-4274-8a12-c44571425c7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ttdnm" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 
05:41:49.340519 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6cc73aad-5b1d-4274-8a12-c44571425c7c-host-cni-bin\") pod \"ovnkube-node-ttdnm\" (UID: \"6cc73aad-5b1d-4274-8a12-c44571425c7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ttdnm" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.340535 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rckv\" (UniqueName: \"kubernetes.io/projected/6cc73aad-5b1d-4274-8a12-c44571425c7c-kube-api-access-7rckv\") pod \"ovnkube-node-ttdnm\" (UID: \"6cc73aad-5b1d-4274-8a12-c44571425c7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ttdnm" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.340540 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6cc73aad-5b1d-4274-8a12-c44571425c7c-node-log\") pod \"ovnkube-node-ttdnm\" (UID: \"6cc73aad-5b1d-4274-8a12-c44571425c7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ttdnm" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.340593 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6cc73aad-5b1d-4274-8a12-c44571425c7c-run-systemd\") pod \"ovnkube-node-ttdnm\" (UID: \"6cc73aad-5b1d-4274-8a12-c44571425c7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ttdnm" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.340599 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6cc73aad-5b1d-4274-8a12-c44571425c7c-run-ovn\") pod \"ovnkube-node-ttdnm\" (UID: \"6cc73aad-5b1d-4274-8a12-c44571425c7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ttdnm" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.340636 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6cc73aad-5b1d-4274-8a12-c44571425c7c-host-cni-bin\") pod \"ovnkube-node-ttdnm\" (UID: \"6cc73aad-5b1d-4274-8a12-c44571425c7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ttdnm" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.340610 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6cc73aad-5b1d-4274-8a12-c44571425c7c-var-lib-openvswitch\") pod \"ovnkube-node-ttdnm\" (UID: \"6cc73aad-5b1d-4274-8a12-c44571425c7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ttdnm" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.340667 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6cc73aad-5b1d-4274-8a12-c44571425c7c-run-openvswitch\") pod \"ovnkube-node-ttdnm\" (UID: \"6cc73aad-5b1d-4274-8a12-c44571425c7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ttdnm" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.340553 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6cc73aad-5b1d-4274-8a12-c44571425c7c-var-lib-openvswitch\") pod \"ovnkube-node-ttdnm\" (UID: \"6cc73aad-5b1d-4274-8a12-c44571425c7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ttdnm" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.340642 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/6cc73aad-5b1d-4274-8a12-c44571425c7c-etc-openvswitch\") pod \"ovnkube-node-ttdnm\" (UID: \"6cc73aad-5b1d-4274-8a12-c44571425c7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ttdnm" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.340599 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6cc73aad-5b1d-4274-8a12-c44571425c7c-host-cni-netd\") pod \"ovnkube-node-ttdnm\" (UID: \"6cc73aad-5b1d-4274-8a12-c44571425c7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ttdnm" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.340816 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6cc73aad-5b1d-4274-8a12-c44571425c7c-host-kubelet\") pod \"ovnkube-node-ttdnm\" (UID: \"6cc73aad-5b1d-4274-8a12-c44571425c7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ttdnm" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.340860 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6cc73aad-5b1d-4274-8a12-c44571425c7c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-ttdnm\" (UID: \"6cc73aad-5b1d-4274-8a12-c44571425c7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ttdnm" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.340874 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6cc73aad-5b1d-4274-8a12-c44571425c7c-host-kubelet\") pod \"ovnkube-node-ttdnm\" (UID: \"6cc73aad-5b1d-4274-8a12-c44571425c7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ttdnm" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.340903 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6cc73aad-5b1d-4274-8a12-c44571425c7c-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-ttdnm\" (UID: \"6cc73aad-5b1d-4274-8a12-c44571425c7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ttdnm" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.340913 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6cc73aad-5b1d-4274-8a12-c44571425c7c-host-slash\") pod \"ovnkube-node-ttdnm\" (UID: \"6cc73aad-5b1d-4274-8a12-c44571425c7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ttdnm" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.340972 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6cc73aad-5b1d-4274-8a12-c44571425c7c-host-slash\") pod \"ovnkube-node-ttdnm\" (UID: \"6cc73aad-5b1d-4274-8a12-c44571425c7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ttdnm" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.341010 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6cc73aad-5b1d-4274-8a12-c44571425c7c-log-socket\") pod \"ovnkube-node-ttdnm\" (UID: \"6cc73aad-5b1d-4274-8a12-c44571425c7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ttdnm" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.341072 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: 
\"kubernetes.io/host-path/6cc73aad-5b1d-4274-8a12-c44571425c7c-log-socket\") pod \"ovnkube-node-ttdnm\" (UID: \"6cc73aad-5b1d-4274-8a12-c44571425c7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ttdnm" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.341086 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6cc73aad-5b1d-4274-8a12-c44571425c7c-ovnkube-config\") pod \"ovnkube-node-ttdnm\" (UID: \"6cc73aad-5b1d-4274-8a12-c44571425c7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ttdnm" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.341116 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6cc73aad-5b1d-4274-8a12-c44571425c7c-systemd-units\") pod \"ovnkube-node-ttdnm\" (UID: \"6cc73aad-5b1d-4274-8a12-c44571425c7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ttdnm" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.341150 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6cc73aad-5b1d-4274-8a12-c44571425c7c-host-run-ovn-kubernetes\") pod \"ovnkube-node-ttdnm\" (UID: \"6cc73aad-5b1d-4274-8a12-c44571425c7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ttdnm" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.341171 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6cc73aad-5b1d-4274-8a12-c44571425c7c-host-run-netns\") pod \"ovnkube-node-ttdnm\" (UID: \"6cc73aad-5b1d-4274-8a12-c44571425c7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ttdnm" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.341184 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6cc73aad-5b1d-4274-8a12-c44571425c7c-env-overrides\") pod \"ovnkube-node-ttdnm\" (UID: \"6cc73aad-5b1d-4274-8a12-c44571425c7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ttdnm" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.341220 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6cc73aad-5b1d-4274-8a12-c44571425c7c-systemd-units\") pod \"ovnkube-node-ttdnm\" (UID: \"6cc73aad-5b1d-4274-8a12-c44571425c7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ttdnm" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.341319 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6cc73aad-5b1d-4274-8a12-c44571425c7c-host-run-ovn-kubernetes\") pod \"ovnkube-node-ttdnm\" (UID: \"6cc73aad-5b1d-4274-8a12-c44571425c7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ttdnm" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.341351 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6cc73aad-5b1d-4274-8a12-c44571425c7c-host-run-netns\") pod \"ovnkube-node-ttdnm\" (UID: \"6cc73aad-5b1d-4274-8a12-c44571425c7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ttdnm" Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.341372 4958 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/4c75c3b8-96d9-442e-b3c4-92d10ad33929-run-systemd\") on node \"crc\" DevicePath \"\"" Dec 06 05:41:49 
crc kubenswrapper[4958]: I1206 05:41:49.341389 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k5mzl\" (UniqueName: \"kubernetes.io/projected/4c75c3b8-96d9-442e-b3c4-92d10ad33929-kube-api-access-k5mzl\") on node \"crc\" DevicePath \"\""
Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.341402 4958 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/4c75c3b8-96d9-442e-b3c4-92d10ad33929-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\""
Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.341416 4958 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/4c75c3b8-96d9-442e-b3c4-92d10ad33929-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.341428 4958 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/4c75c3b8-96d9-442e-b3c4-92d10ad33929-ovnkube-config\") on node \"crc\" DevicePath \"\""
Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.341439 4958 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/4c75c3b8-96d9-442e-b3c4-92d10ad33929-host-cni-bin\") on node \"crc\" DevicePath \"\""
Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.341524 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6cc73aad-5b1d-4274-8a12-c44571425c7c-ovnkube-script-lib\") pod \"ovnkube-node-ttdnm\" (UID: \"6cc73aad-5b1d-4274-8a12-c44571425c7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ttdnm"
Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.341735 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6cc73aad-5b1d-4274-8a12-c44571425c7c-ovnkube-config\") pod \"ovnkube-node-ttdnm\" (UID: \"6cc73aad-5b1d-4274-8a12-c44571425c7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ttdnm"
Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.344578 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6cc73aad-5b1d-4274-8a12-c44571425c7c-ovn-node-metrics-cert\") pod \"ovnkube-node-ttdnm\" (UID: \"6cc73aad-5b1d-4274-8a12-c44571425c7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ttdnm"
Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.357287 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rckv\" (UniqueName: \"kubernetes.io/projected/6cc73aad-5b1d-4274-8a12-c44571425c7c-kube-api-access-7rckv\") pod \"ovnkube-node-ttdnm\" (UID: \"6cc73aad-5b1d-4274-8a12-c44571425c7c\") " pod="openshift-ovn-kubernetes/ovnkube-node-ttdnm"
Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.499202 4958 scope.go:117] "RemoveContainer" containerID="3511a58e1529f892844574e8b2faceca960108d44afb02aa4f882d19471c5519"
Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.518048 4958 scope.go:117] "RemoveContainer" containerID="73bec7ebc476d53009c9e97d60b92a1f63469eb422461d79727d6b5234b5ce4d"
Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.542001 4958 scope.go:117] "RemoveContainer" containerID="3511a58e1529f892844574e8b2faceca960108d44afb02aa4f882d19471c5519"
Dec 06 05:41:49 crc kubenswrapper[4958]: E1206 05:41:49.542396 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3511a58e1529f892844574e8b2faceca960108d44afb02aa4f882d19471c5519\": container with ID starting with 3511a58e1529f892844574e8b2faceca960108d44afb02aa4f882d19471c5519 not found: ID does not exist" containerID="3511a58e1529f892844574e8b2faceca960108d44afb02aa4f882d19471c5519"
Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.542443 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3511a58e1529f892844574e8b2faceca960108d44afb02aa4f882d19471c5519"} err="failed to get container status \"3511a58e1529f892844574e8b2faceca960108d44afb02aa4f882d19471c5519\": rpc error: code = NotFound desc = could not find container \"3511a58e1529f892844574e8b2faceca960108d44afb02aa4f882d19471c5519\": container with ID starting with 3511a58e1529f892844574e8b2faceca960108d44afb02aa4f882d19471c5519 not found: ID does not exist"
Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.542509 4958 scope.go:117] "RemoveContainer" containerID="90354a506bfb2c066060aaa6c9c48c8fe36acc7b3fb7f76905c7aa2622c8c93e"
Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.545341 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-ttdnm"
Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.566192 4958 scope.go:117] "RemoveContainer" containerID="f87617d1f4b9c7ad893a539ddf123f3e213d38e44c084441f565d6535078c7f0"
Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.588907 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-f4swt"]
Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.589982 4958 scope.go:117] "RemoveContainer" containerID="08f6d7aa9cdfe8211f420e0a25851c58e125bd09b9d90ad4724703cc1112e45b"
Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.595556 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-f4swt"]
Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.603644 4958 scope.go:117] "RemoveContainer" containerID="e6e7454c626f79f7ef231e33e2917e2a65265fc21ccf0bb078c9b7f5b3cec240"
Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.621102 4958 scope.go:117] "RemoveContainer" containerID="9c7cf12b8a7e5da199c6d8c5bdeb41243c911cef6cab0bac81e407502a8ff144"
Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.634775 4958 scope.go:117] "RemoveContainer" containerID="473d669d19d2507144f95718dc331ea994a36254f7f7b6e2b262a0e9cb46bfc2"
Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.654826 4958 scope.go:117] "RemoveContainer" containerID="1ac1117aba254b7e048d9dd3feff2587d218f6d7a6f4ccbe455eb2b19e534aa7"
Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.673989 4958 scope.go:117] "RemoveContainer" containerID="f16219fb96bfa08d1c4e1fbacac446f07ea75a221d1a6a4eddadd9e587f75ea9"
Dec 06 05:41:49 crc kubenswrapper[4958]: I1206 05:41:49.769352 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c75c3b8-96d9-442e-b3c4-92d10ad33929" path="/var/lib/kubelet/pods/4c75c3b8-96d9-442e-b3c4-92d10ad33929/volumes"
Dec 06 05:41:50 crc kubenswrapper[4958]: I1206 05:41:50.257683 4958 generic.go:334] "Generic (PLEG): container finished" podID="6cc73aad-5b1d-4274-8a12-c44571425c7c" containerID="e393250b647f85a97cd5db8b6e5ce50ef517408c3dcd4f4a8f92ffe0e37c0d43" exitCode=0
Dec 06 05:41:50 crc kubenswrapper[4958]: I1206 05:41:50.257763 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ttdnm" event={"ID":"6cc73aad-5b1d-4274-8a12-c44571425c7c","Type":"ContainerDied","Data":"e393250b647f85a97cd5db8b6e5ce50ef517408c3dcd4f4a8f92ffe0e37c0d43"}
Dec 06 05:41:50 crc kubenswrapper[4958]: I1206 05:41:50.257798 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ttdnm" event={"ID":"6cc73aad-5b1d-4274-8a12-c44571425c7c","Type":"ContainerStarted","Data":"c03c74582bd122aa1bdccef1cf0562672b4fdcc63221a3a7b8e66a1a2f342c4b"}
Dec 06 05:41:50 crc kubenswrapper[4958]: I1206 05:41:50.260648 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-wr7h5_fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7/kube-multus/2.log"
Dec 06 05:41:50 crc kubenswrapper[4958]: I1206 05:41:50.260698 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-wr7h5" event={"ID":"fcf2a0b3-cf73-4640-8e5e-c3b6a80beef7","Type":"ContainerStarted","Data":"82220b1d1db2066eec64d334397542674d4a0da50dc886c3bd6faac38fa32be2"}
Dec 06 05:41:52 crc kubenswrapper[4958]: I1206 05:41:52.275059 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ttdnm" event={"ID":"6cc73aad-5b1d-4274-8a12-c44571425c7c","Type":"ContainerStarted","Data":"2e27072923a0ac6e4729228e0b153029c366ca228798845574627739b1fbd73c"}
Dec 06 05:41:53 crc kubenswrapper[4958]: I1206 05:41:53.286402 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ttdnm" event={"ID":"6cc73aad-5b1d-4274-8a12-c44571425c7c","Type":"ContainerStarted","Data":"86c4e8a452c41502fef126198cc17933ee9de7310096c1554594dbe4d9cf115e"}
Dec 06 05:41:54 crc kubenswrapper[4958]: I1206 05:41:54.292797 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ttdnm" event={"ID":"6cc73aad-5b1d-4274-8a12-c44571425c7c","Type":"ContainerStarted","Data":"6fd1449aa5b5f9ca25b7bd5117b8b61b95b5cca696ae77921dd57e9b257d8a7e"}
Dec 06 05:41:55 crc kubenswrapper[4958]: I1206 05:41:55.299955 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ttdnm" event={"ID":"6cc73aad-5b1d-4274-8a12-c44571425c7c","Type":"ContainerStarted","Data":"6c5ad31f6b010b3bdee606ee216979259ef9ce63f66bbbcb0bddf9d2ff484b73"}
Dec 06 05:41:56 crc kubenswrapper[4958]: I1206 05:41:56.307996 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ttdnm" event={"ID":"6cc73aad-5b1d-4274-8a12-c44571425c7c","Type":"ContainerStarted","Data":"8d7141a0d0d3628ccfcb6458719442f53429866c1cd27073157a06dc6ff25b49"}
Dec 06 05:41:58 crc kubenswrapper[4958]: I1206 05:41:58.320276 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ttdnm" event={"ID":"6cc73aad-5b1d-4274-8a12-c44571425c7c","Type":"ContainerStarted","Data":"28b66180bdb7d4eacb4855cd4c1d4d75e2f97bbc2de9da4cf4d1baedad023373"}
Dec 06 05:41:59 crc kubenswrapper[4958]: I1206 05:41:59.328073 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ttdnm" event={"ID":"6cc73aad-5b1d-4274-8a12-c44571425c7c","Type":"ContainerStarted","Data":"d5ced7b6d32b29a16ee42309c294641dff6994ee88a97f8cc0d3fc3fa3372f37"}
Dec 06 05:42:01 crc kubenswrapper[4958]: I1206 05:42:01.341281 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-ttdnm" event={"ID":"6cc73aad-5b1d-4274-8a12-c44571425c7c","Type":"ContainerStarted","Data":"7991a9310a643f3511ded90047394c407f8510e29b1bbef9844634a1151a8460"}
Dec 06 05:42:01 crc kubenswrapper[4958]: I1206 05:42:01.342594 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ttdnm"
Dec 06 05:42:01 crc kubenswrapper[4958]: I1206 05:42:01.342615 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ttdnm"
Dec 06 05:42:01 crc kubenswrapper[4958]: I1206 05:42:01.342634 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-ttdnm"
Dec 06 05:42:01 crc kubenswrapper[4958]: I1206 05:42:01.365249 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-ttdnm"
Dec 06 05:42:01 crc kubenswrapper[4958]: I1206 05:42:01.366952 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-ttdnm"
Dec 06 05:42:01 crc kubenswrapper[4958]: I1206 05:42:01.382615 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-ttdnm" podStartSLOduration=12.382596792 podStartE2EDuration="12.382596792s" podCreationTimestamp="2025-12-06 05:41:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:42:01.377874137 +0000 UTC m=+831.911644930" watchObservedRunningTime="2025-12-06 05:42:01.382596792 +0000 UTC m=+831.916367555"
Dec 06 05:42:09 crc kubenswrapper[4958]: I1206 05:42:09.866139 4958 patch_prober.go:28] interesting pod/machine-config-daemon-5ktnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 06 05:42:09 crc kubenswrapper[4958]: I1206 05:42:09.866632 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 06 05:42:16 crc kubenswrapper[4958]: I1206 05:42:16.407515 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210rq4rz"]
Dec 06 05:42:16 crc kubenswrapper[4958]: I1206 05:42:16.409788 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210rq4rz"
Dec 06 05:42:16 crc kubenswrapper[4958]: I1206 05:42:16.415194 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Dec 06 05:42:16 crc kubenswrapper[4958]: I1206 05:42:16.418519 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210rq4rz"]
Dec 06 05:42:16 crc kubenswrapper[4958]: I1206 05:42:16.586051 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/91c2a000-71d7-41c9-8882-fe2867aad8d9-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210rq4rz\" (UID: \"91c2a000-71d7-41c9-8882-fe2867aad8d9\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210rq4rz"
Dec 06 05:42:16 crc kubenswrapper[4958]: I1206 05:42:16.586086 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/91c2a000-71d7-41c9-8882-fe2867aad8d9-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210rq4rz\" (UID: \"91c2a000-71d7-41c9-8882-fe2867aad8d9\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210rq4rz"
Dec 06 05:42:16 crc kubenswrapper[4958]: I1206 05:42:16.586120 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qczsk\" (UniqueName: \"kubernetes.io/projected/91c2a000-71d7-41c9-8882-fe2867aad8d9-kube-api-access-qczsk\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210rq4rz\" (UID: \"91c2a000-71d7-41c9-8882-fe2867aad8d9\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210rq4rz"
Dec 06 05:42:16 crc kubenswrapper[4958]: I1206 05:42:16.687143 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qczsk\" (UniqueName: \"kubernetes.io/projected/91c2a000-71d7-41c9-8882-fe2867aad8d9-kube-api-access-qczsk\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210rq4rz\" (UID: \"91c2a000-71d7-41c9-8882-fe2867aad8d9\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210rq4rz"
Dec 06 05:42:16 crc kubenswrapper[4958]: I1206 05:42:16.687251 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/91c2a000-71d7-41c9-8882-fe2867aad8d9-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210rq4rz\" (UID: \"91c2a000-71d7-41c9-8882-fe2867aad8d9\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210rq4rz"
Dec 06 05:42:16 crc kubenswrapper[4958]: I1206 05:42:16.687273 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/91c2a000-71d7-41c9-8882-fe2867aad8d9-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210rq4rz\" (UID: \"91c2a000-71d7-41c9-8882-fe2867aad8d9\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210rq4rz"
Dec 06 05:42:16 crc kubenswrapper[4958]: I1206 05:42:16.687762 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/91c2a000-71d7-41c9-8882-fe2867aad8d9-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210rq4rz\" (UID: \"91c2a000-71d7-41c9-8882-fe2867aad8d9\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210rq4rz"
Dec 06 05:42:16 crc kubenswrapper[4958]: I1206 05:42:16.687943 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/91c2a000-71d7-41c9-8882-fe2867aad8d9-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210rq4rz\" (UID: \"91c2a000-71d7-41c9-8882-fe2867aad8d9\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210rq4rz"
Dec 06 05:42:16 crc kubenswrapper[4958]: I1206 05:42:16.721629 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qczsk\" (UniqueName: \"kubernetes.io/projected/91c2a000-71d7-41c9-8882-fe2867aad8d9-kube-api-access-qczsk\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210rq4rz\" (UID: \"91c2a000-71d7-41c9-8882-fe2867aad8d9\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210rq4rz"
Dec 06 05:42:16 crc kubenswrapper[4958]: I1206 05:42:16.732986 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210rq4rz"
Dec 06 05:42:17 crc kubenswrapper[4958]: I1206 05:42:17.196330 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210rq4rz"]
Dec 06 05:42:17 crc kubenswrapper[4958]: I1206 05:42:17.422431 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210rq4rz" event={"ID":"91c2a000-71d7-41c9-8882-fe2867aad8d9","Type":"ContainerStarted","Data":"0ef3ffafe1c405dd5a475703a0424edcb5ac078b514098031abd4aaab826b944"}
Dec 06 05:42:18 crc kubenswrapper[4958]: I1206 05:42:18.737371 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-592g7"]
Dec 06 05:42:18 crc kubenswrapper[4958]: I1206 05:42:18.739161 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-592g7"
Dec 06 05:42:18 crc kubenswrapper[4958]: I1206 05:42:18.756015 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-592g7"]
Dec 06 05:42:18 crc kubenswrapper[4958]: I1206 05:42:18.817229 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkrl6\" (UniqueName: \"kubernetes.io/projected/08e03fc4-2207-47f1-b34a-bbfc76b3ec4f-kube-api-access-pkrl6\") pod \"redhat-operators-592g7\" (UID: \"08e03fc4-2207-47f1-b34a-bbfc76b3ec4f\") " pod="openshift-marketplace/redhat-operators-592g7"
Dec 06 05:42:18 crc kubenswrapper[4958]: I1206 05:42:18.817364 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08e03fc4-2207-47f1-b34a-bbfc76b3ec4f-utilities\") pod \"redhat-operators-592g7\" (UID: \"08e03fc4-2207-47f1-b34a-bbfc76b3ec4f\") " pod="openshift-marketplace/redhat-operators-592g7"
Dec 06 05:42:18 crc kubenswrapper[4958]: I1206 05:42:18.817400 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08e03fc4-2207-47f1-b34a-bbfc76b3ec4f-catalog-content\") pod \"redhat-operators-592g7\" (UID: \"08e03fc4-2207-47f1-b34a-bbfc76b3ec4f\") " pod="openshift-marketplace/redhat-operators-592g7"
Dec 06 05:42:18 crc kubenswrapper[4958]: I1206 05:42:18.917990 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08e03fc4-2207-47f1-b34a-bbfc76b3ec4f-catalog-content\") pod \"redhat-operators-592g7\" (UID: \"08e03fc4-2207-47f1-b34a-bbfc76b3ec4f\") " pod="openshift-marketplace/redhat-operators-592g7"
Dec 06 05:42:18 crc kubenswrapper[4958]: I1206 05:42:18.918076 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08e03fc4-2207-47f1-b34a-bbfc76b3ec4f-utilities\") pod \"redhat-operators-592g7\" (UID: \"08e03fc4-2207-47f1-b34a-bbfc76b3ec4f\") " pod="openshift-marketplace/redhat-operators-592g7"
Dec 06 05:42:18 crc kubenswrapper[4958]: I1206 05:42:18.918201 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkrl6\" (UniqueName: \"kubernetes.io/projected/08e03fc4-2207-47f1-b34a-bbfc76b3ec4f-kube-api-access-pkrl6\") pod \"redhat-operators-592g7\" (UID: \"08e03fc4-2207-47f1-b34a-bbfc76b3ec4f\") " pod="openshift-marketplace/redhat-operators-592g7"
Dec 06 05:42:18 crc kubenswrapper[4958]: I1206 05:42:18.918929 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08e03fc4-2207-47f1-b34a-bbfc76b3ec4f-utilities\") pod \"redhat-operators-592g7\" (UID: \"08e03fc4-2207-47f1-b34a-bbfc76b3ec4f\") " pod="openshift-marketplace/redhat-operators-592g7"
Dec 06 05:42:18 crc kubenswrapper[4958]: I1206 05:42:18.919031 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08e03fc4-2207-47f1-b34a-bbfc76b3ec4f-catalog-content\") pod \"redhat-operators-592g7\" (UID: \"08e03fc4-2207-47f1-b34a-bbfc76b3ec4f\") " pod="openshift-marketplace/redhat-operators-592g7"
Dec 06 05:42:18 crc kubenswrapper[4958]: I1206 05:42:18.937838 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkrl6\" (UniqueName: \"kubernetes.io/projected/08e03fc4-2207-47f1-b34a-bbfc76b3ec4f-kube-api-access-pkrl6\") pod \"redhat-operators-592g7\" (UID: \"08e03fc4-2207-47f1-b34a-bbfc76b3ec4f\") " pod="openshift-marketplace/redhat-operators-592g7"
Dec 06 05:42:19 crc kubenswrapper[4958]: I1206 05:42:19.070932 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-592g7"
Dec 06 05:42:19 crc kubenswrapper[4958]: I1206 05:42:19.255194 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-592g7"]
Dec 06 05:42:19 crc kubenswrapper[4958]: W1206 05:42:19.259960 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod08e03fc4_2207_47f1_b34a_bbfc76b3ec4f.slice/crio-5bf8bec569b25d84c2ce488ab0e0a57d8dbe604d6a58c8107ccf8de4d8566e82 WatchSource:0}: Error finding container 5bf8bec569b25d84c2ce488ab0e0a57d8dbe604d6a58c8107ccf8de4d8566e82: Status 404 returned error can't find the container with id 5bf8bec569b25d84c2ce488ab0e0a57d8dbe604d6a58c8107ccf8de4d8566e82
Dec 06 05:42:19 crc kubenswrapper[4958]: I1206 05:42:19.435624 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-592g7" event={"ID":"08e03fc4-2207-47f1-b34a-bbfc76b3ec4f","Type":"ContainerStarted","Data":"5bf8bec569b25d84c2ce488ab0e0a57d8dbe604d6a58c8107ccf8de4d8566e82"}
Dec 06 05:42:19 crc kubenswrapper[4958]: I1206 05:42:19.568669 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-ttdnm"
Dec 06 05:42:24 crc kubenswrapper[4958]: I1206 05:42:24.468560 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210rq4rz" event={"ID":"91c2a000-71d7-41c9-8882-fe2867aad8d9","Type":"ContainerStarted","Data":"435e4d068ff07544448d63173a43f20b6334833cd845dab1d36116ff51cb840e"}
Dec 06 05:42:25 crc kubenswrapper[4958]: I1206 05:42:25.476421 4958 generic.go:334] "Generic (PLEG): container finished" podID="08e03fc4-2207-47f1-b34a-bbfc76b3ec4f" containerID="d98a29d7e9fe28ab7b4f8ca0ae6ec0944ffea92e0c8169ca05c084d93bcbc3de" exitCode=0
Dec 06 05:42:25 crc kubenswrapper[4958]: I1206 05:42:25.476501 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-592g7" event={"ID":"08e03fc4-2207-47f1-b34a-bbfc76b3ec4f","Type":"ContainerDied","Data":"d98a29d7e9fe28ab7b4f8ca0ae6ec0944ffea92e0c8169ca05c084d93bcbc3de"}
Dec 06 05:42:25 crc kubenswrapper[4958]: I1206 05:42:25.477936 4958 generic.go:334] "Generic (PLEG): container finished" podID="91c2a000-71d7-41c9-8882-fe2867aad8d9" containerID="435e4d068ff07544448d63173a43f20b6334833cd845dab1d36116ff51cb840e" exitCode=0
Dec 06 05:42:25 crc kubenswrapper[4958]: I1206 05:42:25.477952 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210rq4rz" event={"ID":"91c2a000-71d7-41c9-8882-fe2867aad8d9","Type":"ContainerDied","Data":"435e4d068ff07544448d63173a43f20b6334833cd845dab1d36116ff51cb840e"}
Dec 06 05:42:27 crc kubenswrapper[4958]: I1206 05:42:27.489376 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-592g7" event={"ID":"08e03fc4-2207-47f1-b34a-bbfc76b3ec4f","Type":"ContainerStarted","Data":"d0ace1f86ea7629a586b4ea6a92c1273b76d67c25fb919822301c5c7215ae802"}
Dec 06 05:42:28 crc kubenswrapper[4958]: I1206 05:42:28.496620 4958 generic.go:334] "Generic (PLEG): container finished" podID="08e03fc4-2207-47f1-b34a-bbfc76b3ec4f" containerID="d0ace1f86ea7629a586b4ea6a92c1273b76d67c25fb919822301c5c7215ae802" exitCode=0
Dec 06 05:42:28 crc kubenswrapper[4958]: I1206 05:42:28.496707 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-592g7" event={"ID":"08e03fc4-2207-47f1-b34a-bbfc76b3ec4f","Type":"ContainerDied","Data":"d0ace1f86ea7629a586b4ea6a92c1273b76d67c25fb919822301c5c7215ae802"}
Dec 06 05:42:28 crc kubenswrapper[4958]: I1206 05:42:28.501710 4958 generic.go:334] "Generic (PLEG): container finished" podID="91c2a000-71d7-41c9-8882-fe2867aad8d9" containerID="2317a91a98bccefc26f5566a42dcf6887c1ffe2f5bdafbed98a1d9859fe84017" exitCode=0
Dec 06 05:42:28 crc kubenswrapper[4958]: I1206 05:42:28.501759 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210rq4rz" event={"ID":"91c2a000-71d7-41c9-8882-fe2867aad8d9","Type":"ContainerDied","Data":"2317a91a98bccefc26f5566a42dcf6887c1ffe2f5bdafbed98a1d9859fe84017"}
Dec 06 05:42:29 crc kubenswrapper[4958]: I1206 05:42:29.509590 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210rq4rz" event={"ID":"91c2a000-71d7-41c9-8882-fe2867aad8d9","Type":"ContainerStarted","Data":"d7f2ecaafde73e75c955672f9bf666232a1b361434c578b5a62af4cc10981654"}
Dec 06 05:42:29 crc kubenswrapper[4958]: I1206 05:42:29.530110 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210rq4rz" podStartSLOduration=11.960941317 podStartE2EDuration="13.530085577s" podCreationTimestamp="2025-12-06 05:42:16 +0000 UTC" firstStartedPulling="2025-12-06 05:42:26.484906993 +0000 UTC m=+857.018677766" lastFinishedPulling="2025-12-06 05:42:28.054051263 +0000 UTC m=+858.587822026" observedRunningTime="2025-12-06 05:42:29.525347142 +0000 UTC m=+860.059117905" watchObservedRunningTime="2025-12-06 05:42:29.530085577 +0000 UTC m=+860.063856360"
Dec 06 05:42:31 crc kubenswrapper[4958]: I1206 05:42:31.522655 4958 generic.go:334] "Generic (PLEG): container finished" podID="91c2a000-71d7-41c9-8882-fe2867aad8d9" containerID="d7f2ecaafde73e75c955672f9bf666232a1b361434c578b5a62af4cc10981654" exitCode=0
Dec 06 05:42:31 crc kubenswrapper[4958]: I1206 05:42:31.522732 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210rq4rz" event={"ID":"91c2a000-71d7-41c9-8882-fe2867aad8d9","Type":"ContainerDied","Data":"d7f2ecaafde73e75c955672f9bf666232a1b361434c578b5a62af4cc10981654"}
Dec 06 05:42:32 crc kubenswrapper[4958]: I1206 05:42:32.532282 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-592g7" event={"ID":"08e03fc4-2207-47f1-b34a-bbfc76b3ec4f","Type":"ContainerStarted","Data":"257ed477fc8107c9d8598991a906fd70cadb0b656b337ed07008503957704fc7"}
Dec 06 05:42:32 crc kubenswrapper[4958]: I1206 05:42:32.808747 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210rq4rz"
Dec 06 05:42:32 crc kubenswrapper[4958]: I1206 05:42:32.993372 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qczsk\" (UniqueName: \"kubernetes.io/projected/91c2a000-71d7-41c9-8882-fe2867aad8d9-kube-api-access-qczsk\") pod \"91c2a000-71d7-41c9-8882-fe2867aad8d9\" (UID: \"91c2a000-71d7-41c9-8882-fe2867aad8d9\") "
Dec 06 05:42:32 crc kubenswrapper[4958]: I1206 05:42:32.993500 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/91c2a000-71d7-41c9-8882-fe2867aad8d9-bundle\") pod \"91c2a000-71d7-41c9-8882-fe2867aad8d9\" (UID: \"91c2a000-71d7-41c9-8882-fe2867aad8d9\") "
Dec 06 05:42:32 crc kubenswrapper[4958]: I1206 05:42:32.993570 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/91c2a000-71d7-41c9-8882-fe2867aad8d9-util\") pod \"91c2a000-71d7-41c9-8882-fe2867aad8d9\" (UID: \"91c2a000-71d7-41c9-8882-fe2867aad8d9\") "
Dec 06 05:42:33 crc kubenswrapper[4958]: I1206 05:42:33.004512 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91c2a000-71d7-41c9-8882-fe2867aad8d9-util" (OuterVolumeSpecName: "util") pod "91c2a000-71d7-41c9-8882-fe2867aad8d9" (UID: "91c2a000-71d7-41c9-8882-fe2867aad8d9"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 06 05:42:33 crc kubenswrapper[4958]: I1206 05:42:33.005683 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/91c2a000-71d7-41c9-8882-fe2867aad8d9-bundle" (OuterVolumeSpecName: "bundle") pod "91c2a000-71d7-41c9-8882-fe2867aad8d9" (UID: "91c2a000-71d7-41c9-8882-fe2867aad8d9"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 06 05:42:33 crc kubenswrapper[4958]: I1206 05:42:33.011268 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91c2a000-71d7-41c9-8882-fe2867aad8d9-kube-api-access-qczsk" (OuterVolumeSpecName: "kube-api-access-qczsk") pod "91c2a000-71d7-41c9-8882-fe2867aad8d9" (UID: "91c2a000-71d7-41c9-8882-fe2867aad8d9"). InnerVolumeSpecName "kube-api-access-qczsk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 05:42:33 crc kubenswrapper[4958]: I1206 05:42:33.095263 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qczsk\" (UniqueName: \"kubernetes.io/projected/91c2a000-71d7-41c9-8882-fe2867aad8d9-kube-api-access-qczsk\") on node \"crc\" DevicePath \"\""
Dec 06 05:42:33 crc kubenswrapper[4958]: I1206 05:42:33.095956 4958 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/91c2a000-71d7-41c9-8882-fe2867aad8d9-bundle\") on node \"crc\" DevicePath \"\""
Dec 06 05:42:33 crc kubenswrapper[4958]: I1206 05:42:33.095992 4958 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/91c2a000-71d7-41c9-8882-fe2867aad8d9-util\") on node \"crc\" DevicePath \"\""
Dec 06 05:42:33 crc kubenswrapper[4958]: I1206 05:42:33.541837 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210rq4rz" event={"ID":"91c2a000-71d7-41c9-8882-fe2867aad8d9","Type":"ContainerDied","Data":"0ef3ffafe1c405dd5a475703a0424edcb5ac078b514098031abd4aaab826b944"}
Dec 06 05:42:33 crc kubenswrapper[4958]: I1206 05:42:33.541881 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ef3ffafe1c405dd5a475703a0424edcb5ac078b514098031abd4aaab826b944"
Dec 06 05:42:33 crc kubenswrapper[4958]: I1206 05:42:33.541912 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210rq4rz"
Dec 06 05:42:39 crc kubenswrapper[4958]: I1206 05:42:39.608130 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-592g7" podStartSLOduration=16.326344772 podStartE2EDuration="21.608111991s" podCreationTimestamp="2025-12-06 05:42:18 +0000 UTC" firstStartedPulling="2025-12-06 05:42:26.485483729 +0000 UTC m=+857.019254492" lastFinishedPulling="2025-12-06 05:42:31.767250928 +0000 UTC m=+862.301021711" observedRunningTime="2025-12-06 05:42:39.60807875 +0000 UTC m=+870.141849513" watchObservedRunningTime="2025-12-06 05:42:39.608111991 +0000 UTC m=+870.141882754"
Dec 06 05:42:39 crc kubenswrapper[4958]: I1206 05:42:39.866264 4958 patch_prober.go:28] interesting pod/machine-config-daemon-5ktnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 06 05:42:39 crc kubenswrapper[4958]: I1206 05:42:39.866340 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 06 05:42:39 crc kubenswrapper[4958]: I1206 05:42:39.866391 4958 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh"
Dec 06 05:42:39 crc kubenswrapper[4958]: I1206 05:42:39.867055 4958 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"54b4894bbf7e81e569496756397053ebc513bca9497efd2cd9161604c907d3ec"} pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 06 05:42:39 crc kubenswrapper[4958]: I1206 05:42:39.867125 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" containerName="machine-config-daemon" containerID="cri-o://54b4894bbf7e81e569496756397053ebc513bca9497efd2cd9161604c907d3ec" gracePeriod=600
Dec 06 05:42:40 crc kubenswrapper[4958]: E1206 05:42:40.244355 4958 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc13528c0_da5d_4d55_9155_2c29c33edfc4.slice/crio-conmon-54b4894bbf7e81e569496756397053ebc513bca9497efd2cd9161604c907d3ec.scope\": RecentStats: unable to find data in memory cache]"
Dec 06 05:42:40 crc kubenswrapper[4958]: I1206 05:42:40.580358 4958 generic.go:334] "Generic (PLEG): container finished" podID="c13528c0-da5d-4d55-9155-2c29c33edfc4" containerID="54b4894bbf7e81e569496756397053ebc513bca9497efd2cd9161604c907d3ec" exitCode=0
Dec 06 05:42:40 crc kubenswrapper[4958]: I1206 05:42:40.580625 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" event={"ID":"c13528c0-da5d-4d55-9155-2c29c33edfc4","Type":"ContainerDied","Data":"54b4894bbf7e81e569496756397053ebc513bca9497efd2cd9161604c907d3ec"}
Dec 06 05:42:40 crc kubenswrapper[4958]: I1206 05:42:40.580657 4958 scope.go:117] "RemoveContainer" containerID="1236d18f68c8851c320373ee86fe651a1f296f33f07d8ecc4e000f5e2ab900bc"
Dec 06 05:42:42 crc kubenswrapper[4958]: I1206 05:42:42.594973 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" event={"ID":"c13528c0-da5d-4d55-9155-2c29c33edfc4","Type":"ContainerStarted","Data":"447872a5a977e9a4540447295b9e8d682cfa59e938b820b5b11dc85cbe8a56f7"}
Dec 06 05:42:43 crc kubenswrapper[4958]: I1206 05:42:43.754210 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-5kc79"]
Dec 06 05:42:43 crc kubenswrapper[4958]: E1206 05:42:43.754828 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91c2a000-71d7-41c9-8882-fe2867aad8d9" containerName="pull"
Dec 06 05:42:43 crc kubenswrapper[4958]: I1206 05:42:43.754845 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="91c2a000-71d7-41c9-8882-fe2867aad8d9" containerName="pull"
Dec 06 05:42:43 crc kubenswrapper[4958]: E1206 05:42:43.754854 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91c2a000-71d7-41c9-8882-fe2867aad8d9" containerName="extract"
Dec 06 05:42:43 crc kubenswrapper[4958]: I1206 05:42:43.754862 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="91c2a000-71d7-41c9-8882-fe2867aad8d9" containerName="extract"
Dec 06 05:42:43 crc kubenswrapper[4958]: E1206 05:42:43.754879 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91c2a000-71d7-41c9-8882-fe2867aad8d9" containerName="util"
Dec 06 05:42:43 crc kubenswrapper[4958]: I1206 05:42:43.754889 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="91c2a000-71d7-41c9-8882-fe2867aad8d9" containerName="util"
Dec 06 05:42:43 crc kubenswrapper[4958]: I1206 05:42:43.754996 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="91c2a000-71d7-41c9-8882-fe2867aad8d9" containerName="extract"
Dec 06 05:42:43 crc kubenswrapper[4958]: I1206 05:42:43.755466 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-5kc79"
Dec 06 05:42:43 crc kubenswrapper[4958]: I1206 05:42:43.757563 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt"
Dec 06 05:42:43 crc kubenswrapper[4958]: I1206 05:42:43.757685 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-8vdlz"
Dec 06 05:42:43 crc kubenswrapper[4958]: I1206 05:42:43.757935 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt"
Dec 06 05:42:43 crc kubenswrapper[4958]: I1206 05:42:43.770377 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-5kc79"]
Dec 06 05:42:43 crc kubenswrapper[4958]: I1206 05:42:43.835953 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqdph\" (UniqueName: \"kubernetes.io/projected/102c7aef-7a9f-4838-817d-a410a9e1cea1-kube-api-access-xqdph\") pod \"obo-prometheus-operator-668cf9dfbb-5kc79\" (UID: \"102c7aef-7a9f-4838-817d-a410a9e1cea1\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-5kc79"
Dec 06 05:42:43 crc kubenswrapper[4958]: I1206 05:42:43.866008 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6fbb8c6476-6b4st"]
Dec 06 05:42:43 crc kubenswrapper[4958]: I1206 05:42:43.866745 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6fbb8c6476-6b4st"
Dec 06 05:42:43 crc kubenswrapper[4958]: I1206 05:42:43.869295 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert"
Dec 06 05:42:43 crc kubenswrapper[4958]: I1206 05:42:43.869489 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-9shc5"
Dec 06 05:42:43 crc kubenswrapper[4958]: I1206 05:42:43.877056 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6fbb8c6476-hlzpg"]
Dec 06 05:42:43 crc kubenswrapper[4958]: I1206 05:42:43.877749 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6fbb8c6476-hlzpg"
Dec 06 05:42:43 crc kubenswrapper[4958]: I1206 05:42:43.885528 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6fbb8c6476-6b4st"]
Dec 06 05:42:43 crc kubenswrapper[4958]: I1206 05:42:43.907539 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6fbb8c6476-hlzpg"]
Dec 06 05:42:43 crc kubenswrapper[4958]: I1206 05:42:43.937422 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqdph\" (UniqueName: \"kubernetes.io/projected/102c7aef-7a9f-4838-817d-a410a9e1cea1-kube-api-access-xqdph\") pod \"obo-prometheus-operator-668cf9dfbb-5kc79\" (UID: \"102c7aef-7a9f-4838-817d-a410a9e1cea1\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-5kc79"
Dec 06 05:42:43 crc kubenswrapper[4958]: I1206 05:42:43.960023 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqdph\" (UniqueName: \"kubernetes.io/projected/102c7aef-7a9f-4838-817d-a410a9e1cea1-kube-api-access-xqdph\") pod \"obo-prometheus-operator-668cf9dfbb-5kc79\" (UID: \"102c7aef-7a9f-4838-817d-a410a9e1cea1\") " pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-5kc79"
Dec 06 05:42:44 crc kubenswrapper[4958]: I1206 05:42:44.038248 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9bdcc25c-5699-422e-9d03-b00a80ec8efa-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6fbb8c6476-6b4st\" (UID: \"9bdcc25c-5699-422e-9d03-b00a80ec8efa\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6fbb8c6476-6b4st"
Dec 06 05:42:44 crc kubenswrapper[4958]: I1206 05:42:44.038679 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/12946b50-3866-4458-a26a-23987fdc0c1c-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6fbb8c6476-hlzpg\" (UID: \"12946b50-3866-4458-a26a-23987fdc0c1c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6fbb8c6476-hlzpg"
Dec 06 05:42:44 crc kubenswrapper[4958]: I1206 05:42:44.038750 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9bdcc25c-5699-422e-9d03-b00a80ec8efa-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6fbb8c6476-6b4st\" (UID: \"9bdcc25c-5699-422e-9d03-b00a80ec8efa\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6fbb8c6476-6b4st"
Dec 06 05:42:44 crc kubenswrapper[4958]: I1206 05:42:44.038773 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/12946b50-3866-4458-a26a-23987fdc0c1c-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6fbb8c6476-hlzpg\" (UID: \"12946b50-3866-4458-a26a-23987fdc0c1c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6fbb8c6476-hlzpg"
Dec 06 05:42:44 crc kubenswrapper[4958]: I1206 05:42:44.073730 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-g69b7"]
Dec 06 05:42:44 crc kubenswrapper[4958]: I1206 05:42:44.073931 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-5kc79"
Dec 06 05:42:44 crc kubenswrapper[4958]: I1206 05:42:44.074909 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-g69b7"
Dec 06 05:42:44 crc kubenswrapper[4958]: I1206 05:42:44.077282 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-wbbh9"
Dec 06 05:42:44 crc kubenswrapper[4958]: I1206 05:42:44.077582 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls"
Dec 06 05:42:44 crc kubenswrapper[4958]: I1206 05:42:44.093132 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-g69b7"]
Dec 06 05:42:44 crc kubenswrapper[4958]: I1206 05:42:44.139614 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9bdcc25c-5699-422e-9d03-b00a80ec8efa-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6fbb8c6476-6b4st\" (UID: \"9bdcc25c-5699-422e-9d03-b00a80ec8efa\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6fbb8c6476-6b4st"
Dec 06 05:42:44 crc kubenswrapper[4958]: I1206 05:42:44.139656 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/c280fe1d-3450-44ce-91c6-690601d34e98-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-g69b7\" (UID: \"c280fe1d-3450-44ce-91c6-690601d34e98\") " pod="openshift-operators/observability-operator-d8bb48f5d-g69b7"
Dec 06 05:42:44 crc kubenswrapper[4958]: I1206 05:42:44.139688 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/12946b50-3866-4458-a26a-23987fdc0c1c-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6fbb8c6476-hlzpg\" (UID: \"12946b50-3866-4458-a26a-23987fdc0c1c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6fbb8c6476-hlzpg"
Dec 06 05:42:44 crc kubenswrapper[4958]: I1206 05:42:44.139727 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7pbr\" (UniqueName: \"kubernetes.io/projected/c280fe1d-3450-44ce-91c6-690601d34e98-kube-api-access-d7pbr\") pod \"observability-operator-d8bb48f5d-g69b7\" (UID: \"c280fe1d-3450-44ce-91c6-690601d34e98\") " pod="openshift-operators/observability-operator-d8bb48f5d-g69b7"
Dec 06 05:42:44 crc kubenswrapper[4958]: I1206 05:42:44.139768 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9bdcc25c-5699-422e-9d03-b00a80ec8efa-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6fbb8c6476-6b4st\" (UID: \"9bdcc25c-5699-422e-9d03-b00a80ec8efa\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6fbb8c6476-6b4st"
Dec 06 05:42:44 crc kubenswrapper[4958]: I1206 05:42:44.139791 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/12946b50-3866-4458-a26a-23987fdc0c1c-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6fbb8c6476-hlzpg\" (UID: \"12946b50-3866-4458-a26a-23987fdc0c1c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6fbb8c6476-hlzpg"
Dec 06 05:42:44 crc kubenswrapper[4958]: I1206 05:42:44.148107 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/12946b50-3866-4458-a26a-23987fdc0c1c-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6fbb8c6476-hlzpg\" (UID: \"12946b50-3866-4458-a26a-23987fdc0c1c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6fbb8c6476-hlzpg"
Dec 06 05:42:44 crc kubenswrapper[4958]: I1206 05:42:44.148142 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/12946b50-3866-4458-a26a-23987fdc0c1c-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6fbb8c6476-hlzpg\" (UID: \"12946b50-3866-4458-a26a-23987fdc0c1c\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6fbb8c6476-hlzpg"
Dec 06 05:42:44 crc kubenswrapper[4958]: I1206 05:42:44.155689 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9bdcc25c-5699-422e-9d03-b00a80ec8efa-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6fbb8c6476-6b4st\" (UID: \"9bdcc25c-5699-422e-9d03-b00a80ec8efa\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6fbb8c6476-6b4st"
Dec 06 05:42:44 crc kubenswrapper[4958]: I1206 05:42:44.171414 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9bdcc25c-5699-422e-9d03-b00a80ec8efa-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6fbb8c6476-6b4st\" (UID: \"9bdcc25c-5699-422e-9d03-b00a80ec8efa\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6fbb8c6476-6b4st"
Dec 06 05:42:44 crc kubenswrapper[4958]: I1206 05:42:44.180451 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6fbb8c6476-6b4st"
Dec 06 05:42:44 crc kubenswrapper[4958]: I1206 05:42:44.191716 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6fbb8c6476-hlzpg"
Dec 06 05:42:44 crc kubenswrapper[4958]: I1206 05:42:44.246160 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/c280fe1d-3450-44ce-91c6-690601d34e98-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-g69b7\" (UID: \"c280fe1d-3450-44ce-91c6-690601d34e98\") " pod="openshift-operators/observability-operator-d8bb48f5d-g69b7"
Dec 06 05:42:44 crc kubenswrapper[4958]: I1206 05:42:44.246244 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7pbr\" (UniqueName: \"kubernetes.io/projected/c280fe1d-3450-44ce-91c6-690601d34e98-kube-api-access-d7pbr\") pod \"observability-operator-d8bb48f5d-g69b7\" (UID: \"c280fe1d-3450-44ce-91c6-690601d34e98\") " pod="openshift-operators/observability-operator-d8bb48f5d-g69b7"
Dec 06 05:42:44 crc kubenswrapper[4958]: I1206 05:42:44.258198 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/c280fe1d-3450-44ce-91c6-690601d34e98-observability-operator-tls\") pod \"observability-operator-d8bb48f5d-g69b7\" (UID: \"c280fe1d-3450-44ce-91c6-690601d34e98\") " pod="openshift-operators/observability-operator-d8bb48f5d-g69b7"
Dec 06 05:42:44 crc kubenswrapper[4958]: I1206 05:42:44.267901 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7pbr\" (UniqueName: \"kubernetes.io/projected/c280fe1d-3450-44ce-91c6-690601d34e98-kube-api-access-d7pbr\") pod \"observability-operator-d8bb48f5d-g69b7\" (UID: \"c280fe1d-3450-44ce-91c6-690601d34e98\") " pod="openshift-operators/observability-operator-d8bb48f5d-g69b7"
Dec 06 05:42:44 crc kubenswrapper[4958]: I1206 05:42:44.305048 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5446b9c989-qgdwb"]
Dec 06 05:42:44 crc kubenswrapper[4958]: I1206 05:42:44.305801 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-qgdwb"
Dec 06 05:42:44 crc kubenswrapper[4958]: I1206 05:42:44.316033 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-4qwj9"
Dec 06 05:42:44 crc kubenswrapper[4958]: I1206 05:42:44.322293 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5446b9c989-qgdwb"]
Dec 06 05:42:44 crc kubenswrapper[4958]: I1206 05:42:44.440788 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-d8bb48f5d-g69b7"
Dec 06 05:42:44 crc kubenswrapper[4958]: I1206 05:42:44.460902 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/a5825ccb-1463-4fc4-87c9-504ef6195da6-openshift-service-ca\") pod \"perses-operator-5446b9c989-qgdwb\" (UID: \"a5825ccb-1463-4fc4-87c9-504ef6195da6\") " pod="openshift-operators/perses-operator-5446b9c989-qgdwb"
Dec 06 05:42:44 crc kubenswrapper[4958]: I1206 05:42:44.461015 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6nhm\" (UniqueName: \"kubernetes.io/projected/a5825ccb-1463-4fc4-87c9-504ef6195da6-kube-api-access-h6nhm\") pod \"perses-operator-5446b9c989-qgdwb\" (UID: \"a5825ccb-1463-4fc4-87c9-504ef6195da6\") " pod="openshift-operators/perses-operator-5446b9c989-qgdwb"
Dec 06 05:42:44 crc kubenswrapper[4958]: I1206 05:42:44.507080 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6fbb8c6476-hlzpg"]
Dec 06 05:42:44 crc kubenswrapper[4958]: I1206 05:42:44.516135 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-668cf9dfbb-5kc79"]
Dec 06 05:42:44 crc kubenswrapper[4958]: I1206 05:42:44.557442 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6fbb8c6476-6b4st"]
Dec 06 05:42:44 crc kubenswrapper[4958]: I1206 05:42:44.563775 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6nhm\" (UniqueName: \"kubernetes.io/projected/a5825ccb-1463-4fc4-87c9-504ef6195da6-kube-api-access-h6nhm\") pod \"perses-operator-5446b9c989-qgdwb\" (UID: \"a5825ccb-1463-4fc4-87c9-504ef6195da6\") " pod="openshift-operators/perses-operator-5446b9c989-qgdwb"
Dec 06 05:42:44 crc kubenswrapper[4958]: I1206 05:42:44.563831 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/a5825ccb-1463-4fc4-87c9-504ef6195da6-openshift-service-ca\") pod \"perses-operator-5446b9c989-qgdwb\" (UID: \"a5825ccb-1463-4fc4-87c9-504ef6195da6\") " pod="openshift-operators/perses-operator-5446b9c989-qgdwb"
Dec 06 05:42:44 crc kubenswrapper[4958]: I1206 05:42:44.565070 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/a5825ccb-1463-4fc4-87c9-504ef6195da6-openshift-service-ca\") pod \"perses-operator-5446b9c989-qgdwb\" (UID: \"a5825ccb-1463-4fc4-87c9-504ef6195da6\") " pod="openshift-operators/perses-operator-5446b9c989-qgdwb"
Dec 06 05:42:44 crc kubenswrapper[4958]: I1206 05:42:44.594548 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6nhm\" (UniqueName: \"kubernetes.io/projected/a5825ccb-1463-4fc4-87c9-504ef6195da6-kube-api-access-h6nhm\") pod \"perses-operator-5446b9c989-qgdwb\" (UID: \"a5825ccb-1463-4fc4-87c9-504ef6195da6\") " pod="openshift-operators/perses-operator-5446b9c989-qgdwb"
Dec 06 05:42:44 crc kubenswrapper[4958]: I1206 05:42:44.614296 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6fbb8c6476-hlzpg" event={"ID":"12946b50-3866-4458-a26a-23987fdc0c1c","Type":"ContainerStarted","Data":"55fd91dfd0eacc159b8d4c3ab040e2db3c992ef6d8f91d18f9aab795e500b4b2"}
Dec 06 05:42:44 crc kubenswrapper[4958]: I1206 05:42:44.615293 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6fbb8c6476-6b4st" event={"ID":"9bdcc25c-5699-422e-9d03-b00a80ec8efa","Type":"ContainerStarted","Data":"4207a94cdb85a10d383879393158d1a7a5fd246ae292df91888fa295fc6ae7b0"}
Dec 06 05:42:44 crc kubenswrapper[4958]: I1206 05:42:44.615999 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-5kc79" event={"ID":"102c7aef-7a9f-4838-817d-a410a9e1cea1","Type":"ContainerStarted","Data":"c061444605d4dd7bd08c9ab41db0ab6d1d47d1944397bca10afad55de542517f"}
Dec 06 05:42:44 crc kubenswrapper[4958]: I1206 05:42:44.650080 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5446b9c989-qgdwb"
Dec 06 05:42:44 crc kubenswrapper[4958]: I1206 05:42:44.696970 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-d8bb48f5d-g69b7"]
Dec 06 05:42:44 crc kubenswrapper[4958]: W1206 05:42:44.711385 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc280fe1d_3450_44ce_91c6_690601d34e98.slice/crio-5e5f8420320fee9d561baeb7fd991c9a4f6b2eb7cd39de7da9d85a868b29f722 WatchSource:0}: Error finding container 5e5f8420320fee9d561baeb7fd991c9a4f6b2eb7cd39de7da9d85a868b29f722: Status 404 returned error can't find the container with id 5e5f8420320fee9d561baeb7fd991c9a4f6b2eb7cd39de7da9d85a868b29f722
Dec 06 05:42:44 crc kubenswrapper[4958]: I1206 05:42:44.891432 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5446b9c989-qgdwb"]
Dec 06 05:42:44 crc kubenswrapper[4958]: W1206 05:42:44.901441 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda5825ccb_1463_4fc4_87c9_504ef6195da6.slice/crio-495f84098cfaf3ed35f2b1ecc3b5ccb54bcc8fcd3d7b541d36dc12e8a58b7f1b WatchSource:0}: Error finding container 495f84098cfaf3ed35f2b1ecc3b5ccb54bcc8fcd3d7b541d36dc12e8a58b7f1b: Status 404 returned error can't find the container with id 495f84098cfaf3ed35f2b1ecc3b5ccb54bcc8fcd3d7b541d36dc12e8a58b7f1b
Dec 06 05:42:45 crc kubenswrapper[4958]: I1206 05:42:45.623202 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5446b9c989-qgdwb" event={"ID":"a5825ccb-1463-4fc4-87c9-504ef6195da6","Type":"ContainerStarted","Data":"495f84098cfaf3ed35f2b1ecc3b5ccb54bcc8fcd3d7b541d36dc12e8a58b7f1b"}
Dec 06 05:42:45 crc kubenswrapper[4958]: I1206 05:42:45.625905 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-d8bb48f5d-g69b7" event={"ID":"c280fe1d-3450-44ce-91c6-690601d34e98","Type":"ContainerStarted","Data":"5e5f8420320fee9d561baeb7fd991c9a4f6b2eb7cd39de7da9d85a868b29f722"}
Dec 06 05:42:49 crc kubenswrapper[4958]: I1206 05:42:49.071738 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-592g7"
Dec 06 05:42:49 crc kubenswrapper[4958]: I1206 05:42:49.072055 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-592g7"
Dec 06 05:42:49 crc kubenswrapper[4958]: I1206 05:42:49.124615 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-592g7"
Dec 06 05:42:49 crc kubenswrapper[4958]: I1206 05:42:49.708995 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-592g7"
Dec 06 05:42:49 crc kubenswrapper[4958]: I1206 05:42:49.941517 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-592g7"]
Dec 06 05:42:51 crc kubenswrapper[4958]: I1206 05:42:51.679292 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-592g7" podUID="08e03fc4-2207-47f1-b34a-bbfc76b3ec4f" containerName="registry-server" containerID="cri-o://257ed477fc8107c9d8598991a906fd70cadb0b656b337ed07008503957704fc7" gracePeriod=2
Dec 06 05:42:53 crc kubenswrapper[4958]: I1206 05:42:53.692884 4958 generic.go:334] "Generic (PLEG): container finished" podID="08e03fc4-2207-47f1-b34a-bbfc76b3ec4f" containerID="257ed477fc8107c9d8598991a906fd70cadb0b656b337ed07008503957704fc7" exitCode=0
Dec 06 05:42:53 crc kubenswrapper[4958]: I1206 05:42:53.692946 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-592g7" event={"ID":"08e03fc4-2207-47f1-b34a-bbfc76b3ec4f","Type":"ContainerDied","Data":"257ed477fc8107c9d8598991a906fd70cadb0b656b337ed07008503957704fc7"}
Dec 06 05:42:59 crc kubenswrapper[4958]: E1206 05:42:59.072220 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 257ed477fc8107c9d8598991a906fd70cadb0b656b337ed07008503957704fc7 is running failed: container process not found" containerID="257ed477fc8107c9d8598991a906fd70cadb0b656b337ed07008503957704fc7" cmd=["grpc_health_probe","-addr=:50051"]
Dec 06 05:42:59 crc kubenswrapper[4958]: E1206 05:42:59.073049 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 257ed477fc8107c9d8598991a906fd70cadb0b656b337ed07008503957704fc7 is running failed: container process not found" containerID="257ed477fc8107c9d8598991a906fd70cadb0b656b337ed07008503957704fc7" cmd=["grpc_health_probe","-addr=:50051"]
Dec 06 05:42:59 crc kubenswrapper[4958]: E1206 05:42:59.074144 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 257ed477fc8107c9d8598991a906fd70cadb0b656b337ed07008503957704fc7 is running failed: container process not found" containerID="257ed477fc8107c9d8598991a906fd70cadb0b656b337ed07008503957704fc7" cmd=["grpc_health_probe","-addr=:50051"]
Dec 06 05:42:59 crc kubenswrapper[4958]: E1206 05:42:59.074184 4958 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 257ed477fc8107c9d8598991a906fd70cadb0b656b337ed07008503957704fc7 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-592g7" podUID="08e03fc4-2207-47f1-b34a-bbfc76b3ec4f" containerName="registry-server"
Dec 06 05:43:01 crc kubenswrapper[4958]: E1206 05:43:01.885677 4958 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:43d33f0125e6b990f4a972ac4e952a065d7e72dc1690c6c836963b7341734aec"
Dec 06 05:43:01 crc kubenswrapper[4958]: E1206 05:43:01.886339 4958 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:prometheus-operator-admission-webhook,Image:registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:43d33f0125e6b990f4a972ac4e952a065d7e72dc1690c6c836963b7341734aec,Command:[],Args:[--web.enable-tls=true --web.cert-file=/tmp/k8s-webhook-server/serving-certs/tls.crt --web.key-file=/tmp/k8s-webhook-server/serving-certs/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_CONDITION_NAME,Value:cluster-observability-operator.v1.3.0,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{209715200 0} {} BinarySI},},Requests:ResourceList{cpu: {{50 -3} {} 50m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:apiservice-cert,ReadOnly:false,MountPath:/apiserver.local.config/certificates,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod obo-prometheus-operator-admission-webhook-6fbb8c6476-hlzpg_openshift-operators(12946b50-3866-4458-a26a-23987fdc0c1c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Dec 06 05:43:01 crc kubenswrapper[4958]: E1206 05:43:01.887563 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator-admission-webhook\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6fbb8c6476-hlzpg" podUID="12946b50-3866-4458-a26a-23987fdc0c1c"
Dec 06 05:43:02 crc kubenswrapper[4958]: E1206 05:43:02.749902 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator-admission-webhook\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:43d33f0125e6b990f4a972ac4e952a065d7e72dc1690c6c836963b7341734aec\\\"\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6fbb8c6476-hlzpg" podUID="12946b50-3866-4458-a26a-23987fdc0c1c"
Dec 06 05:43:09 crc kubenswrapper[4958]: E1206 05:43:09.071949 4958 log.go:32] "ExecSync 
cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 257ed477fc8107c9d8598991a906fd70cadb0b656b337ed07008503957704fc7 is running failed: container process not found" containerID="257ed477fc8107c9d8598991a906fd70cadb0b656b337ed07008503957704fc7" cmd=["grpc_health_probe","-addr=:50051"] Dec 06 05:43:09 crc kubenswrapper[4958]: E1206 05:43:09.072960 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 257ed477fc8107c9d8598991a906fd70cadb0b656b337ed07008503957704fc7 is running failed: container process not found" containerID="257ed477fc8107c9d8598991a906fd70cadb0b656b337ed07008503957704fc7" cmd=["grpc_health_probe","-addr=:50051"] Dec 06 05:43:09 crc kubenswrapper[4958]: E1206 05:43:09.073755 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 257ed477fc8107c9d8598991a906fd70cadb0b656b337ed07008503957704fc7 is running failed: container process not found" containerID="257ed477fc8107c9d8598991a906fd70cadb0b656b337ed07008503957704fc7" cmd=["grpc_health_probe","-addr=:50051"] Dec 06 05:43:09 crc kubenswrapper[4958]: E1206 05:43:09.073821 4958 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 257ed477fc8107c9d8598991a906fd70cadb0b656b337ed07008503957704fc7 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-592g7" podUID="08e03fc4-2207-47f1-b34a-bbfc76b3ec4f" containerName="registry-server" Dec 06 05:43:10 crc kubenswrapper[4958]: I1206 05:43:10.920844 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-592g7" Dec 06 05:43:10 crc kubenswrapper[4958]: I1206 05:43:10.997298 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pkrl6\" (UniqueName: \"kubernetes.io/projected/08e03fc4-2207-47f1-b34a-bbfc76b3ec4f-kube-api-access-pkrl6\") pod \"08e03fc4-2207-47f1-b34a-bbfc76b3ec4f\" (UID: \"08e03fc4-2207-47f1-b34a-bbfc76b3ec4f\") " Dec 06 05:43:10 crc kubenswrapper[4958]: I1206 05:43:10.997351 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08e03fc4-2207-47f1-b34a-bbfc76b3ec4f-catalog-content\") pod \"08e03fc4-2207-47f1-b34a-bbfc76b3ec4f\" (UID: \"08e03fc4-2207-47f1-b34a-bbfc76b3ec4f\") " Dec 06 05:43:10 crc kubenswrapper[4958]: I1206 05:43:10.997397 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08e03fc4-2207-47f1-b34a-bbfc76b3ec4f-utilities\") pod \"08e03fc4-2207-47f1-b34a-bbfc76b3ec4f\" (UID: \"08e03fc4-2207-47f1-b34a-bbfc76b3ec4f\") " Dec 06 05:43:10 crc kubenswrapper[4958]: I1206 05:43:10.998570 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08e03fc4-2207-47f1-b34a-bbfc76b3ec4f-utilities" (OuterVolumeSpecName: "utilities") pod "08e03fc4-2207-47f1-b34a-bbfc76b3ec4f" (UID: "08e03fc4-2207-47f1-b34a-bbfc76b3ec4f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 05:43:11 crc kubenswrapper[4958]: I1206 05:43:11.004175 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08e03fc4-2207-47f1-b34a-bbfc76b3ec4f-kube-api-access-pkrl6" (OuterVolumeSpecName: "kube-api-access-pkrl6") pod "08e03fc4-2207-47f1-b34a-bbfc76b3ec4f" (UID: "08e03fc4-2207-47f1-b34a-bbfc76b3ec4f"). InnerVolumeSpecName "kube-api-access-pkrl6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:43:11 crc kubenswrapper[4958]: I1206 05:43:11.098093 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08e03fc4-2207-47f1-b34a-bbfc76b3ec4f-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 05:43:11 crc kubenswrapper[4958]: I1206 05:43:11.098362 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pkrl6\" (UniqueName: \"kubernetes.io/projected/08e03fc4-2207-47f1-b34a-bbfc76b3ec4f-kube-api-access-pkrl6\") on node \"crc\" DevicePath \"\"" Dec 06 05:43:11 crc kubenswrapper[4958]: I1206 05:43:11.100604 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08e03fc4-2207-47f1-b34a-bbfc76b3ec4f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "08e03fc4-2207-47f1-b34a-bbfc76b3ec4f" (UID: "08e03fc4-2207-47f1-b34a-bbfc76b3ec4f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 05:43:11 crc kubenswrapper[4958]: I1206 05:43:11.199933 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08e03fc4-2207-47f1-b34a-bbfc76b3ec4f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 05:43:11 crc kubenswrapper[4958]: I1206 05:43:11.794606 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-592g7" event={"ID":"08e03fc4-2207-47f1-b34a-bbfc76b3ec4f","Type":"ContainerDied","Data":"5bf8bec569b25d84c2ce488ab0e0a57d8dbe604d6a58c8107ccf8de4d8566e82"} Dec 06 05:43:11 crc kubenswrapper[4958]: I1206 05:43:11.794663 4958 scope.go:117] "RemoveContainer" containerID="257ed477fc8107c9d8598991a906fd70cadb0b656b337ed07008503957704fc7" Dec 06 05:43:11 crc kubenswrapper[4958]: I1206 05:43:11.794711 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-592g7" Dec 06 05:43:11 crc kubenswrapper[4958]: I1206 05:43:11.824186 4958 scope.go:117] "RemoveContainer" containerID="d0ace1f86ea7629a586b4ea6a92c1273b76d67c25fb919822301c5c7215ae802" Dec 06 05:43:11 crc kubenswrapper[4958]: I1206 05:43:11.861904 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-592g7"] Dec 06 05:43:11 crc kubenswrapper[4958]: I1206 05:43:11.867493 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-592g7"] Dec 06 05:43:11 crc kubenswrapper[4958]: I1206 05:43:11.870667 4958 scope.go:117] "RemoveContainer" containerID="d98a29d7e9fe28ab7b4f8ca0ae6ec0944ffea92e0c8169ca05c084d93bcbc3de" Dec 06 05:43:13 crc kubenswrapper[4958]: I1206 05:43:13.769606 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08e03fc4-2207-47f1-b34a-bbfc76b3ec4f" path="/var/lib/kubelet/pods/08e03fc4-2207-47f1-b34a-bbfc76b3ec4f/volumes" Dec 06 05:43:13 crc kubenswrapper[4958]: I1206 05:43:13.807940 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5446b9c989-qgdwb" event={"ID":"a5825ccb-1463-4fc4-87c9-504ef6195da6","Type":"ContainerStarted","Data":"93b7ab14e60bed9e11a7a5c254f370af654e3bb1dfcdc3c470e642bbd7c1d8ff"} Dec 06 05:43:13 crc kubenswrapper[4958]: I1206 05:43:13.808011 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5446b9c989-qgdwb" Dec 06 05:43:13 crc kubenswrapper[4958]: I1206 05:43:13.809768 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-d8bb48f5d-g69b7" event={"ID":"c280fe1d-3450-44ce-91c6-690601d34e98","Type":"ContainerStarted","Data":"65880dac9e1ffb9a163ca9965ab8099ea05eb4d8cd2c2191ba815ea4ef9821e0"} Dec 06 05:43:13 crc kubenswrapper[4958]: I1206 05:43:13.809975 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-d8bb48f5d-g69b7" Dec 06 05:43:13 crc kubenswrapper[4958]: I1206 05:43:13.811812 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-5kc79" event={"ID":"102c7aef-7a9f-4838-817d-a410a9e1cea1","Type":"ContainerStarted","Data":"e4c3cc487d9ffdbeb63a3c9c32841956dfd1aa004caab6eb8c183b07e08b7ea1"} Dec 06 05:43:13 crc kubenswrapper[4958]: I1206 05:43:13.813328 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6fbb8c6476-6b4st" event={"ID":"9bdcc25c-5699-422e-9d03-b00a80ec8efa","Type":"ContainerStarted","Data":"cd118d38fc19aeff8b4d3ff3e47bdd9ada71f76c892a85134b60d4ee2731f29a"} Dec 06 05:43:13 crc kubenswrapper[4958]: I1206 05:43:13.828662 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-d8bb48f5d-g69b7" Dec 06 05:43:13 crc kubenswrapper[4958]: I1206 05:43:13.829284 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5446b9c989-qgdwb" podStartSLOduration=2.980574415 podStartE2EDuration="29.829267968s" podCreationTimestamp="2025-12-06 05:42:44 +0000 UTC" firstStartedPulling="2025-12-06 05:42:44.904413717 +0000 UTC m=+875.438184490" lastFinishedPulling="2025-12-06 05:43:11.75310728 +0000 UTC m=+902.286878043" observedRunningTime="2025-12-06 05:43:13.824872371 +0000 UTC m=+904.358643134" 
watchObservedRunningTime="2025-12-06 05:43:13.829267968 +0000 UTC m=+904.363038731" Dec 06 05:43:13 crc kubenswrapper[4958]: I1206 05:43:13.850689 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-d8bb48f5d-g69b7" podStartSLOduration=2.769108249 podStartE2EDuration="29.850671757s" podCreationTimestamp="2025-12-06 05:42:44 +0000 UTC" firstStartedPulling="2025-12-06 05:42:44.719599721 +0000 UTC m=+875.253370484" lastFinishedPulling="2025-12-06 05:43:11.801163229 +0000 UTC m=+902.334933992" observedRunningTime="2025-12-06 05:43:13.845658534 +0000 UTC m=+904.379429307" watchObservedRunningTime="2025-12-06 05:43:13.850671757 +0000 UTC m=+904.384442520" Dec 06 05:43:13 crc kubenswrapper[4958]: I1206 05:43:13.868059 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6fbb8c6476-6b4st" podStartSLOduration=3.719024976 podStartE2EDuration="30.868036039s" podCreationTimestamp="2025-12-06 05:42:43 +0000 UTC" firstStartedPulling="2025-12-06 05:42:44.605276119 +0000 UTC m=+875.139046882" lastFinishedPulling="2025-12-06 05:43:11.754287182 +0000 UTC m=+902.288057945" observedRunningTime="2025-12-06 05:43:13.865789929 +0000 UTC m=+904.399560692" watchObservedRunningTime="2025-12-06 05:43:13.868036039 +0000 UTC m=+904.401806822" Dec 06 05:43:13 crc kubenswrapper[4958]: I1206 05:43:13.881879 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-668cf9dfbb-5kc79" podStartSLOduration=3.662116453 podStartE2EDuration="30.881863637s" podCreationTimestamp="2025-12-06 05:42:43 +0000 UTC" firstStartedPulling="2025-12-06 05:42:44.532875113 +0000 UTC m=+875.066645876" lastFinishedPulling="2025-12-06 05:43:11.752622297 +0000 UTC m=+902.286393060" observedRunningTime="2025-12-06 05:43:13.880947443 +0000 UTC m=+904.414718216" watchObservedRunningTime="2025-12-06 05:43:13.881863637 +0000 UTC m=+904.415634400" Dec 06 05:43:16 crc kubenswrapper[4958]: I1206 05:43:16.829442 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6fbb8c6476-hlzpg" event={"ID":"12946b50-3866-4458-a26a-23987fdc0c1c","Type":"ContainerStarted","Data":"4c98112395dd50fa581f2a0f5e4d41f939664a97a8ddd41cbf4e2ad9e7a3163d"} Dec 06 05:43:16 crc kubenswrapper[4958]: I1206 05:43:16.849375 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6fbb8c6476-hlzpg" podStartSLOduration=-9223372003.00542 podStartE2EDuration="33.849354995s" podCreationTimestamp="2025-12-06 05:42:43 +0000 UTC" firstStartedPulling="2025-12-06 05:42:44.532588125 +0000 UTC m=+875.066358888" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:43:16.847820354 +0000 UTC m=+907.381591137" watchObservedRunningTime="2025-12-06 05:43:16.849354995 +0000 UTC m=+907.383125758" Dec 06 05:43:24 crc kubenswrapper[4958]: I1206 05:43:24.653535 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5446b9c989-qgdwb" Dec 06 05:43:37 crc kubenswrapper[4958]: I1206 05:43:37.877371 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-24wcs"] Dec 06 05:43:37 crc kubenswrapper[4958]: E1206 05:43:37.878277 4958 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="08e03fc4-2207-47f1-b34a-bbfc76b3ec4f" containerName="extract-content" Dec 06 05:43:37 crc kubenswrapper[4958]: I1206 05:43:37.878293 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="08e03fc4-2207-47f1-b34a-bbfc76b3ec4f" containerName="extract-content" Dec 06 05:43:37 crc kubenswrapper[4958]: E1206 05:43:37.878311 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08e03fc4-2207-47f1-b34a-bbfc76b3ec4f" containerName="extract-utilities" Dec 06 05:43:37 crc kubenswrapper[4958]: I1206 05:43:37.878320 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="08e03fc4-2207-47f1-b34a-bbfc76b3ec4f" containerName="extract-utilities" Dec 06 05:43:37 crc kubenswrapper[4958]: E1206 05:43:37.878335 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08e03fc4-2207-47f1-b34a-bbfc76b3ec4f" containerName="registry-server" Dec 06 05:43:37 crc kubenswrapper[4958]: I1206 05:43:37.878344 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="08e03fc4-2207-47f1-b34a-bbfc76b3ec4f" containerName="registry-server" Dec 06 05:43:37 crc kubenswrapper[4958]: I1206 05:43:37.878522 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="08e03fc4-2207-47f1-b34a-bbfc76b3ec4f" containerName="registry-server" Dec 06 05:43:37 crc kubenswrapper[4958]: I1206 05:43:37.879675 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-24wcs" Dec 06 05:43:37 crc kubenswrapper[4958]: I1206 05:43:37.888744 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-24wcs"] Dec 06 05:43:37 crc kubenswrapper[4958]: I1206 05:43:37.935518 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8v7hj\" (UniqueName: \"kubernetes.io/projected/6d81967c-07a4-4a5c-b22e-b3dab71214dc-kube-api-access-8v7hj\") pod \"redhat-marketplace-24wcs\" (UID: \"6d81967c-07a4-4a5c-b22e-b3dab71214dc\") " pod="openshift-marketplace/redhat-marketplace-24wcs" Dec 06 05:43:37 crc kubenswrapper[4958]: I1206 05:43:37.935660 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d81967c-07a4-4a5c-b22e-b3dab71214dc-catalog-content\") pod \"redhat-marketplace-24wcs\" (UID: \"6d81967c-07a4-4a5c-b22e-b3dab71214dc\") " pod="openshift-marketplace/redhat-marketplace-24wcs" Dec 06 05:43:37 crc kubenswrapper[4958]: I1206 05:43:37.935711 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d81967c-07a4-4a5c-b22e-b3dab71214dc-utilities\") pod \"redhat-marketplace-24wcs\" (UID: \"6d81967c-07a4-4a5c-b22e-b3dab71214dc\") " pod="openshift-marketplace/redhat-marketplace-24wcs" Dec 06 05:43:38 crc kubenswrapper[4958]: I1206 05:43:38.041221 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8v7hj\" (UniqueName: \"kubernetes.io/projected/6d81967c-07a4-4a5c-b22e-b3dab71214dc-kube-api-access-8v7hj\") pod \"redhat-marketplace-24wcs\" (UID: \"6d81967c-07a4-4a5c-b22e-b3dab71214dc\") " pod="openshift-marketplace/redhat-marketplace-24wcs" Dec 06 05:43:38 crc kubenswrapper[4958]: I1206 05:43:38.041285 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d81967c-07a4-4a5c-b22e-b3dab71214dc-catalog-content\") pod 
\"redhat-marketplace-24wcs\" (UID: \"6d81967c-07a4-4a5c-b22e-b3dab71214dc\") " pod="openshift-marketplace/redhat-marketplace-24wcs" Dec 06 05:43:38 crc kubenswrapper[4958]: I1206 05:43:38.041315 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d81967c-07a4-4a5c-b22e-b3dab71214dc-utilities\") pod \"redhat-marketplace-24wcs\" (UID: \"6d81967c-07a4-4a5c-b22e-b3dab71214dc\") " pod="openshift-marketplace/redhat-marketplace-24wcs" Dec 06 05:43:38 crc kubenswrapper[4958]: I1206 05:43:38.041877 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d81967c-07a4-4a5c-b22e-b3dab71214dc-utilities\") pod \"redhat-marketplace-24wcs\" (UID: \"6d81967c-07a4-4a5c-b22e-b3dab71214dc\") " pod="openshift-marketplace/redhat-marketplace-24wcs" Dec 06 05:43:38 crc kubenswrapper[4958]: I1206 05:43:38.041976 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d81967c-07a4-4a5c-b22e-b3dab71214dc-catalog-content\") pod \"redhat-marketplace-24wcs\" (UID: \"6d81967c-07a4-4a5c-b22e-b3dab71214dc\") " pod="openshift-marketplace/redhat-marketplace-24wcs" Dec 06 05:43:38 crc kubenswrapper[4958]: I1206 05:43:38.057899 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8v7hj\" (UniqueName: \"kubernetes.io/projected/6d81967c-07a4-4a5c-b22e-b3dab71214dc-kube-api-access-8v7hj\") pod \"redhat-marketplace-24wcs\" (UID: \"6d81967c-07a4-4a5c-b22e-b3dab71214dc\") " pod="openshift-marketplace/redhat-marketplace-24wcs" Dec 06 05:43:38 crc kubenswrapper[4958]: I1206 05:43:38.203132 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-24wcs" Dec 06 05:43:38 crc kubenswrapper[4958]: I1206 05:43:38.472111 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-24wcs"] Dec 06 05:43:38 crc kubenswrapper[4958]: I1206 05:43:38.950315 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-24wcs" event={"ID":"6d81967c-07a4-4a5c-b22e-b3dab71214dc","Type":"ContainerStarted","Data":"ce3626785623c6fd41b0e05d21502412c81f41c83e5d2c9f60daf9967cb18b7e"} Dec 06 05:43:46 crc kubenswrapper[4958]: I1206 05:43:41.809351 4958 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","burstable","pod08e03fc4-2207-47f1-b34a-bbfc76b3ec4f"] err="unable to destroy cgroup paths for cgroup [kubepods burstable pod08e03fc4-2207-47f1-b34a-bbfc76b3ec4f] : Timed out while waiting for systemd to remove kubepods-burstable-pod08e03fc4_2207_47f1_b34a_bbfc76b3ec4f.slice" Dec 06 05:43:46 crc kubenswrapper[4958]: I1206 05:43:46.816027 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-kqwg7"] Dec 06 05:43:46 crc kubenswrapper[4958]: I1206 05:43:46.840284 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kqwg7" Dec 06 05:43:46 crc kubenswrapper[4958]: I1206 05:43:46.847744 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kqwg7"] Dec 06 05:43:46 crc kubenswrapper[4958]: I1206 05:43:46.862599 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9a832eb-1b05-41af-b440-ab9488c618d3-catalog-content\") pod \"community-operators-kqwg7\" (UID: \"a9a832eb-1b05-41af-b440-ab9488c618d3\") " pod="openshift-marketplace/community-operators-kqwg7" Dec 06 05:43:46 crc kubenswrapper[4958]: I1206 05:43:46.862672 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qk6q\" (UniqueName: \"kubernetes.io/projected/a9a832eb-1b05-41af-b440-ab9488c618d3-kube-api-access-4qk6q\") pod \"community-operators-kqwg7\" (UID: \"a9a832eb-1b05-41af-b440-ab9488c618d3\") " pod="openshift-marketplace/community-operators-kqwg7" Dec 06 05:43:46 crc kubenswrapper[4958]: I1206 05:43:46.862704 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9a832eb-1b05-41af-b440-ab9488c618d3-utilities\") pod \"community-operators-kqwg7\" (UID: \"a9a832eb-1b05-41af-b440-ab9488c618d3\") " pod="openshift-marketplace/community-operators-kqwg7" Dec 06 05:43:46 crc kubenswrapper[4958]: I1206 05:43:46.963812 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9a832eb-1b05-41af-b440-ab9488c618d3-catalog-content\") pod \"community-operators-kqwg7\" (UID: \"a9a832eb-1b05-41af-b440-ab9488c618d3\") " pod="openshift-marketplace/community-operators-kqwg7" Dec 06 05:43:46 crc kubenswrapper[4958]: I1206 05:43:46.963907 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qk6q\" (UniqueName: \"kubernetes.io/projected/a9a832eb-1b05-41af-b440-ab9488c618d3-kube-api-access-4qk6q\") pod \"community-operators-kqwg7\" (UID: \"a9a832eb-1b05-41af-b440-ab9488c618d3\") " pod="openshift-marketplace/community-operators-kqwg7" Dec 06 05:43:46 crc kubenswrapper[4958]: I1206 05:43:46.963950 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9a832eb-1b05-41af-b440-ab9488c618d3-utilities\") pod \"community-operators-kqwg7\" (UID: \"a9a832eb-1b05-41af-b440-ab9488c618d3\") " pod="openshift-marketplace/community-operators-kqwg7" Dec 06 05:43:46 crc kubenswrapper[4958]: I1206 05:43:46.964405 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9a832eb-1b05-41af-b440-ab9488c618d3-catalog-content\") pod \"community-operators-kqwg7\" (UID: \"a9a832eb-1b05-41af-b440-ab9488c618d3\") " pod="openshift-marketplace/community-operators-kqwg7" Dec 06 05:43:46 crc kubenswrapper[4958]: I1206 05:43:46.964604 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9a832eb-1b05-41af-b440-ab9488c618d3-utilities\") pod \"community-operators-kqwg7\" (UID: \"a9a832eb-1b05-41af-b440-ab9488c618d3\") " pod="openshift-marketplace/community-operators-kqwg7" Dec 06 05:43:46 crc kubenswrapper[4958]: I1206 05:43:46.993334 4958 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-4qk6q\" (UniqueName: \"kubernetes.io/projected/a9a832eb-1b05-41af-b440-ab9488c618d3-kube-api-access-4qk6q\") pod \"community-operators-kqwg7\" (UID: \"a9a832eb-1b05-41af-b440-ab9488c618d3\") " pod="openshift-marketplace/community-operators-kqwg7" Dec 06 05:43:46 crc kubenswrapper[4958]: I1206 05:43:46.998386 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-24wcs" event={"ID":"6d81967c-07a4-4a5c-b22e-b3dab71214dc","Type":"ContainerStarted","Data":"3c32b0c9e9956b72b727893c84c2c67ca8bab4cd69cbbd3629e861da1d637a7d"} Dec 06 05:43:47 crc kubenswrapper[4958]: I1206 05:43:47.171979 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kqwg7" Dec 06 05:43:47 crc kubenswrapper[4958]: I1206 05:43:47.650416 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kqwg7"] Dec 06 05:43:48 crc kubenswrapper[4958]: I1206 05:43:48.005902 4958 generic.go:334] "Generic (PLEG): container finished" podID="6d81967c-07a4-4a5c-b22e-b3dab71214dc" containerID="3c32b0c9e9956b72b727893c84c2c67ca8bab4cd69cbbd3629e861da1d637a7d" exitCode=0 Dec 06 05:43:48 crc kubenswrapper[4958]: I1206 05:43:48.005982 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-24wcs" event={"ID":"6d81967c-07a4-4a5c-b22e-b3dab71214dc","Type":"ContainerDied","Data":"3c32b0c9e9956b72b727893c84c2c67ca8bab4cd69cbbd3629e861da1d637a7d"} Dec 06 05:43:48 crc kubenswrapper[4958]: I1206 05:43:48.008282 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kqwg7" event={"ID":"a9a832eb-1b05-41af-b440-ab9488c618d3","Type":"ContainerStarted","Data":"023b0dab9a71bc1f1ef6b392e773e0905fd52d5c5c5285a1ea909a4da4e2ba9b"} Dec 06 05:43:49 crc kubenswrapper[4958]: I1206 05:43:49.022863 4958 generic.go:334] "Generic (PLEG): container finished" podID="a9a832eb-1b05-41af-b440-ab9488c618d3" containerID="4136f6bc31707dcce48601067d1a0a105355978ed67bd44d56707064649c83e6" exitCode=0 Dec 06 05:43:49 crc kubenswrapper[4958]: I1206 05:43:49.022907 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kqwg7" event={"ID":"a9a832eb-1b05-41af-b440-ab9488c618d3","Type":"ContainerDied","Data":"4136f6bc31707dcce48601067d1a0a105355978ed67bd44d56707064649c83e6"} Dec 06 05:43:51 crc kubenswrapper[4958]: I1206 05:43:51.036299 4958 generic.go:334] "Generic (PLEG): container finished" podID="6d81967c-07a4-4a5c-b22e-b3dab71214dc" containerID="f7b13d596305236c9956ba13a0d3dbb77beb984ffff388be588ef001f0bd432a" exitCode=0 Dec 06 05:43:51 crc kubenswrapper[4958]: I1206 05:43:51.036349 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-24wcs" event={"ID":"6d81967c-07a4-4a5c-b22e-b3dab71214dc","Type":"ContainerDied","Data":"f7b13d596305236c9956ba13a0d3dbb77beb984ffff388be588ef001f0bd432a"} Dec 06 05:43:53 crc kubenswrapper[4958]: I1206 05:43:53.052334 4958 generic.go:334] "Generic (PLEG): container finished" podID="a9a832eb-1b05-41af-b440-ab9488c618d3" containerID="14c6891468bbdca3645d1ebf0aed24adef987fa7d2ae1cd0f01b3d7509355cf7" exitCode=0 Dec 06 05:43:53 crc kubenswrapper[4958]: I1206 05:43:53.052385 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kqwg7" 
event={"ID":"a9a832eb-1b05-41af-b440-ab9488c618d3","Type":"ContainerDied","Data":"14c6891468bbdca3645d1ebf0aed24adef987fa7d2ae1cd0f01b3d7509355cf7"} Dec 06 05:43:53 crc kubenswrapper[4958]: I1206 05:43:53.579344 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210rq4rz"] Dec 06 05:43:53 crc kubenswrapper[4958]: I1206 05:43:53.583067 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c9210rq4rz"] Dec 06 05:43:53 crc kubenswrapper[4958]: I1206 05:43:53.592384 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cmpmq"] Dec 06 05:43:53 crc kubenswrapper[4958]: I1206 05:43:53.592701 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-cmpmq" podUID="282841fb-36ec-47cf-a371-eeef4e081b4c" containerName="registry-server" containerID="cri-o://2aac1c9125d944ed8a33154ede921d66e68daf809e990e27176f483eb75cae20" gracePeriod=30 Dec 06 05:43:53 crc kubenswrapper[4958]: I1206 05:43:53.597414 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6rr8p"] Dec 06 05:43:53 crc kubenswrapper[4958]: I1206 05:43:53.597693 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6rr8p" podUID="c35562f3-9b8d-407a-9e1f-d74a17561858" containerName="registry-server" containerID="cri-o://7fdffaea75e159fc72cf54d4df5d2f195817b6d725dede699b8c3e70b5f08919" gracePeriod=30 Dec 06 05:43:53 crc kubenswrapper[4958]: I1206 05:43:53.610794 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kqwg7"] Dec 06 05:43:53 crc kubenswrapper[4958]: I1206 05:43:53.625130 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-4l8c2"] Dec 06 05:43:53 crc kubenswrapper[4958]: I1206 05:43:53.625445 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-4l8c2" podUID="176e1833-ad7f-40a8-8179-846a546a6fad" containerName="marketplace-operator" containerID="cri-o://2fabc7d166650c6c84cee6b9a8a9314b178b87c1d83b580a9b9ed91af392d938" gracePeriod=30 Dec 06 05:43:53 crc kubenswrapper[4958]: I1206 05:43:53.633021 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-24wcs"] Dec 06 05:43:53 crc kubenswrapper[4958]: I1206 05:43:53.642084 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2997j"] Dec 06 05:43:53 crc kubenswrapper[4958]: I1206 05:43:53.642602 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-2997j" podUID="26b187af-f0da-435e-9b8e-89086658d5b1" containerName="registry-server" containerID="cri-o://059eae096fa1aab6a11b11130f3e4e6724810e4464a6f962abfd26d98b45b36b" gracePeriod=30 Dec 06 05:43:53 crc kubenswrapper[4958]: I1206 05:43:53.646606 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-q2qdm"] Dec 06 05:43:53 crc kubenswrapper[4958]: I1206 05:43:53.646884 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-q2qdm" podUID="17574228-062b-4c78-a16f-3e8a616e9a37" 
containerName="registry-server" containerID="cri-o://6a4faef06687916309c8f7c808e96a5114f9e19373ff3cefb914dd1085ddfd3f" gracePeriod=30 Dec 06 05:43:53 crc kubenswrapper[4958]: I1206 05:43:53.654920 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hjnl5"] Dec 06 05:43:53 crc kubenswrapper[4958]: I1206 05:43:53.656892 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-hjnl5" Dec 06 05:43:53 crc kubenswrapper[4958]: I1206 05:43:53.677281 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hjnl5"] Dec 06 05:43:53 crc kubenswrapper[4958]: I1206 05:43:53.751451 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fa140104-7b6d-4d2b-a5b8-fba1696d1a94-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-hjnl5\" (UID: \"fa140104-7b6d-4d2b-a5b8-fba1696d1a94\") " pod="openshift-marketplace/marketplace-operator-79b997595-hjnl5" Dec 06 05:43:53 crc kubenswrapper[4958]: I1206 05:43:53.751551 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ht4b9\" (UniqueName: \"kubernetes.io/projected/fa140104-7b6d-4d2b-a5b8-fba1696d1a94-kube-api-access-ht4b9\") pod \"marketplace-operator-79b997595-hjnl5\" (UID: \"fa140104-7b6d-4d2b-a5b8-fba1696d1a94\") " pod="openshift-marketplace/marketplace-operator-79b997595-hjnl5" Dec 06 05:43:53 crc kubenswrapper[4958]: I1206 05:43:53.751653 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/fa140104-7b6d-4d2b-a5b8-fba1696d1a94-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-hjnl5\" (UID: \"fa140104-7b6d-4d2b-a5b8-fba1696d1a94\") " pod="openshift-marketplace/marketplace-operator-79b997595-hjnl5" Dec 06 05:43:53 crc kubenswrapper[4958]: I1206 05:43:53.770196 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91c2a000-71d7-41c9-8882-fe2867aad8d9" path="/var/lib/kubelet/pods/91c2a000-71d7-41c9-8882-fe2867aad8d9/volumes" Dec 06 05:43:53 crc kubenswrapper[4958]: I1206 05:43:53.852510 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/fa140104-7b6d-4d2b-a5b8-fba1696d1a94-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-hjnl5\" (UID: \"fa140104-7b6d-4d2b-a5b8-fba1696d1a94\") " pod="openshift-marketplace/marketplace-operator-79b997595-hjnl5" Dec 06 05:43:53 crc kubenswrapper[4958]: I1206 05:43:53.852604 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fa140104-7b6d-4d2b-a5b8-fba1696d1a94-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-hjnl5\" (UID: \"fa140104-7b6d-4d2b-a5b8-fba1696d1a94\") " pod="openshift-marketplace/marketplace-operator-79b997595-hjnl5" Dec 06 05:43:53 crc kubenswrapper[4958]: I1206 05:43:53.852624 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ht4b9\" (UniqueName: \"kubernetes.io/projected/fa140104-7b6d-4d2b-a5b8-fba1696d1a94-kube-api-access-ht4b9\") pod \"marketplace-operator-79b997595-hjnl5\" (UID: \"fa140104-7b6d-4d2b-a5b8-fba1696d1a94\") 
" pod="openshift-marketplace/marketplace-operator-79b997595-hjnl5" Dec 06 05:43:53 crc kubenswrapper[4958]: I1206 05:43:53.854851 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fa140104-7b6d-4d2b-a5b8-fba1696d1a94-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-hjnl5\" (UID: \"fa140104-7b6d-4d2b-a5b8-fba1696d1a94\") " pod="openshift-marketplace/marketplace-operator-79b997595-hjnl5" Dec 06 05:43:53 crc kubenswrapper[4958]: I1206 05:43:53.858836 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/fa140104-7b6d-4d2b-a5b8-fba1696d1a94-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-hjnl5\" (UID: \"fa140104-7b6d-4d2b-a5b8-fba1696d1a94\") " pod="openshift-marketplace/marketplace-operator-79b997595-hjnl5" Dec 06 05:43:53 crc kubenswrapper[4958]: I1206 05:43:53.868426 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ht4b9\" (UniqueName: \"kubernetes.io/projected/fa140104-7b6d-4d2b-a5b8-fba1696d1a94-kube-api-access-ht4b9\") pod \"marketplace-operator-79b997595-hjnl5\" (UID: \"fa140104-7b6d-4d2b-a5b8-fba1696d1a94\") " pod="openshift-marketplace/marketplace-operator-79b997595-hjnl5" Dec 06 05:43:53 crc kubenswrapper[4958]: I1206 05:43:53.970801 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-hjnl5" Dec 06 05:43:54 crc kubenswrapper[4958]: I1206 05:43:54.067910 4958 generic.go:334] "Generic (PLEG): container finished" podID="17574228-062b-4c78-a16f-3e8a616e9a37" containerID="6a4faef06687916309c8f7c808e96a5114f9e19373ff3cefb914dd1085ddfd3f" exitCode=0 Dec 06 05:43:54 crc kubenswrapper[4958]: I1206 05:43:54.068210 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q2qdm" event={"ID":"17574228-062b-4c78-a16f-3e8a616e9a37","Type":"ContainerDied","Data":"6a4faef06687916309c8f7c808e96a5114f9e19373ff3cefb914dd1085ddfd3f"} Dec 06 05:43:54 crc kubenswrapper[4958]: I1206 05:43:54.073680 4958 generic.go:334] "Generic (PLEG): container finished" podID="282841fb-36ec-47cf-a371-eeef4e081b4c" containerID="2aac1c9125d944ed8a33154ede921d66e68daf809e990e27176f483eb75cae20" exitCode=0 Dec 06 05:43:54 crc kubenswrapper[4958]: I1206 05:43:54.073751 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cmpmq" event={"ID":"282841fb-36ec-47cf-a371-eeef4e081b4c","Type":"ContainerDied","Data":"2aac1c9125d944ed8a33154ede921d66e68daf809e990e27176f483eb75cae20"} Dec 06 05:43:54 crc kubenswrapper[4958]: I1206 05:43:54.080843 4958 generic.go:334] "Generic (PLEG): container finished" podID="176e1833-ad7f-40a8-8179-846a546a6fad" containerID="2fabc7d166650c6c84cee6b9a8a9314b178b87c1d83b580a9b9ed91af392d938" exitCode=0 Dec 06 05:43:54 crc kubenswrapper[4958]: I1206 05:43:54.080914 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-4l8c2" event={"ID":"176e1833-ad7f-40a8-8179-846a546a6fad","Type":"ContainerDied","Data":"2fabc7d166650c6c84cee6b9a8a9314b178b87c1d83b580a9b9ed91af392d938"} Dec 06 05:43:54 crc kubenswrapper[4958]: I1206 05:43:54.082745 4958 generic.go:334] "Generic (PLEG): container finished" podID="26b187af-f0da-435e-9b8e-89086658d5b1" containerID="059eae096fa1aab6a11b11130f3e4e6724810e4464a6f962abfd26d98b45b36b" exitCode=0 Dec 06 
05:43:54 crc kubenswrapper[4958]: I1206 05:43:54.082836 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2997j" event={"ID":"26b187af-f0da-435e-9b8e-89086658d5b1","Type":"ContainerDied","Data":"059eae096fa1aab6a11b11130f3e4e6724810e4464a6f962abfd26d98b45b36b"} Dec 06 05:43:54 crc kubenswrapper[4958]: I1206 05:43:54.088614 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-24wcs" event={"ID":"6d81967c-07a4-4a5c-b22e-b3dab71214dc","Type":"ContainerStarted","Data":"d29eef28f992a674c15cc758fe73e6b376c095e6c08d607a4c6a9be315881f2e"} Dec 06 05:43:54 crc kubenswrapper[4958]: I1206 05:43:54.098003 4958 generic.go:334] "Generic (PLEG): container finished" podID="c35562f3-9b8d-407a-9e1f-d74a17561858" containerID="7fdffaea75e159fc72cf54d4df5d2f195817b6d725dede699b8c3e70b5f08919" exitCode=0 Dec 06 05:43:54 crc kubenswrapper[4958]: I1206 05:43:54.098060 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6rr8p" event={"ID":"c35562f3-9b8d-407a-9e1f-d74a17561858","Type":"ContainerDied","Data":"7fdffaea75e159fc72cf54d4df5d2f195817b6d725dede699b8c3e70b5f08919"} Dec 06 05:43:54 crc kubenswrapper[4958]: I1206 05:43:54.122972 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-24wcs" podStartSLOduration=12.233330992 podStartE2EDuration="17.122946306s" podCreationTimestamp="2025-12-06 05:43:37 +0000 UTC" firstStartedPulling="2025-12-06 05:43:48.008128237 +0000 UTC m=+938.541899000" lastFinishedPulling="2025-12-06 05:43:52.897743551 +0000 UTC m=+943.431514314" observedRunningTime="2025-12-06 05:43:54.106238895 +0000 UTC m=+944.640009668" watchObservedRunningTime="2025-12-06 05:43:54.122946306 +0000 UTC m=+944.656717079" Dec 06 05:43:54 crc kubenswrapper[4958]: I1206 05:43:54.405804 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hjnl5"] Dec 06 05:43:54 crc kubenswrapper[4958]: I1206 05:43:54.506518 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-4l8c2" Dec 06 05:43:54 crc kubenswrapper[4958]: I1206 05:43:54.561406 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55dbw\" (UniqueName: \"kubernetes.io/projected/176e1833-ad7f-40a8-8179-846a546a6fad-kube-api-access-55dbw\") pod \"176e1833-ad7f-40a8-8179-846a546a6fad\" (UID: \"176e1833-ad7f-40a8-8179-846a546a6fad\") " Dec 06 05:43:54 crc kubenswrapper[4958]: I1206 05:43:54.561462 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/176e1833-ad7f-40a8-8179-846a546a6fad-marketplace-operator-metrics\") pod \"176e1833-ad7f-40a8-8179-846a546a6fad\" (UID: \"176e1833-ad7f-40a8-8179-846a546a6fad\") " Dec 06 05:43:54 crc kubenswrapper[4958]: I1206 05:43:54.561610 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/176e1833-ad7f-40a8-8179-846a546a6fad-marketplace-trusted-ca\") pod \"176e1833-ad7f-40a8-8179-846a546a6fad\" (UID: \"176e1833-ad7f-40a8-8179-846a546a6fad\") " Dec 06 05:43:54 crc kubenswrapper[4958]: I1206 05:43:54.562406 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/176e1833-ad7f-40a8-8179-846a546a6fad-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "176e1833-ad7f-40a8-8179-846a546a6fad" (UID: "176e1833-ad7f-40a8-8179-846a546a6fad"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:43:54 crc kubenswrapper[4958]: I1206 05:43:54.567814 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/176e1833-ad7f-40a8-8179-846a546a6fad-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "176e1833-ad7f-40a8-8179-846a546a6fad" (UID: "176e1833-ad7f-40a8-8179-846a546a6fad"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:43:54 crc kubenswrapper[4958]: I1206 05:43:54.568280 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/176e1833-ad7f-40a8-8179-846a546a6fad-kube-api-access-55dbw" (OuterVolumeSpecName: "kube-api-access-55dbw") pod "176e1833-ad7f-40a8-8179-846a546a6fad" (UID: "176e1833-ad7f-40a8-8179-846a546a6fad"). InnerVolumeSpecName "kube-api-access-55dbw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:43:54 crc kubenswrapper[4958]: I1206 05:43:54.584486 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cmpmq" Dec 06 05:43:54 crc kubenswrapper[4958]: I1206 05:43:54.596457 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-q2qdm" Dec 06 05:43:54 crc kubenswrapper[4958]: I1206 05:43:54.598326 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6rr8p" Dec 06 05:43:54 crc kubenswrapper[4958]: I1206 05:43:54.662218 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qpjcz\" (UniqueName: \"kubernetes.io/projected/17574228-062b-4c78-a16f-3e8a616e9a37-kube-api-access-qpjcz\") pod \"17574228-062b-4c78-a16f-3e8a616e9a37\" (UID: \"17574228-062b-4c78-a16f-3e8a616e9a37\") " Dec 06 05:43:54 crc kubenswrapper[4958]: I1206 05:43:54.662260 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17574228-062b-4c78-a16f-3e8a616e9a37-utilities\") pod \"17574228-062b-4c78-a16f-3e8a616e9a37\" (UID: \"17574228-062b-4c78-a16f-3e8a616e9a37\") " Dec 06 05:43:54 crc kubenswrapper[4958]: I1206 05:43:54.662303 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qdrls\" (UniqueName: \"kubernetes.io/projected/282841fb-36ec-47cf-a371-eeef4e081b4c-kube-api-access-qdrls\") pod \"282841fb-36ec-47cf-a371-eeef4e081b4c\" (UID: \"282841fb-36ec-47cf-a371-eeef4e081b4c\") " Dec 06 05:43:54 crc kubenswrapper[4958]: I1206 05:43:54.662323 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c35562f3-9b8d-407a-9e1f-d74a17561858-catalog-content\") pod \"c35562f3-9b8d-407a-9e1f-d74a17561858\" (UID: \"c35562f3-9b8d-407a-9e1f-d74a17561858\") " Dec 06 05:43:54 crc kubenswrapper[4958]: I1206 05:43:54.662390 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/282841fb-36ec-47cf-a371-eeef4e081b4c-utilities\") pod \"282841fb-36ec-47cf-a371-eeef4e081b4c\" (UID: \"282841fb-36ec-47cf-a371-eeef4e081b4c\") " Dec 06 05:43:54 crc kubenswrapper[4958]: I1206 05:43:54.662409 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/282841fb-36ec-47cf-a371-eeef4e081b4c-catalog-content\") pod \"282841fb-36ec-47cf-a371-eeef4e081b4c\" (UID: \"282841fb-36ec-47cf-a371-eeef4e081b4c\") " Dec 06 05:43:54 crc kubenswrapper[4958]: I1206 05:43:54.662427 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c35562f3-9b8d-407a-9e1f-d74a17561858-utilities\") pod \"c35562f3-9b8d-407a-9e1f-d74a17561858\" (UID: \"c35562f3-9b8d-407a-9e1f-d74a17561858\") " Dec 06 05:43:54 crc kubenswrapper[4958]: I1206 05:43:54.662461 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17574228-062b-4c78-a16f-3e8a616e9a37-catalog-content\") pod \"17574228-062b-4c78-a16f-3e8a616e9a37\" (UID: \"17574228-062b-4c78-a16f-3e8a616e9a37\") " Dec 06 05:43:54 crc kubenswrapper[4958]: I1206 05:43:54.662508 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c8s25\" (UniqueName: \"kubernetes.io/projected/c35562f3-9b8d-407a-9e1f-d74a17561858-kube-api-access-c8s25\") pod \"c35562f3-9b8d-407a-9e1f-d74a17561858\" (UID: \"c35562f3-9b8d-407a-9e1f-d74a17561858\") " Dec 06 05:43:54 crc kubenswrapper[4958]: I1206 05:43:54.662709 4958 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/176e1833-ad7f-40a8-8179-846a546a6fad-marketplace-trusted-ca\") on node \"crc\" 
DevicePath \"\"" Dec 06 05:43:54 crc kubenswrapper[4958]: I1206 05:43:54.662723 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-55dbw\" (UniqueName: \"kubernetes.io/projected/176e1833-ad7f-40a8-8179-846a546a6fad-kube-api-access-55dbw\") on node \"crc\" DevicePath \"\"" Dec 06 05:43:54 crc kubenswrapper[4958]: I1206 05:43:54.662731 4958 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/176e1833-ad7f-40a8-8179-846a546a6fad-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 06 05:43:54 crc kubenswrapper[4958]: I1206 05:43:54.663004 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/17574228-062b-4c78-a16f-3e8a616e9a37-utilities" (OuterVolumeSpecName: "utilities") pod "17574228-062b-4c78-a16f-3e8a616e9a37" (UID: "17574228-062b-4c78-a16f-3e8a616e9a37"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 05:43:54 crc kubenswrapper[4958]: I1206 05:43:54.663279 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/282841fb-36ec-47cf-a371-eeef4e081b4c-utilities" (OuterVolumeSpecName: "utilities") pod "282841fb-36ec-47cf-a371-eeef4e081b4c" (UID: "282841fb-36ec-47cf-a371-eeef4e081b4c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 05:43:54 crc kubenswrapper[4958]: I1206 05:43:54.664239 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c35562f3-9b8d-407a-9e1f-d74a17561858-utilities" (OuterVolumeSpecName: "utilities") pod "c35562f3-9b8d-407a-9e1f-d74a17561858" (UID: "c35562f3-9b8d-407a-9e1f-d74a17561858"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 05:43:54 crc kubenswrapper[4958]: I1206 05:43:54.665913 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c35562f3-9b8d-407a-9e1f-d74a17561858-kube-api-access-c8s25" (OuterVolumeSpecName: "kube-api-access-c8s25") pod "c35562f3-9b8d-407a-9e1f-d74a17561858" (UID: "c35562f3-9b8d-407a-9e1f-d74a17561858"). InnerVolumeSpecName "kube-api-access-c8s25". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:43:54 crc kubenswrapper[4958]: I1206 05:43:54.669279 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2997j" Dec 06 05:43:54 crc kubenswrapper[4958]: I1206 05:43:54.683748 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17574228-062b-4c78-a16f-3e8a616e9a37-kube-api-access-qpjcz" (OuterVolumeSpecName: "kube-api-access-qpjcz") pod "17574228-062b-4c78-a16f-3e8a616e9a37" (UID: "17574228-062b-4c78-a16f-3e8a616e9a37"). InnerVolumeSpecName "kube-api-access-qpjcz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:43:54 crc kubenswrapper[4958]: I1206 05:43:54.690615 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/282841fb-36ec-47cf-a371-eeef4e081b4c-kube-api-access-qdrls" (OuterVolumeSpecName: "kube-api-access-qdrls") pod "282841fb-36ec-47cf-a371-eeef4e081b4c" (UID: "282841fb-36ec-47cf-a371-eeef4e081b4c"). InnerVolumeSpecName "kube-api-access-qdrls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:43:54 crc kubenswrapper[4958]: I1206 05:43:54.730174 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c35562f3-9b8d-407a-9e1f-d74a17561858-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c35562f3-9b8d-407a-9e1f-d74a17561858" (UID: "c35562f3-9b8d-407a-9e1f-d74a17561858"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 05:43:54 crc kubenswrapper[4958]: I1206 05:43:54.745352 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/282841fb-36ec-47cf-a371-eeef4e081b4c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "282841fb-36ec-47cf-a371-eeef4e081b4c" (UID: "282841fb-36ec-47cf-a371-eeef4e081b4c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 05:43:54 crc kubenswrapper[4958]: I1206 05:43:54.764183 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/282841fb-36ec-47cf-a371-eeef4e081b4c-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 05:43:54 crc kubenswrapper[4958]: I1206 05:43:54.764214 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/282841fb-36ec-47cf-a371-eeef4e081b4c-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 05:43:54 crc kubenswrapper[4958]: I1206 05:43:54.764227 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c35562f3-9b8d-407a-9e1f-d74a17561858-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 05:43:54 crc kubenswrapper[4958]: I1206 05:43:54.764236 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c8s25\" (UniqueName: \"kubernetes.io/projected/c35562f3-9b8d-407a-9e1f-d74a17561858-kube-api-access-c8s25\") on node \"crc\" DevicePath \"\"" Dec 06 05:43:54 crc kubenswrapper[4958]: I1206 05:43:54.764247 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qpjcz\" (UniqueName: \"kubernetes.io/projected/17574228-062b-4c78-a16f-3e8a616e9a37-kube-api-access-qpjcz\") on node \"crc\" DevicePath \"\"" Dec 06 05:43:54 crc kubenswrapper[4958]: I1206 05:43:54.764256 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/17574228-062b-4c78-a16f-3e8a616e9a37-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 05:43:54 crc kubenswrapper[4958]: I1206 05:43:54.764266 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qdrls\" (UniqueName: \"kubernetes.io/projected/282841fb-36ec-47cf-a371-eeef4e081b4c-kube-api-access-qdrls\") on node \"crc\" DevicePath \"\"" Dec 06 05:43:54 crc kubenswrapper[4958]: I1206 05:43:54.764275 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c35562f3-9b8d-407a-9e1f-d74a17561858-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 05:43:54 crc kubenswrapper[4958]: I1206 05:43:54.794669 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/17574228-062b-4c78-a16f-3e8a616e9a37-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "17574228-062b-4c78-a16f-3e8a616e9a37" (UID: "17574228-062b-4c78-a16f-3e8a616e9a37"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 05:43:54 crc kubenswrapper[4958]: I1206 05:43:54.865563 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kw4qv\" (UniqueName: \"kubernetes.io/projected/26b187af-f0da-435e-9b8e-89086658d5b1-kube-api-access-kw4qv\") pod \"26b187af-f0da-435e-9b8e-89086658d5b1\" (UID: \"26b187af-f0da-435e-9b8e-89086658d5b1\") " Dec 06 05:43:54 crc kubenswrapper[4958]: I1206 05:43:54.865724 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26b187af-f0da-435e-9b8e-89086658d5b1-utilities\") pod \"26b187af-f0da-435e-9b8e-89086658d5b1\" (UID: \"26b187af-f0da-435e-9b8e-89086658d5b1\") " Dec 06 05:43:54 crc kubenswrapper[4958]: I1206 05:43:54.865761 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26b187af-f0da-435e-9b8e-89086658d5b1-catalog-content\") pod \"26b187af-f0da-435e-9b8e-89086658d5b1\" (UID: \"26b187af-f0da-435e-9b8e-89086658d5b1\") " Dec 06 05:43:54 crc kubenswrapper[4958]: I1206 05:43:54.865927 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/17574228-062b-4c78-a16f-3e8a616e9a37-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 05:43:54 crc kubenswrapper[4958]: I1206 05:43:54.866397 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26b187af-f0da-435e-9b8e-89086658d5b1-utilities" (OuterVolumeSpecName: "utilities") pod "26b187af-f0da-435e-9b8e-89086658d5b1" (UID: "26b187af-f0da-435e-9b8e-89086658d5b1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 05:43:54 crc kubenswrapper[4958]: I1206 05:43:54.868533 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26b187af-f0da-435e-9b8e-89086658d5b1-kube-api-access-kw4qv" (OuterVolumeSpecName: "kube-api-access-kw4qv") pod "26b187af-f0da-435e-9b8e-89086658d5b1" (UID: "26b187af-f0da-435e-9b8e-89086658d5b1"). InnerVolumeSpecName "kube-api-access-kw4qv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:43:54 crc kubenswrapper[4958]: I1206 05:43:54.886182 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26b187af-f0da-435e-9b8e-89086658d5b1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "26b187af-f0da-435e-9b8e-89086658d5b1" (UID: "26b187af-f0da-435e-9b8e-89086658d5b1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 05:43:54 crc kubenswrapper[4958]: I1206 05:43:54.966693 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26b187af-f0da-435e-9b8e-89086658d5b1-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 05:43:54 crc kubenswrapper[4958]: I1206 05:43:54.966750 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26b187af-f0da-435e-9b8e-89086658d5b1-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 05:43:54 crc kubenswrapper[4958]: I1206 05:43:54.966761 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kw4qv\" (UniqueName: \"kubernetes.io/projected/26b187af-f0da-435e-9b8e-89086658d5b1-kube-api-access-kw4qv\") on node \"crc\" DevicePath \"\"" Dec 06 05:43:54 crc kubenswrapper[4958]: I1206 05:43:54.996641 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-stm6x"] Dec 06 05:43:54 crc kubenswrapper[4958]: E1206 05:43:54.996924 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26b187af-f0da-435e-9b8e-89086658d5b1" containerName="extract-content" Dec 06 05:43:54 crc kubenswrapper[4958]: I1206 05:43:54.996946 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="26b187af-f0da-435e-9b8e-89086658d5b1" containerName="extract-content" Dec 06 05:43:54 crc kubenswrapper[4958]: E1206 05:43:54.996960 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17574228-062b-4c78-a16f-3e8a616e9a37" containerName="extract-utilities" Dec 06 05:43:54 crc kubenswrapper[4958]: I1206 05:43:54.996968 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="17574228-062b-4c78-a16f-3e8a616e9a37" containerName="extract-utilities" Dec 06 05:43:54 crc kubenswrapper[4958]: E1206 05:43:54.996978 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="282841fb-36ec-47cf-a371-eeef4e081b4c" containerName="registry-server" Dec 06 05:43:54 crc kubenswrapper[4958]: I1206 05:43:54.996986 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="282841fb-36ec-47cf-a371-eeef4e081b4c" containerName="registry-server" Dec 06 05:43:54 crc kubenswrapper[4958]: E1206 05:43:54.996996 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="176e1833-ad7f-40a8-8179-846a546a6fad" containerName="marketplace-operator" Dec 06 05:43:54 crc kubenswrapper[4958]: I1206 05:43:54.997003 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="176e1833-ad7f-40a8-8179-846a546a6fad" containerName="marketplace-operator" Dec 06 05:43:54 crc kubenswrapper[4958]: E1206 05:43:54.997012 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17574228-062b-4c78-a16f-3e8a616e9a37" containerName="registry-server" Dec 06 05:43:54 crc kubenswrapper[4958]: I1206 05:43:54.997020 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="17574228-062b-4c78-a16f-3e8a616e9a37" containerName="registry-server" Dec 06 05:43:54 crc kubenswrapper[4958]: E1206 05:43:54.997032 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26b187af-f0da-435e-9b8e-89086658d5b1" containerName="registry-server" Dec 06 05:43:54 crc kubenswrapper[4958]: I1206 05:43:54.997039 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="26b187af-f0da-435e-9b8e-89086658d5b1" containerName="registry-server" Dec 06 05:43:54 crc kubenswrapper[4958]: E1206 05:43:54.997051 4958 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="282841fb-36ec-47cf-a371-eeef4e081b4c" containerName="extract-utilities" Dec 06 05:43:54 crc kubenswrapper[4958]: I1206 05:43:54.997059 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="282841fb-36ec-47cf-a371-eeef4e081b4c" containerName="extract-utilities" Dec 06 05:43:54 crc kubenswrapper[4958]: E1206 05:43:54.997075 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c35562f3-9b8d-407a-9e1f-d74a17561858" containerName="extract-content" Dec 06 05:43:54 crc kubenswrapper[4958]: I1206 05:43:54.997084 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="c35562f3-9b8d-407a-9e1f-d74a17561858" containerName="extract-content" Dec 06 05:43:54 crc kubenswrapper[4958]: E1206 05:43:54.997094 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c35562f3-9b8d-407a-9e1f-d74a17561858" containerName="extract-utilities" Dec 06 05:43:54 crc kubenswrapper[4958]: I1206 05:43:54.997101 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="c35562f3-9b8d-407a-9e1f-d74a17561858" containerName="extract-utilities" Dec 06 05:43:54 crc kubenswrapper[4958]: E1206 05:43:54.997111 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="282841fb-36ec-47cf-a371-eeef4e081b4c" containerName="extract-content" Dec 06 05:43:54 crc kubenswrapper[4958]: I1206 05:43:54.997118 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="282841fb-36ec-47cf-a371-eeef4e081b4c" containerName="extract-content" Dec 06 05:43:54 crc kubenswrapper[4958]: E1206 05:43:54.997128 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17574228-062b-4c78-a16f-3e8a616e9a37" containerName="extract-content" Dec 06 05:43:54 crc kubenswrapper[4958]: I1206 05:43:54.997136 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="17574228-062b-4c78-a16f-3e8a616e9a37" containerName="extract-content" Dec 06 05:43:54 crc kubenswrapper[4958]: E1206 05:43:54.997147 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26b187af-f0da-435e-9b8e-89086658d5b1" containerName="extract-utilities" Dec 06 05:43:54 crc kubenswrapper[4958]: I1206 05:43:54.997155 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="26b187af-f0da-435e-9b8e-89086658d5b1" containerName="extract-utilities" Dec 06 05:43:54 crc kubenswrapper[4958]: E1206 05:43:54.997167 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c35562f3-9b8d-407a-9e1f-d74a17561858" containerName="registry-server" Dec 06 05:43:54 crc kubenswrapper[4958]: I1206 05:43:54.997175 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="c35562f3-9b8d-407a-9e1f-d74a17561858" containerName="registry-server" Dec 06 05:43:54 crc kubenswrapper[4958]: I1206 05:43:54.997284 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="17574228-062b-4c78-a16f-3e8a616e9a37" containerName="registry-server" Dec 06 05:43:54 crc kubenswrapper[4958]: I1206 05:43:54.997301 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="c35562f3-9b8d-407a-9e1f-d74a17561858" containerName="registry-server" Dec 06 05:43:54 crc kubenswrapper[4958]: I1206 05:43:54.997315 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="26b187af-f0da-435e-9b8e-89086658d5b1" containerName="registry-server" Dec 06 05:43:54 crc kubenswrapper[4958]: I1206 05:43:54.997326 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="282841fb-36ec-47cf-a371-eeef4e081b4c" containerName="registry-server" Dec 06 05:43:54 crc kubenswrapper[4958]: I1206 05:43:54.997337 4958 
memory_manager.go:354] "RemoveStaleState removing state" podUID="176e1833-ad7f-40a8-8179-846a546a6fad" containerName="marketplace-operator" Dec 06 05:43:54 crc kubenswrapper[4958]: I1206 05:43:54.998289 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-stm6x" Dec 06 05:43:55 crc kubenswrapper[4958]: I1206 05:43:55.009145 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-stm6x"] Dec 06 05:43:55 crc kubenswrapper[4958]: I1206 05:43:55.105612 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6rr8p" Dec 06 05:43:55 crc kubenswrapper[4958]: I1206 05:43:55.105624 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6rr8p" event={"ID":"c35562f3-9b8d-407a-9e1f-d74a17561858","Type":"ContainerDied","Data":"b3c4527cab9fdeda3594cfa55014678bc4068c5f932d6f66ef310ffcdfc83d88"} Dec 06 05:43:55 crc kubenswrapper[4958]: I1206 05:43:55.105676 4958 scope.go:117] "RemoveContainer" containerID="7fdffaea75e159fc72cf54d4df5d2f195817b6d725dede699b8c3e70b5f08919" Dec 06 05:43:55 crc kubenswrapper[4958]: I1206 05:43:55.107712 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-q2qdm" event={"ID":"17574228-062b-4c78-a16f-3e8a616e9a37","Type":"ContainerDied","Data":"d81ce8dfe5b72047210cb2929833b463387781bdf557d9d3a6e932bca610cd00"} Dec 06 05:43:55 crc kubenswrapper[4958]: I1206 05:43:55.107850 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-q2qdm" Dec 06 05:43:55 crc kubenswrapper[4958]: I1206 05:43:55.111800 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cmpmq" event={"ID":"282841fb-36ec-47cf-a371-eeef4e081b4c","Type":"ContainerDied","Data":"d5a0ff0db92b8ae444fedaaf14fdda09d533243638fe30d341995ed124e123da"} Dec 06 05:43:55 crc kubenswrapper[4958]: I1206 05:43:55.111833 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cmpmq" Dec 06 05:43:55 crc kubenswrapper[4958]: I1206 05:43:55.119854 4958 scope.go:117] "RemoveContainer" containerID="2d5fbb13c46e3edb8cbf4606d74a8d1d9c37962d583145b6f71c2048fa64d6c1" Dec 06 05:43:55 crc kubenswrapper[4958]: I1206 05:43:55.120953 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hjnl5" event={"ID":"fa140104-7b6d-4d2b-a5b8-fba1696d1a94","Type":"ContainerStarted","Data":"2efa0f535dbe7a4763ee054d9f334932e0ebf02868b7ff098b669735335b031e"} Dec 06 05:43:55 crc kubenswrapper[4958]: I1206 05:43:55.120993 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hjnl5" event={"ID":"fa140104-7b6d-4d2b-a5b8-fba1696d1a94","Type":"ContainerStarted","Data":"d3417a099c1a705c95d108d341b62d0248f7687a720ce9dbc27771464277162e"} Dec 06 05:43:55 crc kubenswrapper[4958]: I1206 05:43:55.121203 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-hjnl5" Dec 06 05:43:55 crc kubenswrapper[4958]: I1206 05:43:55.122820 4958 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-hjnl5 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.47:8080/healthz\": dial tcp 10.217.0.47:8080: connect: connection refused" start-of-body= Dec 06 05:43:55 crc kubenswrapper[4958]: I1206 05:43:55.122857 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-hjnl5" podUID="fa140104-7b6d-4d2b-a5b8-fba1696d1a94" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.47:8080/healthz\": dial tcp 10.217.0.47:8080: connect: connection refused" Dec 06 05:43:55 crc kubenswrapper[4958]: I1206 05:43:55.123141 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-4l8c2" Dec 06 05:43:55 crc kubenswrapper[4958]: I1206 05:43:55.123191 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-4l8c2" event={"ID":"176e1833-ad7f-40a8-8179-846a546a6fad","Type":"ContainerDied","Data":"7cc16b7ef75df1a3391f1943b238ba1965ea499f512d877d35ec8ffa0a668896"} Dec 06 05:43:55 crc kubenswrapper[4958]: I1206 05:43:55.128678 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2997j" event={"ID":"26b187af-f0da-435e-9b8e-89086658d5b1","Type":"ContainerDied","Data":"07b851065dc99b10ece293123450ba67860fd7bcf4715d931676b19f42e81bbb"} Dec 06 05:43:55 crc kubenswrapper[4958]: I1206 05:43:55.128695 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2997j" Dec 06 05:43:55 crc kubenswrapper[4958]: I1206 05:43:55.130684 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-24wcs" podUID="6d81967c-07a4-4a5c-b22e-b3dab71214dc" containerName="registry-server" containerID="cri-o://d29eef28f992a674c15cc758fe73e6b376c095e6c08d607a4c6a9be315881f2e" gracePeriod=30 Dec 06 05:43:55 crc kubenswrapper[4958]: I1206 05:43:55.130839 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-kqwg7" podUID="a9a832eb-1b05-41af-b440-ab9488c618d3" containerName="registry-server" containerID="cri-o://241f0d07e6f0e333ecefb574cd7f697d31a09a647c052a7151e936c4826560f4" gracePeriod=30 Dec 06 05:43:55 crc kubenswrapper[4958]: I1206 05:43:55.130890 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kqwg7" event={"ID":"a9a832eb-1b05-41af-b440-ab9488c618d3","Type":"ContainerStarted","Data":"241f0d07e6f0e333ecefb574cd7f697d31a09a647c052a7151e936c4826560f4"} Dec 06 05:43:55 crc kubenswrapper[4958]: I1206 05:43:55.138532 4958 scope.go:117] "RemoveContainer" containerID="47add315a20787fd20003f5b9636a14abbe4d343a2924183c1480a39f5b46679" Dec 06 05:43:55 crc kubenswrapper[4958]: I1206 05:43:55.159354 4958 scope.go:117] "RemoveContainer" containerID="6a4faef06687916309c8f7c808e96a5114f9e19373ff3cefb914dd1085ddfd3f" Dec 06 05:43:55 crc kubenswrapper[4958]: I1206 05:43:55.169153 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/281467fb-46c7-4fb5-89b1-93f25927d462-utilities\") pod \"certified-operators-stm6x\" (UID: \"281467fb-46c7-4fb5-89b1-93f25927d462\") " pod="openshift-marketplace/certified-operators-stm6x" Dec 06 05:43:55 crc kubenswrapper[4958]: I1206 05:43:55.169250 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhhnb\" (UniqueName: \"kubernetes.io/projected/281467fb-46c7-4fb5-89b1-93f25927d462-kube-api-access-bhhnb\") pod \"certified-operators-stm6x\" (UID: \"281467fb-46c7-4fb5-89b1-93f25927d462\") " pod="openshift-marketplace/certified-operators-stm6x" Dec 06 05:43:55 crc kubenswrapper[4958]: I1206 05:43:55.169278 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/281467fb-46c7-4fb5-89b1-93f25927d462-catalog-content\") pod \"certified-operators-stm6x\" (UID: \"281467fb-46c7-4fb5-89b1-93f25927d462\") " pod="openshift-marketplace/certified-operators-stm6x" Dec 06 05:43:55 crc kubenswrapper[4958]: I1206 05:43:55.185726 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-hjnl5" podStartSLOduration=2.185708285 podStartE2EDuration="2.185708285s" podCreationTimestamp="2025-12-06 05:43:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:43:55.148590406 +0000 UTC m=+945.682361169" watchObservedRunningTime="2025-12-06 05:43:55.185708285 +0000 UTC m=+945.719479058" Dec 06 05:43:55 crc kubenswrapper[4958]: I1206 05:43:55.185955 4958 scope.go:117] "RemoveContainer" containerID="6e8b9494c41e3cac3009e3cc0a8771460153d6bdb4ab5dbe64fbbc6ca2d88f3b" Dec 06 05:43:55 crc 
kubenswrapper[4958]: I1206 05:43:55.185985 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-kqwg7" podStartSLOduration=4.251193376 podStartE2EDuration="9.185980313s" podCreationTimestamp="2025-12-06 05:43:46 +0000 UTC" firstStartedPulling="2025-12-06 05:43:49.025130298 +0000 UTC m=+939.558901061" lastFinishedPulling="2025-12-06 05:43:53.959917235 +0000 UTC m=+944.493687998" observedRunningTime="2025-12-06 05:43:55.179547933 +0000 UTC m=+945.713318706" watchObservedRunningTime="2025-12-06 05:43:55.185980313 +0000 UTC m=+945.719751076" Dec 06 05:43:55 crc kubenswrapper[4958]: I1206 05:43:55.204243 4958 scope.go:117] "RemoveContainer" containerID="4c5e4699e10587143850e321eb64306e828bac64a3b23ee27ee21f2bf6a4737c" Dec 06 05:43:55 crc kubenswrapper[4958]: I1206 05:43:55.210810 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-q2qdm"] Dec 06 05:43:55 crc kubenswrapper[4958]: I1206 05:43:55.226161 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-q2qdm"] Dec 06 05:43:55 crc kubenswrapper[4958]: I1206 05:43:55.231314 4958 scope.go:117] "RemoveContainer" containerID="2aac1c9125d944ed8a33154ede921d66e68daf809e990e27176f483eb75cae20" Dec 06 05:43:55 crc kubenswrapper[4958]: I1206 05:43:55.240187 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2997j"] Dec 06 05:43:55 crc kubenswrapper[4958]: I1206 05:43:55.243556 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-2997j"] Dec 06 05:43:55 crc kubenswrapper[4958]: I1206 05:43:55.247582 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-4l8c2"] Dec 06 05:43:55 crc kubenswrapper[4958]: I1206 05:43:55.251192 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-4l8c2"] Dec 06 05:43:55 crc kubenswrapper[4958]: I1206 05:43:55.255080 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cmpmq"] Dec 06 05:43:55 crc kubenswrapper[4958]: I1206 05:43:55.259023 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-cmpmq"] Dec 06 05:43:55 crc kubenswrapper[4958]: I1206 05:43:55.262413 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6rr8p"] Dec 06 05:43:55 crc kubenswrapper[4958]: I1206 05:43:55.265602 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6rr8p"] Dec 06 05:43:55 crc kubenswrapper[4958]: I1206 05:43:55.270496 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bhhnb\" (UniqueName: \"kubernetes.io/projected/281467fb-46c7-4fb5-89b1-93f25927d462-kube-api-access-bhhnb\") pod \"certified-operators-stm6x\" (UID: \"281467fb-46c7-4fb5-89b1-93f25927d462\") " pod="openshift-marketplace/certified-operators-stm6x" Dec 06 05:43:55 crc kubenswrapper[4958]: I1206 05:43:55.270542 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/281467fb-46c7-4fb5-89b1-93f25927d462-catalog-content\") pod \"certified-operators-stm6x\" (UID: \"281467fb-46c7-4fb5-89b1-93f25927d462\") " pod="openshift-marketplace/certified-operators-stm6x" Dec 06 05:43:55 crc 
kubenswrapper[4958]: I1206 05:43:55.270631 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/281467fb-46c7-4fb5-89b1-93f25927d462-utilities\") pod \"certified-operators-stm6x\" (UID: \"281467fb-46c7-4fb5-89b1-93f25927d462\") " pod="openshift-marketplace/certified-operators-stm6x" Dec 06 05:43:55 crc kubenswrapper[4958]: I1206 05:43:55.271189 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/281467fb-46c7-4fb5-89b1-93f25927d462-utilities\") pod \"certified-operators-stm6x\" (UID: \"281467fb-46c7-4fb5-89b1-93f25927d462\") " pod="openshift-marketplace/certified-operators-stm6x" Dec 06 05:43:55 crc kubenswrapper[4958]: I1206 05:43:55.271330 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/281467fb-46c7-4fb5-89b1-93f25927d462-catalog-content\") pod \"certified-operators-stm6x\" (UID: \"281467fb-46c7-4fb5-89b1-93f25927d462\") " pod="openshift-marketplace/certified-operators-stm6x" Dec 06 05:43:55 crc kubenswrapper[4958]: I1206 05:43:55.289273 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhhnb\" (UniqueName: \"kubernetes.io/projected/281467fb-46c7-4fb5-89b1-93f25927d462-kube-api-access-bhhnb\") pod \"certified-operators-stm6x\" (UID: \"281467fb-46c7-4fb5-89b1-93f25927d462\") " pod="openshift-marketplace/certified-operators-stm6x" Dec 06 05:43:55 crc kubenswrapper[4958]: I1206 05:43:55.314618 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-stm6x" Dec 06 05:43:55 crc kubenswrapper[4958]: I1206 05:43:55.394686 4958 scope.go:117] "RemoveContainer" containerID="932d625d90d81751a9e0ce72126db56df715366c2296dd8586ea3344b1545aaa" Dec 06 05:43:55 crc kubenswrapper[4958]: I1206 05:43:55.423592 4958 scope.go:117] "RemoveContainer" containerID="596dbb004cbf8402ed7fa9c7e9d1da4579e35fbcb93237b981537fb7d7059d4e" Dec 06 05:43:55 crc kubenswrapper[4958]: I1206 05:43:55.459828 4958 scope.go:117] "RemoveContainer" containerID="2fabc7d166650c6c84cee6b9a8a9314b178b87c1d83b580a9b9ed91af392d938" Dec 06 05:43:55 crc kubenswrapper[4958]: I1206 05:43:55.481375 4958 scope.go:117] "RemoveContainer" containerID="059eae096fa1aab6a11b11130f3e4e6724810e4464a6f962abfd26d98b45b36b" Dec 06 05:43:55 crc kubenswrapper[4958]: I1206 05:43:55.503569 4958 scope.go:117] "RemoveContainer" containerID="c06100eae0775e2d00b9fa58df1be9fcb66bb99d8c57552e510ce18864ccac8c" Dec 06 05:43:55 crc kubenswrapper[4958]: I1206 05:43:55.524055 4958 scope.go:117] "RemoveContainer" containerID="5c1a51d1a4745d924329cbe9adf744a91196a57d3755fc44e6d0d4bf5b03ee69" Dec 06 05:43:55 crc kubenswrapper[4958]: I1206 05:43:55.525891 4958 util.go:48] "No ready sandbox for pod can be found. 
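The pod_startup_latency_tracker entries above carry their own arithmetic: podStartE2EDuration is simply observedRunningTime minus podCreationTimestamp (2.185708285s for marketplace-operator-79b997595-hjnl5, from 05:43:53 to 05:43:55.185708285). A small Go sketch reproducing that subtraction from the timestamp strings as they appear in the log; the " m=+..." monotonic-clock suffix is stripped before parsing:

```go
// Recompute podStartE2EDuration = observedRunningTime - podCreationTimestamp
// from the timestamp format shown in the entries above.
package main

import (
	"fmt"
	"strings"
	"time"
)

// Layout matching "2025-12-06 05:43:55.185708285 +0000 UTC".
const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func parse(ts string) time.Time {
	if i := strings.Index(ts, " m=+"); i >= 0 {
		ts = ts[:i] // drop the monotonic-clock suffix
	}
	t, err := time.Parse(layout, ts)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := parse("2025-12-06 05:43:53 +0000 UTC")
	running := parse("2025-12-06 05:43:55.185708285 +0000 UTC m=+945.719479058")
	fmt.Println(running.Sub(created)) // 2.185708285s, matching podStartE2EDuration
}
```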
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-24wcs" Dec 06 05:43:55 crc kubenswrapper[4958]: I1206 05:43:55.573151 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d81967c-07a4-4a5c-b22e-b3dab71214dc-utilities\") pod \"6d81967c-07a4-4a5c-b22e-b3dab71214dc\" (UID: \"6d81967c-07a4-4a5c-b22e-b3dab71214dc\") " Dec 06 05:43:55 crc kubenswrapper[4958]: I1206 05:43:55.573245 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d81967c-07a4-4a5c-b22e-b3dab71214dc-catalog-content\") pod \"6d81967c-07a4-4a5c-b22e-b3dab71214dc\" (UID: \"6d81967c-07a4-4a5c-b22e-b3dab71214dc\") " Dec 06 05:43:55 crc kubenswrapper[4958]: I1206 05:43:55.573294 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8v7hj\" (UniqueName: \"kubernetes.io/projected/6d81967c-07a4-4a5c-b22e-b3dab71214dc-kube-api-access-8v7hj\") pod \"6d81967c-07a4-4a5c-b22e-b3dab71214dc\" (UID: \"6d81967c-07a4-4a5c-b22e-b3dab71214dc\") " Dec 06 05:43:55 crc kubenswrapper[4958]: I1206 05:43:55.574218 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d81967c-07a4-4a5c-b22e-b3dab71214dc-utilities" (OuterVolumeSpecName: "utilities") pod "6d81967c-07a4-4a5c-b22e-b3dab71214dc" (UID: "6d81967c-07a4-4a5c-b22e-b3dab71214dc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 05:43:55 crc kubenswrapper[4958]: I1206 05:43:55.576800 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-kqwg7_a9a832eb-1b05-41af-b440-ab9488c618d3/registry-server/0.log" Dec 06 05:43:55 crc kubenswrapper[4958]: I1206 05:43:55.577451 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kqwg7" Dec 06 05:43:55 crc kubenswrapper[4958]: I1206 05:43:55.577831 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d81967c-07a4-4a5c-b22e-b3dab71214dc-kube-api-access-8v7hj" (OuterVolumeSpecName: "kube-api-access-8v7hj") pod "6d81967c-07a4-4a5c-b22e-b3dab71214dc" (UID: "6d81967c-07a4-4a5c-b22e-b3dab71214dc"). InnerVolumeSpecName "kube-api-access-8v7hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:43:55 crc kubenswrapper[4958]: I1206 05:43:55.577963 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-stm6x"] Dec 06 05:43:55 crc kubenswrapper[4958]: I1206 05:43:55.603124 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d81967c-07a4-4a5c-b22e-b3dab71214dc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6d81967c-07a4-4a5c-b22e-b3dab71214dc" (UID: "6d81967c-07a4-4a5c-b22e-b3dab71214dc"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 05:43:55 crc kubenswrapper[4958]: I1206 05:43:55.674708 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d81967c-07a4-4a5c-b22e-b3dab71214dc-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 05:43:55 crc kubenswrapper[4958]: I1206 05:43:55.674778 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d81967c-07a4-4a5c-b22e-b3dab71214dc-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 05:43:55 crc kubenswrapper[4958]: I1206 05:43:55.674807 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8v7hj\" (UniqueName: \"kubernetes.io/projected/6d81967c-07a4-4a5c-b22e-b3dab71214dc-kube-api-access-8v7hj\") on node \"crc\" DevicePath \"\"" Dec 06 05:43:55 crc kubenswrapper[4958]: I1206 05:43:55.769715 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17574228-062b-4c78-a16f-3e8a616e9a37" path="/var/lib/kubelet/pods/17574228-062b-4c78-a16f-3e8a616e9a37/volumes" Dec 06 05:43:55 crc kubenswrapper[4958]: I1206 05:43:55.770327 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="176e1833-ad7f-40a8-8179-846a546a6fad" path="/var/lib/kubelet/pods/176e1833-ad7f-40a8-8179-846a546a6fad/volumes" Dec 06 05:43:55 crc kubenswrapper[4958]: I1206 05:43:55.770797 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26b187af-f0da-435e-9b8e-89086658d5b1" path="/var/lib/kubelet/pods/26b187af-f0da-435e-9b8e-89086658d5b1/volumes" Dec 06 05:43:55 crc kubenswrapper[4958]: I1206 05:43:55.771773 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="282841fb-36ec-47cf-a371-eeef4e081b4c" path="/var/lib/kubelet/pods/282841fb-36ec-47cf-a371-eeef4e081b4c/volumes" Dec 06 05:43:55 crc kubenswrapper[4958]: I1206 05:43:55.772346 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c35562f3-9b8d-407a-9e1f-d74a17561858" path="/var/lib/kubelet/pods/c35562f3-9b8d-407a-9e1f-d74a17561858/volumes" Dec 06 05:43:55 crc kubenswrapper[4958]: I1206 05:43:55.775524 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qk6q\" (UniqueName: \"kubernetes.io/projected/a9a832eb-1b05-41af-b440-ab9488c618d3-kube-api-access-4qk6q\") pod \"a9a832eb-1b05-41af-b440-ab9488c618d3\" (UID: \"a9a832eb-1b05-41af-b440-ab9488c618d3\") " Dec 06 05:43:55 crc kubenswrapper[4958]: I1206 05:43:55.775591 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9a832eb-1b05-41af-b440-ab9488c618d3-catalog-content\") pod \"a9a832eb-1b05-41af-b440-ab9488c618d3\" (UID: \"a9a832eb-1b05-41af-b440-ab9488c618d3\") " Dec 06 05:43:55 crc kubenswrapper[4958]: I1206 05:43:55.775636 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9a832eb-1b05-41af-b440-ab9488c618d3-utilities\") pod \"a9a832eb-1b05-41af-b440-ab9488c618d3\" (UID: \"a9a832eb-1b05-41af-b440-ab9488c618d3\") " Dec 06 05:43:55 crc kubenswrapper[4958]: I1206 05:43:55.776441 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a9a832eb-1b05-41af-b440-ab9488c618d3-utilities" (OuterVolumeSpecName: "utilities") pod "a9a832eb-1b05-41af-b440-ab9488c618d3" (UID: "a9a832eb-1b05-41af-b440-ab9488c618d3"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 05:43:55 crc kubenswrapper[4958]: I1206 05:43:55.778120 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9a832eb-1b05-41af-b440-ab9488c618d3-kube-api-access-4qk6q" (OuterVolumeSpecName: "kube-api-access-4qk6q") pod "a9a832eb-1b05-41af-b440-ab9488c618d3" (UID: "a9a832eb-1b05-41af-b440-ab9488c618d3"). InnerVolumeSpecName "kube-api-access-4qk6q". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:43:55 crc kubenswrapper[4958]: I1206 05:43:55.828420 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a9a832eb-1b05-41af-b440-ab9488c618d3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a9a832eb-1b05-41af-b440-ab9488c618d3" (UID: "a9a832eb-1b05-41af-b440-ab9488c618d3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 05:43:55 crc kubenswrapper[4958]: I1206 05:43:55.876799 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9a832eb-1b05-41af-b440-ab9488c618d3-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 05:43:55 crc kubenswrapper[4958]: I1206 05:43:55.876830 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9a832eb-1b05-41af-b440-ab9488c618d3-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 05:43:55 crc kubenswrapper[4958]: I1206 05:43:55.876840 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4qk6q\" (UniqueName: \"kubernetes.io/projected/a9a832eb-1b05-41af-b440-ab9488c618d3-kube-api-access-4qk6q\") on node \"crc\" DevicePath \"\"" Dec 06 05:43:56 crc kubenswrapper[4958]: I1206 05:43:56.137672 4958 generic.go:334] "Generic (PLEG): container finished" podID="6d81967c-07a4-4a5c-b22e-b3dab71214dc" containerID="d29eef28f992a674c15cc758fe73e6b376c095e6c08d607a4c6a9be315881f2e" exitCode=0 Dec 06 05:43:56 crc kubenswrapper[4958]: I1206 05:43:56.137770 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-24wcs" event={"ID":"6d81967c-07a4-4a5c-b22e-b3dab71214dc","Type":"ContainerDied","Data":"d29eef28f992a674c15cc758fe73e6b376c095e6c08d607a4c6a9be315881f2e"} Dec 06 05:43:56 crc kubenswrapper[4958]: I1206 05:43:56.137827 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-24wcs" event={"ID":"6d81967c-07a4-4a5c-b22e-b3dab71214dc","Type":"ContainerDied","Data":"ce3626785623c6fd41b0e05d21502412c81f41c83e5d2c9f60daf9967cb18b7e"} Dec 06 05:43:56 crc kubenswrapper[4958]: I1206 05:43:56.137848 4958 scope.go:117] "RemoveContainer" containerID="d29eef28f992a674c15cc758fe73e6b376c095e6c08d607a4c6a9be315881f2e" Dec 06 05:43:56 crc kubenswrapper[4958]: I1206 05:43:56.138040 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-24wcs" Dec 06 05:43:56 crc kubenswrapper[4958]: I1206 05:43:56.141494 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-kqwg7_a9a832eb-1b05-41af-b440-ab9488c618d3/registry-server/0.log" Dec 06 05:43:56 crc kubenswrapper[4958]: I1206 05:43:56.142372 4958 generic.go:334] "Generic (PLEG): container finished" podID="a9a832eb-1b05-41af-b440-ab9488c618d3" containerID="241f0d07e6f0e333ecefb574cd7f697d31a09a647c052a7151e936c4826560f4" exitCode=1 Dec 06 05:43:56 crc kubenswrapper[4958]: I1206 05:43:56.142593 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kqwg7" Dec 06 05:43:56 crc kubenswrapper[4958]: I1206 05:43:56.143799 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kqwg7" event={"ID":"a9a832eb-1b05-41af-b440-ab9488c618d3","Type":"ContainerDied","Data":"241f0d07e6f0e333ecefb574cd7f697d31a09a647c052a7151e936c4826560f4"} Dec 06 05:43:56 crc kubenswrapper[4958]: I1206 05:43:56.143919 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kqwg7" event={"ID":"a9a832eb-1b05-41af-b440-ab9488c618d3","Type":"ContainerDied","Data":"023b0dab9a71bc1f1ef6b392e773e0905fd52d5c5c5285a1ea909a4da4e2ba9b"} Dec 06 05:43:56 crc kubenswrapper[4958]: I1206 05:43:56.149869 4958 generic.go:334] "Generic (PLEG): container finished" podID="281467fb-46c7-4fb5-89b1-93f25927d462" containerID="a9118ff9f8f605f2abe6550a2021ffd49b0e93012cbdcf8f7d7529d862e219b3" exitCode=0 Dec 06 05:43:56 crc kubenswrapper[4958]: I1206 05:43:56.150097 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-stm6x" event={"ID":"281467fb-46c7-4fb5-89b1-93f25927d462","Type":"ContainerDied","Data":"a9118ff9f8f605f2abe6550a2021ffd49b0e93012cbdcf8f7d7529d862e219b3"} Dec 06 05:43:56 crc kubenswrapper[4958]: I1206 05:43:56.150142 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-stm6x" event={"ID":"281467fb-46c7-4fb5-89b1-93f25927d462","Type":"ContainerStarted","Data":"4cca9f58ef923a81de1d021c21fea2e5ff599fa7500ede14382c836af8269ebe"} Dec 06 05:43:56 crc kubenswrapper[4958]: I1206 05:43:56.158718 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-hjnl5" Dec 06 05:43:56 crc kubenswrapper[4958]: I1206 05:43:56.208884 4958 scope.go:117] "RemoveContainer" containerID="f7b13d596305236c9956ba13a0d3dbb77beb984ffff388be588ef001f0bd432a" Dec 06 05:43:56 crc kubenswrapper[4958]: I1206 05:43:56.214342 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-24wcs"] Dec 06 05:43:56 crc kubenswrapper[4958]: I1206 05:43:56.219162 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-24wcs"] Dec 06 05:43:56 crc kubenswrapper[4958]: I1206 05:43:56.225237 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kqwg7"] Dec 06 05:43:56 crc kubenswrapper[4958]: I1206 05:43:56.228933 4958 scope.go:117] "RemoveContainer" containerID="3c32b0c9e9956b72b727893c84c2c67ca8bab4cd69cbbd3629e861da1d637a7d" Dec 06 05:43:56 crc kubenswrapper[4958]: I1206 05:43:56.237854 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-marketplace/community-operators-kqwg7"] Dec 06 05:43:56 crc kubenswrapper[4958]: I1206 05:43:56.246435 4958 scope.go:117] "RemoveContainer" containerID="d29eef28f992a674c15cc758fe73e6b376c095e6c08d607a4c6a9be315881f2e" Dec 06 05:43:56 crc kubenswrapper[4958]: E1206 05:43:56.246812 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d29eef28f992a674c15cc758fe73e6b376c095e6c08d607a4c6a9be315881f2e\": container with ID starting with d29eef28f992a674c15cc758fe73e6b376c095e6c08d607a4c6a9be315881f2e not found: ID does not exist" containerID="d29eef28f992a674c15cc758fe73e6b376c095e6c08d607a4c6a9be315881f2e" Dec 06 05:43:56 crc kubenswrapper[4958]: I1206 05:43:56.246843 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d29eef28f992a674c15cc758fe73e6b376c095e6c08d607a4c6a9be315881f2e"} err="failed to get container status \"d29eef28f992a674c15cc758fe73e6b376c095e6c08d607a4c6a9be315881f2e\": rpc error: code = NotFound desc = could not find container \"d29eef28f992a674c15cc758fe73e6b376c095e6c08d607a4c6a9be315881f2e\": container with ID starting with d29eef28f992a674c15cc758fe73e6b376c095e6c08d607a4c6a9be315881f2e not found: ID does not exist" Dec 06 05:43:56 crc kubenswrapper[4958]: I1206 05:43:56.246865 4958 scope.go:117] "RemoveContainer" containerID="f7b13d596305236c9956ba13a0d3dbb77beb984ffff388be588ef001f0bd432a" Dec 06 05:43:56 crc kubenswrapper[4958]: E1206 05:43:56.247166 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7b13d596305236c9956ba13a0d3dbb77beb984ffff388be588ef001f0bd432a\": container with ID starting with f7b13d596305236c9956ba13a0d3dbb77beb984ffff388be588ef001f0bd432a not found: ID does not exist" containerID="f7b13d596305236c9956ba13a0d3dbb77beb984ffff388be588ef001f0bd432a" Dec 06 05:43:56 crc kubenswrapper[4958]: I1206 05:43:56.247208 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7b13d596305236c9956ba13a0d3dbb77beb984ffff388be588ef001f0bd432a"} err="failed to get container status \"f7b13d596305236c9956ba13a0d3dbb77beb984ffff388be588ef001f0bd432a\": rpc error: code = NotFound desc = could not find container \"f7b13d596305236c9956ba13a0d3dbb77beb984ffff388be588ef001f0bd432a\": container with ID starting with f7b13d596305236c9956ba13a0d3dbb77beb984ffff388be588ef001f0bd432a not found: ID does not exist" Dec 06 05:43:56 crc kubenswrapper[4958]: I1206 05:43:56.247291 4958 scope.go:117] "RemoveContainer" containerID="3c32b0c9e9956b72b727893c84c2c67ca8bab4cd69cbbd3629e861da1d637a7d" Dec 06 05:43:56 crc kubenswrapper[4958]: E1206 05:43:56.247603 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c32b0c9e9956b72b727893c84c2c67ca8bab4cd69cbbd3629e861da1d637a7d\": container with ID starting with 3c32b0c9e9956b72b727893c84c2c67ca8bab4cd69cbbd3629e861da1d637a7d not found: ID does not exist" containerID="3c32b0c9e9956b72b727893c84c2c67ca8bab4cd69cbbd3629e861da1d637a7d" Dec 06 05:43:56 crc kubenswrapper[4958]: I1206 05:43:56.247632 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c32b0c9e9956b72b727893c84c2c67ca8bab4cd69cbbd3629e861da1d637a7d"} err="failed to get container status \"3c32b0c9e9956b72b727893c84c2c67ca8bab4cd69cbbd3629e861da1d637a7d\": rpc error: code = NotFound desc = could 
not find container \"3c32b0c9e9956b72b727893c84c2c67ca8bab4cd69cbbd3629e861da1d637a7d\": container with ID starting with 3c32b0c9e9956b72b727893c84c2c67ca8bab4cd69cbbd3629e861da1d637a7d not found: ID does not exist" Dec 06 05:43:56 crc kubenswrapper[4958]: I1206 05:43:56.247649 4958 scope.go:117] "RemoveContainer" containerID="241f0d07e6f0e333ecefb574cd7f697d31a09a647c052a7151e936c4826560f4" Dec 06 05:43:56 crc kubenswrapper[4958]: I1206 05:43:56.263069 4958 scope.go:117] "RemoveContainer" containerID="14c6891468bbdca3645d1ebf0aed24adef987fa7d2ae1cd0f01b3d7509355cf7" Dec 06 05:43:56 crc kubenswrapper[4958]: I1206 05:43:56.276898 4958 scope.go:117] "RemoveContainer" containerID="4136f6bc31707dcce48601067d1a0a105355978ed67bd44d56707064649c83e6" Dec 06 05:43:56 crc kubenswrapper[4958]: I1206 05:43:56.297357 4958 scope.go:117] "RemoveContainer" containerID="241f0d07e6f0e333ecefb574cd7f697d31a09a647c052a7151e936c4826560f4" Dec 06 05:43:56 crc kubenswrapper[4958]: E1206 05:43:56.297840 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"241f0d07e6f0e333ecefb574cd7f697d31a09a647c052a7151e936c4826560f4\": container with ID starting with 241f0d07e6f0e333ecefb574cd7f697d31a09a647c052a7151e936c4826560f4 not found: ID does not exist" containerID="241f0d07e6f0e333ecefb574cd7f697d31a09a647c052a7151e936c4826560f4" Dec 06 05:43:56 crc kubenswrapper[4958]: I1206 05:43:56.297879 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"241f0d07e6f0e333ecefb574cd7f697d31a09a647c052a7151e936c4826560f4"} err="failed to get container status \"241f0d07e6f0e333ecefb574cd7f697d31a09a647c052a7151e936c4826560f4\": rpc error: code = NotFound desc = could not find container \"241f0d07e6f0e333ecefb574cd7f697d31a09a647c052a7151e936c4826560f4\": container with ID starting with 241f0d07e6f0e333ecefb574cd7f697d31a09a647c052a7151e936c4826560f4 not found: ID does not exist" Dec 06 05:43:56 crc kubenswrapper[4958]: I1206 05:43:56.297906 4958 scope.go:117] "RemoveContainer" containerID="14c6891468bbdca3645d1ebf0aed24adef987fa7d2ae1cd0f01b3d7509355cf7" Dec 06 05:43:56 crc kubenswrapper[4958]: E1206 05:43:56.298149 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"14c6891468bbdca3645d1ebf0aed24adef987fa7d2ae1cd0f01b3d7509355cf7\": container with ID starting with 14c6891468bbdca3645d1ebf0aed24adef987fa7d2ae1cd0f01b3d7509355cf7 not found: ID does not exist" containerID="14c6891468bbdca3645d1ebf0aed24adef987fa7d2ae1cd0f01b3d7509355cf7" Dec 06 05:43:56 crc kubenswrapper[4958]: I1206 05:43:56.298177 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14c6891468bbdca3645d1ebf0aed24adef987fa7d2ae1cd0f01b3d7509355cf7"} err="failed to get container status \"14c6891468bbdca3645d1ebf0aed24adef987fa7d2ae1cd0f01b3d7509355cf7\": rpc error: code = NotFound desc = could not find container \"14c6891468bbdca3645d1ebf0aed24adef987fa7d2ae1cd0f01b3d7509355cf7\": container with ID starting with 14c6891468bbdca3645d1ebf0aed24adef987fa7d2ae1cd0f01b3d7509355cf7 not found: ID does not exist" Dec 06 05:43:56 crc kubenswrapper[4958]: I1206 05:43:56.298196 4958 scope.go:117] "RemoveContainer" containerID="4136f6bc31707dcce48601067d1a0a105355978ed67bd44d56707064649c83e6" Dec 06 05:43:56 crc kubenswrapper[4958]: E1206 05:43:56.298464 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
Dec 06 05:43:56 crc kubenswrapper[4958]: I1206 05:43:56.298511 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4136f6bc31707dcce48601067d1a0a105355978ed67bd44d56707064649c83e6"} err="failed to get container status \"4136f6bc31707dcce48601067d1a0a105355978ed67bd44d56707064649c83e6\": rpc error: code = NotFound desc = could not find container \"4136f6bc31707dcce48601067d1a0a105355978ed67bd44d56707064649c83e6\": container with ID starting with 4136f6bc31707dcce48601067d1a0a105355978ed67bd44d56707064649c83e6 not found: ID does not exist"
Dec 06 05:43:57 crc kubenswrapper[4958]: I1206 05:43:57.404976 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6d5bb"]
Dec 06 05:43:57 crc kubenswrapper[4958]: E1206 05:43:57.405555 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d81967c-07a4-4a5c-b22e-b3dab71214dc" containerName="registry-server"
Dec 06 05:43:57 crc kubenswrapper[4958]: I1206 05:43:57.405569 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d81967c-07a4-4a5c-b22e-b3dab71214dc" containerName="registry-server"
Dec 06 05:43:57 crc kubenswrapper[4958]: E1206 05:43:57.405582 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9a832eb-1b05-41af-b440-ab9488c618d3" containerName="extract-content"
Dec 06 05:43:57 crc kubenswrapper[4958]: I1206 05:43:57.405588 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9a832eb-1b05-41af-b440-ab9488c618d3" containerName="extract-content"
Dec 06 05:43:57 crc kubenswrapper[4958]: E1206 05:43:57.405599 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d81967c-07a4-4a5c-b22e-b3dab71214dc" containerName="extract-utilities"
Dec 06 05:43:57 crc kubenswrapper[4958]: I1206 05:43:57.405605 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d81967c-07a4-4a5c-b22e-b3dab71214dc" containerName="extract-utilities"
Dec 06 05:43:57 crc kubenswrapper[4958]: E1206 05:43:57.405612 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9a832eb-1b05-41af-b440-ab9488c618d3" containerName="registry-server"
Dec 06 05:43:57 crc kubenswrapper[4958]: I1206 05:43:57.405619 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9a832eb-1b05-41af-b440-ab9488c618d3" containerName="registry-server"
Dec 06 05:43:57 crc kubenswrapper[4958]: E1206 05:43:57.405633 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d81967c-07a4-4a5c-b22e-b3dab71214dc" containerName="extract-content"
Dec 06 05:43:57 crc kubenswrapper[4958]: I1206 05:43:57.405639 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d81967c-07a4-4a5c-b22e-b3dab71214dc" containerName="extract-content"
Dec 06 05:43:57 crc kubenswrapper[4958]: E1206 05:43:57.405648 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9a832eb-1b05-41af-b440-ab9488c618d3" containerName="extract-utilities"
Dec 06 05:43:57 crc kubenswrapper[4958]: I1206 05:43:57.405654 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9a832eb-1b05-41af-b440-ab9488c618d3" containerName="extract-utilities"
Dec 06 05:43:57 crc kubenswrapper[4958]: I1206 05:43:57.405745 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9a832eb-1b05-41af-b440-ab9488c618d3" containerName="registry-server"
Dec 06 05:43:57 crc kubenswrapper[4958]: I1206 05:43:57.405757 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d81967c-07a4-4a5c-b22e-b3dab71214dc" containerName="registry-server"
Dec 06 05:43:57 crc kubenswrapper[4958]: I1206 05:43:57.406902 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6d5bb"
Dec 06 05:43:57 crc kubenswrapper[4958]: I1206 05:43:57.408896 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Dec 06 05:43:57 crc kubenswrapper[4958]: I1206 05:43:57.424489 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6d5bb"]
Dec 06 05:43:57 crc kubenswrapper[4958]: I1206 05:43:57.500665 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjmk8\" (UniqueName: \"kubernetes.io/projected/c879cca7-40bf-4150-bed9-df7dabb7e037-kube-api-access-qjmk8\") pod \"redhat-marketplace-6d5bb\" (UID: \"c879cca7-40bf-4150-bed9-df7dabb7e037\") " pod="openshift-marketplace/redhat-marketplace-6d5bb"
Dec 06 05:43:57 crc kubenswrapper[4958]: I1206 05:43:57.500757 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c879cca7-40bf-4150-bed9-df7dabb7e037-utilities\") pod \"redhat-marketplace-6d5bb\" (UID: \"c879cca7-40bf-4150-bed9-df7dabb7e037\") " pod="openshift-marketplace/redhat-marketplace-6d5bb"
Dec 06 05:43:57 crc kubenswrapper[4958]: I1206 05:43:57.500814 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c879cca7-40bf-4150-bed9-df7dabb7e037-catalog-content\") pod \"redhat-marketplace-6d5bb\" (UID: \"c879cca7-40bf-4150-bed9-df7dabb7e037\") " pod="openshift-marketplace/redhat-marketplace-6d5bb"
Dec 06 05:43:57 crc kubenswrapper[4958]: I1206 05:43:57.596777 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9hcvw"]
Dec 06 05:43:57 crc kubenswrapper[4958]: I1206 05:43:57.597850 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9hcvw"
Dec 06 05:43:57 crc kubenswrapper[4958]: I1206 05:43:57.599483 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Dec 06 05:43:57 crc kubenswrapper[4958]: I1206 05:43:57.601390 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsb99\" (UniqueName: \"kubernetes.io/projected/e580e362-f544-46ac-9b5f-0bb097d87a41-kube-api-access-zsb99\") pod \"redhat-operators-9hcvw\" (UID: \"e580e362-f544-46ac-9b5f-0bb097d87a41\") " pod="openshift-marketplace/redhat-operators-9hcvw"
Dec 06 05:43:57 crc kubenswrapper[4958]: I1206 05:43:57.601440 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c879cca7-40bf-4150-bed9-df7dabb7e037-utilities\") pod \"redhat-marketplace-6d5bb\" (UID: \"c879cca7-40bf-4150-bed9-df7dabb7e037\") " pod="openshift-marketplace/redhat-marketplace-6d5bb"
Dec 06 05:43:57 crc kubenswrapper[4958]: I1206 05:43:57.601469 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e580e362-f544-46ac-9b5f-0bb097d87a41-utilities\") pod \"redhat-operators-9hcvw\" (UID: \"e580e362-f544-46ac-9b5f-0bb097d87a41\") " pod="openshift-marketplace/redhat-operators-9hcvw"
Dec 06 05:43:57 crc kubenswrapper[4958]: I1206 05:43:57.601515 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e580e362-f544-46ac-9b5f-0bb097d87a41-catalog-content\") pod \"redhat-operators-9hcvw\" (UID: \"e580e362-f544-46ac-9b5f-0bb097d87a41\") " pod="openshift-marketplace/redhat-operators-9hcvw"
Dec 06 05:43:57 crc kubenswrapper[4958]: I1206 05:43:57.601548 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c879cca7-40bf-4150-bed9-df7dabb7e037-catalog-content\") pod \"redhat-marketplace-6d5bb\" (UID: \"c879cca7-40bf-4150-bed9-df7dabb7e037\") " pod="openshift-marketplace/redhat-marketplace-6d5bb"
Dec 06 05:43:57 crc kubenswrapper[4958]: I1206 05:43:57.601576 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjmk8\" (UniqueName: \"kubernetes.io/projected/c879cca7-40bf-4150-bed9-df7dabb7e037-kube-api-access-qjmk8\") pod \"redhat-marketplace-6d5bb\" (UID: \"c879cca7-40bf-4150-bed9-df7dabb7e037\") " pod="openshift-marketplace/redhat-marketplace-6d5bb"
Dec 06 05:43:57 crc kubenswrapper[4958]: I1206 05:43:57.602287 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c879cca7-40bf-4150-bed9-df7dabb7e037-utilities\") pod \"redhat-marketplace-6d5bb\" (UID: \"c879cca7-40bf-4150-bed9-df7dabb7e037\") " pod="openshift-marketplace/redhat-marketplace-6d5bb"
Dec 06 05:43:57 crc kubenswrapper[4958]: I1206 05:43:57.602303 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c879cca7-40bf-4150-bed9-df7dabb7e037-catalog-content\") pod \"redhat-marketplace-6d5bb\" (UID: \"c879cca7-40bf-4150-bed9-df7dabb7e037\") " pod="openshift-marketplace/redhat-marketplace-6d5bb"
Dec 06 05:43:57 crc kubenswrapper[4958]: I1206 05:43:57.610702 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9hcvw"]
source="api" pods=["openshift-marketplace/redhat-operators-9hcvw"] Dec 06 05:43:57 crc kubenswrapper[4958]: I1206 05:43:57.628653 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjmk8\" (UniqueName: \"kubernetes.io/projected/c879cca7-40bf-4150-bed9-df7dabb7e037-kube-api-access-qjmk8\") pod \"redhat-marketplace-6d5bb\" (UID: \"c879cca7-40bf-4150-bed9-df7dabb7e037\") " pod="openshift-marketplace/redhat-marketplace-6d5bb" Dec 06 05:43:57 crc kubenswrapper[4958]: I1206 05:43:57.702909 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zsb99\" (UniqueName: \"kubernetes.io/projected/e580e362-f544-46ac-9b5f-0bb097d87a41-kube-api-access-zsb99\") pod \"redhat-operators-9hcvw\" (UID: \"e580e362-f544-46ac-9b5f-0bb097d87a41\") " pod="openshift-marketplace/redhat-operators-9hcvw" Dec 06 05:43:57 crc kubenswrapper[4958]: I1206 05:43:57.702957 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e580e362-f544-46ac-9b5f-0bb097d87a41-utilities\") pod \"redhat-operators-9hcvw\" (UID: \"e580e362-f544-46ac-9b5f-0bb097d87a41\") " pod="openshift-marketplace/redhat-operators-9hcvw" Dec 06 05:43:57 crc kubenswrapper[4958]: I1206 05:43:57.702990 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e580e362-f544-46ac-9b5f-0bb097d87a41-catalog-content\") pod \"redhat-operators-9hcvw\" (UID: \"e580e362-f544-46ac-9b5f-0bb097d87a41\") " pod="openshift-marketplace/redhat-operators-9hcvw" Dec 06 05:43:57 crc kubenswrapper[4958]: I1206 05:43:57.703599 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e580e362-f544-46ac-9b5f-0bb097d87a41-catalog-content\") pod \"redhat-operators-9hcvw\" (UID: \"e580e362-f544-46ac-9b5f-0bb097d87a41\") " pod="openshift-marketplace/redhat-operators-9hcvw" Dec 06 05:43:57 crc kubenswrapper[4958]: I1206 05:43:57.704123 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e580e362-f544-46ac-9b5f-0bb097d87a41-utilities\") pod \"redhat-operators-9hcvw\" (UID: \"e580e362-f544-46ac-9b5f-0bb097d87a41\") " pod="openshift-marketplace/redhat-operators-9hcvw" Dec 06 05:43:57 crc kubenswrapper[4958]: I1206 05:43:57.718874 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zsb99\" (UniqueName: \"kubernetes.io/projected/e580e362-f544-46ac-9b5f-0bb097d87a41-kube-api-access-zsb99\") pod \"redhat-operators-9hcvw\" (UID: \"e580e362-f544-46ac-9b5f-0bb097d87a41\") " pod="openshift-marketplace/redhat-operators-9hcvw" Dec 06 05:43:57 crc kubenswrapper[4958]: I1206 05:43:57.731075 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6d5bb" Dec 06 05:43:57 crc kubenswrapper[4958]: I1206 05:43:57.771589 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d81967c-07a4-4a5c-b22e-b3dab71214dc" path="/var/lib/kubelet/pods/6d81967c-07a4-4a5c-b22e-b3dab71214dc/volumes" Dec 06 05:43:57 crc kubenswrapper[4958]: I1206 05:43:57.772185 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9a832eb-1b05-41af-b440-ab9488c618d3" path="/var/lib/kubelet/pods/a9a832eb-1b05-41af-b440-ab9488c618d3/volumes" Dec 06 05:43:57 crc kubenswrapper[4958]: I1206 05:43:57.947379 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9hcvw" Dec 06 05:43:58 crc kubenswrapper[4958]: I1206 05:43:58.121607 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6d5bb"] Dec 06 05:43:58 crc kubenswrapper[4958]: W1206 05:43:58.125609 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc879cca7_40bf_4150_bed9_df7dabb7e037.slice/crio-0b423eb45d3e178a2568f8ac3bd2ea0958b67752f9c3982401b79496b3295807 WatchSource:0}: Error finding container 0b423eb45d3e178a2568f8ac3bd2ea0958b67752f9c3982401b79496b3295807: Status 404 returned error can't find the container with id 0b423eb45d3e178a2568f8ac3bd2ea0958b67752f9c3982401b79496b3295807 Dec 06 05:43:58 crc kubenswrapper[4958]: I1206 05:43:58.169737 4958 generic.go:334] "Generic (PLEG): container finished" podID="281467fb-46c7-4fb5-89b1-93f25927d462" containerID="5a6e33b6c7391d9a62cd204cc7553e2854d0d3b28fd15b5dde89a92ad3608fcd" exitCode=0 Dec 06 05:43:58 crc kubenswrapper[4958]: I1206 05:43:58.169799 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-stm6x" event={"ID":"281467fb-46c7-4fb5-89b1-93f25927d462","Type":"ContainerDied","Data":"5a6e33b6c7391d9a62cd204cc7553e2854d0d3b28fd15b5dde89a92ad3608fcd"} Dec 06 05:43:58 crc kubenswrapper[4958]: I1206 05:43:58.172277 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6d5bb" event={"ID":"c879cca7-40bf-4150-bed9-df7dabb7e037","Type":"ContainerStarted","Data":"0b423eb45d3e178a2568f8ac3bd2ea0958b67752f9c3982401b79496b3295807"} Dec 06 05:43:58 crc kubenswrapper[4958]: I1206 05:43:58.320683 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9hcvw"] Dec 06 05:43:58 crc kubenswrapper[4958]: W1206 05:43:58.324662 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode580e362_f544_46ac_9b5f_0bb097d87a41.slice/crio-c8c073fdb8c9377e7e7c14a560f5a61ae4b1e9a456bd5139363ee865c7c92f74 WatchSource:0}: Error finding container c8c073fdb8c9377e7e7c14a560f5a61ae4b1e9a456bd5139363ee865c7c92f74: Status 404 returned error can't find the container with id c8c073fdb8c9377e7e7c14a560f5a61ae4b1e9a456bd5139363ee865c7c92f74 Dec 06 05:43:59 crc kubenswrapper[4958]: I1206 05:43:59.182814 4958 generic.go:334] "Generic (PLEG): container finished" podID="c879cca7-40bf-4150-bed9-df7dabb7e037" containerID="b5cf791dce7d8a3d1deec86789d14fc36dac18295952260fdfeebb31d17b3dba" exitCode=0 Dec 06 05:43:59 crc kubenswrapper[4958]: I1206 05:43:59.182923 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6d5bb" 
event={"ID":"c879cca7-40bf-4150-bed9-df7dabb7e037","Type":"ContainerDied","Data":"b5cf791dce7d8a3d1deec86789d14fc36dac18295952260fdfeebb31d17b3dba"} Dec 06 05:43:59 crc kubenswrapper[4958]: I1206 05:43:59.185069 4958 generic.go:334] "Generic (PLEG): container finished" podID="e580e362-f544-46ac-9b5f-0bb097d87a41" containerID="625b740d969115f149f322a011554215bdefcd6c2e690ceeb42941f5cb5aea7c" exitCode=0 Dec 06 05:43:59 crc kubenswrapper[4958]: I1206 05:43:59.185140 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9hcvw" event={"ID":"e580e362-f544-46ac-9b5f-0bb097d87a41","Type":"ContainerDied","Data":"625b740d969115f149f322a011554215bdefcd6c2e690ceeb42941f5cb5aea7c"} Dec 06 05:43:59 crc kubenswrapper[4958]: I1206 05:43:59.185182 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9hcvw" event={"ID":"e580e362-f544-46ac-9b5f-0bb097d87a41","Type":"ContainerStarted","Data":"c8c073fdb8c9377e7e7c14a560f5a61ae4b1e9a456bd5139363ee865c7c92f74"} Dec 06 05:43:59 crc kubenswrapper[4958]: I1206 05:43:59.795483 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fcnxn"] Dec 06 05:43:59 crc kubenswrapper[4958]: I1206 05:43:59.796771 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fcnxn" Dec 06 05:43:59 crc kubenswrapper[4958]: I1206 05:43:59.805585 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fcnxn"] Dec 06 05:43:59 crc kubenswrapper[4958]: I1206 05:43:59.934623 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d0b5996-766b-4ced-9981-d56da88885bc-utilities\") pod \"certified-operators-fcnxn\" (UID: \"3d0b5996-766b-4ced-9981-d56da88885bc\") " pod="openshift-marketplace/certified-operators-fcnxn" Dec 06 05:43:59 crc kubenswrapper[4958]: I1206 05:43:59.934671 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d0b5996-766b-4ced-9981-d56da88885bc-catalog-content\") pod \"certified-operators-fcnxn\" (UID: \"3d0b5996-766b-4ced-9981-d56da88885bc\") " pod="openshift-marketplace/certified-operators-fcnxn" Dec 06 05:43:59 crc kubenswrapper[4958]: I1206 05:43:59.934701 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mc4hz\" (UniqueName: \"kubernetes.io/projected/3d0b5996-766b-4ced-9981-d56da88885bc-kube-api-access-mc4hz\") pod \"certified-operators-fcnxn\" (UID: \"3d0b5996-766b-4ced-9981-d56da88885bc\") " pod="openshift-marketplace/certified-operators-fcnxn" Dec 06 05:43:59 crc kubenswrapper[4958]: I1206 05:43:59.993627 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bhln7"] Dec 06 05:43:59 crc kubenswrapper[4958]: I1206 05:43:59.995041 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bhln7" Dec 06 05:43:59 crc kubenswrapper[4958]: I1206 05:43:59.997995 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 06 05:44:00 crc kubenswrapper[4958]: I1206 05:44:00.009772 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bhln7"] Dec 06 05:44:00 crc kubenswrapper[4958]: I1206 05:44:00.036235 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mc4hz\" (UniqueName: \"kubernetes.io/projected/3d0b5996-766b-4ced-9981-d56da88885bc-kube-api-access-mc4hz\") pod \"certified-operators-fcnxn\" (UID: \"3d0b5996-766b-4ced-9981-d56da88885bc\") " pod="openshift-marketplace/certified-operators-fcnxn" Dec 06 05:44:00 crc kubenswrapper[4958]: I1206 05:44:00.036395 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4dcd0eed-120b-4aec-bb22-3fabed037aae-catalog-content\") pod \"community-operators-bhln7\" (UID: \"4dcd0eed-120b-4aec-bb22-3fabed037aae\") " pod="openshift-marketplace/community-operators-bhln7" Dec 06 05:44:00 crc kubenswrapper[4958]: I1206 05:44:00.036435 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4dcd0eed-120b-4aec-bb22-3fabed037aae-utilities\") pod \"community-operators-bhln7\" (UID: \"4dcd0eed-120b-4aec-bb22-3fabed037aae\") " pod="openshift-marketplace/community-operators-bhln7" Dec 06 05:44:00 crc kubenswrapper[4958]: I1206 05:44:00.036517 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d0b5996-766b-4ced-9981-d56da88885bc-catalog-content\") pod \"certified-operators-fcnxn\" (UID: \"3d0b5996-766b-4ced-9981-d56da88885bc\") " pod="openshift-marketplace/certified-operators-fcnxn" Dec 06 05:44:00 crc kubenswrapper[4958]: I1206 05:44:00.036553 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7btj\" (UniqueName: \"kubernetes.io/projected/4dcd0eed-120b-4aec-bb22-3fabed037aae-kube-api-access-f7btj\") pod \"community-operators-bhln7\" (UID: \"4dcd0eed-120b-4aec-bb22-3fabed037aae\") " pod="openshift-marketplace/community-operators-bhln7" Dec 06 05:44:00 crc kubenswrapper[4958]: I1206 05:44:00.036586 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d0b5996-766b-4ced-9981-d56da88885bc-utilities\") pod \"certified-operators-fcnxn\" (UID: \"3d0b5996-766b-4ced-9981-d56da88885bc\") " pod="openshift-marketplace/certified-operators-fcnxn" Dec 06 05:44:00 crc kubenswrapper[4958]: I1206 05:44:00.036962 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d0b5996-766b-4ced-9981-d56da88885bc-catalog-content\") pod \"certified-operators-fcnxn\" (UID: \"3d0b5996-766b-4ced-9981-d56da88885bc\") " pod="openshift-marketplace/certified-operators-fcnxn" Dec 06 05:44:00 crc kubenswrapper[4958]: I1206 05:44:00.037061 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d0b5996-766b-4ced-9981-d56da88885bc-utilities\") pod \"certified-operators-fcnxn\" (UID: 
\"3d0b5996-766b-4ced-9981-d56da88885bc\") " pod="openshift-marketplace/certified-operators-fcnxn" Dec 06 05:44:00 crc kubenswrapper[4958]: I1206 05:44:00.057965 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mc4hz\" (UniqueName: \"kubernetes.io/projected/3d0b5996-766b-4ced-9981-d56da88885bc-kube-api-access-mc4hz\") pod \"certified-operators-fcnxn\" (UID: \"3d0b5996-766b-4ced-9981-d56da88885bc\") " pod="openshift-marketplace/certified-operators-fcnxn" Dec 06 05:44:00 crc kubenswrapper[4958]: I1206 05:44:00.127628 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fcnxn" Dec 06 05:44:00 crc kubenswrapper[4958]: I1206 05:44:00.137379 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4dcd0eed-120b-4aec-bb22-3fabed037aae-utilities\") pod \"community-operators-bhln7\" (UID: \"4dcd0eed-120b-4aec-bb22-3fabed037aae\") " pod="openshift-marketplace/community-operators-bhln7" Dec 06 05:44:00 crc kubenswrapper[4958]: I1206 05:44:00.137438 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7btj\" (UniqueName: \"kubernetes.io/projected/4dcd0eed-120b-4aec-bb22-3fabed037aae-kube-api-access-f7btj\") pod \"community-operators-bhln7\" (UID: \"4dcd0eed-120b-4aec-bb22-3fabed037aae\") " pod="openshift-marketplace/community-operators-bhln7" Dec 06 05:44:00 crc kubenswrapper[4958]: I1206 05:44:00.137657 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4dcd0eed-120b-4aec-bb22-3fabed037aae-catalog-content\") pod \"community-operators-bhln7\" (UID: \"4dcd0eed-120b-4aec-bb22-3fabed037aae\") " pod="openshift-marketplace/community-operators-bhln7" Dec 06 05:44:00 crc kubenswrapper[4958]: I1206 05:44:00.137874 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4dcd0eed-120b-4aec-bb22-3fabed037aae-utilities\") pod \"community-operators-bhln7\" (UID: \"4dcd0eed-120b-4aec-bb22-3fabed037aae\") " pod="openshift-marketplace/community-operators-bhln7" Dec 06 05:44:00 crc kubenswrapper[4958]: I1206 05:44:00.137951 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4dcd0eed-120b-4aec-bb22-3fabed037aae-catalog-content\") pod \"community-operators-bhln7\" (UID: \"4dcd0eed-120b-4aec-bb22-3fabed037aae\") " pod="openshift-marketplace/community-operators-bhln7" Dec 06 05:44:00 crc kubenswrapper[4958]: I1206 05:44:00.156667 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7btj\" (UniqueName: \"kubernetes.io/projected/4dcd0eed-120b-4aec-bb22-3fabed037aae-kube-api-access-f7btj\") pod \"community-operators-bhln7\" (UID: \"4dcd0eed-120b-4aec-bb22-3fabed037aae\") " pod="openshift-marketplace/community-operators-bhln7" Dec 06 05:44:00 crc kubenswrapper[4958]: I1206 05:44:00.192637 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-stm6x" event={"ID":"281467fb-46c7-4fb5-89b1-93f25927d462","Type":"ContainerStarted","Data":"e856ea335756cf7f27d23da2813e27ea96e982609be1325dfde40632e76c14a3"} Dec 06 05:44:00 crc kubenswrapper[4958]: I1206 05:44:00.212957 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/certified-operators-stm6x" podStartSLOduration=3.254346254 podStartE2EDuration="6.212936922s" podCreationTimestamp="2025-12-06 05:43:54 +0000 UTC" firstStartedPulling="2025-12-06 05:43:56.151017674 +0000 UTC m=+946.684788437" lastFinishedPulling="2025-12-06 05:43:59.109608342 +0000 UTC m=+949.643379105" observedRunningTime="2025-12-06 05:44:00.209537641 +0000 UTC m=+950.743308434" watchObservedRunningTime="2025-12-06 05:44:00.212936922 +0000 UTC m=+950.746707685" Dec 06 05:44:00 crc kubenswrapper[4958]: I1206 05:44:00.308931 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bhln7" Dec 06 05:44:00 crc kubenswrapper[4958]: I1206 05:44:00.427025 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fcnxn"] Dec 06 05:44:00 crc kubenswrapper[4958]: W1206 05:44:00.437725 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d0b5996_766b_4ced_9981_d56da88885bc.slice/crio-b82c72973d01ac3a497124ca811f3f4c939a3d1d64c687cf1be92d1d3a490b98 WatchSource:0}: Error finding container b82c72973d01ac3a497124ca811f3f4c939a3d1d64c687cf1be92d1d3a490b98: Status 404 returned error can't find the container with id b82c72973d01ac3a497124ca811f3f4c939a3d1d64c687cf1be92d1d3a490b98 Dec 06 05:44:00 crc kubenswrapper[4958]: I1206 05:44:00.533318 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bhln7"] Dec 06 05:44:00 crc kubenswrapper[4958]: W1206 05:44:00.653088 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4dcd0eed_120b_4aec_bb22_3fabed037aae.slice/crio-8b801e410acb5299fd508e1fd0377f0a596551b96c24df4f5d1fe0fee6814d07 WatchSource:0}: Error finding container 8b801e410acb5299fd508e1fd0377f0a596551b96c24df4f5d1fe0fee6814d07: Status 404 returned error can't find the container with id 8b801e410acb5299fd508e1fd0377f0a596551b96c24df4f5d1fe0fee6814d07 Dec 06 05:44:01 crc kubenswrapper[4958]: I1206 05:44:01.199320 4958 generic.go:334] "Generic (PLEG): container finished" podID="4dcd0eed-120b-4aec-bb22-3fabed037aae" containerID="abbcca7cb91bb63abb0753e3efbb68d551e25621c88e30d5fba93512d2fbb043" exitCode=0 Dec 06 05:44:01 crc kubenswrapper[4958]: I1206 05:44:01.199672 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bhln7" event={"ID":"4dcd0eed-120b-4aec-bb22-3fabed037aae","Type":"ContainerDied","Data":"abbcca7cb91bb63abb0753e3efbb68d551e25621c88e30d5fba93512d2fbb043"} Dec 06 05:44:01 crc kubenswrapper[4958]: I1206 05:44:01.199700 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bhln7" event={"ID":"4dcd0eed-120b-4aec-bb22-3fabed037aae","Type":"ContainerStarted","Data":"8b801e410acb5299fd508e1fd0377f0a596551b96c24df4f5d1fe0fee6814d07"} Dec 06 05:44:01 crc kubenswrapper[4958]: I1206 05:44:01.205288 4958 generic.go:334] "Generic (PLEG): container finished" podID="e580e362-f544-46ac-9b5f-0bb097d87a41" containerID="564bb33f1cde0a71395985d69aa6a295daaa1d6f322c6811def51257e1b3b566" exitCode=0 Dec 06 05:44:01 crc kubenswrapper[4958]: I1206 05:44:01.205330 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9hcvw" 
event={"ID":"e580e362-f544-46ac-9b5f-0bb097d87a41","Type":"ContainerDied","Data":"564bb33f1cde0a71395985d69aa6a295daaa1d6f322c6811def51257e1b3b566"} Dec 06 05:44:01 crc kubenswrapper[4958]: I1206 05:44:01.211758 4958 generic.go:334] "Generic (PLEG): container finished" podID="3d0b5996-766b-4ced-9981-d56da88885bc" containerID="d2acec8ccc57bb168356eafb68f0f3e61f96c4a00245c70ed04ff2567f305d8b" exitCode=0 Dec 06 05:44:01 crc kubenswrapper[4958]: I1206 05:44:01.211877 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fcnxn" event={"ID":"3d0b5996-766b-4ced-9981-d56da88885bc","Type":"ContainerDied","Data":"d2acec8ccc57bb168356eafb68f0f3e61f96c4a00245c70ed04ff2567f305d8b"} Dec 06 05:44:01 crc kubenswrapper[4958]: I1206 05:44:01.211910 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fcnxn" event={"ID":"3d0b5996-766b-4ced-9981-d56da88885bc","Type":"ContainerStarted","Data":"b82c72973d01ac3a497124ca811f3f4c939a3d1d64c687cf1be92d1d3a490b98"} Dec 06 05:44:01 crc kubenswrapper[4958]: I1206 05:44:01.216645 4958 generic.go:334] "Generic (PLEG): container finished" podID="c879cca7-40bf-4150-bed9-df7dabb7e037" containerID="cc4a1e5400c36294f50f3175a39cbf3664bf627dea4900dc4d7333a78832eefc" exitCode=0 Dec 06 05:44:01 crc kubenswrapper[4958]: I1206 05:44:01.216892 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6d5bb" event={"ID":"c879cca7-40bf-4150-bed9-df7dabb7e037","Type":"ContainerDied","Data":"cc4a1e5400c36294f50f3175a39cbf3664bf627dea4900dc4d7333a78832eefc"} Dec 06 05:44:03 crc kubenswrapper[4958]: I1206 05:44:03.227944 4958 generic.go:334] "Generic (PLEG): container finished" podID="3d0b5996-766b-4ced-9981-d56da88885bc" containerID="aac1ae8b3fd6f42ebd3aa53cd395273f9059537ea816cf9da887de13f7d11eb5" exitCode=0 Dec 06 05:44:03 crc kubenswrapper[4958]: I1206 05:44:03.228042 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fcnxn" event={"ID":"3d0b5996-766b-4ced-9981-d56da88885bc","Type":"ContainerDied","Data":"aac1ae8b3fd6f42ebd3aa53cd395273f9059537ea816cf9da887de13f7d11eb5"} Dec 06 05:44:03 crc kubenswrapper[4958]: I1206 05:44:03.232729 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6d5bb" event={"ID":"c879cca7-40bf-4150-bed9-df7dabb7e037","Type":"ContainerStarted","Data":"04998c3e29d8eb4799e16a1fac333d5a87f507c0965dee8aa83bc76a76291718"} Dec 06 05:44:03 crc kubenswrapper[4958]: I1206 05:44:03.234584 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bhln7" event={"ID":"4dcd0eed-120b-4aec-bb22-3fabed037aae","Type":"ContainerStarted","Data":"7067492963bd2e3a9cf70633b2a1e0286921c8ba6a95cbfafe95a12742c8145e"} Dec 06 05:44:03 crc kubenswrapper[4958]: I1206 05:44:03.236518 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9hcvw" event={"ID":"e580e362-f544-46ac-9b5f-0bb097d87a41","Type":"ContainerStarted","Data":"eedf54f9f76ae38d3d937e85d55fdd94cc69482464a7c156f07e3cb282d48f34"} Dec 06 05:44:03 crc kubenswrapper[4958]: I1206 05:44:03.262117 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6d5bb" podStartSLOduration=3.116696823 podStartE2EDuration="6.262098019s" podCreationTimestamp="2025-12-06 05:43:57 +0000 UTC" firstStartedPulling="2025-12-06 05:43:59.184096207 +0000 UTC 
m=+949.717866970" lastFinishedPulling="2025-12-06 05:44:02.329497393 +0000 UTC m=+952.863268166" observedRunningTime="2025-12-06 05:44:03.260061275 +0000 UTC m=+953.793832048" watchObservedRunningTime="2025-12-06 05:44:03.262098019 +0000 UTC m=+953.795868782" Dec 06 05:44:03 crc kubenswrapper[4958]: I1206 05:44:03.295638 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9hcvw" podStartSLOduration=3.322910304 podStartE2EDuration="6.295620874s" podCreationTimestamp="2025-12-06 05:43:57 +0000 UTC" firstStartedPulling="2025-12-06 05:43:59.186985034 +0000 UTC m=+949.720755797" lastFinishedPulling="2025-12-06 05:44:02.159695604 +0000 UTC m=+952.693466367" observedRunningTime="2025-12-06 05:44:03.292921732 +0000 UTC m=+953.826692515" watchObservedRunningTime="2025-12-06 05:44:03.295620874 +0000 UTC m=+953.829391637" Dec 06 05:44:04 crc kubenswrapper[4958]: I1206 05:44:04.244537 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fcnxn" event={"ID":"3d0b5996-766b-4ced-9981-d56da88885bc","Type":"ContainerStarted","Data":"d06f8cfb74b52b6ae1ea208ccf14761092c7abcdf3113554b57ec8bdaff252b3"} Dec 06 05:44:04 crc kubenswrapper[4958]: I1206 05:44:04.247601 4958 generic.go:334] "Generic (PLEG): container finished" podID="4dcd0eed-120b-4aec-bb22-3fabed037aae" containerID="7067492963bd2e3a9cf70633b2a1e0286921c8ba6a95cbfafe95a12742c8145e" exitCode=0 Dec 06 05:44:04 crc kubenswrapper[4958]: I1206 05:44:04.248541 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bhln7" event={"ID":"4dcd0eed-120b-4aec-bb22-3fabed037aae","Type":"ContainerDied","Data":"7067492963bd2e3a9cf70633b2a1e0286921c8ba6a95cbfafe95a12742c8145e"} Dec 06 05:44:04 crc kubenswrapper[4958]: I1206 05:44:04.266080 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fcnxn" podStartSLOduration=2.611296655 podStartE2EDuration="5.266059137s" podCreationTimestamp="2025-12-06 05:43:59 +0000 UTC" firstStartedPulling="2025-12-06 05:44:01.214633199 +0000 UTC m=+951.748403962" lastFinishedPulling="2025-12-06 05:44:03.869395681 +0000 UTC m=+954.403166444" observedRunningTime="2025-12-06 05:44:04.263844618 +0000 UTC m=+954.797615381" watchObservedRunningTime="2025-12-06 05:44:04.266059137 +0000 UTC m=+954.799829900" Dec 06 05:44:05 crc kubenswrapper[4958]: I1206 05:44:05.256102 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bhln7" event={"ID":"4dcd0eed-120b-4aec-bb22-3fabed037aae","Type":"ContainerStarted","Data":"539a60fdc12f3a57f4857e1a64b2631bbf583a7071163f1613ad907bfef92d10"} Dec 06 05:44:05 crc kubenswrapper[4958]: I1206 05:44:05.278703 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bhln7" podStartSLOduration=2.777729347 podStartE2EDuration="6.278680043s" podCreationTimestamp="2025-12-06 05:43:59 +0000 UTC" firstStartedPulling="2025-12-06 05:44:01.201441972 +0000 UTC m=+951.735212725" lastFinishedPulling="2025-12-06 05:44:04.702392668 +0000 UTC m=+955.236163421" observedRunningTime="2025-12-06 05:44:05.273705271 +0000 UTC m=+955.807476034" watchObservedRunningTime="2025-12-06 05:44:05.278680043 +0000 UTC m=+955.812450806" Dec 06 05:44:05 crc kubenswrapper[4958]: I1206 05:44:05.315075 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/certified-operators-stm6x" Dec 06 05:44:05 crc kubenswrapper[4958]: I1206 05:44:05.315128 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-stm6x" Dec 06 05:44:05 crc kubenswrapper[4958]: I1206 05:44:05.351930 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-stm6x" Dec 06 05:44:06 crc kubenswrapper[4958]: I1206 05:44:06.319853 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-stm6x" Dec 06 05:44:07 crc kubenswrapper[4958]: I1206 05:44:07.731586 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6d5bb" Dec 06 05:44:07 crc kubenswrapper[4958]: I1206 05:44:07.732085 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6d5bb" Dec 06 05:44:07 crc kubenswrapper[4958]: I1206 05:44:07.775997 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6d5bb" Dec 06 05:44:07 crc kubenswrapper[4958]: I1206 05:44:07.948430 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9hcvw" Dec 06 05:44:07 crc kubenswrapper[4958]: I1206 05:44:07.949389 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9hcvw" Dec 06 05:44:07 crc kubenswrapper[4958]: I1206 05:44:07.987491 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9hcvw" Dec 06 05:44:08 crc kubenswrapper[4958]: I1206 05:44:08.186355 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-stm6x"] Dec 06 05:44:08 crc kubenswrapper[4958]: I1206 05:44:08.285910 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-stm6x" podUID="281467fb-46c7-4fb5-89b1-93f25927d462" containerName="registry-server" containerID="cri-o://e856ea335756cf7f27d23da2813e27ea96e982609be1325dfde40632e76c14a3" gracePeriod=2 Dec 06 05:44:08 crc kubenswrapper[4958]: I1206 05:44:08.321482 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6d5bb" Dec 06 05:44:08 crc kubenswrapper[4958]: I1206 05:44:08.322747 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9hcvw" Dec 06 05:44:10 crc kubenswrapper[4958]: I1206 05:44:10.128169 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fcnxn" Dec 06 05:44:10 crc kubenswrapper[4958]: I1206 05:44:10.128608 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fcnxn" Dec 06 05:44:10 crc kubenswrapper[4958]: I1206 05:44:10.173066 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fcnxn" Dec 06 05:44:10 crc kubenswrapper[4958]: I1206 05:44:10.310390 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bhln7" Dec 06 05:44:10 crc kubenswrapper[4958]: I1206 05:44:10.310438 4958 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/community-operators-bhln7" Dec 06 05:44:10 crc kubenswrapper[4958]: I1206 05:44:10.341286 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fcnxn" Dec 06 05:44:10 crc kubenswrapper[4958]: I1206 05:44:10.355667 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bhln7" Dec 06 05:44:11 crc kubenswrapper[4958]: I1206 05:44:11.358618 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bhln7" Dec 06 05:44:15 crc kubenswrapper[4958]: I1206 05:44:15.005155 4958 generic.go:334] "Generic (PLEG): container finished" podID="281467fb-46c7-4fb5-89b1-93f25927d462" containerID="e856ea335756cf7f27d23da2813e27ea96e982609be1325dfde40632e76c14a3" exitCode=0 Dec 06 05:44:15 crc kubenswrapper[4958]: I1206 05:44:15.005237 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-stm6x" event={"ID":"281467fb-46c7-4fb5-89b1-93f25927d462","Type":"ContainerDied","Data":"e856ea335756cf7f27d23da2813e27ea96e982609be1325dfde40632e76c14a3"} Dec 06 05:44:15 crc kubenswrapper[4958]: E1206 05:44:15.334223 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e856ea335756cf7f27d23da2813e27ea96e982609be1325dfde40632e76c14a3 is running failed: container process not found" containerID="e856ea335756cf7f27d23da2813e27ea96e982609be1325dfde40632e76c14a3" cmd=["grpc_health_probe","-addr=:50051"] Dec 06 05:44:15 crc kubenswrapper[4958]: E1206 05:44:15.335192 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e856ea335756cf7f27d23da2813e27ea96e982609be1325dfde40632e76c14a3 is running failed: container process not found" containerID="e856ea335756cf7f27d23da2813e27ea96e982609be1325dfde40632e76c14a3" cmd=["grpc_health_probe","-addr=:50051"] Dec 06 05:44:15 crc kubenswrapper[4958]: E1206 05:44:15.335610 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e856ea335756cf7f27d23da2813e27ea96e982609be1325dfde40632e76c14a3 is running failed: container process not found" containerID="e856ea335756cf7f27d23da2813e27ea96e982609be1325dfde40632e76c14a3" cmd=["grpc_health_probe","-addr=:50051"] Dec 06 05:44:15 crc kubenswrapper[4958]: E1206 05:44:15.335699 4958 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of e856ea335756cf7f27d23da2813e27ea96e982609be1325dfde40632e76c14a3 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-stm6x" podUID="281467fb-46c7-4fb5-89b1-93f25927d462" containerName="registry-server" Dec 06 05:44:17 crc kubenswrapper[4958]: I1206 05:44:17.074907 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-stm6x" Dec 06 05:44:17 crc kubenswrapper[4958]: I1206 05:44:17.241903 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/281467fb-46c7-4fb5-89b1-93f25927d462-catalog-content\") pod \"281467fb-46c7-4fb5-89b1-93f25927d462\" (UID: \"281467fb-46c7-4fb5-89b1-93f25927d462\") " Dec 06 05:44:17 crc kubenswrapper[4958]: I1206 05:44:17.241975 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/281467fb-46c7-4fb5-89b1-93f25927d462-utilities\") pod \"281467fb-46c7-4fb5-89b1-93f25927d462\" (UID: \"281467fb-46c7-4fb5-89b1-93f25927d462\") " Dec 06 05:44:17 crc kubenswrapper[4958]: I1206 05:44:17.242089 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bhhnb\" (UniqueName: \"kubernetes.io/projected/281467fb-46c7-4fb5-89b1-93f25927d462-kube-api-access-bhhnb\") pod \"281467fb-46c7-4fb5-89b1-93f25927d462\" (UID: \"281467fb-46c7-4fb5-89b1-93f25927d462\") " Dec 06 05:44:17 crc kubenswrapper[4958]: I1206 05:44:17.243133 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/281467fb-46c7-4fb5-89b1-93f25927d462-utilities" (OuterVolumeSpecName: "utilities") pod "281467fb-46c7-4fb5-89b1-93f25927d462" (UID: "281467fb-46c7-4fb5-89b1-93f25927d462"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 05:44:17 crc kubenswrapper[4958]: I1206 05:44:17.248296 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/281467fb-46c7-4fb5-89b1-93f25927d462-kube-api-access-bhhnb" (OuterVolumeSpecName: "kube-api-access-bhhnb") pod "281467fb-46c7-4fb5-89b1-93f25927d462" (UID: "281467fb-46c7-4fb5-89b1-93f25927d462"). InnerVolumeSpecName "kube-api-access-bhhnb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:44:17 crc kubenswrapper[4958]: I1206 05:44:17.292779 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/281467fb-46c7-4fb5-89b1-93f25927d462-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "281467fb-46c7-4fb5-89b1-93f25927d462" (UID: "281467fb-46c7-4fb5-89b1-93f25927d462"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 05:44:17 crc kubenswrapper[4958]: I1206 05:44:17.343765 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bhhnb\" (UniqueName: \"kubernetes.io/projected/281467fb-46c7-4fb5-89b1-93f25927d462-kube-api-access-bhhnb\") on node \"crc\" DevicePath \"\"" Dec 06 05:44:17 crc kubenswrapper[4958]: I1206 05:44:17.343803 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/281467fb-46c7-4fb5-89b1-93f25927d462-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 05:44:17 crc kubenswrapper[4958]: I1206 05:44:17.343813 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/281467fb-46c7-4fb5-89b1-93f25927d462-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 05:44:18 crc kubenswrapper[4958]: I1206 05:44:18.025514 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-stm6x" event={"ID":"281467fb-46c7-4fb5-89b1-93f25927d462","Type":"ContainerDied","Data":"4cca9f58ef923a81de1d021c21fea2e5ff599fa7500ede14382c836af8269ebe"} Dec 06 05:44:18 crc kubenswrapper[4958]: I1206 05:44:18.025572 4958 scope.go:117] "RemoveContainer" containerID="e856ea335756cf7f27d23da2813e27ea96e982609be1325dfde40632e76c14a3" Dec 06 05:44:18 crc kubenswrapper[4958]: I1206 05:44:18.025623 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-stm6x" Dec 06 05:44:18 crc kubenswrapper[4958]: I1206 05:44:18.042779 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-stm6x"] Dec 06 05:44:18 crc kubenswrapper[4958]: I1206 05:44:18.046345 4958 scope.go:117] "RemoveContainer" containerID="5a6e33b6c7391d9a62cd204cc7553e2854d0d3b28fd15b5dde89a92ad3608fcd" Dec 06 05:44:18 crc kubenswrapper[4958]: I1206 05:44:18.047774 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-stm6x"] Dec 06 05:44:18 crc kubenswrapper[4958]: I1206 05:44:18.063339 4958 scope.go:117] "RemoveContainer" containerID="a9118ff9f8f605f2abe6550a2021ffd49b0e93012cbdcf8f7d7529d862e219b3" Dec 06 05:44:19 crc kubenswrapper[4958]: I1206 05:44:19.772981 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="281467fb-46c7-4fb5-89b1-93f25927d462" path="/var/lib/kubelet/pods/281467fb-46c7-4fb5-89b1-93f25927d462/volumes" Dec 06 05:44:20 crc kubenswrapper[4958]: I1206 05:44:20.655561 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fvxf99"] Dec 06 05:44:20 crc kubenswrapper[4958]: E1206 05:44:20.655786 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="281467fb-46c7-4fb5-89b1-93f25927d462" containerName="extract-utilities" Dec 06 05:44:20 crc kubenswrapper[4958]: I1206 05:44:20.655799 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="281467fb-46c7-4fb5-89b1-93f25927d462" containerName="extract-utilities" Dec 06 05:44:20 crc kubenswrapper[4958]: E1206 05:44:20.655824 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="281467fb-46c7-4fb5-89b1-93f25927d462" containerName="registry-server" Dec 06 05:44:20 crc kubenswrapper[4958]: I1206 05:44:20.655831 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="281467fb-46c7-4fb5-89b1-93f25927d462" containerName="registry-server" Dec 06 05:44:20 crc 
Dec 06 05:44:20 crc kubenswrapper[4958]: E1206 05:44:20.655842 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="281467fb-46c7-4fb5-89b1-93f25927d462" containerName="extract-content" Dec 06 05:44:20 crc kubenswrapper[4958]: I1206 05:44:20.655850 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="281467fb-46c7-4fb5-89b1-93f25927d462" containerName="extract-content" Dec 06 05:44:20 crc kubenswrapper[4958]: I1206 05:44:20.655954 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="281467fb-46c7-4fb5-89b1-93f25927d462" containerName="registry-server" Dec 06 05:44:20 crc kubenswrapper[4958]: I1206 05:44:20.656802 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fvxf99" Dec 06 05:44:20 crc kubenswrapper[4958]: I1206 05:44:20.658513 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 06 05:44:20 crc kubenswrapper[4958]: I1206 05:44:20.666213 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fvxf99"] Dec 06 05:44:20 crc kubenswrapper[4958]: I1206 05:44:20.786944 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/415f7755-d8b2-4eea-a307-03e0b7ca4d95-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fvxf99\" (UID: \"415f7755-d8b2-4eea-a307-03e0b7ca4d95\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fvxf99" Dec 06 05:44:20 crc kubenswrapper[4958]: I1206 05:44:20.787008 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/415f7755-d8b2-4eea-a307-03e0b7ca4d95-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fvxf99\" (UID: \"415f7755-d8b2-4eea-a307-03e0b7ca4d95\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fvxf99" Dec 06 05:44:20 crc kubenswrapper[4958]: I1206 05:44:20.787125 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hv69\" (UniqueName: \"kubernetes.io/projected/415f7755-d8b2-4eea-a307-03e0b7ca4d95-kube-api-access-6hv69\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fvxf99\" (UID: \"415f7755-d8b2-4eea-a307-03e0b7ca4d95\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fvxf99" Dec 06 05:44:20 crc kubenswrapper[4958]: I1206 05:44:20.888308 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/415f7755-d8b2-4eea-a307-03e0b7ca4d95-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fvxf99\" (UID: \"415f7755-d8b2-4eea-a307-03e0b7ca4d95\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fvxf99" Dec 06 05:44:20 crc kubenswrapper[4958]: I1206 05:44:20.888381 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/415f7755-d8b2-4eea-a307-03e0b7ca4d95-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fvxf99\" (UID: \"415f7755-d8b2-4eea-a307-03e0b7ca4d95\") " 
pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fvxf99" Dec 06 05:44:20 crc kubenswrapper[4958]: I1206 05:44:20.888457 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6hv69\" (UniqueName: \"kubernetes.io/projected/415f7755-d8b2-4eea-a307-03e0b7ca4d95-kube-api-access-6hv69\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fvxf99\" (UID: \"415f7755-d8b2-4eea-a307-03e0b7ca4d95\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fvxf99" Dec 06 05:44:20 crc kubenswrapper[4958]: I1206 05:44:20.888812 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/415f7755-d8b2-4eea-a307-03e0b7ca4d95-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fvxf99\" (UID: \"415f7755-d8b2-4eea-a307-03e0b7ca4d95\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fvxf99" Dec 06 05:44:20 crc kubenswrapper[4958]: I1206 05:44:20.888884 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/415f7755-d8b2-4eea-a307-03e0b7ca4d95-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fvxf99\" (UID: \"415f7755-d8b2-4eea-a307-03e0b7ca4d95\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fvxf99" Dec 06 05:44:20 crc kubenswrapper[4958]: I1206 05:44:20.910980 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hv69\" (UniqueName: \"kubernetes.io/projected/415f7755-d8b2-4eea-a307-03e0b7ca4d95-kube-api-access-6hv69\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fvxf99\" (UID: \"415f7755-d8b2-4eea-a307-03e0b7ca4d95\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fvxf99" Dec 06 05:44:20 crc kubenswrapper[4958]: I1206 05:44:20.971694 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fvxf99" Dec 06 05:44:21 crc kubenswrapper[4958]: I1206 05:44:21.416936 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fvxf99"] Dec 06 05:44:22 crc kubenswrapper[4958]: I1206 05:44:22.050524 4958 generic.go:334] "Generic (PLEG): container finished" podID="415f7755-d8b2-4eea-a307-03e0b7ca4d95" containerID="e875a614303d68e5642414778eb8f04db55dd5569600ad78fdfdbfd8c48dd2d3" exitCode=0 Dec 06 05:44:22 crc kubenswrapper[4958]: I1206 05:44:22.050772 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fvxf99" event={"ID":"415f7755-d8b2-4eea-a307-03e0b7ca4d95","Type":"ContainerDied","Data":"e875a614303d68e5642414778eb8f04db55dd5569600ad78fdfdbfd8c48dd2d3"} Dec 06 05:44:22 crc kubenswrapper[4958]: I1206 05:44:22.050798 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fvxf99" event={"ID":"415f7755-d8b2-4eea-a307-03e0b7ca4d95","Type":"ContainerStarted","Data":"094ec89eb8caefc996a1eea21a49990900758e7fd10cd18d262f0b66bf54a8c3"} Dec 06 05:44:29 crc kubenswrapper[4958]: I1206 05:44:29.095362 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fvxf99" event={"ID":"415f7755-d8b2-4eea-a307-03e0b7ca4d95","Type":"ContainerStarted","Data":"35ec21c813cb475aa865bed35cb377dc1210579048e955dfb1687557f209b7ff"} Dec 06 05:44:30 crc kubenswrapper[4958]: I1206 05:44:30.104240 4958 generic.go:334] "Generic (PLEG): container finished" podID="415f7755-d8b2-4eea-a307-03e0b7ca4d95" containerID="35ec21c813cb475aa865bed35cb377dc1210579048e955dfb1687557f209b7ff" exitCode=0 Dec 06 05:44:30 crc kubenswrapper[4958]: I1206 05:44:30.104315 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fvxf99" event={"ID":"415f7755-d8b2-4eea-a307-03e0b7ca4d95","Type":"ContainerDied","Data":"35ec21c813cb475aa865bed35cb377dc1210579048e955dfb1687557f209b7ff"} Dec 06 05:44:31 crc kubenswrapper[4958]: I1206 05:44:31.113528 4958 generic.go:334] "Generic (PLEG): container finished" podID="415f7755-d8b2-4eea-a307-03e0b7ca4d95" containerID="d8c3ae455b2405e54d967a37723779ec3f0f0cf6469ae32b7c8e61b2cbc2f003" exitCode=0 Dec 06 05:44:31 crc kubenswrapper[4958]: I1206 05:44:31.113572 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fvxf99" event={"ID":"415f7755-d8b2-4eea-a307-03e0b7ca4d95","Type":"ContainerDied","Data":"d8c3ae455b2405e54d967a37723779ec3f0f0cf6469ae32b7c8e61b2cbc2f003"} Dec 06 05:44:32 crc kubenswrapper[4958]: I1206 05:44:32.409671 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fvxf99" Dec 06 05:44:32 crc kubenswrapper[4958]: I1206 05:44:32.551257 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/415f7755-d8b2-4eea-a307-03e0b7ca4d95-util\") pod \"415f7755-d8b2-4eea-a307-03e0b7ca4d95\" (UID: \"415f7755-d8b2-4eea-a307-03e0b7ca4d95\") " Dec 06 05:44:32 crc kubenswrapper[4958]: I1206 05:44:32.551362 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6hv69\" (UniqueName: \"kubernetes.io/projected/415f7755-d8b2-4eea-a307-03e0b7ca4d95-kube-api-access-6hv69\") pod \"415f7755-d8b2-4eea-a307-03e0b7ca4d95\" (UID: \"415f7755-d8b2-4eea-a307-03e0b7ca4d95\") " Dec 06 05:44:32 crc kubenswrapper[4958]: I1206 05:44:32.551405 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/415f7755-d8b2-4eea-a307-03e0b7ca4d95-bundle\") pod \"415f7755-d8b2-4eea-a307-03e0b7ca4d95\" (UID: \"415f7755-d8b2-4eea-a307-03e0b7ca4d95\") " Dec 06 05:44:32 crc kubenswrapper[4958]: I1206 05:44:32.552582 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/415f7755-d8b2-4eea-a307-03e0b7ca4d95-bundle" (OuterVolumeSpecName: "bundle") pod "415f7755-d8b2-4eea-a307-03e0b7ca4d95" (UID: "415f7755-d8b2-4eea-a307-03e0b7ca4d95"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 05:44:32 crc kubenswrapper[4958]: I1206 05:44:32.561263 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/415f7755-d8b2-4eea-a307-03e0b7ca4d95-kube-api-access-6hv69" (OuterVolumeSpecName: "kube-api-access-6hv69") pod "415f7755-d8b2-4eea-a307-03e0b7ca4d95" (UID: "415f7755-d8b2-4eea-a307-03e0b7ca4d95"). InnerVolumeSpecName "kube-api-access-6hv69". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:44:32 crc kubenswrapper[4958]: I1206 05:44:32.575073 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/415f7755-d8b2-4eea-a307-03e0b7ca4d95-util" (OuterVolumeSpecName: "util") pod "415f7755-d8b2-4eea-a307-03e0b7ca4d95" (UID: "415f7755-d8b2-4eea-a307-03e0b7ca4d95"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 05:44:32 crc kubenswrapper[4958]: I1206 05:44:32.653282 4958 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/415f7755-d8b2-4eea-a307-03e0b7ca4d95-util\") on node \"crc\" DevicePath \"\"" Dec 06 05:44:32 crc kubenswrapper[4958]: I1206 05:44:32.653320 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6hv69\" (UniqueName: \"kubernetes.io/projected/415f7755-d8b2-4eea-a307-03e0b7ca4d95-kube-api-access-6hv69\") on node \"crc\" DevicePath \"\"" Dec 06 05:44:32 crc kubenswrapper[4958]: I1206 05:44:32.653335 4958 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/415f7755-d8b2-4eea-a307-03e0b7ca4d95-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 05:44:33 crc kubenswrapper[4958]: I1206 05:44:33.127749 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fvxf99" event={"ID":"415f7755-d8b2-4eea-a307-03e0b7ca4d95","Type":"ContainerDied","Data":"094ec89eb8caefc996a1eea21a49990900758e7fd10cd18d262f0b66bf54a8c3"} Dec 06 05:44:33 crc kubenswrapper[4958]: I1206 05:44:33.127788 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="094ec89eb8caefc996a1eea21a49990900758e7fd10cd18d262f0b66bf54a8c3" Dec 06 05:44:33 crc kubenswrapper[4958]: I1206 05:44:33.127857 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fvxf99" Dec 06 05:44:37 crc kubenswrapper[4958]: I1206 05:44:37.237220 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-pj8z7"] Dec 06 05:44:37 crc kubenswrapper[4958]: E1206 05:44:37.237939 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="415f7755-d8b2-4eea-a307-03e0b7ca4d95" containerName="pull" Dec 06 05:44:37 crc kubenswrapper[4958]: I1206 05:44:37.237951 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="415f7755-d8b2-4eea-a307-03e0b7ca4d95" containerName="pull" Dec 06 05:44:37 crc kubenswrapper[4958]: E1206 05:44:37.237960 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="415f7755-d8b2-4eea-a307-03e0b7ca4d95" containerName="util" Dec 06 05:44:37 crc kubenswrapper[4958]: I1206 05:44:37.237965 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="415f7755-d8b2-4eea-a307-03e0b7ca4d95" containerName="util" Dec 06 05:44:37 crc kubenswrapper[4958]: E1206 05:44:37.237980 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="415f7755-d8b2-4eea-a307-03e0b7ca4d95" containerName="extract" Dec 06 05:44:37 crc kubenswrapper[4958]: I1206 05:44:37.237986 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="415f7755-d8b2-4eea-a307-03e0b7ca4d95" containerName="extract" Dec 06 05:44:37 crc kubenswrapper[4958]: I1206 05:44:37.238082 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="415f7755-d8b2-4eea-a307-03e0b7ca4d95" containerName="extract" Dec 06 05:44:37 crc kubenswrapper[4958]: I1206 05:44:37.238552 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-pj8z7" Dec 06 05:44:37 crc kubenswrapper[4958]: I1206 05:44:37.239963 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Dec 06 05:44:37 crc kubenswrapper[4958]: I1206 05:44:37.240156 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-tk24b" Dec 06 05:44:37 crc kubenswrapper[4958]: I1206 05:44:37.240989 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Dec 06 05:44:37 crc kubenswrapper[4958]: I1206 05:44:37.255123 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-pj8z7"] Dec 06 05:44:37 crc kubenswrapper[4958]: I1206 05:44:37.312399 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtzgb\" (UniqueName: \"kubernetes.io/projected/ec41d8b3-c4d9-426f-8856-aeab408126a9-kube-api-access-wtzgb\") pod \"nmstate-operator-5b5b58f5c8-pj8z7\" (UID: \"ec41d8b3-c4d9-426f-8856-aeab408126a9\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-pj8z7" Dec 06 05:44:37 crc kubenswrapper[4958]: I1206 05:44:37.413391 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtzgb\" (UniqueName: \"kubernetes.io/projected/ec41d8b3-c4d9-426f-8856-aeab408126a9-kube-api-access-wtzgb\") pod \"nmstate-operator-5b5b58f5c8-pj8z7\" (UID: \"ec41d8b3-c4d9-426f-8856-aeab408126a9\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-pj8z7" Dec 06 05:44:37 crc kubenswrapper[4958]: I1206 05:44:37.431709 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtzgb\" (UniqueName: \"kubernetes.io/projected/ec41d8b3-c4d9-426f-8856-aeab408126a9-kube-api-access-wtzgb\") pod \"nmstate-operator-5b5b58f5c8-pj8z7\" (UID: \"ec41d8b3-c4d9-426f-8856-aeab408126a9\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-pj8z7" Dec 06 05:44:37 crc kubenswrapper[4958]: I1206 05:44:37.553824 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-pj8z7" Dec 06 05:44:38 crc kubenswrapper[4958]: I1206 05:44:37.999580 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-pj8z7"] Dec 06 05:44:38 crc kubenswrapper[4958]: I1206 05:44:38.168603 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-pj8z7" event={"ID":"ec41d8b3-c4d9-426f-8856-aeab408126a9","Type":"ContainerStarted","Data":"760384e95e3330bab1d07ab7ba5fe62eec704501bc54ddd04a8c845c53559f33"} Dec 06 05:44:48 crc kubenswrapper[4958]: I1206 05:44:48.227103 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-pj8z7" event={"ID":"ec41d8b3-c4d9-426f-8856-aeab408126a9","Type":"ContainerStarted","Data":"ccf4a98941bcc09f38b898a652278bb5f0f9046824113957772465bb04919f68"} Dec 06 05:44:49 crc kubenswrapper[4958]: I1206 05:44:49.253194 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-pj8z7" podStartSLOduration=2.666743638 podStartE2EDuration="12.25316073s" podCreationTimestamp="2025-12-06 05:44:37 +0000 UTC" firstStartedPulling="2025-12-06 05:44:38.009803303 +0000 UTC m=+988.543574066" lastFinishedPulling="2025-12-06 05:44:47.596220355 +0000 UTC m=+998.129991158" observedRunningTime="2025-12-06 05:44:49.247048967 +0000 UTC m=+999.780819740" watchObservedRunningTime="2025-12-06 05:44:49.25316073 +0000 UTC m=+999.786931513" Dec 06 05:44:50 crc kubenswrapper[4958]: I1206 05:44:50.220339 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-w7z84"] Dec 06 05:44:50 crc kubenswrapper[4958]: I1206 05:44:50.221187 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-w7z84" Dec 06 05:44:50 crc kubenswrapper[4958]: I1206 05:44:50.223321 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-qjpsw" Dec 06 05:44:50 crc kubenswrapper[4958]: I1206 05:44:50.229544 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-wm7wx"] Dec 06 05:44:50 crc kubenswrapper[4958]: I1206 05:44:50.230251 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-wm7wx" Dec 06 05:44:50 crc kubenswrapper[4958]: I1206 05:44:50.232296 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Dec 06 05:44:50 crc kubenswrapper[4958]: I1206 05:44:50.234857 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-w7z84"] Dec 06 05:44:50 crc kubenswrapper[4958]: I1206 05:44:50.258135 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-fhd8b"] Dec 06 05:44:50 crc kubenswrapper[4958]: I1206 05:44:50.260726 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-fhd8b" Dec 06 05:44:50 crc kubenswrapper[4958]: I1206 05:44:50.275512 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-wm7wx"] Dec 06 05:44:50 crc kubenswrapper[4958]: I1206 05:44:50.358279 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-zt7zw"] Dec 06 05:44:50 crc kubenswrapper[4958]: I1206 05:44:50.358944 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-zt7zw" Dec 06 05:44:50 crc kubenswrapper[4958]: I1206 05:44:50.363019 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Dec 06 05:44:50 crc kubenswrapper[4958]: I1206 05:44:50.363395 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-fbfmq" Dec 06 05:44:50 crc kubenswrapper[4958]: I1206 05:44:50.368983 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Dec 06 05:44:50 crc kubenswrapper[4958]: I1206 05:44:50.369740 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-zt7zw"] Dec 06 05:44:50 crc kubenswrapper[4958]: I1206 05:44:50.375268 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/60563153-eb29-48b1-b275-4a426410c3f2-nmstate-lock\") pod \"nmstate-handler-fhd8b\" (UID: \"60563153-eb29-48b1-b275-4a426410c3f2\") " pod="openshift-nmstate/nmstate-handler-fhd8b" Dec 06 05:44:50 crc kubenswrapper[4958]: I1206 05:44:50.375343 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvs69\" (UniqueName: \"kubernetes.io/projected/eb092a25-46b9-4519-9bfd-a1a75207a121-kube-api-access-lvs69\") pod \"nmstate-metrics-7f946cbc9-w7z84\" (UID: \"eb092a25-46b9-4519-9bfd-a1a75207a121\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-w7z84" Dec 06 05:44:50 crc kubenswrapper[4958]: I1206 05:44:50.375376 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/60563153-eb29-48b1-b275-4a426410c3f2-dbus-socket\") pod \"nmstate-handler-fhd8b\" (UID: \"60563153-eb29-48b1-b275-4a426410c3f2\") " pod="openshift-nmstate/nmstate-handler-fhd8b" Dec 06 05:44:50 crc kubenswrapper[4958]: I1206 05:44:50.375408 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/60563153-eb29-48b1-b275-4a426410c3f2-ovs-socket\") pod \"nmstate-handler-fhd8b\" (UID: \"60563153-eb29-48b1-b275-4a426410c3f2\") " pod="openshift-nmstate/nmstate-handler-fhd8b" Dec 06 05:44:50 crc kubenswrapper[4958]: I1206 05:44:50.375432 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/5a48e60d-cf21-4b2f-bc99-eebce42f8832-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-wm7wx\" (UID: \"5a48e60d-cf21-4b2f-bc99-eebce42f8832\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-wm7wx" Dec 06 05:44:50 crc kubenswrapper[4958]: I1206 05:44:50.375450 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-g4mmm\" (UniqueName: \"kubernetes.io/projected/5a48e60d-cf21-4b2f-bc99-eebce42f8832-kube-api-access-g4mmm\") pod \"nmstate-webhook-5f6d4c5ccb-wm7wx\" (UID: \"5a48e60d-cf21-4b2f-bc99-eebce42f8832\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-wm7wx" Dec 06 05:44:50 crc kubenswrapper[4958]: I1206 05:44:50.375492 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xctxj\" (UniqueName: \"kubernetes.io/projected/60563153-eb29-48b1-b275-4a426410c3f2-kube-api-access-xctxj\") pod \"nmstate-handler-fhd8b\" (UID: \"60563153-eb29-48b1-b275-4a426410c3f2\") " pod="openshift-nmstate/nmstate-handler-fhd8b" Dec 06 05:44:50 crc kubenswrapper[4958]: I1206 05:44:50.478041 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/132a681d-da4e-406e-897f-e8204a0b3061-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-zt7zw\" (UID: \"132a681d-da4e-406e-897f-e8204a0b3061\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-zt7zw" Dec 06 05:44:50 crc kubenswrapper[4958]: I1206 05:44:50.478108 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/60563153-eb29-48b1-b275-4a426410c3f2-nmstate-lock\") pod \"nmstate-handler-fhd8b\" (UID: \"60563153-eb29-48b1-b275-4a426410c3f2\") " pod="openshift-nmstate/nmstate-handler-fhd8b" Dec 06 05:44:50 crc kubenswrapper[4958]: I1206 05:44:50.478157 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/60563153-eb29-48b1-b275-4a426410c3f2-nmstate-lock\") pod \"nmstate-handler-fhd8b\" (UID: \"60563153-eb29-48b1-b275-4a426410c3f2\") " pod="openshift-nmstate/nmstate-handler-fhd8b" Dec 06 05:44:50 crc kubenswrapper[4958]: I1206 05:44:50.478264 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4lvk\" (UniqueName: \"kubernetes.io/projected/132a681d-da4e-406e-897f-e8204a0b3061-kube-api-access-d4lvk\") pod \"nmstate-console-plugin-7fbb5f6569-zt7zw\" (UID: \"132a681d-da4e-406e-897f-e8204a0b3061\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-zt7zw" Dec 06 05:44:50 crc kubenswrapper[4958]: I1206 05:44:50.478314 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvs69\" (UniqueName: \"kubernetes.io/projected/eb092a25-46b9-4519-9bfd-a1a75207a121-kube-api-access-lvs69\") pod \"nmstate-metrics-7f946cbc9-w7z84\" (UID: \"eb092a25-46b9-4519-9bfd-a1a75207a121\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-w7z84" Dec 06 05:44:50 crc kubenswrapper[4958]: I1206 05:44:50.478356 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/60563153-eb29-48b1-b275-4a426410c3f2-dbus-socket\") pod \"nmstate-handler-fhd8b\" (UID: \"60563153-eb29-48b1-b275-4a426410c3f2\") " pod="openshift-nmstate/nmstate-handler-fhd8b" Dec 06 05:44:50 crc kubenswrapper[4958]: I1206 05:44:50.478387 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/132a681d-da4e-406e-897f-e8204a0b3061-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-zt7zw\" (UID: \"132a681d-da4e-406e-897f-e8204a0b3061\") " 
pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-zt7zw" Dec 06 05:44:50 crc kubenswrapper[4958]: I1206 05:44:50.478435 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/60563153-eb29-48b1-b275-4a426410c3f2-ovs-socket\") pod \"nmstate-handler-fhd8b\" (UID: \"60563153-eb29-48b1-b275-4a426410c3f2\") " pod="openshift-nmstate/nmstate-handler-fhd8b" Dec 06 05:44:50 crc kubenswrapper[4958]: I1206 05:44:50.478496 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/5a48e60d-cf21-4b2f-bc99-eebce42f8832-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-wm7wx\" (UID: \"5a48e60d-cf21-4b2f-bc99-eebce42f8832\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-wm7wx" Dec 06 05:44:50 crc kubenswrapper[4958]: I1206 05:44:50.478526 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4mmm\" (UniqueName: \"kubernetes.io/projected/5a48e60d-cf21-4b2f-bc99-eebce42f8832-kube-api-access-g4mmm\") pod \"nmstate-webhook-5f6d4c5ccb-wm7wx\" (UID: \"5a48e60d-cf21-4b2f-bc99-eebce42f8832\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-wm7wx" Dec 06 05:44:50 crc kubenswrapper[4958]: I1206 05:44:50.478591 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xctxj\" (UniqueName: \"kubernetes.io/projected/60563153-eb29-48b1-b275-4a426410c3f2-kube-api-access-xctxj\") pod \"nmstate-handler-fhd8b\" (UID: \"60563153-eb29-48b1-b275-4a426410c3f2\") " pod="openshift-nmstate/nmstate-handler-fhd8b" Dec 06 05:44:50 crc kubenswrapper[4958]: E1206 05:44:50.478833 4958 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Dec 06 05:44:50 crc kubenswrapper[4958]: I1206 05:44:50.478829 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/60563153-eb29-48b1-b275-4a426410c3f2-ovs-socket\") pod \"nmstate-handler-fhd8b\" (UID: \"60563153-eb29-48b1-b275-4a426410c3f2\") " pod="openshift-nmstate/nmstate-handler-fhd8b" Dec 06 05:44:50 crc kubenswrapper[4958]: E1206 05:44:50.478964 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a48e60d-cf21-4b2f-bc99-eebce42f8832-tls-key-pair podName:5a48e60d-cf21-4b2f-bc99-eebce42f8832 nodeName:}" failed. No retries permitted until 2025-12-06 05:44:50.978935042 +0000 UTC m=+1001.512705805 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/5a48e60d-cf21-4b2f-bc99-eebce42f8832-tls-key-pair") pod "nmstate-webhook-5f6d4c5ccb-wm7wx" (UID: "5a48e60d-cf21-4b2f-bc99-eebce42f8832") : secret "openshift-nmstate-webhook" not found Dec 06 05:44:50 crc kubenswrapper[4958]: I1206 05:44:50.479126 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/60563153-eb29-48b1-b275-4a426410c3f2-dbus-socket\") pod \"nmstate-handler-fhd8b\" (UID: \"60563153-eb29-48b1-b275-4a426410c3f2\") " pod="openshift-nmstate/nmstate-handler-fhd8b" Dec 06 05:44:50 crc kubenswrapper[4958]: I1206 05:44:50.501482 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4mmm\" (UniqueName: \"kubernetes.io/projected/5a48e60d-cf21-4b2f-bc99-eebce42f8832-kube-api-access-g4mmm\") pod \"nmstate-webhook-5f6d4c5ccb-wm7wx\" (UID: \"5a48e60d-cf21-4b2f-bc99-eebce42f8832\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-wm7wx" Dec 06 05:44:50 crc kubenswrapper[4958]: I1206 05:44:50.504220 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvs69\" (UniqueName: \"kubernetes.io/projected/eb092a25-46b9-4519-9bfd-a1a75207a121-kube-api-access-lvs69\") pod \"nmstate-metrics-7f946cbc9-w7z84\" (UID: \"eb092a25-46b9-4519-9bfd-a1a75207a121\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-w7z84" Dec 06 05:44:50 crc kubenswrapper[4958]: I1206 05:44:50.504324 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xctxj\" (UniqueName: \"kubernetes.io/projected/60563153-eb29-48b1-b275-4a426410c3f2-kube-api-access-xctxj\") pod \"nmstate-handler-fhd8b\" (UID: \"60563153-eb29-48b1-b275-4a426410c3f2\") " pod="openshift-nmstate/nmstate-handler-fhd8b" Dec 06 05:44:50 crc kubenswrapper[4958]: I1206 05:44:50.544840 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-w7z84" Dec 06 05:44:50 crc kubenswrapper[4958]: I1206 05:44:50.555253 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-65948f7f79-j2lwp"] Dec 06 05:44:50 crc kubenswrapper[4958]: I1206 05:44:50.556182 4958 util.go:30] "No sandbox for pod can be found. 
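NOTE: the E1206 entries just above show the kubelet's volume-mount retry gate: when a pod references a secret that does not exist yet, secret.go reports the lookup failure and nestedpendingoperations.go refuses further attempts until the recorded failure time plus a delay, starting at 500ms ("durationBeforeRetry 500ms") and doubling on repeat failures up to a cap. What follows is a minimal, hedged sketch of that doubling gate in Go; the names backoff, fail, and retryAllowed are illustrative rather than the kubelet's actual API, and the two-minute cap is an assumption that may differ by kubelet version.

package main

import (
	"fmt"
	"time"
)

// Illustrative types; not the kubelet's actual structures.
type backoff struct {
	lastError time.Time     // when the operation last failed
	delay     time.Duration // current wait before the next attempt
}

const (
	initialDelay = 500 * time.Millisecond // matches "durationBeforeRetry 500ms"
	maxDelay     = 2 * time.Minute        // assumed cap; the real kubelet cap may differ
)

// fail records a failure and doubles the delay, mirroring the
// "No retries permitted until <lastErrorTime + delay>" messages.
func (b *backoff) fail(now time.Time) {
	b.lastError = now
	if b.delay == 0 {
		b.delay = initialDelay
		return
	}
	b.delay *= 2
	if b.delay > maxDelay {
		b.delay = maxDelay
	}
}

// retryAllowed reports whether a new attempt may start at t.
func (b *backoff) retryAllowed(t time.Time) bool {
	return t.After(b.lastError.Add(b.delay))
}

func main() {
	var b backoff
	failedAt := time.Date(2025, time.December, 6, 5, 44, 50, 478964000, time.UTC)
	b.fail(failedAt) // gate closed until 05:44:50.978964, as in the log
	fmt.Println(b.retryAllowed(failedAt.Add(300 * time.Millisecond))) // false
	fmt.Println(b.retryAllowed(failedAt.Add(600 * time.Millisecond))) // true
}

In this trace the gate never has to double: the retries at 05:44:50.984746 (tls-key-pair) and 05:44:51.100433 (plugin-serving-cert) further down both succeed on the first reattempt, so the secrets evidently appeared within the first 500ms window.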
Need to start a new one" pod="openshift-console/console-65948f7f79-j2lwp" Dec 06 05:44:50 crc kubenswrapper[4958]: I1206 05:44:50.568095 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-65948f7f79-j2lwp"] Dec 06 05:44:50 crc kubenswrapper[4958]: I1206 05:44:50.580153 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/132a681d-da4e-406e-897f-e8204a0b3061-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-zt7zw\" (UID: \"132a681d-da4e-406e-897f-e8204a0b3061\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-zt7zw" Dec 06 05:44:50 crc kubenswrapper[4958]: I1206 05:44:50.580210 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4lvk\" (UniqueName: \"kubernetes.io/projected/132a681d-da4e-406e-897f-e8204a0b3061-kube-api-access-d4lvk\") pod \"nmstate-console-plugin-7fbb5f6569-zt7zw\" (UID: \"132a681d-da4e-406e-897f-e8204a0b3061\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-zt7zw" Dec 06 05:44:50 crc kubenswrapper[4958]: I1206 05:44:50.580240 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/132a681d-da4e-406e-897f-e8204a0b3061-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-zt7zw\" (UID: \"132a681d-da4e-406e-897f-e8204a0b3061\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-zt7zw" Dec 06 05:44:50 crc kubenswrapper[4958]: E1206 05:44:50.580720 4958 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Dec 06 05:44:50 crc kubenswrapper[4958]: E1206 05:44:50.580789 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/132a681d-da4e-406e-897f-e8204a0b3061-plugin-serving-cert podName:132a681d-da4e-406e-897f-e8204a0b3061 nodeName:}" failed. No retries permitted until 2025-12-06 05:44:51.080768637 +0000 UTC m=+1001.614539410 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/132a681d-da4e-406e-897f-e8204a0b3061-plugin-serving-cert") pod "nmstate-console-plugin-7fbb5f6569-zt7zw" (UID: "132a681d-da4e-406e-897f-e8204a0b3061") : secret "plugin-serving-cert" not found Dec 06 05:44:50 crc kubenswrapper[4958]: I1206 05:44:50.581139 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/132a681d-da4e-406e-897f-e8204a0b3061-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-zt7zw\" (UID: \"132a681d-da4e-406e-897f-e8204a0b3061\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-zt7zw" Dec 06 05:44:50 crc kubenswrapper[4958]: I1206 05:44:50.588558 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-fhd8b" Dec 06 05:44:50 crc kubenswrapper[4958]: I1206 05:44:50.595552 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4lvk\" (UniqueName: \"kubernetes.io/projected/132a681d-da4e-406e-897f-e8204a0b3061-kube-api-access-d4lvk\") pod \"nmstate-console-plugin-7fbb5f6569-zt7zw\" (UID: \"132a681d-da4e-406e-897f-e8204a0b3061\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-zt7zw" Dec 06 05:44:50 crc kubenswrapper[4958]: W1206 05:44:50.624127 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod60563153_eb29_48b1_b275_4a426410c3f2.slice/crio-a8754437872e1caa16a1014c6008dffd6fd9d07a27e0e00e3b0bb84909a7e5dc WatchSource:0}: Error finding container a8754437872e1caa16a1014c6008dffd6fd9d07a27e0e00e3b0bb84909a7e5dc: Status 404 returned error can't find the container with id a8754437872e1caa16a1014c6008dffd6fd9d07a27e0e00e3b0bb84909a7e5dc Dec 06 05:44:50 crc kubenswrapper[4958]: I1206 05:44:50.681126 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/08d38124-9eb5-4fb7-87f2-cdb241ab227b-oauth-serving-cert\") pod \"console-65948f7f79-j2lwp\" (UID: \"08d38124-9eb5-4fb7-87f2-cdb241ab227b\") " pod="openshift-console/console-65948f7f79-j2lwp" Dec 06 05:44:50 crc kubenswrapper[4958]: I1206 05:44:50.681454 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5lvw\" (UniqueName: \"kubernetes.io/projected/08d38124-9eb5-4fb7-87f2-cdb241ab227b-kube-api-access-k5lvw\") pod \"console-65948f7f79-j2lwp\" (UID: \"08d38124-9eb5-4fb7-87f2-cdb241ab227b\") " pod="openshift-console/console-65948f7f79-j2lwp" Dec 06 05:44:50 crc kubenswrapper[4958]: I1206 05:44:50.681536 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/08d38124-9eb5-4fb7-87f2-cdb241ab227b-trusted-ca-bundle\") pod \"console-65948f7f79-j2lwp\" (UID: \"08d38124-9eb5-4fb7-87f2-cdb241ab227b\") " pod="openshift-console/console-65948f7f79-j2lwp" Dec 06 05:44:50 crc kubenswrapper[4958]: I1206 05:44:50.681595 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/08d38124-9eb5-4fb7-87f2-cdb241ab227b-service-ca\") pod \"console-65948f7f79-j2lwp\" (UID: \"08d38124-9eb5-4fb7-87f2-cdb241ab227b\") " pod="openshift-console/console-65948f7f79-j2lwp" Dec 06 05:44:50 crc kubenswrapper[4958]: I1206 05:44:50.681629 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/08d38124-9eb5-4fb7-87f2-cdb241ab227b-console-oauth-config\") pod \"console-65948f7f79-j2lwp\" (UID: \"08d38124-9eb5-4fb7-87f2-cdb241ab227b\") " pod="openshift-console/console-65948f7f79-j2lwp" Dec 06 05:44:50 crc kubenswrapper[4958]: I1206 05:44:50.681672 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/08d38124-9eb5-4fb7-87f2-cdb241ab227b-console-serving-cert\") pod \"console-65948f7f79-j2lwp\" (UID: \"08d38124-9eb5-4fb7-87f2-cdb241ab227b\") " pod="openshift-console/console-65948f7f79-j2lwp" Dec 06 05:44:50 crc 
kubenswrapper[4958]: I1206 05:44:50.681690 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/08d38124-9eb5-4fb7-87f2-cdb241ab227b-console-config\") pod \"console-65948f7f79-j2lwp\" (UID: \"08d38124-9eb5-4fb7-87f2-cdb241ab227b\") " pod="openshift-console/console-65948f7f79-j2lwp" Dec 06 05:44:50 crc kubenswrapper[4958]: I1206 05:44:50.744271 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-w7z84"] Dec 06 05:44:50 crc kubenswrapper[4958]: W1206 05:44:50.751838 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeb092a25_46b9_4519_9bfd_a1a75207a121.slice/crio-d33420113da41b1d8ac8fb85310ba52daf3049a66193221305dcfde578ec1e1d WatchSource:0}: Error finding container d33420113da41b1d8ac8fb85310ba52daf3049a66193221305dcfde578ec1e1d: Status 404 returned error can't find the container with id d33420113da41b1d8ac8fb85310ba52daf3049a66193221305dcfde578ec1e1d Dec 06 05:44:50 crc kubenswrapper[4958]: I1206 05:44:50.782497 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/08d38124-9eb5-4fb7-87f2-cdb241ab227b-console-config\") pod \"console-65948f7f79-j2lwp\" (UID: \"08d38124-9eb5-4fb7-87f2-cdb241ab227b\") " pod="openshift-console/console-65948f7f79-j2lwp" Dec 06 05:44:50 crc kubenswrapper[4958]: I1206 05:44:50.782549 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/08d38124-9eb5-4fb7-87f2-cdb241ab227b-oauth-serving-cert\") pod \"console-65948f7f79-j2lwp\" (UID: \"08d38124-9eb5-4fb7-87f2-cdb241ab227b\") " pod="openshift-console/console-65948f7f79-j2lwp" Dec 06 05:44:50 crc kubenswrapper[4958]: I1206 05:44:50.782569 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5lvw\" (UniqueName: \"kubernetes.io/projected/08d38124-9eb5-4fb7-87f2-cdb241ab227b-kube-api-access-k5lvw\") pod \"console-65948f7f79-j2lwp\" (UID: \"08d38124-9eb5-4fb7-87f2-cdb241ab227b\") " pod="openshift-console/console-65948f7f79-j2lwp" Dec 06 05:44:50 crc kubenswrapper[4958]: I1206 05:44:50.782605 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/08d38124-9eb5-4fb7-87f2-cdb241ab227b-trusted-ca-bundle\") pod \"console-65948f7f79-j2lwp\" (UID: \"08d38124-9eb5-4fb7-87f2-cdb241ab227b\") " pod="openshift-console/console-65948f7f79-j2lwp" Dec 06 05:44:50 crc kubenswrapper[4958]: I1206 05:44:50.782634 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/08d38124-9eb5-4fb7-87f2-cdb241ab227b-service-ca\") pod \"console-65948f7f79-j2lwp\" (UID: \"08d38124-9eb5-4fb7-87f2-cdb241ab227b\") " pod="openshift-console/console-65948f7f79-j2lwp" Dec 06 05:44:50 crc kubenswrapper[4958]: I1206 05:44:50.782667 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/08d38124-9eb5-4fb7-87f2-cdb241ab227b-console-oauth-config\") pod \"console-65948f7f79-j2lwp\" (UID: \"08d38124-9eb5-4fb7-87f2-cdb241ab227b\") " pod="openshift-console/console-65948f7f79-j2lwp" Dec 06 05:44:50 crc kubenswrapper[4958]: I1206 05:44:50.782704 4958 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/08d38124-9eb5-4fb7-87f2-cdb241ab227b-console-serving-cert\") pod \"console-65948f7f79-j2lwp\" (UID: \"08d38124-9eb5-4fb7-87f2-cdb241ab227b\") " pod="openshift-console/console-65948f7f79-j2lwp" Dec 06 05:44:50 crc kubenswrapper[4958]: I1206 05:44:50.783531 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/08d38124-9eb5-4fb7-87f2-cdb241ab227b-oauth-serving-cert\") pod \"console-65948f7f79-j2lwp\" (UID: \"08d38124-9eb5-4fb7-87f2-cdb241ab227b\") " pod="openshift-console/console-65948f7f79-j2lwp" Dec 06 05:44:50 crc kubenswrapper[4958]: I1206 05:44:50.783718 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/08d38124-9eb5-4fb7-87f2-cdb241ab227b-service-ca\") pod \"console-65948f7f79-j2lwp\" (UID: \"08d38124-9eb5-4fb7-87f2-cdb241ab227b\") " pod="openshift-console/console-65948f7f79-j2lwp" Dec 06 05:44:50 crc kubenswrapper[4958]: I1206 05:44:50.783965 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/08d38124-9eb5-4fb7-87f2-cdb241ab227b-console-config\") pod \"console-65948f7f79-j2lwp\" (UID: \"08d38124-9eb5-4fb7-87f2-cdb241ab227b\") " pod="openshift-console/console-65948f7f79-j2lwp" Dec 06 05:44:50 crc kubenswrapper[4958]: I1206 05:44:50.784484 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/08d38124-9eb5-4fb7-87f2-cdb241ab227b-trusted-ca-bundle\") pod \"console-65948f7f79-j2lwp\" (UID: \"08d38124-9eb5-4fb7-87f2-cdb241ab227b\") " pod="openshift-console/console-65948f7f79-j2lwp" Dec 06 05:44:50 crc kubenswrapper[4958]: I1206 05:44:50.786723 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/08d38124-9eb5-4fb7-87f2-cdb241ab227b-console-serving-cert\") pod \"console-65948f7f79-j2lwp\" (UID: \"08d38124-9eb5-4fb7-87f2-cdb241ab227b\") " pod="openshift-console/console-65948f7f79-j2lwp" Dec 06 05:44:50 crc kubenswrapper[4958]: I1206 05:44:50.786787 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/08d38124-9eb5-4fb7-87f2-cdb241ab227b-console-oauth-config\") pod \"console-65948f7f79-j2lwp\" (UID: \"08d38124-9eb5-4fb7-87f2-cdb241ab227b\") " pod="openshift-console/console-65948f7f79-j2lwp" Dec 06 05:44:50 crc kubenswrapper[4958]: I1206 05:44:50.798264 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5lvw\" (UniqueName: \"kubernetes.io/projected/08d38124-9eb5-4fb7-87f2-cdb241ab227b-kube-api-access-k5lvw\") pod \"console-65948f7f79-j2lwp\" (UID: \"08d38124-9eb5-4fb7-87f2-cdb241ab227b\") " pod="openshift-console/console-65948f7f79-j2lwp" Dec 06 05:44:50 crc kubenswrapper[4958]: I1206 05:44:50.949141 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-65948f7f79-j2lwp" Dec 06 05:44:50 crc kubenswrapper[4958]: I1206 05:44:50.984746 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/5a48e60d-cf21-4b2f-bc99-eebce42f8832-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-wm7wx\" (UID: \"5a48e60d-cf21-4b2f-bc99-eebce42f8832\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-wm7wx" Dec 06 05:44:50 crc kubenswrapper[4958]: I1206 05:44:50.990797 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/5a48e60d-cf21-4b2f-bc99-eebce42f8832-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-wm7wx\" (UID: \"5a48e60d-cf21-4b2f-bc99-eebce42f8832\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-wm7wx" Dec 06 05:44:51 crc kubenswrapper[4958]: I1206 05:44:51.100433 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/132a681d-da4e-406e-897f-e8204a0b3061-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-zt7zw\" (UID: \"132a681d-da4e-406e-897f-e8204a0b3061\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-zt7zw" Dec 06 05:44:51 crc kubenswrapper[4958]: I1206 05:44:51.111652 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/132a681d-da4e-406e-897f-e8204a0b3061-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-zt7zw\" (UID: \"132a681d-da4e-406e-897f-e8204a0b3061\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-zt7zw" Dec 06 05:44:51 crc kubenswrapper[4958]: I1206 05:44:51.152123 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-wm7wx" Dec 06 05:44:51 crc kubenswrapper[4958]: I1206 05:44:51.204779 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-65948f7f79-j2lwp"] Dec 06 05:44:51 crc kubenswrapper[4958]: I1206 05:44:51.252980 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-65948f7f79-j2lwp" event={"ID":"08d38124-9eb5-4fb7-87f2-cdb241ab227b","Type":"ContainerStarted","Data":"0a3194962338a1e327ce3762927705580af5b59dcdf1f852ce107b98ad384221"} Dec 06 05:44:51 crc kubenswrapper[4958]: I1206 05:44:51.254256 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-w7z84" event={"ID":"eb092a25-46b9-4519-9bfd-a1a75207a121","Type":"ContainerStarted","Data":"d33420113da41b1d8ac8fb85310ba52daf3049a66193221305dcfde578ec1e1d"} Dec 06 05:44:51 crc kubenswrapper[4958]: I1206 05:44:51.255070 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-fhd8b" event={"ID":"60563153-eb29-48b1-b275-4a426410c3f2","Type":"ContainerStarted","Data":"a8754437872e1caa16a1014c6008dffd6fd9d07a27e0e00e3b0bb84909a7e5dc"} Dec 06 05:44:51 crc kubenswrapper[4958]: I1206 05:44:51.272942 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-zt7zw" Dec 06 05:44:51 crc kubenswrapper[4958]: I1206 05:44:51.343428 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-wm7wx"] Dec 06 05:44:51 crc kubenswrapper[4958]: I1206 05:44:51.662422 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-zt7zw"] Dec 06 05:44:52 crc kubenswrapper[4958]: I1206 05:44:52.261135 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-zt7zw" event={"ID":"132a681d-da4e-406e-897f-e8204a0b3061","Type":"ContainerStarted","Data":"756e9feffbdd6f4b3fc132502af6f9e90fe2b2e0c6d2e109dedabfaddd53041e"} Dec 06 05:44:52 crc kubenswrapper[4958]: I1206 05:44:52.262582 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-65948f7f79-j2lwp" event={"ID":"08d38124-9eb5-4fb7-87f2-cdb241ab227b","Type":"ContainerStarted","Data":"8d72aed1d7f0fc55b73c93f140fa7d1f4a3f7aa9befc236d59899b201dbe6002"} Dec 06 05:44:52 crc kubenswrapper[4958]: I1206 05:44:52.263661 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-wm7wx" event={"ID":"5a48e60d-cf21-4b2f-bc99-eebce42f8832","Type":"ContainerStarted","Data":"61a3f6d425d78a2d55ad8b87bdd9f89a352a632d3f934f8bc8c0d9821556c50b"} Dec 06 05:44:52 crc kubenswrapper[4958]: I1206 05:44:52.277943 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-65948f7f79-j2lwp" podStartSLOduration=2.277925393 podStartE2EDuration="2.277925393s" podCreationTimestamp="2025-12-06 05:44:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:44:52.27741196 +0000 UTC m=+1002.811182723" watchObservedRunningTime="2025-12-06 05:44:52.277925393 +0000 UTC m=+1002.811696156" Dec 06 05:45:00 crc kubenswrapper[4958]: I1206 05:45:00.153042 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416665-smrht"] Dec 06 05:45:00 crc kubenswrapper[4958]: I1206 05:45:00.155235 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416665-smrht" Dec 06 05:45:00 crc kubenswrapper[4958]: I1206 05:45:00.158192 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416665-smrht"] Dec 06 05:45:00 crc kubenswrapper[4958]: I1206 05:45:00.159384 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 06 05:45:00 crc kubenswrapper[4958]: I1206 05:45:00.159436 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 06 05:45:00 crc kubenswrapper[4958]: I1206 05:45:00.330276 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwd7z\" (UniqueName: \"kubernetes.io/projected/3ab4d38c-3380-4f4f-90d6-286ad61d6067-kube-api-access-pwd7z\") pod \"collect-profiles-29416665-smrht\" (UID: \"3ab4d38c-3380-4f4f-90d6-286ad61d6067\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416665-smrht" Dec 06 05:45:00 crc kubenswrapper[4958]: I1206 05:45:00.330561 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3ab4d38c-3380-4f4f-90d6-286ad61d6067-secret-volume\") pod \"collect-profiles-29416665-smrht\" (UID: \"3ab4d38c-3380-4f4f-90d6-286ad61d6067\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416665-smrht" Dec 06 05:45:00 crc kubenswrapper[4958]: I1206 05:45:00.330606 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3ab4d38c-3380-4f4f-90d6-286ad61d6067-config-volume\") pod \"collect-profiles-29416665-smrht\" (UID: \"3ab4d38c-3380-4f4f-90d6-286ad61d6067\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416665-smrht" Dec 06 05:45:00 crc kubenswrapper[4958]: I1206 05:45:00.432286 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3ab4d38c-3380-4f4f-90d6-286ad61d6067-secret-volume\") pod \"collect-profiles-29416665-smrht\" (UID: \"3ab4d38c-3380-4f4f-90d6-286ad61d6067\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416665-smrht" Dec 06 05:45:00 crc kubenswrapper[4958]: I1206 05:45:00.432631 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3ab4d38c-3380-4f4f-90d6-286ad61d6067-config-volume\") pod \"collect-profiles-29416665-smrht\" (UID: \"3ab4d38c-3380-4f4f-90d6-286ad61d6067\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416665-smrht" Dec 06 05:45:00 crc kubenswrapper[4958]: I1206 05:45:00.432672 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pwd7z\" (UniqueName: \"kubernetes.io/projected/3ab4d38c-3380-4f4f-90d6-286ad61d6067-kube-api-access-pwd7z\") pod \"collect-profiles-29416665-smrht\" (UID: \"3ab4d38c-3380-4f4f-90d6-286ad61d6067\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416665-smrht" Dec 06 05:45:00 crc kubenswrapper[4958]: I1206 05:45:00.438675 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3ab4d38c-3380-4f4f-90d6-286ad61d6067-secret-volume\") pod 
\"collect-profiles-29416665-smrht\" (UID: \"3ab4d38c-3380-4f4f-90d6-286ad61d6067\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416665-smrht" Dec 06 05:45:00 crc kubenswrapper[4958]: I1206 05:45:00.449361 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwd7z\" (UniqueName: \"kubernetes.io/projected/3ab4d38c-3380-4f4f-90d6-286ad61d6067-kube-api-access-pwd7z\") pod \"collect-profiles-29416665-smrht\" (UID: \"3ab4d38c-3380-4f4f-90d6-286ad61d6067\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416665-smrht" Dec 06 05:45:00 crc kubenswrapper[4958]: I1206 05:45:00.488960 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3ab4d38c-3380-4f4f-90d6-286ad61d6067-config-volume\") pod \"collect-profiles-29416665-smrht\" (UID: \"3ab4d38c-3380-4f4f-90d6-286ad61d6067\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416665-smrht" Dec 06 05:45:00 crc kubenswrapper[4958]: I1206 05:45:00.774638 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416665-smrht" Dec 06 05:45:00 crc kubenswrapper[4958]: I1206 05:45:00.949973 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-65948f7f79-j2lwp" Dec 06 05:45:00 crc kubenswrapper[4958]: I1206 05:45:00.950041 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-65948f7f79-j2lwp" Dec 06 05:45:00 crc kubenswrapper[4958]: I1206 05:45:00.959134 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-65948f7f79-j2lwp" Dec 06 05:45:01 crc kubenswrapper[4958]: I1206 05:45:01.316719 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-65948f7f79-j2lwp" Dec 06 05:45:01 crc kubenswrapper[4958]: I1206 05:45:01.365955 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-7g42z"] Dec 06 05:45:05 crc kubenswrapper[4958]: I1206 05:45:05.304573 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416665-smrht"] Dec 06 05:45:05 crc kubenswrapper[4958]: W1206 05:45:05.383270 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3ab4d38c_3380_4f4f_90d6_286ad61d6067.slice/crio-3284f1bf2bf197b57bcca0ad60bd117c3316888502177c8962bdda4c534e4909 WatchSource:0}: Error finding container 3284f1bf2bf197b57bcca0ad60bd117c3316888502177c8962bdda4c534e4909: Status 404 returned error can't find the container with id 3284f1bf2bf197b57bcca0ad60bd117c3316888502177c8962bdda4c534e4909 Dec 06 05:45:06 crc kubenswrapper[4958]: I1206 05:45:06.347805 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-zt7zw" event={"ID":"132a681d-da4e-406e-897f-e8204a0b3061","Type":"ContainerStarted","Data":"ad8857bb04b1b5f60d43c42256d09ace0491a27374e39f7f5b0937a596b7313b"} Dec 06 05:45:06 crc kubenswrapper[4958]: I1206 05:45:06.350810 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-fhd8b" event={"ID":"60563153-eb29-48b1-b275-4a426410c3f2","Type":"ContainerStarted","Data":"f3d49226cf76c70020e840309627342c135c96ae1bb0506c64c8b1454a641690"} Dec 06 05:45:06 crc kubenswrapper[4958]: I1206 
05:45:06.350887 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-fhd8b" Dec 06 05:45:06 crc kubenswrapper[4958]: I1206 05:45:06.352798 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-w7z84" event={"ID":"eb092a25-46b9-4519-9bfd-a1a75207a121","Type":"ContainerStarted","Data":"8a16f2f0f574262fab05cfcabcbfcccf40e9c88bad6a4792276b2c224c4e2550"} Dec 06 05:45:06 crc kubenswrapper[4958]: I1206 05:45:06.354497 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-wm7wx" event={"ID":"5a48e60d-cf21-4b2f-bc99-eebce42f8832","Type":"ContainerStarted","Data":"2a76bc2d8ab04d83002eb1c987766960a1e32d9d1a52839f8954586d875e27a5"} Dec 06 05:45:06 crc kubenswrapper[4958]: I1206 05:45:06.354643 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-wm7wx" Dec 06 05:45:06 crc kubenswrapper[4958]: I1206 05:45:06.357750 4958 generic.go:334] "Generic (PLEG): container finished" podID="3ab4d38c-3380-4f4f-90d6-286ad61d6067" containerID="846fac533291e6ed06a1bea832f228340b17d1442d0ad64e44d93ca99f719d20" exitCode=0 Dec 06 05:45:06 crc kubenswrapper[4958]: I1206 05:45:06.357815 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416665-smrht" event={"ID":"3ab4d38c-3380-4f4f-90d6-286ad61d6067","Type":"ContainerDied","Data":"846fac533291e6ed06a1bea832f228340b17d1442d0ad64e44d93ca99f719d20"} Dec 06 05:45:06 crc kubenswrapper[4958]: I1206 05:45:06.357994 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416665-smrht" event={"ID":"3ab4d38c-3380-4f4f-90d6-286ad61d6067","Type":"ContainerStarted","Data":"3284f1bf2bf197b57bcca0ad60bd117c3316888502177c8962bdda4c534e4909"} Dec 06 05:45:06 crc kubenswrapper[4958]: I1206 05:45:06.373008 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-zt7zw" podStartSLOduration=2.55353554 podStartE2EDuration="16.372987549s" podCreationTimestamp="2025-12-06 05:44:50 +0000 UTC" firstStartedPulling="2025-12-06 05:44:51.666692032 +0000 UTC m=+1002.200462795" lastFinishedPulling="2025-12-06 05:45:05.486144041 +0000 UTC m=+1016.019914804" observedRunningTime="2025-12-06 05:45:06.363346852 +0000 UTC m=+1016.897117615" watchObservedRunningTime="2025-12-06 05:45:06.372987549 +0000 UTC m=+1016.906758312" Dec 06 05:45:06 crc kubenswrapper[4958]: I1206 05:45:06.394179 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-wm7wx" podStartSLOduration=2.253946503 podStartE2EDuration="16.394161383s" podCreationTimestamp="2025-12-06 05:44:50 +0000 UTC" firstStartedPulling="2025-12-06 05:44:51.355348732 +0000 UTC m=+1001.889119495" lastFinishedPulling="2025-12-06 05:45:05.495563612 +0000 UTC m=+1016.029334375" observedRunningTime="2025-12-06 05:45:06.388385359 +0000 UTC m=+1016.922156132" watchObservedRunningTime="2025-12-06 05:45:06.394161383 +0000 UTC m=+1016.927932146" Dec 06 05:45:06 crc kubenswrapper[4958]: I1206 05:45:06.415634 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-fhd8b" podStartSLOduration=1.5268905849999999 podStartE2EDuration="16.415614556s" podCreationTimestamp="2025-12-06 05:44:50 +0000 UTC" firstStartedPulling="2025-12-06 
05:44:50.628189991 +0000 UTC m=+1001.161960754" lastFinishedPulling="2025-12-06 05:45:05.516913962 +0000 UTC m=+1016.050684725" observedRunningTime="2025-12-06 05:45:06.415345478 +0000 UTC m=+1016.949116251" watchObservedRunningTime="2025-12-06 05:45:06.415614556 +0000 UTC m=+1016.949385319" Dec 06 05:45:07 crc kubenswrapper[4958]: I1206 05:45:07.612868 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416665-smrht" Dec 06 05:45:07 crc kubenswrapper[4958]: I1206 05:45:07.747923 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pwd7z\" (UniqueName: \"kubernetes.io/projected/3ab4d38c-3380-4f4f-90d6-286ad61d6067-kube-api-access-pwd7z\") pod \"3ab4d38c-3380-4f4f-90d6-286ad61d6067\" (UID: \"3ab4d38c-3380-4f4f-90d6-286ad61d6067\") " Dec 06 05:45:07 crc kubenswrapper[4958]: I1206 05:45:07.748039 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3ab4d38c-3380-4f4f-90d6-286ad61d6067-secret-volume\") pod \"3ab4d38c-3380-4f4f-90d6-286ad61d6067\" (UID: \"3ab4d38c-3380-4f4f-90d6-286ad61d6067\") " Dec 06 05:45:07 crc kubenswrapper[4958]: I1206 05:45:07.748100 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3ab4d38c-3380-4f4f-90d6-286ad61d6067-config-volume\") pod \"3ab4d38c-3380-4f4f-90d6-286ad61d6067\" (UID: \"3ab4d38c-3380-4f4f-90d6-286ad61d6067\") " Dec 06 05:45:07 crc kubenswrapper[4958]: I1206 05:45:07.749180 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ab4d38c-3380-4f4f-90d6-286ad61d6067-config-volume" (OuterVolumeSpecName: "config-volume") pod "3ab4d38c-3380-4f4f-90d6-286ad61d6067" (UID: "3ab4d38c-3380-4f4f-90d6-286ad61d6067"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:45:07 crc kubenswrapper[4958]: I1206 05:45:07.754337 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab4d38c-3380-4f4f-90d6-286ad61d6067-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3ab4d38c-3380-4f4f-90d6-286ad61d6067" (UID: "3ab4d38c-3380-4f4f-90d6-286ad61d6067"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:45:07 crc kubenswrapper[4958]: I1206 05:45:07.769352 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab4d38c-3380-4f4f-90d6-286ad61d6067-kube-api-access-pwd7z" (OuterVolumeSpecName: "kube-api-access-pwd7z") pod "3ab4d38c-3380-4f4f-90d6-286ad61d6067" (UID: "3ab4d38c-3380-4f4f-90d6-286ad61d6067"). InnerVolumeSpecName "kube-api-access-pwd7z". 
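NOTE: the pod_startup_latency_tracker entries above appear to relate as follows: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling). The relationship is inferred from the logged values themselves, not from kubelet documentation; the small Go check below reproduces both durations for the nmstate-console-plugin pod from the timestamps exactly as logged.

package main

import (
	"fmt"
	"time"
)

// Layout matching the timestamps as they appear in the log entries.
const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	// Values copied from the nmstate-console-plugin-7fbb5f6569-zt7zw entry above.
	created := mustParse("2025-12-06 05:44:50 +0000 UTC")
	firstPull := mustParse("2025-12-06 05:44:51.666692032 +0000 UTC")
	lastPull := mustParse("2025-12-06 05:45:05.486144041 +0000 UTC")
	observed := mustParse("2025-12-06 05:45:06.372987549 +0000 UTC") // watchObservedRunningTime

	e2e := observed.Sub(created)         // 16.372987549s == podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // 2.55353554s  == podStartSLOduration

	fmt.Println(e2e, slo)
}

Running it prints 16.372987549s and 2.55353554s, matching the entry above. The console-65948f7f79-j2lwp entry earlier shows the degenerate case: nothing was pulled (firstStartedPulling is the zero time), so the two durations coincide at 2.277925393s.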
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:45:07 crc kubenswrapper[4958]: I1206 05:45:07.849719 4958 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3ab4d38c-3380-4f4f-90d6-286ad61d6067-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 06 05:45:07 crc kubenswrapper[4958]: I1206 05:45:07.849753 4958 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3ab4d38c-3380-4f4f-90d6-286ad61d6067-config-volume\") on node \"crc\" DevicePath \"\"" Dec 06 05:45:07 crc kubenswrapper[4958]: I1206 05:45:07.849764 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pwd7z\" (UniqueName: \"kubernetes.io/projected/3ab4d38c-3380-4f4f-90d6-286ad61d6067-kube-api-access-pwd7z\") on node \"crc\" DevicePath \"\"" Dec 06 05:45:08 crc kubenswrapper[4958]: I1206 05:45:08.368852 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416665-smrht" event={"ID":"3ab4d38c-3380-4f4f-90d6-286ad61d6067","Type":"ContainerDied","Data":"3284f1bf2bf197b57bcca0ad60bd117c3316888502177c8962bdda4c534e4909"} Dec 06 05:45:08 crc kubenswrapper[4958]: I1206 05:45:08.369136 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3284f1bf2bf197b57bcca0ad60bd117c3316888502177c8962bdda4c534e4909" Dec 06 05:45:08 crc kubenswrapper[4958]: I1206 05:45:08.368924 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416665-smrht" Dec 06 05:45:09 crc kubenswrapper[4958]: I1206 05:45:09.376588 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-w7z84" event={"ID":"eb092a25-46b9-4519-9bfd-a1a75207a121","Type":"ContainerStarted","Data":"53c1da5f55046d6663727c60cec12a39a12e138e97ee2a9ce0231bed83413b23"} Dec 06 05:45:09 crc kubenswrapper[4958]: I1206 05:45:09.396080 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-w7z84" podStartSLOduration=1.7066635369999998 podStartE2EDuration="19.396056518s" podCreationTimestamp="2025-12-06 05:44:50 +0000 UTC" firstStartedPulling="2025-12-06 05:44:50.75423212 +0000 UTC m=+1001.288002883" lastFinishedPulling="2025-12-06 05:45:08.443625101 +0000 UTC m=+1018.977395864" observedRunningTime="2025-12-06 05:45:09.391464566 +0000 UTC m=+1019.925235339" watchObservedRunningTime="2025-12-06 05:45:09.396056518 +0000 UTC m=+1019.929827281" Dec 06 05:45:09 crc kubenswrapper[4958]: I1206 05:45:09.866756 4958 patch_prober.go:28] interesting pod/machine-config-daemon-5ktnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 05:45:09 crc kubenswrapper[4958]: I1206 05:45:09.867130 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 05:45:10 crc kubenswrapper[4958]: I1206 05:45:10.610700 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-fhd8b" Dec 06 
05:45:21 crc kubenswrapper[4958]: I1206 05:45:21.157179 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-wm7wx" Dec 06 05:45:26 crc kubenswrapper[4958]: I1206 05:45:26.407350 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-7g42z" podUID="bae936cb-2b16-4e6a-b2b1-bc185483cd8f" containerName="console" containerID="cri-o://eb0fe0e98006b13ee7a4a508f1cac29237344772e7558957199f37cf6352de69" gracePeriod=15 Dec 06 05:45:27 crc kubenswrapper[4958]: I1206 05:45:27.484810 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-7g42z_bae936cb-2b16-4e6a-b2b1-bc185483cd8f/console/0.log" Dec 06 05:45:27 crc kubenswrapper[4958]: I1206 05:45:27.484862 4958 generic.go:334] "Generic (PLEG): container finished" podID="bae936cb-2b16-4e6a-b2b1-bc185483cd8f" containerID="eb0fe0e98006b13ee7a4a508f1cac29237344772e7558957199f37cf6352de69" exitCode=2 Dec 06 05:45:27 crc kubenswrapper[4958]: I1206 05:45:27.484902 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-7g42z" event={"ID":"bae936cb-2b16-4e6a-b2b1-bc185483cd8f","Type":"ContainerDied","Data":"eb0fe0e98006b13ee7a4a508f1cac29237344772e7558957199f37cf6352de69"} Dec 06 05:45:27 crc kubenswrapper[4958]: I1206 05:45:27.872079 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-7g42z_bae936cb-2b16-4e6a-b2b1-bc185483cd8f/console/0.log" Dec 06 05:45:27 crc kubenswrapper[4958]: I1206 05:45:27.872423 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-7g42z" Dec 06 05:45:28 crc kubenswrapper[4958]: I1206 05:45:28.025144 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bae936cb-2b16-4e6a-b2b1-bc185483cd8f-oauth-serving-cert\") pod \"bae936cb-2b16-4e6a-b2b1-bc185483cd8f\" (UID: \"bae936cb-2b16-4e6a-b2b1-bc185483cd8f\") " Dec 06 05:45:28 crc kubenswrapper[4958]: I1206 05:45:28.025419 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jblqp\" (UniqueName: \"kubernetes.io/projected/bae936cb-2b16-4e6a-b2b1-bc185483cd8f-kube-api-access-jblqp\") pod \"bae936cb-2b16-4e6a-b2b1-bc185483cd8f\" (UID: \"bae936cb-2b16-4e6a-b2b1-bc185483cd8f\") " Dec 06 05:45:28 crc kubenswrapper[4958]: I1206 05:45:28.025446 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bae936cb-2b16-4e6a-b2b1-bc185483cd8f-service-ca\") pod \"bae936cb-2b16-4e6a-b2b1-bc185483cd8f\" (UID: \"bae936cb-2b16-4e6a-b2b1-bc185483cd8f\") " Dec 06 05:45:28 crc kubenswrapper[4958]: I1206 05:45:28.025521 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bae936cb-2b16-4e6a-b2b1-bc185483cd8f-trusted-ca-bundle\") pod \"bae936cb-2b16-4e6a-b2b1-bc185483cd8f\" (UID: \"bae936cb-2b16-4e6a-b2b1-bc185483cd8f\") " Dec 06 05:45:28 crc kubenswrapper[4958]: I1206 05:45:28.025542 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bae936cb-2b16-4e6a-b2b1-bc185483cd8f-console-config\") pod \"bae936cb-2b16-4e6a-b2b1-bc185483cd8f\" (UID: \"bae936cb-2b16-4e6a-b2b1-bc185483cd8f\") " Dec 06 
05:45:28 crc kubenswrapper[4958]: I1206 05:45:28.025563 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bae936cb-2b16-4e6a-b2b1-bc185483cd8f-console-serving-cert\") pod \"bae936cb-2b16-4e6a-b2b1-bc185483cd8f\" (UID: \"bae936cb-2b16-4e6a-b2b1-bc185483cd8f\") " Dec 06 05:45:28 crc kubenswrapper[4958]: I1206 05:45:28.025607 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bae936cb-2b16-4e6a-b2b1-bc185483cd8f-console-oauth-config\") pod \"bae936cb-2b16-4e6a-b2b1-bc185483cd8f\" (UID: \"bae936cb-2b16-4e6a-b2b1-bc185483cd8f\") " Dec 06 05:45:28 crc kubenswrapper[4958]: I1206 05:45:28.026181 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bae936cb-2b16-4e6a-b2b1-bc185483cd8f-console-config" (OuterVolumeSpecName: "console-config") pod "bae936cb-2b16-4e6a-b2b1-bc185483cd8f" (UID: "bae936cb-2b16-4e6a-b2b1-bc185483cd8f"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:45:28 crc kubenswrapper[4958]: I1206 05:45:28.026236 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bae936cb-2b16-4e6a-b2b1-bc185483cd8f-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "bae936cb-2b16-4e6a-b2b1-bc185483cd8f" (UID: "bae936cb-2b16-4e6a-b2b1-bc185483cd8f"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:45:28 crc kubenswrapper[4958]: I1206 05:45:28.026251 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bae936cb-2b16-4e6a-b2b1-bc185483cd8f-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "bae936cb-2b16-4e6a-b2b1-bc185483cd8f" (UID: "bae936cb-2b16-4e6a-b2b1-bc185483cd8f"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:45:28 crc kubenswrapper[4958]: I1206 05:45:28.026266 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bae936cb-2b16-4e6a-b2b1-bc185483cd8f-service-ca" (OuterVolumeSpecName: "service-ca") pod "bae936cb-2b16-4e6a-b2b1-bc185483cd8f" (UID: "bae936cb-2b16-4e6a-b2b1-bc185483cd8f"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:45:28 crc kubenswrapper[4958]: I1206 05:45:28.031278 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bae936cb-2b16-4e6a-b2b1-bc185483cd8f-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "bae936cb-2b16-4e6a-b2b1-bc185483cd8f" (UID: "bae936cb-2b16-4e6a-b2b1-bc185483cd8f"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:45:28 crc kubenswrapper[4958]: I1206 05:45:28.031309 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bae936cb-2b16-4e6a-b2b1-bc185483cd8f-kube-api-access-jblqp" (OuterVolumeSpecName: "kube-api-access-jblqp") pod "bae936cb-2b16-4e6a-b2b1-bc185483cd8f" (UID: "bae936cb-2b16-4e6a-b2b1-bc185483cd8f"). InnerVolumeSpecName "kube-api-access-jblqp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:45:28 crc kubenswrapper[4958]: I1206 05:45:28.031367 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bae936cb-2b16-4e6a-b2b1-bc185483cd8f-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "bae936cb-2b16-4e6a-b2b1-bc185483cd8f" (UID: "bae936cb-2b16-4e6a-b2b1-bc185483cd8f"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:45:28 crc kubenswrapper[4958]: I1206 05:45:28.127570 4958 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bae936cb-2b16-4e6a-b2b1-bc185483cd8f-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 06 05:45:28 crc kubenswrapper[4958]: I1206 05:45:28.127615 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jblqp\" (UniqueName: \"kubernetes.io/projected/bae936cb-2b16-4e6a-b2b1-bc185483cd8f-kube-api-access-jblqp\") on node \"crc\" DevicePath \"\"" Dec 06 05:45:28 crc kubenswrapper[4958]: I1206 05:45:28.127632 4958 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bae936cb-2b16-4e6a-b2b1-bc185483cd8f-service-ca\") on node \"crc\" DevicePath \"\"" Dec 06 05:45:28 crc kubenswrapper[4958]: I1206 05:45:28.127641 4958 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bae936cb-2b16-4e6a-b2b1-bc185483cd8f-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 05:45:28 crc kubenswrapper[4958]: I1206 05:45:28.127650 4958 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bae936cb-2b16-4e6a-b2b1-bc185483cd8f-console-config\") on node \"crc\" DevicePath \"\"" Dec 06 05:45:28 crc kubenswrapper[4958]: I1206 05:45:28.127658 4958 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bae936cb-2b16-4e6a-b2b1-bc185483cd8f-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 06 05:45:28 crc kubenswrapper[4958]: I1206 05:45:28.127666 4958 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bae936cb-2b16-4e6a-b2b1-bc185483cd8f-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 06 05:45:28 crc kubenswrapper[4958]: I1206 05:45:28.491620 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-7g42z_bae936cb-2b16-4e6a-b2b1-bc185483cd8f/console/0.log" Dec 06 05:45:28 crc kubenswrapper[4958]: I1206 05:45:28.491978 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-7g42z" event={"ID":"bae936cb-2b16-4e6a-b2b1-bc185483cd8f","Type":"ContainerDied","Data":"d74575eef3116372cd1bc6b13beb18db2a6b7a8a77ff2dbb0a463577d20ab8d1"} Dec 06 05:45:28 crc kubenswrapper[4958]: I1206 05:45:28.492047 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-7g42z" Dec 06 05:45:28 crc kubenswrapper[4958]: I1206 05:45:28.492065 4958 scope.go:117] "RemoveContainer" containerID="eb0fe0e98006b13ee7a4a508f1cac29237344772e7558957199f37cf6352de69" Dec 06 05:45:28 crc kubenswrapper[4958]: I1206 05:45:28.535351 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-7g42z"] Dec 06 05:45:28 crc kubenswrapper[4958]: I1206 05:45:28.544100 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-7g42z"] Dec 06 05:45:29 crc kubenswrapper[4958]: I1206 05:45:29.780313 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bae936cb-2b16-4e6a-b2b1-bc185483cd8f" path="/var/lib/kubelet/pods/bae936cb-2b16-4e6a-b2b1-bc185483cd8f/volumes" Dec 06 05:45:34 crc kubenswrapper[4958]: I1206 05:45:34.858796 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2kjt"] Dec 06 05:45:34 crc kubenswrapper[4958]: E1206 05:45:34.859540 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bae936cb-2b16-4e6a-b2b1-bc185483cd8f" containerName="console" Dec 06 05:45:34 crc kubenswrapper[4958]: I1206 05:45:34.859552 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="bae936cb-2b16-4e6a-b2b1-bc185483cd8f" containerName="console" Dec 06 05:45:34 crc kubenswrapper[4958]: E1206 05:45:34.859565 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ab4d38c-3380-4f4f-90d6-286ad61d6067" containerName="collect-profiles" Dec 06 05:45:34 crc kubenswrapper[4958]: I1206 05:45:34.859570 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ab4d38c-3380-4f4f-90d6-286ad61d6067" containerName="collect-profiles" Dec 06 05:45:34 crc kubenswrapper[4958]: I1206 05:45:34.859706 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="bae936cb-2b16-4e6a-b2b1-bc185483cd8f" containerName="console" Dec 06 05:45:34 crc kubenswrapper[4958]: I1206 05:45:34.859719 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ab4d38c-3380-4f4f-90d6-286ad61d6067" containerName="collect-profiles" Dec 06 05:45:34 crc kubenswrapper[4958]: I1206 05:45:34.860533 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2kjt" Dec 06 05:45:34 crc kubenswrapper[4958]: I1206 05:45:34.862319 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 06 05:45:34 crc kubenswrapper[4958]: I1206 05:45:34.868633 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2kjt"] Dec 06 05:45:34 crc kubenswrapper[4958]: I1206 05:45:34.928093 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0fba650c-0f74-49d3-baa8-56f79c241413-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2kjt\" (UID: \"0fba650c-0f74-49d3-baa8-56f79c241413\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2kjt" Dec 06 05:45:34 crc kubenswrapper[4958]: I1206 05:45:34.928212 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0fba650c-0f74-49d3-baa8-56f79c241413-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2kjt\" (UID: \"0fba650c-0f74-49d3-baa8-56f79c241413\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2kjt" Dec 06 05:45:34 crc kubenswrapper[4958]: I1206 05:45:34.928363 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m97cd\" (UniqueName: \"kubernetes.io/projected/0fba650c-0f74-49d3-baa8-56f79c241413-kube-api-access-m97cd\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2kjt\" (UID: \"0fba650c-0f74-49d3-baa8-56f79c241413\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2kjt" Dec 06 05:45:35 crc kubenswrapper[4958]: I1206 05:45:35.029560 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m97cd\" (UniqueName: \"kubernetes.io/projected/0fba650c-0f74-49d3-baa8-56f79c241413-kube-api-access-m97cd\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2kjt\" (UID: \"0fba650c-0f74-49d3-baa8-56f79c241413\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2kjt" Dec 06 05:45:35 crc kubenswrapper[4958]: I1206 05:45:35.029671 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0fba650c-0f74-49d3-baa8-56f79c241413-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2kjt\" (UID: \"0fba650c-0f74-49d3-baa8-56f79c241413\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2kjt" Dec 06 05:45:35 crc kubenswrapper[4958]: I1206 05:45:35.029735 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0fba650c-0f74-49d3-baa8-56f79c241413-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2kjt\" (UID: \"0fba650c-0f74-49d3-baa8-56f79c241413\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2kjt" Dec 06 05:45:35 crc kubenswrapper[4958]: I1206 05:45:35.030278 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/0fba650c-0f74-49d3-baa8-56f79c241413-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2kjt\" (UID: \"0fba650c-0f74-49d3-baa8-56f79c241413\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2kjt" Dec 06 05:45:35 crc kubenswrapper[4958]: I1206 05:45:35.030733 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0fba650c-0f74-49d3-baa8-56f79c241413-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2kjt\" (UID: \"0fba650c-0f74-49d3-baa8-56f79c241413\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2kjt" Dec 06 05:45:35 crc kubenswrapper[4958]: I1206 05:45:35.052801 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m97cd\" (UniqueName: \"kubernetes.io/projected/0fba650c-0f74-49d3-baa8-56f79c241413-kube-api-access-m97cd\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2kjt\" (UID: \"0fba650c-0f74-49d3-baa8-56f79c241413\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2kjt" Dec 06 05:45:35 crc kubenswrapper[4958]: I1206 05:45:35.176664 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2kjt" Dec 06 05:45:35 crc kubenswrapper[4958]: I1206 05:45:35.390540 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2kjt"] Dec 06 05:45:35 crc kubenswrapper[4958]: I1206 05:45:35.543755 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2kjt" event={"ID":"0fba650c-0f74-49d3-baa8-56f79c241413","Type":"ContainerStarted","Data":"628dc5c320e15ae07233c66ca92c7f9b26688184e47d4ace73bd385ef0984df3"} Dec 06 05:45:36 crc kubenswrapper[4958]: I1206 05:45:36.553216 4958 generic.go:334] "Generic (PLEG): container finished" podID="0fba650c-0f74-49d3-baa8-56f79c241413" containerID="a43fcbbc7fbde0a3fcb71e546c1227caefe35e3cf10208a09f356f8a83ffc015" exitCode=0 Dec 06 05:45:36 crc kubenswrapper[4958]: I1206 05:45:36.553283 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2kjt" event={"ID":"0fba650c-0f74-49d3-baa8-56f79c241413","Type":"ContainerDied","Data":"a43fcbbc7fbde0a3fcb71e546c1227caefe35e3cf10208a09f356f8a83ffc015"} Dec 06 05:45:38 crc kubenswrapper[4958]: I1206 05:45:38.565798 4958 generic.go:334] "Generic (PLEG): container finished" podID="0fba650c-0f74-49d3-baa8-56f79c241413" containerID="6f82395658bcd8ba44da7645dd63610eb2c4ebacba03b02a4237b30282090d30" exitCode=0 Dec 06 05:45:38 crc kubenswrapper[4958]: I1206 05:45:38.565912 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2kjt" event={"ID":"0fba650c-0f74-49d3-baa8-56f79c241413","Type":"ContainerDied","Data":"6f82395658bcd8ba44da7645dd63610eb2c4ebacba03b02a4237b30282090d30"} Dec 06 05:45:39 crc kubenswrapper[4958]: I1206 05:45:39.574863 4958 generic.go:334] "Generic (PLEG): container finished" podID="0fba650c-0f74-49d3-baa8-56f79c241413" containerID="dd40a8e2cd470f4f0681f22b86c80a60104e33cba31751826281c1ed20cd2531" exitCode=0 Dec 06 05:45:39 crc kubenswrapper[4958]: I1206 
05:45:39.574937 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2kjt" event={"ID":"0fba650c-0f74-49d3-baa8-56f79c241413","Type":"ContainerDied","Data":"dd40a8e2cd470f4f0681f22b86c80a60104e33cba31751826281c1ed20cd2531"} Dec 06 05:45:39 crc kubenswrapper[4958]: I1206 05:45:39.866609 4958 patch_prober.go:28] interesting pod/machine-config-daemon-5ktnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 05:45:39 crc kubenswrapper[4958]: I1206 05:45:39.866680 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 05:45:40 crc kubenswrapper[4958]: I1206 05:45:40.805254 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2kjt" Dec 06 05:45:41 crc kubenswrapper[4958]: I1206 05:45:41.004095 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0fba650c-0f74-49d3-baa8-56f79c241413-bundle\") pod \"0fba650c-0f74-49d3-baa8-56f79c241413\" (UID: \"0fba650c-0f74-49d3-baa8-56f79c241413\") " Dec 06 05:45:41 crc kubenswrapper[4958]: I1206 05:45:41.004547 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0fba650c-0f74-49d3-baa8-56f79c241413-util\") pod \"0fba650c-0f74-49d3-baa8-56f79c241413\" (UID: \"0fba650c-0f74-49d3-baa8-56f79c241413\") " Dec 06 05:45:41 crc kubenswrapper[4958]: I1206 05:45:41.004599 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m97cd\" (UniqueName: \"kubernetes.io/projected/0fba650c-0f74-49d3-baa8-56f79c241413-kube-api-access-m97cd\") pod \"0fba650c-0f74-49d3-baa8-56f79c241413\" (UID: \"0fba650c-0f74-49d3-baa8-56f79c241413\") " Dec 06 05:45:41 crc kubenswrapper[4958]: I1206 05:45:41.005108 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0fba650c-0f74-49d3-baa8-56f79c241413-bundle" (OuterVolumeSpecName: "bundle") pod "0fba650c-0f74-49d3-baa8-56f79c241413" (UID: "0fba650c-0f74-49d3-baa8-56f79c241413"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 05:45:41 crc kubenswrapper[4958]: I1206 05:45:41.010347 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fba650c-0f74-49d3-baa8-56f79c241413-kube-api-access-m97cd" (OuterVolumeSpecName: "kube-api-access-m97cd") pod "0fba650c-0f74-49d3-baa8-56f79c241413" (UID: "0fba650c-0f74-49d3-baa8-56f79c241413"). InnerVolumeSpecName "kube-api-access-m97cd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:45:41 crc kubenswrapper[4958]: I1206 05:45:41.021134 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0fba650c-0f74-49d3-baa8-56f79c241413-util" (OuterVolumeSpecName: "util") pod "0fba650c-0f74-49d3-baa8-56f79c241413" (UID: "0fba650c-0f74-49d3-baa8-56f79c241413"). 
InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 05:45:41 crc kubenswrapper[4958]: I1206 05:45:41.105960 4958 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/0fba650c-0f74-49d3-baa8-56f79c241413-util\") on node \"crc\" DevicePath \"\"" Dec 06 05:45:41 crc kubenswrapper[4958]: I1206 05:45:41.105996 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m97cd\" (UniqueName: \"kubernetes.io/projected/0fba650c-0f74-49d3-baa8-56f79c241413-kube-api-access-m97cd\") on node \"crc\" DevicePath \"\"" Dec 06 05:45:41 crc kubenswrapper[4958]: I1206 05:45:41.106010 4958 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/0fba650c-0f74-49d3-baa8-56f79c241413-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 05:45:41 crc kubenswrapper[4958]: I1206 05:45:41.588340 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2kjt" event={"ID":"0fba650c-0f74-49d3-baa8-56f79c241413","Type":"ContainerDied","Data":"628dc5c320e15ae07233c66ca92c7f9b26688184e47d4ace73bd385ef0984df3"} Dec 06 05:45:41 crc kubenswrapper[4958]: I1206 05:45:41.588384 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="628dc5c320e15ae07233c66ca92c7f9b26688184e47d4ace73bd385ef0984df3" Dec 06 05:45:41 crc kubenswrapper[4958]: I1206 05:45:41.588406 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2kjt" Dec 06 05:45:51 crc kubenswrapper[4958]: I1206 05:45:51.831330 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-7f7f549dcb-v626m"] Dec 06 05:45:51 crc kubenswrapper[4958]: E1206 05:45:51.832148 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fba650c-0f74-49d3-baa8-56f79c241413" containerName="extract" Dec 06 05:45:51 crc kubenswrapper[4958]: I1206 05:45:51.832163 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fba650c-0f74-49d3-baa8-56f79c241413" containerName="extract" Dec 06 05:45:51 crc kubenswrapper[4958]: E1206 05:45:51.832179 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fba650c-0f74-49d3-baa8-56f79c241413" containerName="pull" Dec 06 05:45:51 crc kubenswrapper[4958]: I1206 05:45:51.832186 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fba650c-0f74-49d3-baa8-56f79c241413" containerName="pull" Dec 06 05:45:51 crc kubenswrapper[4958]: E1206 05:45:51.832198 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fba650c-0f74-49d3-baa8-56f79c241413" containerName="util" Dec 06 05:45:51 crc kubenswrapper[4958]: I1206 05:45:51.832205 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fba650c-0f74-49d3-baa8-56f79c241413" containerName="util" Dec 06 05:45:51 crc kubenswrapper[4958]: I1206 05:45:51.832309 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fba650c-0f74-49d3-baa8-56f79c241413" containerName="extract" Dec 06 05:45:51 crc kubenswrapper[4958]: I1206 05:45:51.832773 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7f7f549dcb-v626m" Dec 06 05:45:51 crc kubenswrapper[4958]: I1206 05:45:51.835327 4958 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Dec 06 05:45:51 crc kubenswrapper[4958]: I1206 05:45:51.835708 4958 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Dec 06 05:45:51 crc kubenswrapper[4958]: I1206 05:45:51.835810 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Dec 06 05:45:51 crc kubenswrapper[4958]: I1206 05:45:51.836448 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Dec 06 05:45:51 crc kubenswrapper[4958]: I1206 05:45:51.836541 4958 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-z8whf" Dec 06 05:45:51 crc kubenswrapper[4958]: I1206 05:45:51.853779 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7f7f549dcb-v626m"] Dec 06 05:45:51 crc kubenswrapper[4958]: I1206 05:45:51.946623 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glxsh\" (UniqueName: \"kubernetes.io/projected/c9785861-cb8c-4a0b-82b3-a7425c228197-kube-api-access-glxsh\") pod \"metallb-operator-controller-manager-7f7f549dcb-v626m\" (UID: \"c9785861-cb8c-4a0b-82b3-a7425c228197\") " pod="metallb-system/metallb-operator-controller-manager-7f7f549dcb-v626m" Dec 06 05:45:51 crc kubenswrapper[4958]: I1206 05:45:51.946702 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c9785861-cb8c-4a0b-82b3-a7425c228197-webhook-cert\") pod \"metallb-operator-controller-manager-7f7f549dcb-v626m\" (UID: \"c9785861-cb8c-4a0b-82b3-a7425c228197\") " pod="metallb-system/metallb-operator-controller-manager-7f7f549dcb-v626m" Dec 06 05:45:51 crc kubenswrapper[4958]: I1206 05:45:51.946742 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c9785861-cb8c-4a0b-82b3-a7425c228197-apiservice-cert\") pod \"metallb-operator-controller-manager-7f7f549dcb-v626m\" (UID: \"c9785861-cb8c-4a0b-82b3-a7425c228197\") " pod="metallb-system/metallb-operator-controller-manager-7f7f549dcb-v626m" Dec 06 05:45:52 crc kubenswrapper[4958]: I1206 05:45:52.048776 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glxsh\" (UniqueName: \"kubernetes.io/projected/c9785861-cb8c-4a0b-82b3-a7425c228197-kube-api-access-glxsh\") pod \"metallb-operator-controller-manager-7f7f549dcb-v626m\" (UID: \"c9785861-cb8c-4a0b-82b3-a7425c228197\") " pod="metallb-system/metallb-operator-controller-manager-7f7f549dcb-v626m" Dec 06 05:45:52 crc kubenswrapper[4958]: I1206 05:45:52.048874 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c9785861-cb8c-4a0b-82b3-a7425c228197-webhook-cert\") pod \"metallb-operator-controller-manager-7f7f549dcb-v626m\" (UID: \"c9785861-cb8c-4a0b-82b3-a7425c228197\") " pod="metallb-system/metallb-operator-controller-manager-7f7f549dcb-v626m" Dec 06 05:45:52 crc kubenswrapper[4958]: I1206 05:45:52.048907 4958 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c9785861-cb8c-4a0b-82b3-a7425c228197-apiservice-cert\") pod \"metallb-operator-controller-manager-7f7f549dcb-v626m\" (UID: \"c9785861-cb8c-4a0b-82b3-a7425c228197\") " pod="metallb-system/metallb-operator-controller-manager-7f7f549dcb-v626m" Dec 06 05:45:52 crc kubenswrapper[4958]: I1206 05:45:52.054290 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c9785861-cb8c-4a0b-82b3-a7425c228197-apiservice-cert\") pod \"metallb-operator-controller-manager-7f7f549dcb-v626m\" (UID: \"c9785861-cb8c-4a0b-82b3-a7425c228197\") " pod="metallb-system/metallb-operator-controller-manager-7f7f549dcb-v626m" Dec 06 05:45:52 crc kubenswrapper[4958]: I1206 05:45:52.058111 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c9785861-cb8c-4a0b-82b3-a7425c228197-webhook-cert\") pod \"metallb-operator-controller-manager-7f7f549dcb-v626m\" (UID: \"c9785861-cb8c-4a0b-82b3-a7425c228197\") " pod="metallb-system/metallb-operator-controller-manager-7f7f549dcb-v626m" Dec 06 05:45:52 crc kubenswrapper[4958]: I1206 05:45:52.083715 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-glxsh\" (UniqueName: \"kubernetes.io/projected/c9785861-cb8c-4a0b-82b3-a7425c228197-kube-api-access-glxsh\") pod \"metallb-operator-controller-manager-7f7f549dcb-v626m\" (UID: \"c9785861-cb8c-4a0b-82b3-a7425c228197\") " pod="metallb-system/metallb-operator-controller-manager-7f7f549dcb-v626m" Dec 06 05:45:52 crc kubenswrapper[4958]: I1206 05:45:52.148981 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7f7f549dcb-v626m" Dec 06 05:45:52 crc kubenswrapper[4958]: I1206 05:45:52.196918 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-7db7d4c645-s4nl5"] Dec 06 05:45:52 crc kubenswrapper[4958]: I1206 05:45:52.197848 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7db7d4c645-s4nl5" Dec 06 05:45:52 crc kubenswrapper[4958]: I1206 05:45:52.200478 4958 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Dec 06 05:45:52 crc kubenswrapper[4958]: I1206 05:45:52.200811 4958 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Dec 06 05:45:52 crc kubenswrapper[4958]: I1206 05:45:52.202632 4958 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-chrq9" Dec 06 05:45:52 crc kubenswrapper[4958]: I1206 05:45:52.209598 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7db7d4c645-s4nl5"] Dec 06 05:45:52 crc kubenswrapper[4958]: I1206 05:45:52.352122 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5191ef31-713e-4f5d-9407-0f1d1f0ed462-apiservice-cert\") pod \"metallb-operator-webhook-server-7db7d4c645-s4nl5\" (UID: \"5191ef31-713e-4f5d-9407-0f1d1f0ed462\") " pod="metallb-system/metallb-operator-webhook-server-7db7d4c645-s4nl5" Dec 06 05:45:52 crc kubenswrapper[4958]: I1206 05:45:52.352216 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pztg8\" (UniqueName: \"kubernetes.io/projected/5191ef31-713e-4f5d-9407-0f1d1f0ed462-kube-api-access-pztg8\") pod \"metallb-operator-webhook-server-7db7d4c645-s4nl5\" (UID: \"5191ef31-713e-4f5d-9407-0f1d1f0ed462\") " pod="metallb-system/metallb-operator-webhook-server-7db7d4c645-s4nl5" Dec 06 05:45:52 crc kubenswrapper[4958]: I1206 05:45:52.352250 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5191ef31-713e-4f5d-9407-0f1d1f0ed462-webhook-cert\") pod \"metallb-operator-webhook-server-7db7d4c645-s4nl5\" (UID: \"5191ef31-713e-4f5d-9407-0f1d1f0ed462\") " pod="metallb-system/metallb-operator-webhook-server-7db7d4c645-s4nl5" Dec 06 05:45:52 crc kubenswrapper[4958]: I1206 05:45:52.453103 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5191ef31-713e-4f5d-9407-0f1d1f0ed462-apiservice-cert\") pod \"metallb-operator-webhook-server-7db7d4c645-s4nl5\" (UID: \"5191ef31-713e-4f5d-9407-0f1d1f0ed462\") " pod="metallb-system/metallb-operator-webhook-server-7db7d4c645-s4nl5" Dec 06 05:45:52 crc kubenswrapper[4958]: I1206 05:45:52.453173 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pztg8\" (UniqueName: \"kubernetes.io/projected/5191ef31-713e-4f5d-9407-0f1d1f0ed462-kube-api-access-pztg8\") pod \"metallb-operator-webhook-server-7db7d4c645-s4nl5\" (UID: \"5191ef31-713e-4f5d-9407-0f1d1f0ed462\") " pod="metallb-system/metallb-operator-webhook-server-7db7d4c645-s4nl5" Dec 06 05:45:52 crc kubenswrapper[4958]: I1206 05:45:52.453198 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5191ef31-713e-4f5d-9407-0f1d1f0ed462-webhook-cert\") pod \"metallb-operator-webhook-server-7db7d4c645-s4nl5\" (UID: \"5191ef31-713e-4f5d-9407-0f1d1f0ed462\") " pod="metallb-system/metallb-operator-webhook-server-7db7d4c645-s4nl5" Dec 06 05:45:52 crc kubenswrapper[4958]: I1206 
05:45:52.457357 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5191ef31-713e-4f5d-9407-0f1d1f0ed462-webhook-cert\") pod \"metallb-operator-webhook-server-7db7d4c645-s4nl5\" (UID: \"5191ef31-713e-4f5d-9407-0f1d1f0ed462\") " pod="metallb-system/metallb-operator-webhook-server-7db7d4c645-s4nl5" Dec 06 05:45:52 crc kubenswrapper[4958]: I1206 05:45:52.457461 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5191ef31-713e-4f5d-9407-0f1d1f0ed462-apiservice-cert\") pod \"metallb-operator-webhook-server-7db7d4c645-s4nl5\" (UID: \"5191ef31-713e-4f5d-9407-0f1d1f0ed462\") " pod="metallb-system/metallb-operator-webhook-server-7db7d4c645-s4nl5" Dec 06 05:45:52 crc kubenswrapper[4958]: I1206 05:45:52.467176 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pztg8\" (UniqueName: \"kubernetes.io/projected/5191ef31-713e-4f5d-9407-0f1d1f0ed462-kube-api-access-pztg8\") pod \"metallb-operator-webhook-server-7db7d4c645-s4nl5\" (UID: \"5191ef31-713e-4f5d-9407-0f1d1f0ed462\") " pod="metallb-system/metallb-operator-webhook-server-7db7d4c645-s4nl5" Dec 06 05:45:52 crc kubenswrapper[4958]: I1206 05:45:52.516398 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7db7d4c645-s4nl5" Dec 06 05:45:52 crc kubenswrapper[4958]: I1206 05:45:52.619931 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7f7f549dcb-v626m"] Dec 06 05:45:52 crc kubenswrapper[4958]: W1206 05:45:52.625077 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc9785861_cb8c_4a0b_82b3_a7425c228197.slice/crio-e2f4b5d5c86152025fee8949c9dd4d200a88bcf3165916b282a1920bf42c435d WatchSource:0}: Error finding container e2f4b5d5c86152025fee8949c9dd4d200a88bcf3165916b282a1920bf42c435d: Status 404 returned error can't find the container with id e2f4b5d5c86152025fee8949c9dd4d200a88bcf3165916b282a1920bf42c435d Dec 06 05:45:52 crc kubenswrapper[4958]: I1206 05:45:52.665698 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7f7f549dcb-v626m" event={"ID":"c9785861-cb8c-4a0b-82b3-a7425c228197","Type":"ContainerStarted","Data":"e2f4b5d5c86152025fee8949c9dd4d200a88bcf3165916b282a1920bf42c435d"} Dec 06 05:45:52 crc kubenswrapper[4958]: W1206 05:45:52.751077 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5191ef31_713e_4f5d_9407_0f1d1f0ed462.slice/crio-023d9d82553497105973b4141ca3bcc94816185386f9c2bf46fc724064def53b WatchSource:0}: Error finding container 023d9d82553497105973b4141ca3bcc94816185386f9c2bf46fc724064def53b: Status 404 returned error can't find the container with id 023d9d82553497105973b4141ca3bcc94816185386f9c2bf46fc724064def53b Dec 06 05:45:52 crc kubenswrapper[4958]: I1206 05:45:52.751408 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7db7d4c645-s4nl5"] Dec 06 05:45:53 crc kubenswrapper[4958]: I1206 05:45:53.671978 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7db7d4c645-s4nl5" 
event={"ID":"5191ef31-713e-4f5d-9407-0f1d1f0ed462","Type":"ContainerStarted","Data":"023d9d82553497105973b4141ca3bcc94816185386f9c2bf46fc724064def53b"} Dec 06 05:46:00 crc kubenswrapper[4958]: I1206 05:46:00.738538 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7db7d4c645-s4nl5" event={"ID":"5191ef31-713e-4f5d-9407-0f1d1f0ed462","Type":"ContainerStarted","Data":"1456c31a7adef0840b4a6ce55a5ce7d2a37ae4436b7e7f07fa9d05762a8421f3"} Dec 06 05:46:00 crc kubenswrapper[4958]: I1206 05:46:00.739185 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-7db7d4c645-s4nl5" Dec 06 05:46:00 crc kubenswrapper[4958]: I1206 05:46:00.740846 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7f7f549dcb-v626m" event={"ID":"c9785861-cb8c-4a0b-82b3-a7425c228197","Type":"ContainerStarted","Data":"f4a10f633f67d7b3b756d1830cd1e9a940edc28023b210394d4f5bf9e0e6690a"} Dec 06 05:46:00 crc kubenswrapper[4958]: I1206 05:46:00.740984 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-7f7f549dcb-v626m" Dec 06 05:46:00 crc kubenswrapper[4958]: I1206 05:46:00.757626 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-7db7d4c645-s4nl5" podStartSLOduration=1.6667426939999999 podStartE2EDuration="8.757609488s" podCreationTimestamp="2025-12-06 05:45:52 +0000 UTC" firstStartedPulling="2025-12-06 05:45:52.75381089 +0000 UTC m=+1063.287581653" lastFinishedPulling="2025-12-06 05:45:59.844677684 +0000 UTC m=+1070.378448447" observedRunningTime="2025-12-06 05:46:00.756218531 +0000 UTC m=+1071.289989294" watchObservedRunningTime="2025-12-06 05:46:00.757609488 +0000 UTC m=+1071.291380251" Dec 06 05:46:09 crc kubenswrapper[4958]: I1206 05:46:09.866286 4958 patch_prober.go:28] interesting pod/machine-config-daemon-5ktnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 05:46:09 crc kubenswrapper[4958]: I1206 05:46:09.866915 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 05:46:09 crc kubenswrapper[4958]: I1206 05:46:09.866991 4958 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" Dec 06 05:46:09 crc kubenswrapper[4958]: I1206 05:46:09.867714 4958 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"447872a5a977e9a4540447295b9e8d682cfa59e938b820b5b11dc85cbe8a56f7"} pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 06 05:46:09 crc kubenswrapper[4958]: I1206 05:46:09.867784 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" 
containerName="machine-config-daemon" containerID="cri-o://447872a5a977e9a4540447295b9e8d682cfa59e938b820b5b11dc85cbe8a56f7" gracePeriod=600 Dec 06 05:46:11 crc kubenswrapper[4958]: I1206 05:46:11.804644 4958 generic.go:334] "Generic (PLEG): container finished" podID="c13528c0-da5d-4d55-9155-2c29c33edfc4" containerID="447872a5a977e9a4540447295b9e8d682cfa59e938b820b5b11dc85cbe8a56f7" exitCode=0 Dec 06 05:46:11 crc kubenswrapper[4958]: I1206 05:46:11.804682 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" event={"ID":"c13528c0-da5d-4d55-9155-2c29c33edfc4","Type":"ContainerDied","Data":"447872a5a977e9a4540447295b9e8d682cfa59e938b820b5b11dc85cbe8a56f7"} Dec 06 05:46:11 crc kubenswrapper[4958]: I1206 05:46:11.804981 4958 scope.go:117] "RemoveContainer" containerID="54b4894bbf7e81e569496756397053ebc513bca9497efd2cd9161604c907d3ec" Dec 06 05:46:12 crc kubenswrapper[4958]: I1206 05:46:12.523300 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-7db7d4c645-s4nl5" Dec 06 05:46:12 crc kubenswrapper[4958]: I1206 05:46:12.540757 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-7f7f549dcb-v626m" podStartSLOduration=14.345304291 podStartE2EDuration="21.540739173s" podCreationTimestamp="2025-12-06 05:45:51 +0000 UTC" firstStartedPulling="2025-12-06 05:45:52.63149203 +0000 UTC m=+1063.165262793" lastFinishedPulling="2025-12-06 05:45:59.826926912 +0000 UTC m=+1070.360697675" observedRunningTime="2025-12-06 05:46:00.78020868 +0000 UTC m=+1071.313979443" watchObservedRunningTime="2025-12-06 05:46:12.540739173 +0000 UTC m=+1083.074509936" Dec 06 05:46:12 crc kubenswrapper[4958]: I1206 05:46:12.813131 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" event={"ID":"c13528c0-da5d-4d55-9155-2c29c33edfc4","Type":"ContainerStarted","Data":"302a14bf1e4711bf21e8ab7165ce5c1b79633fb07014ab098243520b48862bd0"} Dec 06 05:46:32 crc kubenswrapper[4958]: I1206 05:46:32.152454 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-7f7f549dcb-v626m" Dec 06 05:46:32 crc kubenswrapper[4958]: I1206 05:46:32.884088 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-pvdb2"] Dec 06 05:46:32 crc kubenswrapper[4958]: I1206 05:46:32.886610 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-pvdb2" Dec 06 05:46:32 crc kubenswrapper[4958]: I1206 05:46:32.888827 4958 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Dec 06 05:46:32 crc kubenswrapper[4958]: I1206 05:46:32.891904 4958 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-jtjjx" Dec 06 05:46:32 crc kubenswrapper[4958]: I1206 05:46:32.892016 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Dec 06 05:46:32 crc kubenswrapper[4958]: I1206 05:46:32.901754 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-9gmd4"] Dec 06 05:46:32 crc kubenswrapper[4958]: I1206 05:46:32.902423 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-9gmd4" Dec 06 05:46:32 crc kubenswrapper[4958]: I1206 05:46:32.905648 4958 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Dec 06 05:46:32 crc kubenswrapper[4958]: I1206 05:46:32.917721 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-9gmd4"] Dec 06 05:46:32 crc kubenswrapper[4958]: I1206 05:46:32.966155 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-4ksm2"] Dec 06 05:46:32 crc kubenswrapper[4958]: I1206 05:46:32.967313 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-4ksm2" Dec 06 05:46:32 crc kubenswrapper[4958]: I1206 05:46:32.970962 4958 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Dec 06 05:46:32 crc kubenswrapper[4958]: I1206 05:46:32.971187 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Dec 06 05:46:32 crc kubenswrapper[4958]: I1206 05:46:32.971210 4958 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Dec 06 05:46:32 crc kubenswrapper[4958]: I1206 05:46:32.971543 4958 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-g84dh" Dec 06 05:46:32 crc kubenswrapper[4958]: I1206 05:46:32.984945 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrg2z\" (UniqueName: \"kubernetes.io/projected/b3e2fd62-29f5-4627-91c0-581a8645568b-kube-api-access-zrg2z\") pod \"speaker-4ksm2\" (UID: \"b3e2fd62-29f5-4627-91c0-581a8645568b\") " pod="metallb-system/speaker-4ksm2" Dec 06 05:46:32 crc kubenswrapper[4958]: I1206 05:46:32.985032 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7j2gw\" (UniqueName: \"kubernetes.io/projected/450ae51d-8948-4c22-8336-efd2ed9f71d3-kube-api-access-7j2gw\") pod \"frr-k8s-webhook-server-7fcb986d4-9gmd4\" (UID: \"450ae51d-8948-4c22-8336-efd2ed9f71d3\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-9gmd4" Dec 06 05:46:32 crc kubenswrapper[4958]: I1206 05:46:32.985063 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b3e2fd62-29f5-4627-91c0-581a8645568b-metrics-certs\") pod \"speaker-4ksm2\" (UID: \"b3e2fd62-29f5-4627-91c0-581a8645568b\") " pod="metallb-system/speaker-4ksm2" Dec 06 05:46:32 crc kubenswrapper[4958]: I1206 05:46:32.985129 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/f44e552e-a8cb-4abf-bb5c-cfbde43b518b-reloader\") pod \"frr-k8s-pvdb2\" (UID: \"f44e552e-a8cb-4abf-bb5c-cfbde43b518b\") " pod="metallb-system/frr-k8s-pvdb2" Dec 06 05:46:32 crc kubenswrapper[4958]: I1206 05:46:32.985199 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99q4j\" (UniqueName: \"kubernetes.io/projected/f44e552e-a8cb-4abf-bb5c-cfbde43b518b-kube-api-access-99q4j\") pod \"frr-k8s-pvdb2\" (UID: \"f44e552e-a8cb-4abf-bb5c-cfbde43b518b\") " pod="metallb-system/frr-k8s-pvdb2" Dec 06 05:46:32 crc kubenswrapper[4958]: I1206 05:46:32.985228 4958 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f44e552e-a8cb-4abf-bb5c-cfbde43b518b-metrics-certs\") pod \"frr-k8s-pvdb2\" (UID: \"f44e552e-a8cb-4abf-bb5c-cfbde43b518b\") " pod="metallb-system/frr-k8s-pvdb2" Dec 06 05:46:32 crc kubenswrapper[4958]: I1206 05:46:32.985305 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b3e2fd62-29f5-4627-91c0-581a8645568b-memberlist\") pod \"speaker-4ksm2\" (UID: \"b3e2fd62-29f5-4627-91c0-581a8645568b\") " pod="metallb-system/speaker-4ksm2" Dec 06 05:46:32 crc kubenswrapper[4958]: I1206 05:46:32.985338 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/f44e552e-a8cb-4abf-bb5c-cfbde43b518b-metrics\") pod \"frr-k8s-pvdb2\" (UID: \"f44e552e-a8cb-4abf-bb5c-cfbde43b518b\") " pod="metallb-system/frr-k8s-pvdb2" Dec 06 05:46:32 crc kubenswrapper[4958]: I1206 05:46:32.985366 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/f44e552e-a8cb-4abf-bb5c-cfbde43b518b-frr-sockets\") pod \"frr-k8s-pvdb2\" (UID: \"f44e552e-a8cb-4abf-bb5c-cfbde43b518b\") " pod="metallb-system/frr-k8s-pvdb2" Dec 06 05:46:32 crc kubenswrapper[4958]: I1206 05:46:32.985397 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/b3e2fd62-29f5-4627-91c0-581a8645568b-metallb-excludel2\") pod \"speaker-4ksm2\" (UID: \"b3e2fd62-29f5-4627-91c0-581a8645568b\") " pod="metallb-system/speaker-4ksm2" Dec 06 05:46:32 crc kubenswrapper[4958]: I1206 05:46:32.985419 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/f44e552e-a8cb-4abf-bb5c-cfbde43b518b-frr-conf\") pod \"frr-k8s-pvdb2\" (UID: \"f44e552e-a8cb-4abf-bb5c-cfbde43b518b\") " pod="metallb-system/frr-k8s-pvdb2" Dec 06 05:46:32 crc kubenswrapper[4958]: I1206 05:46:32.985440 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/f44e552e-a8cb-4abf-bb5c-cfbde43b518b-frr-startup\") pod \"frr-k8s-pvdb2\" (UID: \"f44e552e-a8cb-4abf-bb5c-cfbde43b518b\") " pod="metallb-system/frr-k8s-pvdb2" Dec 06 05:46:32 crc kubenswrapper[4958]: I1206 05:46:32.985463 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/450ae51d-8948-4c22-8336-efd2ed9f71d3-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-9gmd4\" (UID: \"450ae51d-8948-4c22-8336-efd2ed9f71d3\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-9gmd4" Dec 06 05:46:32 crc kubenswrapper[4958]: I1206 05:46:32.994517 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-f8648f98b-55kcs"] Dec 06 05:46:32 crc kubenswrapper[4958]: I1206 05:46:32.995629 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-f8648f98b-55kcs" Dec 06 05:46:32 crc kubenswrapper[4958]: I1206 05:46:32.997518 4958 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Dec 06 05:46:33 crc kubenswrapper[4958]: I1206 05:46:33.007900 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-f8648f98b-55kcs"] Dec 06 05:46:33 crc kubenswrapper[4958]: I1206 05:46:33.086392 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrg2z\" (UniqueName: \"kubernetes.io/projected/b3e2fd62-29f5-4627-91c0-581a8645568b-kube-api-access-zrg2z\") pod \"speaker-4ksm2\" (UID: \"b3e2fd62-29f5-4627-91c0-581a8645568b\") " pod="metallb-system/speaker-4ksm2" Dec 06 05:46:33 crc kubenswrapper[4958]: I1206 05:46:33.086713 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7j2gw\" (UniqueName: \"kubernetes.io/projected/450ae51d-8948-4c22-8336-efd2ed9f71d3-kube-api-access-7j2gw\") pod \"frr-k8s-webhook-server-7fcb986d4-9gmd4\" (UID: \"450ae51d-8948-4c22-8336-efd2ed9f71d3\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-9gmd4" Dec 06 05:46:33 crc kubenswrapper[4958]: I1206 05:46:33.086800 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b3e2fd62-29f5-4627-91c0-581a8645568b-metrics-certs\") pod \"speaker-4ksm2\" (UID: \"b3e2fd62-29f5-4627-91c0-581a8645568b\") " pod="metallb-system/speaker-4ksm2" Dec 06 05:46:33 crc kubenswrapper[4958]: I1206 05:46:33.086892 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/f44e552e-a8cb-4abf-bb5c-cfbde43b518b-reloader\") pod \"frr-k8s-pvdb2\" (UID: \"f44e552e-a8cb-4abf-bb5c-cfbde43b518b\") " pod="metallb-system/frr-k8s-pvdb2" Dec 06 05:46:33 crc kubenswrapper[4958]: I1206 05:46:33.086975 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/09419a79-968a-4da5-8ab8-f9abb9508ac5-cert\") pod \"controller-f8648f98b-55kcs\" (UID: \"09419a79-968a-4da5-8ab8-f9abb9508ac5\") " pod="metallb-system/controller-f8648f98b-55kcs" Dec 06 05:46:33 crc kubenswrapper[4958]: E1206 05:46:33.086927 4958 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Dec 06 05:46:33 crc kubenswrapper[4958]: I1206 05:46:33.087052 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99q4j\" (UniqueName: \"kubernetes.io/projected/f44e552e-a8cb-4abf-bb5c-cfbde43b518b-kube-api-access-99q4j\") pod \"frr-k8s-pvdb2\" (UID: \"f44e552e-a8cb-4abf-bb5c-cfbde43b518b\") " pod="metallb-system/frr-k8s-pvdb2" Dec 06 05:46:33 crc kubenswrapper[4958]: E1206 05:46:33.087112 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3e2fd62-29f5-4627-91c0-581a8645568b-metrics-certs podName:b3e2fd62-29f5-4627-91c0-581a8645568b nodeName:}" failed. No retries permitted until 2025-12-06 05:46:33.587089997 +0000 UTC m=+1104.120860750 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b3e2fd62-29f5-4627-91c0-581a8645568b-metrics-certs") pod "speaker-4ksm2" (UID: "b3e2fd62-29f5-4627-91c0-581a8645568b") : secret "speaker-certs-secret" not found Dec 06 05:46:33 crc kubenswrapper[4958]: I1206 05:46:33.087234 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f44e552e-a8cb-4abf-bb5c-cfbde43b518b-metrics-certs\") pod \"frr-k8s-pvdb2\" (UID: \"f44e552e-a8cb-4abf-bb5c-cfbde43b518b\") " pod="metallb-system/frr-k8s-pvdb2" Dec 06 05:46:33 crc kubenswrapper[4958]: I1206 05:46:33.087282 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/f44e552e-a8cb-4abf-bb5c-cfbde43b518b-reloader\") pod \"frr-k8s-pvdb2\" (UID: \"f44e552e-a8cb-4abf-bb5c-cfbde43b518b\") " pod="metallb-system/frr-k8s-pvdb2" Dec 06 05:46:33 crc kubenswrapper[4958]: I1206 05:46:33.087286 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/09419a79-968a-4da5-8ab8-f9abb9508ac5-metrics-certs\") pod \"controller-f8648f98b-55kcs\" (UID: \"09419a79-968a-4da5-8ab8-f9abb9508ac5\") " pod="metallb-system/controller-f8648f98b-55kcs" Dec 06 05:46:33 crc kubenswrapper[4958]: I1206 05:46:33.087420 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b3e2fd62-29f5-4627-91c0-581a8645568b-memberlist\") pod \"speaker-4ksm2\" (UID: \"b3e2fd62-29f5-4627-91c0-581a8645568b\") " pod="metallb-system/speaker-4ksm2" Dec 06 05:46:33 crc kubenswrapper[4958]: I1206 05:46:33.087452 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/f44e552e-a8cb-4abf-bb5c-cfbde43b518b-metrics\") pod \"frr-k8s-pvdb2\" (UID: \"f44e552e-a8cb-4abf-bb5c-cfbde43b518b\") " pod="metallb-system/frr-k8s-pvdb2" Dec 06 05:46:33 crc kubenswrapper[4958]: I1206 05:46:33.087523 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4tmz\" (UniqueName: \"kubernetes.io/projected/09419a79-968a-4da5-8ab8-f9abb9508ac5-kube-api-access-t4tmz\") pod \"controller-f8648f98b-55kcs\" (UID: \"09419a79-968a-4da5-8ab8-f9abb9508ac5\") " pod="metallb-system/controller-f8648f98b-55kcs" Dec 06 05:46:33 crc kubenswrapper[4958]: I1206 05:46:33.087557 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/f44e552e-a8cb-4abf-bb5c-cfbde43b518b-frr-sockets\") pod \"frr-k8s-pvdb2\" (UID: \"f44e552e-a8cb-4abf-bb5c-cfbde43b518b\") " pod="metallb-system/frr-k8s-pvdb2" Dec 06 05:46:33 crc kubenswrapper[4958]: E1206 05:46:33.087595 4958 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Dec 06 05:46:33 crc kubenswrapper[4958]: E1206 05:46:33.087674 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3e2fd62-29f5-4627-91c0-581a8645568b-memberlist podName:b3e2fd62-29f5-4627-91c0-581a8645568b nodeName:}" failed. No retries permitted until 2025-12-06 05:46:33.587634912 +0000 UTC m=+1104.121405675 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/b3e2fd62-29f5-4627-91c0-581a8645568b-memberlist") pod "speaker-4ksm2" (UID: "b3e2fd62-29f5-4627-91c0-581a8645568b") : secret "metallb-memberlist" not found Dec 06 05:46:33 crc kubenswrapper[4958]: I1206 05:46:33.087697 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/b3e2fd62-29f5-4627-91c0-581a8645568b-metallb-excludel2\") pod \"speaker-4ksm2\" (UID: \"b3e2fd62-29f5-4627-91c0-581a8645568b\") " pod="metallb-system/speaker-4ksm2" Dec 06 05:46:33 crc kubenswrapper[4958]: I1206 05:46:33.087753 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/f44e552e-a8cb-4abf-bb5c-cfbde43b518b-frr-conf\") pod \"frr-k8s-pvdb2\" (UID: \"f44e552e-a8cb-4abf-bb5c-cfbde43b518b\") " pod="metallb-system/frr-k8s-pvdb2" Dec 06 05:46:33 crc kubenswrapper[4958]: I1206 05:46:33.087859 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/f44e552e-a8cb-4abf-bb5c-cfbde43b518b-metrics\") pod \"frr-k8s-pvdb2\" (UID: \"f44e552e-a8cb-4abf-bb5c-cfbde43b518b\") " pod="metallb-system/frr-k8s-pvdb2" Dec 06 05:46:33 crc kubenswrapper[4958]: I1206 05:46:33.088100 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/f44e552e-a8cb-4abf-bb5c-cfbde43b518b-frr-sockets\") pod \"frr-k8s-pvdb2\" (UID: \"f44e552e-a8cb-4abf-bb5c-cfbde43b518b\") " pod="metallb-system/frr-k8s-pvdb2" Dec 06 05:46:33 crc kubenswrapper[4958]: I1206 05:46:33.088185 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/f44e552e-a8cb-4abf-bb5c-cfbde43b518b-frr-conf\") pod \"frr-k8s-pvdb2\" (UID: \"f44e552e-a8cb-4abf-bb5c-cfbde43b518b\") " pod="metallb-system/frr-k8s-pvdb2" Dec 06 05:46:33 crc kubenswrapper[4958]: I1206 05:46:33.088332 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/b3e2fd62-29f5-4627-91c0-581a8645568b-metallb-excludel2\") pod \"speaker-4ksm2\" (UID: \"b3e2fd62-29f5-4627-91c0-581a8645568b\") " pod="metallb-system/speaker-4ksm2" Dec 06 05:46:33 crc kubenswrapper[4958]: I1206 05:46:33.088395 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/f44e552e-a8cb-4abf-bb5c-cfbde43b518b-frr-startup\") pod \"frr-k8s-pvdb2\" (UID: \"f44e552e-a8cb-4abf-bb5c-cfbde43b518b\") " pod="metallb-system/frr-k8s-pvdb2" Dec 06 05:46:33 crc kubenswrapper[4958]: I1206 05:46:33.088428 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/450ae51d-8948-4c22-8336-efd2ed9f71d3-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-9gmd4\" (UID: \"450ae51d-8948-4c22-8336-efd2ed9f71d3\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-9gmd4" Dec 06 05:46:33 crc kubenswrapper[4958]: I1206 05:46:33.089081 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/f44e552e-a8cb-4abf-bb5c-cfbde43b518b-frr-startup\") pod \"frr-k8s-pvdb2\" (UID: \"f44e552e-a8cb-4abf-bb5c-cfbde43b518b\") " pod="metallb-system/frr-k8s-pvdb2" Dec 06 05:46:33 crc kubenswrapper[4958]: I1206 05:46:33.095296 4958 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f44e552e-a8cb-4abf-bb5c-cfbde43b518b-metrics-certs\") pod \"frr-k8s-pvdb2\" (UID: \"f44e552e-a8cb-4abf-bb5c-cfbde43b518b\") " pod="metallb-system/frr-k8s-pvdb2" Dec 06 05:46:33 crc kubenswrapper[4958]: I1206 05:46:33.101186 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/450ae51d-8948-4c22-8336-efd2ed9f71d3-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-9gmd4\" (UID: \"450ae51d-8948-4c22-8336-efd2ed9f71d3\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-9gmd4" Dec 06 05:46:33 crc kubenswrapper[4958]: I1206 05:46:33.104535 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7j2gw\" (UniqueName: \"kubernetes.io/projected/450ae51d-8948-4c22-8336-efd2ed9f71d3-kube-api-access-7j2gw\") pod \"frr-k8s-webhook-server-7fcb986d4-9gmd4\" (UID: \"450ae51d-8948-4c22-8336-efd2ed9f71d3\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-9gmd4" Dec 06 05:46:33 crc kubenswrapper[4958]: I1206 05:46:33.107221 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrg2z\" (UniqueName: \"kubernetes.io/projected/b3e2fd62-29f5-4627-91c0-581a8645568b-kube-api-access-zrg2z\") pod \"speaker-4ksm2\" (UID: \"b3e2fd62-29f5-4627-91c0-581a8645568b\") " pod="metallb-system/speaker-4ksm2" Dec 06 05:46:33 crc kubenswrapper[4958]: I1206 05:46:33.111996 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99q4j\" (UniqueName: \"kubernetes.io/projected/f44e552e-a8cb-4abf-bb5c-cfbde43b518b-kube-api-access-99q4j\") pod \"frr-k8s-pvdb2\" (UID: \"f44e552e-a8cb-4abf-bb5c-cfbde43b518b\") " pod="metallb-system/frr-k8s-pvdb2" Dec 06 05:46:33 crc kubenswrapper[4958]: I1206 05:46:33.189990 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4tmz\" (UniqueName: \"kubernetes.io/projected/09419a79-968a-4da5-8ab8-f9abb9508ac5-kube-api-access-t4tmz\") pod \"controller-f8648f98b-55kcs\" (UID: \"09419a79-968a-4da5-8ab8-f9abb9508ac5\") " pod="metallb-system/controller-f8648f98b-55kcs" Dec 06 05:46:33 crc kubenswrapper[4958]: I1206 05:46:33.190157 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/09419a79-968a-4da5-8ab8-f9abb9508ac5-cert\") pod \"controller-f8648f98b-55kcs\" (UID: \"09419a79-968a-4da5-8ab8-f9abb9508ac5\") " pod="metallb-system/controller-f8648f98b-55kcs" Dec 06 05:46:33 crc kubenswrapper[4958]: I1206 05:46:33.190448 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/09419a79-968a-4da5-8ab8-f9abb9508ac5-metrics-certs\") pod \"controller-f8648f98b-55kcs\" (UID: \"09419a79-968a-4da5-8ab8-f9abb9508ac5\") " pod="metallb-system/controller-f8648f98b-55kcs" Dec 06 05:46:33 crc kubenswrapper[4958]: I1206 05:46:33.191782 4958 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Dec 06 05:46:33 crc kubenswrapper[4958]: I1206 05:46:33.197070 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/09419a79-968a-4da5-8ab8-f9abb9508ac5-metrics-certs\") pod \"controller-f8648f98b-55kcs\" (UID: \"09419a79-968a-4da5-8ab8-f9abb9508ac5\") " pod="metallb-system/controller-f8648f98b-55kcs" Dec 06 05:46:33 crc 
kubenswrapper[4958]: I1206 05:46:33.204290 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/09419a79-968a-4da5-8ab8-f9abb9508ac5-cert\") pod \"controller-f8648f98b-55kcs\" (UID: \"09419a79-968a-4da5-8ab8-f9abb9508ac5\") " pod="metallb-system/controller-f8648f98b-55kcs" Dec 06 05:46:33 crc kubenswrapper[4958]: I1206 05:46:33.204663 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-pvdb2" Dec 06 05:46:33 crc kubenswrapper[4958]: I1206 05:46:33.210964 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4tmz\" (UniqueName: \"kubernetes.io/projected/09419a79-968a-4da5-8ab8-f9abb9508ac5-kube-api-access-t4tmz\") pod \"controller-f8648f98b-55kcs\" (UID: \"09419a79-968a-4da5-8ab8-f9abb9508ac5\") " pod="metallb-system/controller-f8648f98b-55kcs" Dec 06 05:46:33 crc kubenswrapper[4958]: I1206 05:46:33.217711 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-9gmd4" Dec 06 05:46:33 crc kubenswrapper[4958]: I1206 05:46:33.309729 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-f8648f98b-55kcs" Dec 06 05:46:33 crc kubenswrapper[4958]: I1206 05:46:33.594265 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b3e2fd62-29f5-4627-91c0-581a8645568b-metrics-certs\") pod \"speaker-4ksm2\" (UID: \"b3e2fd62-29f5-4627-91c0-581a8645568b\") " pod="metallb-system/speaker-4ksm2" Dec 06 05:46:33 crc kubenswrapper[4958]: I1206 05:46:33.594672 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b3e2fd62-29f5-4627-91c0-581a8645568b-memberlist\") pod \"speaker-4ksm2\" (UID: \"b3e2fd62-29f5-4627-91c0-581a8645568b\") " pod="metallb-system/speaker-4ksm2" Dec 06 05:46:33 crc kubenswrapper[4958]: E1206 05:46:33.594811 4958 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Dec 06 05:46:33 crc kubenswrapper[4958]: E1206 05:46:33.594877 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3e2fd62-29f5-4627-91c0-581a8645568b-memberlist podName:b3e2fd62-29f5-4627-91c0-581a8645568b nodeName:}" failed. No retries permitted until 2025-12-06 05:46:34.594859932 +0000 UTC m=+1105.128630695 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/b3e2fd62-29f5-4627-91c0-581a8645568b-memberlist") pod "speaker-4ksm2" (UID: "b3e2fd62-29f5-4627-91c0-581a8645568b") : secret "metallb-memberlist" not found Dec 06 05:46:33 crc kubenswrapper[4958]: I1206 05:46:33.601727 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b3e2fd62-29f5-4627-91c0-581a8645568b-metrics-certs\") pod \"speaker-4ksm2\" (UID: \"b3e2fd62-29f5-4627-91c0-581a8645568b\") " pod="metallb-system/speaker-4ksm2" Dec 06 05:46:33 crc kubenswrapper[4958]: I1206 05:46:33.613695 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-9gmd4"] Dec 06 05:46:33 crc kubenswrapper[4958]: W1206 05:46:33.615699 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod450ae51d_8948_4c22_8336_efd2ed9f71d3.slice/crio-9259a7ad2e2087ed02265418497ecfd2d7e57c3a530c9f1e297dc36cdd2b9ae4 WatchSource:0}: Error finding container 9259a7ad2e2087ed02265418497ecfd2d7e57c3a530c9f1e297dc36cdd2b9ae4: Status 404 returned error can't find the container with id 9259a7ad2e2087ed02265418497ecfd2d7e57c3a530c9f1e297dc36cdd2b9ae4 Dec 06 05:46:33 crc kubenswrapper[4958]: I1206 05:46:33.696129 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-f8648f98b-55kcs"] Dec 06 05:46:33 crc kubenswrapper[4958]: W1206 05:46:33.699078 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod09419a79_968a_4da5_8ab8_f9abb9508ac5.slice/crio-6b24ad6fb3ba1f377eabdb01a5e7b90f223c080c9cd2df3f51e76b426a554043 WatchSource:0}: Error finding container 6b24ad6fb3ba1f377eabdb01a5e7b90f223c080c9cd2df3f51e76b426a554043: Status 404 returned error can't find the container with id 6b24ad6fb3ba1f377eabdb01a5e7b90f223c080c9cd2df3f51e76b426a554043 Dec 06 05:46:33 crc kubenswrapper[4958]: I1206 05:46:33.934347 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-55kcs" event={"ID":"09419a79-968a-4da5-8ab8-f9abb9508ac5","Type":"ContainerStarted","Data":"8d56cdf40dc64af9b45c46d3f9774daab57c1d49a7355164c89a25eed60e0873"} Dec 06 05:46:33 crc kubenswrapper[4958]: I1206 05:46:33.934862 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-55kcs" event={"ID":"09419a79-968a-4da5-8ab8-f9abb9508ac5","Type":"ContainerStarted","Data":"e75cff1a69b4c02456949d19b6a9e44fc528123dcf8771eafb84cd0e911403cf"} Dec 06 05:46:33 crc kubenswrapper[4958]: I1206 05:46:33.934880 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-55kcs" event={"ID":"09419a79-968a-4da5-8ab8-f9abb9508ac5","Type":"ContainerStarted","Data":"6b24ad6fb3ba1f377eabdb01a5e7b90f223c080c9cd2df3f51e76b426a554043"} Dec 06 05:46:33 crc kubenswrapper[4958]: I1206 05:46:33.934900 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-f8648f98b-55kcs" Dec 06 05:46:33 crc kubenswrapper[4958]: I1206 05:46:33.935535 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-9gmd4" event={"ID":"450ae51d-8948-4c22-8336-efd2ed9f71d3","Type":"ContainerStarted","Data":"9259a7ad2e2087ed02265418497ecfd2d7e57c3a530c9f1e297dc36cdd2b9ae4"} Dec 06 05:46:33 crc kubenswrapper[4958]: I1206 05:46:33.936678 
4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pvdb2" event={"ID":"f44e552e-a8cb-4abf-bb5c-cfbde43b518b","Type":"ContainerStarted","Data":"1e9ac74ce5f947a61d9bd30708fdcafa194094da02e44c7f0c3b3d0fb4e67071"} Dec 06 05:46:33 crc kubenswrapper[4958]: I1206 05:46:33.951966 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-f8648f98b-55kcs" podStartSLOduration=1.95194944 podStartE2EDuration="1.95194944s" podCreationTimestamp="2025-12-06 05:46:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:46:33.947872561 +0000 UTC m=+1104.481643334" watchObservedRunningTime="2025-12-06 05:46:33.95194944 +0000 UTC m=+1104.485720203" Dec 06 05:46:34 crc kubenswrapper[4958]: I1206 05:46:34.611492 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b3e2fd62-29f5-4627-91c0-581a8645568b-memberlist\") pod \"speaker-4ksm2\" (UID: \"b3e2fd62-29f5-4627-91c0-581a8645568b\") " pod="metallb-system/speaker-4ksm2" Dec 06 05:46:34 crc kubenswrapper[4958]: I1206 05:46:34.625767 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b3e2fd62-29f5-4627-91c0-581a8645568b-memberlist\") pod \"speaker-4ksm2\" (UID: \"b3e2fd62-29f5-4627-91c0-581a8645568b\") " pod="metallb-system/speaker-4ksm2" Dec 06 05:46:34 crc kubenswrapper[4958]: I1206 05:46:34.782714 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-4ksm2" Dec 06 05:46:34 crc kubenswrapper[4958]: W1206 05:46:34.807322 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb3e2fd62_29f5_4627_91c0_581a8645568b.slice/crio-bc06e80f04fc219d0349338472ff0b8cf541e50ec8f0e39f37116f37fd4960f8 WatchSource:0}: Error finding container bc06e80f04fc219d0349338472ff0b8cf541e50ec8f0e39f37116f37fd4960f8: Status 404 returned error can't find the container with id bc06e80f04fc219d0349338472ff0b8cf541e50ec8f0e39f37116f37fd4960f8 Dec 06 05:46:34 crc kubenswrapper[4958]: I1206 05:46:34.944737 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-4ksm2" event={"ID":"b3e2fd62-29f5-4627-91c0-581a8645568b","Type":"ContainerStarted","Data":"bc06e80f04fc219d0349338472ff0b8cf541e50ec8f0e39f37116f37fd4960f8"} Dec 06 05:46:35 crc kubenswrapper[4958]: I1206 05:46:35.965812 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-4ksm2" event={"ID":"b3e2fd62-29f5-4627-91c0-581a8645568b","Type":"ContainerStarted","Data":"8cd14005e0d6535a3ebb674a6659de8745601b79f396f4e29b341c643c51e5a2"} Dec 06 05:46:35 crc kubenswrapper[4958]: I1206 05:46:35.965887 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-4ksm2" event={"ID":"b3e2fd62-29f5-4627-91c0-581a8645568b","Type":"ContainerStarted","Data":"9b6c43c7bdbb632e70a5554871a56bc2244c5b2d9cef127ad8f49bed74524016"} Dec 06 05:46:35 crc kubenswrapper[4958]: I1206 05:46:35.966083 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-4ksm2" Dec 06 05:46:39 crc kubenswrapper[4958]: I1206 05:46:39.785243 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-4ksm2" podStartSLOduration=7.785220192 podStartE2EDuration="7.785220192s" 
podCreationTimestamp="2025-12-06 05:46:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:46:35.989059108 +0000 UTC m=+1106.522829871" watchObservedRunningTime="2025-12-06 05:46:39.785220192 +0000 UTC m=+1110.318990975" Dec 06 05:46:42 crc kubenswrapper[4958]: I1206 05:46:42.017939 4958 generic.go:334] "Generic (PLEG): container finished" podID="f44e552e-a8cb-4abf-bb5c-cfbde43b518b" containerID="961a6ea8b6c7e5229688869714172cbb68c8a976f59bca1d3f88c8b6f558b5fe" exitCode=0 Dec 06 05:46:42 crc kubenswrapper[4958]: I1206 05:46:42.018386 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pvdb2" event={"ID":"f44e552e-a8cb-4abf-bb5c-cfbde43b518b","Type":"ContainerDied","Data":"961a6ea8b6c7e5229688869714172cbb68c8a976f59bca1d3f88c8b6f558b5fe"} Dec 06 05:46:42 crc kubenswrapper[4958]: I1206 05:46:42.022508 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-9gmd4" event={"ID":"450ae51d-8948-4c22-8336-efd2ed9f71d3","Type":"ContainerStarted","Data":"fc6c75afd8103fabc14f7e7dbb7e12fa90fdbf07798c062326e3fd0f655d4c10"} Dec 06 05:46:42 crc kubenswrapper[4958]: I1206 05:46:42.022697 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-9gmd4" Dec 06 05:46:42 crc kubenswrapper[4958]: I1206 05:46:42.083041 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-9gmd4" podStartSLOduration=1.9225234900000001 podStartE2EDuration="10.083012715s" podCreationTimestamp="2025-12-06 05:46:32 +0000 UTC" firstStartedPulling="2025-12-06 05:46:33.617633208 +0000 UTC m=+1104.151403971" lastFinishedPulling="2025-12-06 05:46:41.778122433 +0000 UTC m=+1112.311893196" observedRunningTime="2025-12-06 05:46:42.075444792 +0000 UTC m=+1112.609215555" watchObservedRunningTime="2025-12-06 05:46:42.083012715 +0000 UTC m=+1112.616783478" Dec 06 05:46:43 crc kubenswrapper[4958]: I1206 05:46:43.034531 4958 generic.go:334] "Generic (PLEG): container finished" podID="f44e552e-a8cb-4abf-bb5c-cfbde43b518b" containerID="bf2dd1fef930787dab7d2995e9523a68ba9ae5da1e8bc505e2802e272250f279" exitCode=0 Dec 06 05:46:43 crc kubenswrapper[4958]: I1206 05:46:43.034630 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pvdb2" event={"ID":"f44e552e-a8cb-4abf-bb5c-cfbde43b518b","Type":"ContainerDied","Data":"bf2dd1fef930787dab7d2995e9523a68ba9ae5da1e8bc505e2802e272250f279"} Dec 06 05:46:43 crc kubenswrapper[4958]: I1206 05:46:43.314585 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-f8648f98b-55kcs" Dec 06 05:46:44 crc kubenswrapper[4958]: I1206 05:46:44.042100 4958 generic.go:334] "Generic (PLEG): container finished" podID="f44e552e-a8cb-4abf-bb5c-cfbde43b518b" containerID="1c82a31bc1f21fd0b9955c2bcfda42ca30493bced250ac3e2da25265662e3741" exitCode=0 Dec 06 05:46:44 crc kubenswrapper[4958]: I1206 05:46:44.042853 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pvdb2" event={"ID":"f44e552e-a8cb-4abf-bb5c-cfbde43b518b","Type":"ContainerDied","Data":"1c82a31bc1f21fd0b9955c2bcfda42ca30493bced250ac3e2da25265662e3741"} Dec 06 05:46:45 crc kubenswrapper[4958]: I1206 05:46:45.054838 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pvdb2" 
event={"ID":"f44e552e-a8cb-4abf-bb5c-cfbde43b518b","Type":"ContainerStarted","Data":"4b6b77f6d40766e583839711990e5b76799af8be358c40dfc34d0f2e89610fb8"} Dec 06 05:46:45 crc kubenswrapper[4958]: I1206 05:46:45.054885 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pvdb2" event={"ID":"f44e552e-a8cb-4abf-bb5c-cfbde43b518b","Type":"ContainerStarted","Data":"289f080026742007646cb760f95984927b76c7a69ca7735bbfab32ddd553491c"} Dec 06 05:46:45 crc kubenswrapper[4958]: I1206 05:46:45.054896 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pvdb2" event={"ID":"f44e552e-a8cb-4abf-bb5c-cfbde43b518b","Type":"ContainerStarted","Data":"953c689d064616f42ed71cd0d86775582680a41a0d82e77fe5d88f691d40382b"} Dec 06 05:46:45 crc kubenswrapper[4958]: I1206 05:46:45.054907 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pvdb2" event={"ID":"f44e552e-a8cb-4abf-bb5c-cfbde43b518b","Type":"ContainerStarted","Data":"65f39785d6199eab881505783ad9c285595f97ec8ba301fc514d38e90d573b04"} Dec 06 05:46:45 crc kubenswrapper[4958]: I1206 05:46:45.054917 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pvdb2" event={"ID":"f44e552e-a8cb-4abf-bb5c-cfbde43b518b","Type":"ContainerStarted","Data":"7080056af075e7a9f70659f59bdb536eaa83633e3b1325cbc4cb4b6eda4beca2"} Dec 06 05:46:45 crc kubenswrapper[4958]: I1206 05:46:45.054926 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-pvdb2" event={"ID":"f44e552e-a8cb-4abf-bb5c-cfbde43b518b","Type":"ContainerStarted","Data":"0add8dd7594ea841d7a80f4d053aaca0f997ca0c14a3b1097e3c38ddb235303b"} Dec 06 05:46:45 crc kubenswrapper[4958]: I1206 05:46:45.056342 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-pvdb2" Dec 06 05:46:45 crc kubenswrapper[4958]: I1206 05:46:45.081616 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-pvdb2" podStartSLOduration=4.649331072 podStartE2EDuration="13.081599033s" podCreationTimestamp="2025-12-06 05:46:32 +0000 UTC" firstStartedPulling="2025-12-06 05:46:33.362464617 +0000 UTC m=+1103.896235380" lastFinishedPulling="2025-12-06 05:46:41.794732538 +0000 UTC m=+1112.328503341" observedRunningTime="2025-12-06 05:46:45.073807144 +0000 UTC m=+1115.607577907" watchObservedRunningTime="2025-12-06 05:46:45.081599033 +0000 UTC m=+1115.615369796" Dec 06 05:46:48 crc kubenswrapper[4958]: I1206 05:46:48.205125 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-pvdb2" Dec 06 05:46:48 crc kubenswrapper[4958]: I1206 05:46:48.244240 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-pvdb2" Dec 06 05:46:53 crc kubenswrapper[4958]: I1206 05:46:53.228062 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-9gmd4" Dec 06 05:46:54 crc kubenswrapper[4958]: I1206 05:46:54.788528 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-4ksm2" Dec 06 05:46:57 crc kubenswrapper[4958]: I1206 05:46:57.605504 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-l65dd"] Dec 06 05:46:57 crc kubenswrapper[4958]: I1206 05:46:57.606718 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-l65dd" Dec 06 05:46:57 crc kubenswrapper[4958]: I1206 05:46:57.609831 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44mcv\" (UniqueName: \"kubernetes.io/projected/90c0ad5e-651b-41ae-9c98-851d6e70d94b-kube-api-access-44mcv\") pod \"openstack-operator-index-l65dd\" (UID: \"90c0ad5e-651b-41ae-9c98-851d6e70d94b\") " pod="openstack-operators/openstack-operator-index-l65dd" Dec 06 05:46:57 crc kubenswrapper[4958]: I1206 05:46:57.614033 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-gffjs" Dec 06 05:46:57 crc kubenswrapper[4958]: I1206 05:46:57.614215 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Dec 06 05:46:57 crc kubenswrapper[4958]: I1206 05:46:57.615246 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Dec 06 05:46:57 crc kubenswrapper[4958]: I1206 05:46:57.617197 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-l65dd"] Dec 06 05:46:57 crc kubenswrapper[4958]: I1206 05:46:57.711746 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44mcv\" (UniqueName: \"kubernetes.io/projected/90c0ad5e-651b-41ae-9c98-851d6e70d94b-kube-api-access-44mcv\") pod \"openstack-operator-index-l65dd\" (UID: \"90c0ad5e-651b-41ae-9c98-851d6e70d94b\") " pod="openstack-operators/openstack-operator-index-l65dd" Dec 06 05:46:57 crc kubenswrapper[4958]: I1206 05:46:57.733953 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44mcv\" (UniqueName: \"kubernetes.io/projected/90c0ad5e-651b-41ae-9c98-851d6e70d94b-kube-api-access-44mcv\") pod \"openstack-operator-index-l65dd\" (UID: \"90c0ad5e-651b-41ae-9c98-851d6e70d94b\") " pod="openstack-operators/openstack-operator-index-l65dd" Dec 06 05:46:57 crc kubenswrapper[4958]: I1206 05:46:57.928731 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-l65dd" Dec 06 05:46:58 crc kubenswrapper[4958]: I1206 05:46:58.338360 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-l65dd"] Dec 06 05:46:58 crc kubenswrapper[4958]: I1206 05:46:58.347325 4958 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 06 05:46:59 crc kubenswrapper[4958]: I1206 05:46:59.149370 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-l65dd" event={"ID":"90c0ad5e-651b-41ae-9c98-851d6e70d94b","Type":"ContainerStarted","Data":"666cefa68c05a75b3ba355cc85aa079adef743768bbd46c0b22cd818e2baa97a"} Dec 06 05:47:00 crc kubenswrapper[4958]: I1206 05:47:00.160721 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-l65dd" event={"ID":"90c0ad5e-651b-41ae-9c98-851d6e70d94b","Type":"ContainerStarted","Data":"30e48a1a048f94c92bec451490ec8d29fb922c71a5dc7c63421aa02e4470e658"} Dec 06 05:47:00 crc kubenswrapper[4958]: I1206 05:47:00.175930 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-l65dd" podStartSLOduration=2.139796934 podStartE2EDuration="3.175906748s" podCreationTimestamp="2025-12-06 05:46:57 +0000 UTC" firstStartedPulling="2025-12-06 05:46:58.346854219 +0000 UTC m=+1128.880625022" lastFinishedPulling="2025-12-06 05:46:59.382964073 +0000 UTC m=+1129.916734836" observedRunningTime="2025-12-06 05:47:00.175359023 +0000 UTC m=+1130.709129786" watchObservedRunningTime="2025-12-06 05:47:00.175906748 +0000 UTC m=+1130.709677521" Dec 06 05:47:00 crc kubenswrapper[4958]: I1206 05:47:00.983057 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-l65dd"] Dec 06 05:47:01 crc kubenswrapper[4958]: I1206 05:47:01.589464 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-qfcx8"] Dec 06 05:47:01 crc kubenswrapper[4958]: I1206 05:47:01.590389 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-qfcx8" Dec 06 05:47:01 crc kubenswrapper[4958]: I1206 05:47:01.603973 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-qfcx8"] Dec 06 05:47:01 crc kubenswrapper[4958]: I1206 05:47:01.771202 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9q82z\" (UniqueName: \"kubernetes.io/projected/0a035630-f39d-4094-ad86-117dc028950c-kube-api-access-9q82z\") pod \"openstack-operator-index-qfcx8\" (UID: \"0a035630-f39d-4094-ad86-117dc028950c\") " pod="openstack-operators/openstack-operator-index-qfcx8" Dec 06 05:47:01 crc kubenswrapper[4958]: I1206 05:47:01.872745 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9q82z\" (UniqueName: \"kubernetes.io/projected/0a035630-f39d-4094-ad86-117dc028950c-kube-api-access-9q82z\") pod \"openstack-operator-index-qfcx8\" (UID: \"0a035630-f39d-4094-ad86-117dc028950c\") " pod="openstack-operators/openstack-operator-index-qfcx8" Dec 06 05:47:01 crc kubenswrapper[4958]: I1206 05:47:01.902962 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9q82z\" (UniqueName: \"kubernetes.io/projected/0a035630-f39d-4094-ad86-117dc028950c-kube-api-access-9q82z\") pod \"openstack-operator-index-qfcx8\" (UID: \"0a035630-f39d-4094-ad86-117dc028950c\") " pod="openstack-operators/openstack-operator-index-qfcx8" Dec 06 05:47:01 crc kubenswrapper[4958]: I1206 05:47:01.917659 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-qfcx8" Dec 06 05:47:02 crc kubenswrapper[4958]: I1206 05:47:02.177988 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-l65dd" podUID="90c0ad5e-651b-41ae-9c98-851d6e70d94b" containerName="registry-server" containerID="cri-o://30e48a1a048f94c92bec451490ec8d29fb922c71a5dc7c63421aa02e4470e658" gracePeriod=2 Dec 06 05:47:02 crc kubenswrapper[4958]: I1206 05:47:02.331843 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-qfcx8"] Dec 06 05:47:02 crc kubenswrapper[4958]: I1206 05:47:02.501534 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-l65dd" Dec 06 05:47:02 crc kubenswrapper[4958]: I1206 05:47:02.683229 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-44mcv\" (UniqueName: \"kubernetes.io/projected/90c0ad5e-651b-41ae-9c98-851d6e70d94b-kube-api-access-44mcv\") pod \"90c0ad5e-651b-41ae-9c98-851d6e70d94b\" (UID: \"90c0ad5e-651b-41ae-9c98-851d6e70d94b\") " Dec 06 05:47:02 crc kubenswrapper[4958]: I1206 05:47:02.689039 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90c0ad5e-651b-41ae-9c98-851d6e70d94b-kube-api-access-44mcv" (OuterVolumeSpecName: "kube-api-access-44mcv") pod "90c0ad5e-651b-41ae-9c98-851d6e70d94b" (UID: "90c0ad5e-651b-41ae-9c98-851d6e70d94b"). InnerVolumeSpecName "kube-api-access-44mcv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:47:02 crc kubenswrapper[4958]: I1206 05:47:02.785427 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-44mcv\" (UniqueName: \"kubernetes.io/projected/90c0ad5e-651b-41ae-9c98-851d6e70d94b-kube-api-access-44mcv\") on node \"crc\" DevicePath \"\"" Dec 06 05:47:03 crc kubenswrapper[4958]: I1206 05:47:03.192920 4958 generic.go:334] "Generic (PLEG): container finished" podID="90c0ad5e-651b-41ae-9c98-851d6e70d94b" containerID="30e48a1a048f94c92bec451490ec8d29fb922c71a5dc7c63421aa02e4470e658" exitCode=0 Dec 06 05:47:03 crc kubenswrapper[4958]: I1206 05:47:03.192998 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-l65dd" Dec 06 05:47:03 crc kubenswrapper[4958]: I1206 05:47:03.193002 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-l65dd" event={"ID":"90c0ad5e-651b-41ae-9c98-851d6e70d94b","Type":"ContainerDied","Data":"30e48a1a048f94c92bec451490ec8d29fb922c71a5dc7c63421aa02e4470e658"} Dec 06 05:47:03 crc kubenswrapper[4958]: I1206 05:47:03.193148 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-l65dd" event={"ID":"90c0ad5e-651b-41ae-9c98-851d6e70d94b","Type":"ContainerDied","Data":"666cefa68c05a75b3ba355cc85aa079adef743768bbd46c0b22cd818e2baa97a"} Dec 06 05:47:03 crc kubenswrapper[4958]: I1206 05:47:03.193189 4958 scope.go:117] "RemoveContainer" containerID="30e48a1a048f94c92bec451490ec8d29fb922c71a5dc7c63421aa02e4470e658" Dec 06 05:47:03 crc kubenswrapper[4958]: I1206 05:47:03.195938 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-qfcx8" event={"ID":"0a035630-f39d-4094-ad86-117dc028950c","Type":"ContainerStarted","Data":"c599cadf9e7db7f019742681ba41e2bb1a90c60393cafbff677f58187394cb61"} Dec 06 05:47:03 crc kubenswrapper[4958]: I1206 05:47:03.195981 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-qfcx8" event={"ID":"0a035630-f39d-4094-ad86-117dc028950c","Type":"ContainerStarted","Data":"d1581e79259b55b834aaef15fed996375a6d8fc595ef4cc176115ba0d3008e14"} Dec 06 05:47:03 crc kubenswrapper[4958]: I1206 05:47:03.209894 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-pvdb2" Dec 06 05:47:03 crc kubenswrapper[4958]: I1206 05:47:03.220699 4958 scope.go:117] "RemoveContainer" containerID="30e48a1a048f94c92bec451490ec8d29fb922c71a5dc7c63421aa02e4470e658" Dec 06 05:47:03 crc kubenswrapper[4958]: E1206 05:47:03.221460 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30e48a1a048f94c92bec451490ec8d29fb922c71a5dc7c63421aa02e4470e658\": container with ID starting with 30e48a1a048f94c92bec451490ec8d29fb922c71a5dc7c63421aa02e4470e658 not found: ID does not exist" containerID="30e48a1a048f94c92bec451490ec8d29fb922c71a5dc7c63421aa02e4470e658" Dec 06 05:47:03 crc kubenswrapper[4958]: I1206 05:47:03.221529 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30e48a1a048f94c92bec451490ec8d29fb922c71a5dc7c63421aa02e4470e658"} err="failed to get container status \"30e48a1a048f94c92bec451490ec8d29fb922c71a5dc7c63421aa02e4470e658\": rpc error: code = NotFound desc = could not find container 
\"30e48a1a048f94c92bec451490ec8d29fb922c71a5dc7c63421aa02e4470e658\": container with ID starting with 30e48a1a048f94c92bec451490ec8d29fb922c71a5dc7c63421aa02e4470e658 not found: ID does not exist" Dec 06 05:47:03 crc kubenswrapper[4958]: I1206 05:47:03.222506 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-qfcx8" podStartSLOduration=1.807142558 podStartE2EDuration="2.222493842s" podCreationTimestamp="2025-12-06 05:47:01 +0000 UTC" firstStartedPulling="2025-12-06 05:47:02.349373588 +0000 UTC m=+1132.883144361" lastFinishedPulling="2025-12-06 05:47:02.764724882 +0000 UTC m=+1133.298495645" observedRunningTime="2025-12-06 05:47:03.213357997 +0000 UTC m=+1133.747128760" watchObservedRunningTime="2025-12-06 05:47:03.222493842 +0000 UTC m=+1133.756264595" Dec 06 05:47:03 crc kubenswrapper[4958]: I1206 05:47:03.272821 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-l65dd"] Dec 06 05:47:03 crc kubenswrapper[4958]: I1206 05:47:03.278378 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-l65dd"] Dec 06 05:47:03 crc kubenswrapper[4958]: I1206 05:47:03.769099 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90c0ad5e-651b-41ae-9c98-851d6e70d94b" path="/var/lib/kubelet/pods/90c0ad5e-651b-41ae-9c98-851d6e70d94b/volumes" Dec 06 05:47:11 crc kubenswrapper[4958]: I1206 05:47:11.922340 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-qfcx8" Dec 06 05:47:11 crc kubenswrapper[4958]: I1206 05:47:11.923001 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-qfcx8" Dec 06 05:47:11 crc kubenswrapper[4958]: I1206 05:47:11.956151 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-qfcx8" Dec 06 05:47:12 crc kubenswrapper[4958]: I1206 05:47:12.279278 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-qfcx8" Dec 06 05:47:13 crc kubenswrapper[4958]: I1206 05:47:13.836030 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafxrhgq"] Dec 06 05:47:13 crc kubenswrapper[4958]: E1206 05:47:13.836527 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90c0ad5e-651b-41ae-9c98-851d6e70d94b" containerName="registry-server" Dec 06 05:47:13 crc kubenswrapper[4958]: I1206 05:47:13.836559 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="90c0ad5e-651b-41ae-9c98-851d6e70d94b" containerName="registry-server" Dec 06 05:47:13 crc kubenswrapper[4958]: I1206 05:47:13.836872 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="90c0ad5e-651b-41ae-9c98-851d6e70d94b" containerName="registry-server" Dec 06 05:47:13 crc kubenswrapper[4958]: I1206 05:47:13.839024 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafxrhgq" Dec 06 05:47:13 crc kubenswrapper[4958]: I1206 05:47:13.841159 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-hmc66" Dec 06 05:47:13 crc kubenswrapper[4958]: I1206 05:47:13.843736 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafxrhgq"] Dec 06 05:47:13 crc kubenswrapper[4958]: I1206 05:47:13.947585 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ed7e4cbb-11de-4e0d-9e18-ef37fd78d9e5-util\") pod \"917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafxrhgq\" (UID: \"ed7e4cbb-11de-4e0d-9e18-ef37fd78d9e5\") " pod="openstack-operators/917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafxrhgq" Dec 06 05:47:13 crc kubenswrapper[4958]: I1206 05:47:13.947690 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkkx6\" (UniqueName: \"kubernetes.io/projected/ed7e4cbb-11de-4e0d-9e18-ef37fd78d9e5-kube-api-access-mkkx6\") pod \"917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafxrhgq\" (UID: \"ed7e4cbb-11de-4e0d-9e18-ef37fd78d9e5\") " pod="openstack-operators/917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafxrhgq" Dec 06 05:47:13 crc kubenswrapper[4958]: I1206 05:47:13.947908 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ed7e4cbb-11de-4e0d-9e18-ef37fd78d9e5-bundle\") pod \"917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafxrhgq\" (UID: \"ed7e4cbb-11de-4e0d-9e18-ef37fd78d9e5\") " pod="openstack-operators/917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafxrhgq" Dec 06 05:47:14 crc kubenswrapper[4958]: I1206 05:47:14.049312 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ed7e4cbb-11de-4e0d-9e18-ef37fd78d9e5-bundle\") pod \"917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafxrhgq\" (UID: \"ed7e4cbb-11de-4e0d-9e18-ef37fd78d9e5\") " pod="openstack-operators/917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafxrhgq" Dec 06 05:47:14 crc kubenswrapper[4958]: I1206 05:47:14.049425 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ed7e4cbb-11de-4e0d-9e18-ef37fd78d9e5-util\") pod \"917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafxrhgq\" (UID: \"ed7e4cbb-11de-4e0d-9e18-ef37fd78d9e5\") " pod="openstack-operators/917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafxrhgq" Dec 06 05:47:14 crc kubenswrapper[4958]: I1206 05:47:14.049466 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkkx6\" (UniqueName: \"kubernetes.io/projected/ed7e4cbb-11de-4e0d-9e18-ef37fd78d9e5-kube-api-access-mkkx6\") pod \"917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafxrhgq\" (UID: \"ed7e4cbb-11de-4e0d-9e18-ef37fd78d9e5\") " pod="openstack-operators/917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafxrhgq" Dec 06 05:47:14 crc kubenswrapper[4958]: I1206 05:47:14.049993 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/ed7e4cbb-11de-4e0d-9e18-ef37fd78d9e5-bundle\") pod \"917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafxrhgq\" (UID: \"ed7e4cbb-11de-4e0d-9e18-ef37fd78d9e5\") " pod="openstack-operators/917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafxrhgq" Dec 06 05:47:14 crc kubenswrapper[4958]: I1206 05:47:14.050148 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ed7e4cbb-11de-4e0d-9e18-ef37fd78d9e5-util\") pod \"917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafxrhgq\" (UID: \"ed7e4cbb-11de-4e0d-9e18-ef37fd78d9e5\") " pod="openstack-operators/917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafxrhgq" Dec 06 05:47:14 crc kubenswrapper[4958]: I1206 05:47:14.071147 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkkx6\" (UniqueName: \"kubernetes.io/projected/ed7e4cbb-11de-4e0d-9e18-ef37fd78d9e5-kube-api-access-mkkx6\") pod \"917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafxrhgq\" (UID: \"ed7e4cbb-11de-4e0d-9e18-ef37fd78d9e5\") " pod="openstack-operators/917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafxrhgq" Dec 06 05:47:14 crc kubenswrapper[4958]: I1206 05:47:14.162511 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafxrhgq" Dec 06 05:47:14 crc kubenswrapper[4958]: I1206 05:47:14.569678 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafxrhgq"] Dec 06 05:47:15 crc kubenswrapper[4958]: I1206 05:47:15.276438 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafxrhgq" event={"ID":"ed7e4cbb-11de-4e0d-9e18-ef37fd78d9e5","Type":"ContainerStarted","Data":"59078a76304d4a551703d2b8ffc100f7b0258f01d142a0026c99a2e6d52581c9"} Dec 06 05:47:17 crc kubenswrapper[4958]: I1206 05:47:17.290733 4958 generic.go:334] "Generic (PLEG): container finished" podID="ed7e4cbb-11de-4e0d-9e18-ef37fd78d9e5" containerID="8f1aea979dbcf563391b048c7da57645ad131987df83d461ba16a4dc7bbdc3ca" exitCode=0 Dec 06 05:47:17 crc kubenswrapper[4958]: I1206 05:47:17.290846 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafxrhgq" event={"ID":"ed7e4cbb-11de-4e0d-9e18-ef37fd78d9e5","Type":"ContainerDied","Data":"8f1aea979dbcf563391b048c7da57645ad131987df83d461ba16a4dc7bbdc3ca"} Dec 06 05:47:18 crc kubenswrapper[4958]: I1206 05:47:18.300056 4958 generic.go:334] "Generic (PLEG): container finished" podID="ed7e4cbb-11de-4e0d-9e18-ef37fd78d9e5" containerID="242abc65805f488ce7a3eff9b6eed00ab54c00981e448988963898a9916270d2" exitCode=0 Dec 06 05:47:18 crc kubenswrapper[4958]: I1206 05:47:18.300145 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafxrhgq" event={"ID":"ed7e4cbb-11de-4e0d-9e18-ef37fd78d9e5","Type":"ContainerDied","Data":"242abc65805f488ce7a3eff9b6eed00ab54c00981e448988963898a9916270d2"} Dec 06 05:47:19 crc kubenswrapper[4958]: I1206 05:47:19.311526 4958 generic.go:334] "Generic (PLEG): container finished" podID="ed7e4cbb-11de-4e0d-9e18-ef37fd78d9e5" containerID="42a7d4bb0a1dbfdb6336518a91e07451f6b1cb1f4d0ece8e3fef8ee2fd0a9685" exitCode=0 Dec 06 05:47:19 crc kubenswrapper[4958]: I1206 05:47:19.311606 4958 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafxrhgq" event={"ID":"ed7e4cbb-11de-4e0d-9e18-ef37fd78d9e5","Type":"ContainerDied","Data":"42a7d4bb0a1dbfdb6336518a91e07451f6b1cb1f4d0ece8e3fef8ee2fd0a9685"} Dec 06 05:47:20 crc kubenswrapper[4958]: I1206 05:47:20.577225 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafxrhgq" Dec 06 05:47:20 crc kubenswrapper[4958]: I1206 05:47:20.649323 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ed7e4cbb-11de-4e0d-9e18-ef37fd78d9e5-util\") pod \"ed7e4cbb-11de-4e0d-9e18-ef37fd78d9e5\" (UID: \"ed7e4cbb-11de-4e0d-9e18-ef37fd78d9e5\") " Dec 06 05:47:20 crc kubenswrapper[4958]: I1206 05:47:20.649429 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ed7e4cbb-11de-4e0d-9e18-ef37fd78d9e5-bundle\") pod \"ed7e4cbb-11de-4e0d-9e18-ef37fd78d9e5\" (UID: \"ed7e4cbb-11de-4e0d-9e18-ef37fd78d9e5\") " Dec 06 05:47:20 crc kubenswrapper[4958]: I1206 05:47:20.649460 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mkkx6\" (UniqueName: \"kubernetes.io/projected/ed7e4cbb-11de-4e0d-9e18-ef37fd78d9e5-kube-api-access-mkkx6\") pod \"ed7e4cbb-11de-4e0d-9e18-ef37fd78d9e5\" (UID: \"ed7e4cbb-11de-4e0d-9e18-ef37fd78d9e5\") " Dec 06 05:47:20 crc kubenswrapper[4958]: I1206 05:47:20.650163 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed7e4cbb-11de-4e0d-9e18-ef37fd78d9e5-bundle" (OuterVolumeSpecName: "bundle") pod "ed7e4cbb-11de-4e0d-9e18-ef37fd78d9e5" (UID: "ed7e4cbb-11de-4e0d-9e18-ef37fd78d9e5"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 05:47:20 crc kubenswrapper[4958]: I1206 05:47:20.655317 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed7e4cbb-11de-4e0d-9e18-ef37fd78d9e5-kube-api-access-mkkx6" (OuterVolumeSpecName: "kube-api-access-mkkx6") pod "ed7e4cbb-11de-4e0d-9e18-ef37fd78d9e5" (UID: "ed7e4cbb-11de-4e0d-9e18-ef37fd78d9e5"). InnerVolumeSpecName "kube-api-access-mkkx6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:47:20 crc kubenswrapper[4958]: I1206 05:47:20.662900 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ed7e4cbb-11de-4e0d-9e18-ef37fd78d9e5-util" (OuterVolumeSpecName: "util") pod "ed7e4cbb-11de-4e0d-9e18-ef37fd78d9e5" (UID: "ed7e4cbb-11de-4e0d-9e18-ef37fd78d9e5"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 05:47:20 crc kubenswrapper[4958]: I1206 05:47:20.751065 4958 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ed7e4cbb-11de-4e0d-9e18-ef37fd78d9e5-util\") on node \"crc\" DevicePath \"\"" Dec 06 05:47:20 crc kubenswrapper[4958]: I1206 05:47:20.751093 4958 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ed7e4cbb-11de-4e0d-9e18-ef37fd78d9e5-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 05:47:20 crc kubenswrapper[4958]: I1206 05:47:20.751103 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mkkx6\" (UniqueName: \"kubernetes.io/projected/ed7e4cbb-11de-4e0d-9e18-ef37fd78d9e5-kube-api-access-mkkx6\") on node \"crc\" DevicePath \"\"" Dec 06 05:47:21 crc kubenswrapper[4958]: I1206 05:47:21.326654 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafxrhgq" event={"ID":"ed7e4cbb-11de-4e0d-9e18-ef37fd78d9e5","Type":"ContainerDied","Data":"59078a76304d4a551703d2b8ffc100f7b0258f01d142a0026c99a2e6d52581c9"} Dec 06 05:47:21 crc kubenswrapper[4958]: I1206 05:47:21.326691 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="59078a76304d4a551703d2b8ffc100f7b0258f01d142a0026c99a2e6d52581c9" Dec 06 05:47:21 crc kubenswrapper[4958]: I1206 05:47:21.326722 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafxrhgq" Dec 06 05:47:26 crc kubenswrapper[4958]: I1206 05:47:26.964122 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-55b6fb9447-2pt4t"] Dec 06 05:47:26 crc kubenswrapper[4958]: E1206 05:47:26.964960 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed7e4cbb-11de-4e0d-9e18-ef37fd78d9e5" containerName="util" Dec 06 05:47:26 crc kubenswrapper[4958]: I1206 05:47:26.964976 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed7e4cbb-11de-4e0d-9e18-ef37fd78d9e5" containerName="util" Dec 06 05:47:26 crc kubenswrapper[4958]: E1206 05:47:26.965002 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed7e4cbb-11de-4e0d-9e18-ef37fd78d9e5" containerName="extract" Dec 06 05:47:26 crc kubenswrapper[4958]: I1206 05:47:26.965009 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed7e4cbb-11de-4e0d-9e18-ef37fd78d9e5" containerName="extract" Dec 06 05:47:26 crc kubenswrapper[4958]: E1206 05:47:26.965025 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed7e4cbb-11de-4e0d-9e18-ef37fd78d9e5" containerName="pull" Dec 06 05:47:26 crc kubenswrapper[4958]: I1206 05:47:26.965033 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed7e4cbb-11de-4e0d-9e18-ef37fd78d9e5" containerName="pull" Dec 06 05:47:26 crc kubenswrapper[4958]: I1206 05:47:26.965169 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed7e4cbb-11de-4e0d-9e18-ef37fd78d9e5" containerName="extract" Dec 06 05:47:26 crc kubenswrapper[4958]: I1206 05:47:26.965681 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-55b6fb9447-2pt4t" Dec 06 05:47:26 crc kubenswrapper[4958]: I1206 05:47:26.971781 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-p9lrg" Dec 06 05:47:27 crc kubenswrapper[4958]: I1206 05:47:27.001626 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-55b6fb9447-2pt4t"] Dec 06 05:47:27 crc kubenswrapper[4958]: I1206 05:47:27.032589 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zk2wq\" (UniqueName: \"kubernetes.io/projected/44ba97c3-c965-4c5a-b712-d9654788f04c-kube-api-access-zk2wq\") pod \"openstack-operator-controller-operator-55b6fb9447-2pt4t\" (UID: \"44ba97c3-c965-4c5a-b712-d9654788f04c\") " pod="openstack-operators/openstack-operator-controller-operator-55b6fb9447-2pt4t" Dec 06 05:47:27 crc kubenswrapper[4958]: I1206 05:47:27.133442 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zk2wq\" (UniqueName: \"kubernetes.io/projected/44ba97c3-c965-4c5a-b712-d9654788f04c-kube-api-access-zk2wq\") pod \"openstack-operator-controller-operator-55b6fb9447-2pt4t\" (UID: \"44ba97c3-c965-4c5a-b712-d9654788f04c\") " pod="openstack-operators/openstack-operator-controller-operator-55b6fb9447-2pt4t" Dec 06 05:47:27 crc kubenswrapper[4958]: I1206 05:47:27.153052 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zk2wq\" (UniqueName: \"kubernetes.io/projected/44ba97c3-c965-4c5a-b712-d9654788f04c-kube-api-access-zk2wq\") pod \"openstack-operator-controller-operator-55b6fb9447-2pt4t\" (UID: \"44ba97c3-c965-4c5a-b712-d9654788f04c\") " pod="openstack-operators/openstack-operator-controller-operator-55b6fb9447-2pt4t" Dec 06 05:47:27 crc kubenswrapper[4958]: I1206 05:47:27.286307 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-55b6fb9447-2pt4t" Dec 06 05:47:27 crc kubenswrapper[4958]: I1206 05:47:27.554020 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-55b6fb9447-2pt4t"] Dec 06 05:47:28 crc kubenswrapper[4958]: I1206 05:47:28.414803 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-55b6fb9447-2pt4t" event={"ID":"44ba97c3-c965-4c5a-b712-d9654788f04c","Type":"ContainerStarted","Data":"52fa30da01b638b072435b2e3f63ab9621f7673b1d1b78e017d883012afe3715"} Dec 06 05:47:32 crc kubenswrapper[4958]: I1206 05:47:32.445093 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-55b6fb9447-2pt4t" event={"ID":"44ba97c3-c965-4c5a-b712-d9654788f04c","Type":"ContainerStarted","Data":"5807d6497aa709f6beb22be9c15b0c328adf429a2f01d79babd7d1d3593253b5"} Dec 06 05:47:32 crc kubenswrapper[4958]: I1206 05:47:32.445673 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-55b6fb9447-2pt4t" Dec 06 05:47:32 crc kubenswrapper[4958]: I1206 05:47:32.471967 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-55b6fb9447-2pt4t" podStartSLOduration=2.333546749 podStartE2EDuration="6.47194726s" podCreationTimestamp="2025-12-06 05:47:26 +0000 UTC" firstStartedPulling="2025-12-06 05:47:27.571620095 +0000 UTC m=+1158.105390858" lastFinishedPulling="2025-12-06 05:47:31.710020606 +0000 UTC m=+1162.243791369" observedRunningTime="2025-12-06 05:47:32.4689838 +0000 UTC m=+1163.002754573" watchObservedRunningTime="2025-12-06 05:47:32.47194726 +0000 UTC m=+1163.005718023" Dec 06 05:47:37 crc kubenswrapper[4958]: I1206 05:47:37.290129 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-55b6fb9447-2pt4t" Dec 06 05:47:56 crc kubenswrapper[4958]: I1206 05:47:56.511126 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-n2s2h"] Dec 06 05:47:56 crc kubenswrapper[4958]: I1206 05:47:56.512874 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-n2s2h" Dec 06 05:47:56 crc kubenswrapper[4958]: W1206 05:47:56.514955 4958 reflector.go:561] object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-dvwts": failed to list *v1.Secret: secrets "barbican-operator-controller-manager-dockercfg-dvwts" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openstack-operators": no relationship found between node 'crc' and this object Dec 06 05:47:56 crc kubenswrapper[4958]: E1206 05:47:56.514997 4958 reflector.go:158] "Unhandled Error" err="object-\"openstack-operators\"/\"barbican-operator-controller-manager-dockercfg-dvwts\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"barbican-operator-controller-manager-dockercfg-dvwts\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openstack-operators\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 06 05:47:56 crc kubenswrapper[4958]: I1206 05:47:56.540698 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-859b6ccc6-kfk67"] Dec 06 05:47:56 crc kubenswrapper[4958]: I1206 05:47:56.541987 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-n2s2h"] Dec 06 05:47:56 crc kubenswrapper[4958]: I1206 05:47:56.542083 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-kfk67" Dec 06 05:47:56 crc kubenswrapper[4958]: I1206 05:47:56.545988 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-wczmk" Dec 06 05:47:56 crc kubenswrapper[4958]: I1206 05:47:56.551613 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-859b6ccc6-kfk67"] Dec 06 05:47:56 crc kubenswrapper[4958]: I1206 05:47:56.563087 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-78b4bc895b-khhwt"] Dec 06 05:47:56 crc kubenswrapper[4958]: I1206 05:47:56.564316 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-khhwt" Dec 06 05:47:56 crc kubenswrapper[4958]: I1206 05:47:56.566880 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-mrvxh" Dec 06 05:47:56 crc kubenswrapper[4958]: I1206 05:47:56.574085 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-78b4bc895b-khhwt"] Dec 06 05:47:56 crc kubenswrapper[4958]: I1206 05:47:56.587594 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987cd8cd-bg69j"] Dec 06 05:47:56 crc kubenswrapper[4958]: I1206 05:47:56.588769 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-bg69j" Dec 06 05:47:56 crc kubenswrapper[4958]: I1206 05:47:56.594837 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987cd8cd-bg69j"] Dec 06 05:47:56 crc kubenswrapper[4958]: I1206 05:47:56.595614 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-c7rws" Dec 06 05:47:56 crc kubenswrapper[4958]: I1206 05:47:56.656522 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-v2ktl"] Dec 06 05:47:56 crc kubenswrapper[4958]: I1206 05:47:56.664410 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-v2ktl" Dec 06 05:47:56 crc kubenswrapper[4958]: I1206 05:47:56.674997 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmv77\" (UniqueName: \"kubernetes.io/projected/1e47b3e1-d11a-4a15-8a92-24fe19661ee7-kube-api-access-mmv77\") pod \"barbican-operator-controller-manager-7d9dfd778-n2s2h\" (UID: \"1e47b3e1-d11a-4a15-8a92-24fe19661ee7\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-n2s2h" Dec 06 05:47:56 crc kubenswrapper[4958]: I1206 05:47:56.675080 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvs6k\" (UniqueName: \"kubernetes.io/projected/a069026c-ab13-4593-9f99-71aa6fca2ecd-kube-api-access-vvs6k\") pod \"designate-operator-controller-manager-78b4bc895b-khhwt\" (UID: \"a069026c-ab13-4593-9f99-71aa6fca2ecd\") " pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-khhwt" Dec 06 05:47:56 crc kubenswrapper[4958]: I1206 05:47:56.675121 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssw7q\" (UniqueName: \"kubernetes.io/projected/ddaab3a6-6e32-480b-8ba8-3852feb6440f-kube-api-access-ssw7q\") pod \"glance-operator-controller-manager-77987cd8cd-bg69j\" (UID: \"ddaab3a6-6e32-480b-8ba8-3852feb6440f\") " pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-bg69j" Dec 06 05:47:56 crc kubenswrapper[4958]: I1206 05:47:56.675146 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzs89\" (UniqueName: \"kubernetes.io/projected/6a934eab-3341-4e53-8317-eca91e0e9710-kube-api-access-hzs89\") pod \"cinder-operator-controller-manager-859b6ccc6-kfk67\" (UID: \"6a934eab-3341-4e53-8317-eca91e0e9710\") " pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-kfk67" Dec 06 05:47:56 crc kubenswrapper[4958]: I1206 05:47:56.677049 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-47rwx" Dec 06 05:47:56 crc kubenswrapper[4958]: I1206 05:47:56.696514 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-v2ktl"] Dec 06 05:47:56 crc kubenswrapper[4958]: I1206 05:47:56.711408 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-gnstm"] Dec 06 05:47:56 crc kubenswrapper[4958]: I1206 05:47:56.712771 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-gnstm" Dec 06 05:47:56 crc kubenswrapper[4958]: I1206 05:47:56.715557 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-57548d458d-wv9rd"] Dec 06 05:47:56 crc kubenswrapper[4958]: I1206 05:47:56.716314 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-57548d458d-wv9rd" Dec 06 05:47:56 crc kubenswrapper[4958]: I1206 05:47:56.717752 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Dec 06 05:47:56 crc kubenswrapper[4958]: I1206 05:47:56.719406 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-jp6cl" Dec 06 05:47:56 crc kubenswrapper[4958]: I1206 05:47:56.719412 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-grfx7" Dec 06 05:47:56 crc kubenswrapper[4958]: I1206 05:47:56.726551 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-gnstm"] Dec 06 05:47:56 crc kubenswrapper[4958]: I1206 05:47:56.745037 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6c548fd776-hx8tj"] Dec 06 05:47:56 crc kubenswrapper[4958]: I1206 05:47:56.745952 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-hx8tj" Dec 06 05:47:56 crc kubenswrapper[4958]: I1206 05:47:56.751118 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-hmfbg" Dec 06 05:47:56 crc kubenswrapper[4958]: I1206 05:47:56.760498 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-57548d458d-wv9rd"] Dec 06 05:47:56 crc kubenswrapper[4958]: I1206 05:47:56.772629 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7765d96ddf-sq7fz"] Dec 06 05:47:56 crc kubenswrapper[4958]: I1206 05:47:56.773877 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-sq7fz" Dec 06 05:47:56 crc kubenswrapper[4958]: I1206 05:47:56.776463 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-gw7nw" Dec 06 05:47:56 crc kubenswrapper[4958]: I1206 05:47:56.778123 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sggll\" (UniqueName: \"kubernetes.io/projected/ebcc81d0-3595-480b-a886-1ec0e5da638d-kube-api-access-sggll\") pod \"horizon-operator-controller-manager-68c6d99b8f-gnstm\" (UID: \"ebcc81d0-3595-480b-a886-1ec0e5da638d\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-gnstm" Dec 06 05:47:56 crc kubenswrapper[4958]: I1206 05:47:56.778163 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ssw7q\" (UniqueName: \"kubernetes.io/projected/ddaab3a6-6e32-480b-8ba8-3852feb6440f-kube-api-access-ssw7q\") pod \"glance-operator-controller-manager-77987cd8cd-bg69j\" (UID: \"ddaab3a6-6e32-480b-8ba8-3852feb6440f\") " pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-bg69j" Dec 06 05:47:56 crc kubenswrapper[4958]: I1206 05:47:56.778190 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzs89\" (UniqueName: \"kubernetes.io/projected/6a934eab-3341-4e53-8317-eca91e0e9710-kube-api-access-hzs89\") pod \"cinder-operator-controller-manager-859b6ccc6-kfk67\" (UID: \"6a934eab-3341-4e53-8317-eca91e0e9710\") " pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-kfk67" Dec 06 05:47:56 crc kubenswrapper[4958]: I1206 05:47:56.778215 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxlhx\" (UniqueName: \"kubernetes.io/projected/17c0b87a-3ce4-434e-bbbc-cf06bd3c2833-kube-api-access-cxlhx\") pod \"heat-operator-controller-manager-5f64f6f8bb-v2ktl\" (UID: \"17c0b87a-3ce4-434e-bbbc-cf06bd3c2833\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-v2ktl" Dec 06 05:47:56 crc kubenswrapper[4958]: I1206 05:47:56.778237 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grgl7\" (UniqueName: \"kubernetes.io/projected/d0b85019-5501-4f67-a136-f7798be67039-kube-api-access-grgl7\") pod \"infra-operator-controller-manager-57548d458d-wv9rd\" (UID: \"d0b85019-5501-4f67-a136-f7798be67039\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-wv9rd" Dec 06 05:47:56 crc kubenswrapper[4958]: I1206 05:47:56.778258 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmv77\" (UniqueName: \"kubernetes.io/projected/1e47b3e1-d11a-4a15-8a92-24fe19661ee7-kube-api-access-mmv77\") pod \"barbican-operator-controller-manager-7d9dfd778-n2s2h\" (UID: \"1e47b3e1-d11a-4a15-8a92-24fe19661ee7\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-n2s2h" Dec 06 05:47:56 crc kubenswrapper[4958]: I1206 05:47:56.778447 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d0b85019-5501-4f67-a136-f7798be67039-cert\") pod \"infra-operator-controller-manager-57548d458d-wv9rd\" (UID: \"d0b85019-5501-4f67-a136-f7798be67039\") " 
pod="openstack-operators/infra-operator-controller-manager-57548d458d-wv9rd" Dec 06 05:47:56 crc kubenswrapper[4958]: I1206 05:47:56.778762 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqhgq\" (UniqueName: \"kubernetes.io/projected/15c5254b-bcb1-45a3-a94b-21995bd4a143-kube-api-access-pqhgq\") pod \"ironic-operator-controller-manager-6c548fd776-hx8tj\" (UID: \"15c5254b-bcb1-45a3-a94b-21995bd4a143\") " pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-hx8tj" Dec 06 05:47:56 crc kubenswrapper[4958]: I1206 05:47:56.778801 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvs6k\" (UniqueName: \"kubernetes.io/projected/a069026c-ab13-4593-9f99-71aa6fca2ecd-kube-api-access-vvs6k\") pod \"designate-operator-controller-manager-78b4bc895b-khhwt\" (UID: \"a069026c-ab13-4593-9f99-71aa6fca2ecd\") " pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-khhwt" Dec 06 05:47:56 crc kubenswrapper[4958]: I1206 05:47:56.785492 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6c548fd776-hx8tj"] Dec 06 05:47:56 crc kubenswrapper[4958]: I1206 05:47:56.793559 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7765d96ddf-sq7fz"] Dec 06 05:47:56 crc kubenswrapper[4958]: I1206 05:47:56.808638 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ssw7q\" (UniqueName: \"kubernetes.io/projected/ddaab3a6-6e32-480b-8ba8-3852feb6440f-kube-api-access-ssw7q\") pod \"glance-operator-controller-manager-77987cd8cd-bg69j\" (UID: \"ddaab3a6-6e32-480b-8ba8-3852feb6440f\") " pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-bg69j" Dec 06 05:47:56 crc kubenswrapper[4958]: I1206 05:47:56.814203 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzs89\" (UniqueName: \"kubernetes.io/projected/6a934eab-3341-4e53-8317-eca91e0e9710-kube-api-access-hzs89\") pod \"cinder-operator-controller-manager-859b6ccc6-kfk67\" (UID: \"6a934eab-3341-4e53-8317-eca91e0e9710\") " pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-kfk67" Dec 06 05:47:56 crc kubenswrapper[4958]: I1206 05:47:56.814278 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-7c79b5df47-6vcd7"] Dec 06 05:47:56 crc kubenswrapper[4958]: I1206 05:47:56.814589 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmv77\" (UniqueName: \"kubernetes.io/projected/1e47b3e1-d11a-4a15-8a92-24fe19661ee7-kube-api-access-mmv77\") pod \"barbican-operator-controller-manager-7d9dfd778-n2s2h\" (UID: \"1e47b3e1-d11a-4a15-8a92-24fe19661ee7\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-n2s2h" Dec 06 05:47:56 crc kubenswrapper[4958]: I1206 05:47:56.815617 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-6vcd7" Dec 06 05:47:56 crc kubenswrapper[4958]: I1206 05:47:56.822802 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-v5zlw" Dec 06 05:47:56 crc kubenswrapper[4958]: I1206 05:47:56.823624 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-7745k"] Dec 06 05:47:56 crc kubenswrapper[4958]: I1206 05:47:56.824830 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-7745k" Dec 06 05:47:56 crc kubenswrapper[4958]: I1206 05:47:56.826998 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-gbdtp" Dec 06 05:47:56 crc kubenswrapper[4958]: I1206 05:47:56.832247 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvs6k\" (UniqueName: \"kubernetes.io/projected/a069026c-ab13-4593-9f99-71aa6fca2ecd-kube-api-access-vvs6k\") pod \"designate-operator-controller-manager-78b4bc895b-khhwt\" (UID: \"a069026c-ab13-4593-9f99-71aa6fca2ecd\") " pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-khhwt" Dec 06 05:47:56 crc kubenswrapper[4958]: I1206 05:47:56.853421 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-h2s2k"] Dec 06 05:47:56 crc kubenswrapper[4958]: I1206 05:47:56.854384 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-h2s2k" Dec 06 05:47:56 crc kubenswrapper[4958]: I1206 05:47:56.856226 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-kfk67" Dec 06 05:47:56 crc kubenswrapper[4958]: I1206 05:47:56.861686 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-f88n9" Dec 06 05:47:56 crc kubenswrapper[4958]: I1206 05:47:56.866650 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7c79b5df47-6vcd7"] Dec 06 05:47:56 crc kubenswrapper[4958]: I1206 05:47:56.880193 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqhgq\" (UniqueName: \"kubernetes.io/projected/15c5254b-bcb1-45a3-a94b-21995bd4a143-kube-api-access-pqhgq\") pod \"ironic-operator-controller-manager-6c548fd776-hx8tj\" (UID: \"15c5254b-bcb1-45a3-a94b-21995bd4a143\") " pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-hx8tj" Dec 06 05:47:56 crc kubenswrapper[4958]: I1206 05:47:56.880243 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gj7x\" (UniqueName: \"kubernetes.io/projected/137f3e2e-f835-48ca-873c-41fe38a6d7f2-kube-api-access-6gj7x\") pod \"keystone-operator-controller-manager-7765d96ddf-sq7fz\" (UID: \"137f3e2e-f835-48ca-873c-41fe38a6d7f2\") " pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-sq7fz" Dec 06 05:47:56 crc kubenswrapper[4958]: I1206 05:47:56.880285 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sggll\" (UniqueName: \"kubernetes.io/projected/ebcc81d0-3595-480b-a886-1ec0e5da638d-kube-api-access-sggll\") pod \"horizon-operator-controller-manager-68c6d99b8f-gnstm\" (UID: \"ebcc81d0-3595-480b-a886-1ec0e5da638d\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-gnstm" Dec 06 05:47:56 crc kubenswrapper[4958]: I1206 05:47:56.880361 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxlhx\" (UniqueName: \"kubernetes.io/projected/17c0b87a-3ce4-434e-bbbc-cf06bd3c2833-kube-api-access-cxlhx\") pod \"heat-operator-controller-manager-5f64f6f8bb-v2ktl\" (UID: \"17c0b87a-3ce4-434e-bbbc-cf06bd3c2833\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-v2ktl" Dec 06 05:47:56 crc kubenswrapper[4958]: I1206 05:47:56.880380 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpvdg\" (UniqueName: \"kubernetes.io/projected/6f35dbd3-cf3b-46f8-83cf-911ea6a88679-kube-api-access-mpvdg\") pod \"manila-operator-controller-manager-7c79b5df47-6vcd7\" (UID: \"6f35dbd3-cf3b-46f8-83cf-911ea6a88679\") " pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-6vcd7" Dec 06 05:47:56 crc kubenswrapper[4958]: I1206 05:47:56.880399 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grgl7\" (UniqueName: \"kubernetes.io/projected/d0b85019-5501-4f67-a136-f7798be67039-kube-api-access-grgl7\") pod \"infra-operator-controller-manager-57548d458d-wv9rd\" (UID: \"d0b85019-5501-4f67-a136-f7798be67039\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-wv9rd" Dec 06 05:47:56 crc kubenswrapper[4958]: I1206 05:47:56.880441 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q26tp\" (UniqueName: 
\"kubernetes.io/projected/1a7e7d3c-d935-469c-8296-658d9b8542dc-kube-api-access-q26tp\") pod \"mariadb-operator-controller-manager-56bbcc9d85-7745k\" (UID: \"1a7e7d3c-d935-469c-8296-658d9b8542dc\") " pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-7745k" Dec 06 05:47:56 crc kubenswrapper[4958]: I1206 05:47:56.880488 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d0b85019-5501-4f67-a136-f7798be67039-cert\") pod \"infra-operator-controller-manager-57548d458d-wv9rd\" (UID: \"d0b85019-5501-4f67-a136-f7798be67039\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-wv9rd" Dec 06 05:47:56 crc kubenswrapper[4958]: E1206 05:47:56.880592 4958 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 06 05:47:56 crc kubenswrapper[4958]: E1206 05:47:56.880638 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d0b85019-5501-4f67-a136-f7798be67039-cert podName:d0b85019-5501-4f67-a136-f7798be67039 nodeName:}" failed. No retries permitted until 2025-12-06 05:47:57.380623216 +0000 UTC m=+1187.914393979 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d0b85019-5501-4f67-a136-f7798be67039-cert") pod "infra-operator-controller-manager-57548d458d-wv9rd" (UID: "d0b85019-5501-4f67-a136-f7798be67039") : secret "infra-operator-webhook-server-cert" not found Dec 06 05:47:56 crc kubenswrapper[4958]: I1206 05:47:56.887561 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-khhwt" Dec 06 05:47:56 crc kubenswrapper[4958]: I1206 05:47:56.912712 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxlhx\" (UniqueName: \"kubernetes.io/projected/17c0b87a-3ce4-434e-bbbc-cf06bd3c2833-kube-api-access-cxlhx\") pod \"heat-operator-controller-manager-5f64f6f8bb-v2ktl\" (UID: \"17c0b87a-3ce4-434e-bbbc-cf06bd3c2833\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-v2ktl" Dec 06 05:47:56 crc kubenswrapper[4958]: I1206 05:47:56.913061 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-7745k"] Dec 06 05:47:56 crc kubenswrapper[4958]: I1206 05:47:56.913194 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grgl7\" (UniqueName: \"kubernetes.io/projected/d0b85019-5501-4f67-a136-f7798be67039-kube-api-access-grgl7\") pod \"infra-operator-controller-manager-57548d458d-wv9rd\" (UID: \"d0b85019-5501-4f67-a136-f7798be67039\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-wv9rd" Dec 06 05:47:56 crc kubenswrapper[4958]: I1206 05:47:56.914855 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqhgq\" (UniqueName: \"kubernetes.io/projected/15c5254b-bcb1-45a3-a94b-21995bd4a143-kube-api-access-pqhgq\") pod \"ironic-operator-controller-manager-6c548fd776-hx8tj\" (UID: \"15c5254b-bcb1-45a3-a94b-21995bd4a143\") " pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-hx8tj" Dec 06 05:47:56 crc kubenswrapper[4958]: I1206 05:47:56.922857 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sggll\" (UniqueName: 
\"kubernetes.io/projected/ebcc81d0-3595-480b-a886-1ec0e5da638d-kube-api-access-sggll\") pod \"horizon-operator-controller-manager-68c6d99b8f-gnstm\" (UID: \"ebcc81d0-3595-480b-a886-1ec0e5da638d\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-gnstm" Dec 06 05:47:56 crc kubenswrapper[4958]: I1206 05:47:56.926261 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-jk9n8"] Dec 06 05:47:56 crc kubenswrapper[4958]: I1206 05:47:56.928140 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-jk9n8" Dec 06 05:47:56 crc kubenswrapper[4958]: I1206 05:47:56.929325 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-bg69j" Dec 06 05:47:56 crc kubenswrapper[4958]: I1206 05:47:56.933390 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-mvnzp" Dec 06 05:47:56 crc kubenswrapper[4958]: I1206 05:47:56.933625 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-26667"] Dec 06 05:47:56 crc kubenswrapper[4958]: I1206 05:47:56.934627 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-998648c74-26667" Dec 06 05:47:56 crc kubenswrapper[4958]: I1206 05:47:56.937786 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-jr6l5" Dec 06 05:47:56 crc kubenswrapper[4958]: I1206 05:47:56.938677 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-h2s2k"] Dec 06 05:47:56 crc kubenswrapper[4958]: I1206 05:47:56.953767 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-jk9n8"] Dec 06 05:47:56 crc kubenswrapper[4958]: I1206 05:47:56.959250 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-26667"] Dec 06 05:47:56 crc kubenswrapper[4958]: I1206 05:47:56.968860 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-55c85496f5nnvrk"] Dec 06 05:47:56 crc kubenswrapper[4958]: I1206 05:47:56.969935 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-55c85496f5nnvrk" Dec 06 05:47:56 crc kubenswrapper[4958]: I1206 05:47:56.972639 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-654tb"] Dec 06 05:47:56 crc kubenswrapper[4958]: I1206 05:47:56.974111 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-78f8948974-654tb" Dec 06 05:47:56 crc kubenswrapper[4958]: I1206 05:47:56.976978 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Dec 06 05:47:56 crc kubenswrapper[4958]: I1206 05:47:56.977257 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-lxcmf" Dec 06 05:47:56 crc kubenswrapper[4958]: I1206 05:47:56.977703 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-zsmkn" Dec 06 05:47:56 crc kubenswrapper[4958]: I1206 05:47:56.980029 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-dn98s"] Dec 06 05:47:56 crc kubenswrapper[4958]: I1206 05:47:56.981138 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-dn98s" Dec 06 05:47:56 crc kubenswrapper[4958]: I1206 05:47:56.981430 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkxbk\" (UniqueName: \"kubernetes.io/projected/d42b2cbe-9f6a-4d29-bb05-0588e5e4cf8d-kube-api-access-nkxbk\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-h2s2k\" (UID: \"d42b2cbe-9f6a-4d29-bb05-0588e5e4cf8d\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-h2s2k" Dec 06 05:47:56 crc kubenswrapper[4958]: I1206 05:47:56.981503 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fm6k\" (UniqueName: \"kubernetes.io/projected/d050dc99-5d0b-4a4f-974a-1b2bd5fb5a8c-kube-api-access-7fm6k\") pod \"nova-operator-controller-manager-697bc559fc-jk9n8\" (UID: \"d050dc99-5d0b-4a4f-974a-1b2bd5fb5a8c\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-jk9n8" Dec 06 05:47:56 crc kubenswrapper[4958]: I1206 05:47:56.981547 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mpvdg\" (UniqueName: \"kubernetes.io/projected/6f35dbd3-cf3b-46f8-83cf-911ea6a88679-kube-api-access-mpvdg\") pod \"manila-operator-controller-manager-7c79b5df47-6vcd7\" (UID: \"6f35dbd3-cf3b-46f8-83cf-911ea6a88679\") " pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-6vcd7" Dec 06 05:47:56 crc kubenswrapper[4958]: I1206 05:47:56.981621 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9f8f\" (UniqueName: \"kubernetes.io/projected/f3cf2219-d4b6-43cd-8ace-2852c808fe6e-kube-api-access-q9f8f\") pod \"octavia-operator-controller-manager-998648c74-26667\" (UID: \"f3cf2219-d4b6-43cd-8ace-2852c808fe6e\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-26667" Dec 06 05:47:56 crc kubenswrapper[4958]: I1206 05:47:56.981670 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q26tp\" (UniqueName: \"kubernetes.io/projected/1a7e7d3c-d935-469c-8296-658d9b8542dc-kube-api-access-q26tp\") pod \"mariadb-operator-controller-manager-56bbcc9d85-7745k\" (UID: \"1a7e7d3c-d935-469c-8296-658d9b8542dc\") " pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-7745k" Dec 06 05:47:56 crc kubenswrapper[4958]: I1206 05:47:56.981741 
4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gj7x\" (UniqueName: \"kubernetes.io/projected/137f3e2e-f835-48ca-873c-41fe38a6d7f2-kube-api-access-6gj7x\") pod \"keystone-operator-controller-manager-7765d96ddf-sq7fz\" (UID: \"137f3e2e-f835-48ca-873c-41fe38a6d7f2\") " pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-sq7fz" Dec 06 05:47:56 crc kubenswrapper[4958]: I1206 05:47:56.985690 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-srmsk" Dec 06 05:47:56 crc kubenswrapper[4958]: I1206 05:47:56.989820 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-dn98s"] Dec 06 05:47:56 crc kubenswrapper[4958]: I1206 05:47:56.994070 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-55c85496f5nnvrk"] Dec 06 05:47:56 crc kubenswrapper[4958]: I1206 05:47:56.994365 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-v2ktl" Dec 06 05:47:57 crc kubenswrapper[4958]: I1206 05:47:57.006375 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gj7x\" (UniqueName: \"kubernetes.io/projected/137f3e2e-f835-48ca-873c-41fe38a6d7f2-kube-api-access-6gj7x\") pod \"keystone-operator-controller-manager-7765d96ddf-sq7fz\" (UID: \"137f3e2e-f835-48ca-873c-41fe38a6d7f2\") " pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-sq7fz" Dec 06 05:47:57 crc kubenswrapper[4958]: I1206 05:47:57.007126 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q26tp\" (UniqueName: \"kubernetes.io/projected/1a7e7d3c-d935-469c-8296-658d9b8542dc-kube-api-access-q26tp\") pod \"mariadb-operator-controller-manager-56bbcc9d85-7745k\" (UID: \"1a7e7d3c-d935-469c-8296-658d9b8542dc\") " pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-7745k" Dec 06 05:47:57 crc kubenswrapper[4958]: I1206 05:47:57.016257 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-5f8c65bbfc-nkrvh"] Dec 06 05:47:57 crc kubenswrapper[4958]: I1206 05:47:57.017391 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-nkrvh" Dec 06 05:47:57 crc kubenswrapper[4958]: I1206 05:47:57.019156 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpvdg\" (UniqueName: \"kubernetes.io/projected/6f35dbd3-cf3b-46f8-83cf-911ea6a88679-kube-api-access-mpvdg\") pod \"manila-operator-controller-manager-7c79b5df47-6vcd7\" (UID: \"6f35dbd3-cf3b-46f8-83cf-911ea6a88679\") " pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-6vcd7" Dec 06 05:47:57 crc kubenswrapper[4958]: I1206 05:47:57.026233 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-654tb"] Dec 06 05:47:57 crc kubenswrapper[4958]: I1206 05:47:57.032244 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-74vnt" Dec 06 05:47:57 crc kubenswrapper[4958]: I1206 05:47:57.036399 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-gnstm" Dec 06 05:47:57 crc kubenswrapper[4958]: I1206 05:47:57.066649 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-hx8tj" Dec 06 05:47:57 crc kubenswrapper[4958]: I1206 05:47:57.082735 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkxbk\" (UniqueName: \"kubernetes.io/projected/d42b2cbe-9f6a-4d29-bb05-0588e5e4cf8d-kube-api-access-nkxbk\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-h2s2k\" (UID: \"d42b2cbe-9f6a-4d29-bb05-0588e5e4cf8d\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-h2s2k" Dec 06 05:47:57 crc kubenswrapper[4958]: I1206 05:47:57.082777 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ssvr\" (UniqueName: \"kubernetes.io/projected/c6c9cc05-d00f-4f92-bd7e-13737952085b-kube-api-access-7ssvr\") pod \"openstack-baremetal-operator-controller-manager-55c85496f5nnvrk\" (UID: \"c6c9cc05-d00f-4f92-bd7e-13737952085b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-55c85496f5nnvrk" Dec 06 05:47:57 crc kubenswrapper[4958]: I1206 05:47:57.082799 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhd6t\" (UniqueName: \"kubernetes.io/projected/f3792985-93a5-4b81-8ea2-ca63d1f659d8-kube-api-access-lhd6t\") pod \"placement-operator-controller-manager-78f8948974-654tb\" (UID: \"f3792985-93a5-4b81-8ea2-ca63d1f659d8\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-654tb" Dec 06 05:47:57 crc kubenswrapper[4958]: I1206 05:47:57.082816 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c6c9cc05-d00f-4f92-bd7e-13737952085b-cert\") pod \"openstack-baremetal-operator-controller-manager-55c85496f5nnvrk\" (UID: \"c6c9cc05-d00f-4f92-bd7e-13737952085b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-55c85496f5nnvrk" Dec 06 05:47:57 crc kubenswrapper[4958]: I1206 05:47:57.082841 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fm6k\" (UniqueName: \"kubernetes.io/projected/d050dc99-5d0b-4a4f-974a-1b2bd5fb5a8c-kube-api-access-7fm6k\") pod \"nova-operator-controller-manager-697bc559fc-jk9n8\" (UID: \"d050dc99-5d0b-4a4f-974a-1b2bd5fb5a8c\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-jk9n8" Dec 06 05:47:57 crc kubenswrapper[4958]: I1206 05:47:57.082886 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xm25v\" (UniqueName: \"kubernetes.io/projected/26415490-b3ca-4822-93b9-f7fb5efcf375-kube-api-access-xm25v\") pod \"ovn-operator-controller-manager-b6456fdb6-dn98s\" (UID: \"26415490-b3ca-4822-93b9-f7fb5efcf375\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-dn98s" Dec 06 05:47:57 crc kubenswrapper[4958]: I1206 05:47:57.082913 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9f8f\" (UniqueName: \"kubernetes.io/projected/f3cf2219-d4b6-43cd-8ace-2852c808fe6e-kube-api-access-q9f8f\") pod \"octavia-operator-controller-manager-998648c74-26667\" (UID: \"f3cf2219-d4b6-43cd-8ace-2852c808fe6e\") " 
pod="openstack-operators/octavia-operator-controller-manager-998648c74-26667" Dec 06 05:47:57 crc kubenswrapper[4958]: I1206 05:47:57.082958 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w65bh\" (UniqueName: \"kubernetes.io/projected/930d98f5-bc89-466b-9876-ee5764f146f4-kube-api-access-w65bh\") pod \"swift-operator-controller-manager-5f8c65bbfc-nkrvh\" (UID: \"930d98f5-bc89-466b-9876-ee5764f146f4\") " pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-nkrvh" Dec 06 05:47:57 crc kubenswrapper[4958]: I1206 05:47:57.084068 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-5f8c65bbfc-nkrvh"] Dec 06 05:47:57 crc kubenswrapper[4958]: I1206 05:47:57.095458 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-ptqzr"] Dec 06 05:47:57 crc kubenswrapper[4958]: I1206 05:47:57.097252 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-ptqzr" Dec 06 05:47:57 crc kubenswrapper[4958]: I1206 05:47:57.098641 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-sq7fz" Dec 06 05:47:57 crc kubenswrapper[4958]: I1206 05:47:57.099729 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-ptpsg" Dec 06 05:47:57 crc kubenswrapper[4958]: I1206 05:47:57.116685 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-ptqzr"] Dec 06 05:47:57 crc kubenswrapper[4958]: I1206 05:47:57.137776 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fm6k\" (UniqueName: \"kubernetes.io/projected/d050dc99-5d0b-4a4f-974a-1b2bd5fb5a8c-kube-api-access-7fm6k\") pod \"nova-operator-controller-manager-697bc559fc-jk9n8\" (UID: \"d050dc99-5d0b-4a4f-974a-1b2bd5fb5a8c\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-jk9n8" Dec 06 05:47:57 crc kubenswrapper[4958]: I1206 05:47:57.138206 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9f8f\" (UniqueName: \"kubernetes.io/projected/f3cf2219-d4b6-43cd-8ace-2852c808fe6e-kube-api-access-q9f8f\") pod \"octavia-operator-controller-manager-998648c74-26667\" (UID: \"f3cf2219-d4b6-43cd-8ace-2852c808fe6e\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-26667" Dec 06 05:47:57 crc kubenswrapper[4958]: I1206 05:47:57.139222 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkxbk\" (UniqueName: \"kubernetes.io/projected/d42b2cbe-9f6a-4d29-bb05-0588e5e4cf8d-kube-api-access-nkxbk\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-h2s2k\" (UID: \"d42b2cbe-9f6a-4d29-bb05-0588e5e4cf8d\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-h2s2k" Dec 06 05:47:57 crc kubenswrapper[4958]: I1206 05:47:57.184375 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ssvr\" (UniqueName: \"kubernetes.io/projected/c6c9cc05-d00f-4f92-bd7e-13737952085b-kube-api-access-7ssvr\") pod \"openstack-baremetal-operator-controller-manager-55c85496f5nnvrk\" (UID: \"c6c9cc05-d00f-4f92-bd7e-13737952085b\") " 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-55c85496f5nnvrk" Dec 06 05:47:57 crc kubenswrapper[4958]: I1206 05:47:57.184750 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhd6t\" (UniqueName: \"kubernetes.io/projected/f3792985-93a5-4b81-8ea2-ca63d1f659d8-kube-api-access-lhd6t\") pod \"placement-operator-controller-manager-78f8948974-654tb\" (UID: \"f3792985-93a5-4b81-8ea2-ca63d1f659d8\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-654tb" Dec 06 05:47:57 crc kubenswrapper[4958]: I1206 05:47:57.187661 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c6c9cc05-d00f-4f92-bd7e-13737952085b-cert\") pod \"openstack-baremetal-operator-controller-manager-55c85496f5nnvrk\" (UID: \"c6c9cc05-d00f-4f92-bd7e-13737952085b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-55c85496f5nnvrk" Dec 06 05:47:57 crc kubenswrapper[4958]: E1206 05:47:57.187814 4958 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 06 05:47:57 crc kubenswrapper[4958]: E1206 05:47:57.187883 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c6c9cc05-d00f-4f92-bd7e-13737952085b-cert podName:c6c9cc05-d00f-4f92-bd7e-13737952085b nodeName:}" failed. No retries permitted until 2025-12-06 05:47:57.687866078 +0000 UTC m=+1188.221636841 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c6c9cc05-d00f-4f92-bd7e-13737952085b-cert") pod "openstack-baremetal-operator-controller-manager-55c85496f5nnvrk" (UID: "c6c9cc05-d00f-4f92-bd7e-13737952085b") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 06 05:47:57 crc kubenswrapper[4958]: I1206 05:47:57.188037 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xm25v\" (UniqueName: \"kubernetes.io/projected/26415490-b3ca-4822-93b9-f7fb5efcf375-kube-api-access-xm25v\") pod \"ovn-operator-controller-manager-b6456fdb6-dn98s\" (UID: \"26415490-b3ca-4822-93b9-f7fb5efcf375\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-dn98s" Dec 06 05:47:57 crc kubenswrapper[4958]: I1206 05:47:57.188203 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w65bh\" (UniqueName: \"kubernetes.io/projected/930d98f5-bc89-466b-9876-ee5764f146f4-kube-api-access-w65bh\") pod \"swift-operator-controller-manager-5f8c65bbfc-nkrvh\" (UID: \"930d98f5-bc89-466b-9876-ee5764f146f4\") " pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-nkrvh" Dec 06 05:47:57 crc kubenswrapper[4958]: I1206 05:47:57.188319 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mc2nf\" (UniqueName: \"kubernetes.io/projected/e6abb542-4bf4-4edf-b150-de3f6200e4de-kube-api-access-mc2nf\") pod \"telemetry-operator-controller-manager-76cc84c6bb-ptqzr\" (UID: \"e6abb542-4bf4-4edf-b150-de3f6200e4de\") " pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-ptqzr" Dec 06 05:47:57 crc kubenswrapper[4958]: I1206 05:47:57.200301 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-vm7kq"] Dec 06 05:47:57 crc kubenswrapper[4958]: 
I1206 05:47:57.201379 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-vm7kq"] Dec 06 05:47:57 crc kubenswrapper[4958]: I1206 05:47:57.201458 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5854674fcc-vm7kq" Dec 06 05:47:57 crc kubenswrapper[4958]: I1206 05:47:57.203277 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-xl59r" Dec 06 05:47:57 crc kubenswrapper[4958]: I1206 05:47:57.211001 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhd6t\" (UniqueName: \"kubernetes.io/projected/f3792985-93a5-4b81-8ea2-ca63d1f659d8-kube-api-access-lhd6t\") pod \"placement-operator-controller-manager-78f8948974-654tb\" (UID: \"f3792985-93a5-4b81-8ea2-ca63d1f659d8\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-654tb" Dec 06 05:47:57 crc kubenswrapper[4958]: I1206 05:47:57.219266 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xm25v\" (UniqueName: \"kubernetes.io/projected/26415490-b3ca-4822-93b9-f7fb5efcf375-kube-api-access-xm25v\") pod \"ovn-operator-controller-manager-b6456fdb6-dn98s\" (UID: \"26415490-b3ca-4822-93b9-f7fb5efcf375\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-dn98s" Dec 06 05:47:57 crc kubenswrapper[4958]: I1206 05:47:57.219712 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ssvr\" (UniqueName: \"kubernetes.io/projected/c6c9cc05-d00f-4f92-bd7e-13737952085b-kube-api-access-7ssvr\") pod \"openstack-baremetal-operator-controller-manager-55c85496f5nnvrk\" (UID: \"c6c9cc05-d00f-4f92-bd7e-13737952085b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-55c85496f5nnvrk" Dec 06 05:47:57 crc kubenswrapper[4958]: I1206 05:47:57.221765 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w65bh\" (UniqueName: \"kubernetes.io/projected/930d98f5-bc89-466b-9876-ee5764f146f4-kube-api-access-w65bh\") pod \"swift-operator-controller-manager-5f8c65bbfc-nkrvh\" (UID: \"930d98f5-bc89-466b-9876-ee5764f146f4\") " pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-nkrvh" Dec 06 05:47:57 crc kubenswrapper[4958]: I1206 05:47:57.237528 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-769dc69bc-vwbhb"] Dec 06 05:47:57 crc kubenswrapper[4958]: I1206 05:47:57.239001 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-vwbhb" Dec 06 05:47:57 crc kubenswrapper[4958]: I1206 05:47:57.241900 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-s48kg" Dec 06 05:47:57 crc kubenswrapper[4958]: I1206 05:47:57.250183 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-769dc69bc-vwbhb"] Dec 06 05:47:57 crc kubenswrapper[4958]: I1206 05:47:57.260790 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-6vcd7" Dec 06 05:47:57 crc kubenswrapper[4958]: I1206 05:47:57.273263 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-7745k" Dec 06 05:47:57 crc kubenswrapper[4958]: I1206 05:47:57.274816 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-54bdf956c4-dlf7m"] Dec 06 05:47:57 crc kubenswrapper[4958]: I1206 05:47:57.276179 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-54bdf956c4-dlf7m" Dec 06 05:47:57 crc kubenswrapper[4958]: I1206 05:47:57.278517 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-7xmm4" Dec 06 05:47:57 crc kubenswrapper[4958]: I1206 05:47:57.278631 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Dec 06 05:47:57 crc kubenswrapper[4958]: I1206 05:47:57.278686 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Dec 06 05:47:57 crc kubenswrapper[4958]: I1206 05:47:57.289363 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mc2nf\" (UniqueName: \"kubernetes.io/projected/e6abb542-4bf4-4edf-b150-de3f6200e4de-kube-api-access-mc2nf\") pod \"telemetry-operator-controller-manager-76cc84c6bb-ptqzr\" (UID: \"e6abb542-4bf4-4edf-b150-de3f6200e4de\") " pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-ptqzr" Dec 06 05:47:57 crc kubenswrapper[4958]: I1206 05:47:57.289431 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85rrj\" (UniqueName: \"kubernetes.io/projected/621f367e-fe95-4fb5-9ce7-966981c7b13a-kube-api-access-85rrj\") pod \"watcher-operator-controller-manager-769dc69bc-vwbhb\" (UID: \"621f367e-fe95-4fb5-9ce7-966981c7b13a\") " pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-vwbhb" Dec 06 05:47:57 crc kubenswrapper[4958]: I1206 05:47:57.289515 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdxp5\" (UniqueName: \"kubernetes.io/projected/5c330abd-909c-44eb-a7ff-7cb5398fd736-kube-api-access-hdxp5\") pod \"test-operator-controller-manager-5854674fcc-vm7kq\" (UID: \"5c330abd-909c-44eb-a7ff-7cb5398fd736\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-vm7kq" Dec 06 05:47:57 crc kubenswrapper[4958]: I1206 05:47:57.303694 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-h2s2k" Dec 06 05:47:57 crc kubenswrapper[4958]: I1206 05:47:57.312770 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-54bdf956c4-dlf7m"] Dec 06 05:47:57 crc kubenswrapper[4958]: I1206 05:47:57.320215 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mc2nf\" (UniqueName: \"kubernetes.io/projected/e6abb542-4bf4-4edf-b150-de3f6200e4de-kube-api-access-mc2nf\") pod \"telemetry-operator-controller-manager-76cc84c6bb-ptqzr\" (UID: \"e6abb542-4bf4-4edf-b150-de3f6200e4de\") " pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-ptqzr" Dec 06 05:47:57 crc kubenswrapper[4958]: I1206 05:47:57.332625 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-jk9n8" Dec 06 05:47:57 crc kubenswrapper[4958]: I1206 05:47:57.341891 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jsjtj"] Dec 06 05:47:57 crc kubenswrapper[4958]: I1206 05:47:57.342133 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-dvwts" Dec 06 05:47:57 crc kubenswrapper[4958]: I1206 05:47:57.342958 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jsjtj" Dec 06 05:47:57 crc kubenswrapper[4958]: I1206 05:47:57.343545 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-n2s2h" Dec 06 05:47:57 crc kubenswrapper[4958]: I1206 05:47:57.344045 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-r4zbk" Dec 06 05:47:57 crc kubenswrapper[4958]: I1206 05:47:57.349120 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jsjtj"] Dec 06 05:47:57 crc kubenswrapper[4958]: I1206 05:47:57.371300 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-998648c74-26667" Dec 06 05:47:57 crc kubenswrapper[4958]: I1206 05:47:57.390153 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hdxp5\" (UniqueName: \"kubernetes.io/projected/5c330abd-909c-44eb-a7ff-7cb5398fd736-kube-api-access-hdxp5\") pod \"test-operator-controller-manager-5854674fcc-vm7kq\" (UID: \"5c330abd-909c-44eb-a7ff-7cb5398fd736\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-vm7kq" Dec 06 05:47:57 crc kubenswrapper[4958]: I1206 05:47:57.390229 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jz2z\" (UniqueName: \"kubernetes.io/projected/7de555d5-964f-4e85-a2b9-5dddf37e097e-kube-api-access-4jz2z\") pod \"openstack-operator-controller-manager-54bdf956c4-dlf7m\" (UID: \"7de555d5-964f-4e85-a2b9-5dddf37e097e\") " pod="openstack-operators/openstack-operator-controller-manager-54bdf956c4-dlf7m" Dec 06 05:47:57 crc kubenswrapper[4958]: I1206 05:47:57.390298 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d0b85019-5501-4f67-a136-f7798be67039-cert\") pod \"infra-operator-controller-manager-57548d458d-wv9rd\" (UID: \"d0b85019-5501-4f67-a136-f7798be67039\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-wv9rd" Dec 06 05:47:57 crc kubenswrapper[4958]: I1206 05:47:57.390331 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkzxp\" (UniqueName: \"kubernetes.io/projected/59dc6399-d2b4-437a-9521-4096ed7e924f-kube-api-access-hkzxp\") pod \"rabbitmq-cluster-operator-manager-668c99d594-jsjtj\" (UID: \"59dc6399-d2b4-437a-9521-4096ed7e924f\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jsjtj" Dec 06 05:47:57 crc kubenswrapper[4958]: I1206 05:47:57.390380 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7de555d5-964f-4e85-a2b9-5dddf37e097e-webhook-certs\") pod \"openstack-operator-controller-manager-54bdf956c4-dlf7m\" (UID: \"7de555d5-964f-4e85-a2b9-5dddf37e097e\") " pod="openstack-operators/openstack-operator-controller-manager-54bdf956c4-dlf7m" Dec 06 05:47:57 crc kubenswrapper[4958]: I1206 05:47:57.390410 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85rrj\" (UniqueName: \"kubernetes.io/projected/621f367e-fe95-4fb5-9ce7-966981c7b13a-kube-api-access-85rrj\") pod \"watcher-operator-controller-manager-769dc69bc-vwbhb\" (UID: \"621f367e-fe95-4fb5-9ce7-966981c7b13a\") " pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-vwbhb" Dec 06 05:47:57 crc kubenswrapper[4958]: I1206 05:47:57.390431 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7de555d5-964f-4e85-a2b9-5dddf37e097e-metrics-certs\") pod \"openstack-operator-controller-manager-54bdf956c4-dlf7m\" (UID: \"7de555d5-964f-4e85-a2b9-5dddf37e097e\") " pod="openstack-operators/openstack-operator-controller-manager-54bdf956c4-dlf7m" Dec 06 05:47:57 crc kubenswrapper[4958]: E1206 05:47:57.390907 4958 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 06 05:47:57 crc kubenswrapper[4958]: E1206 05:47:57.390958 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d0b85019-5501-4f67-a136-f7798be67039-cert podName:d0b85019-5501-4f67-a136-f7798be67039 nodeName:}" failed. No retries permitted until 2025-12-06 05:47:58.390938949 +0000 UTC m=+1188.924709712 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d0b85019-5501-4f67-a136-f7798be67039-cert") pod "infra-operator-controller-manager-57548d458d-wv9rd" (UID: "d0b85019-5501-4f67-a136-f7798be67039") : secret "infra-operator-webhook-server-cert" not found Dec 06 05:47:57 crc kubenswrapper[4958]: I1206 05:47:57.396621 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-78f8948974-654tb" Dec 06 05:47:57 crc kubenswrapper[4958]: I1206 05:47:57.415099 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hdxp5\" (UniqueName: \"kubernetes.io/projected/5c330abd-909c-44eb-a7ff-7cb5398fd736-kube-api-access-hdxp5\") pod \"test-operator-controller-manager-5854674fcc-vm7kq\" (UID: \"5c330abd-909c-44eb-a7ff-7cb5398fd736\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-vm7kq" Dec 06 05:47:57 crc kubenswrapper[4958]: I1206 05:47:57.421577 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-dn98s" Dec 06 05:47:57 crc kubenswrapper[4958]: I1206 05:47:57.426083 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85rrj\" (UniqueName: \"kubernetes.io/projected/621f367e-fe95-4fb5-9ce7-966981c7b13a-kube-api-access-85rrj\") pod \"watcher-operator-controller-manager-769dc69bc-vwbhb\" (UID: \"621f367e-fe95-4fb5-9ce7-966981c7b13a\") " pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-vwbhb" Dec 06 05:47:57 crc kubenswrapper[4958]: I1206 05:47:57.440370 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-nkrvh" Dec 06 05:47:57 crc kubenswrapper[4958]: I1206 05:47:57.507240 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jz2z\" (UniqueName: \"kubernetes.io/projected/7de555d5-964f-4e85-a2b9-5dddf37e097e-kube-api-access-4jz2z\") pod \"openstack-operator-controller-manager-54bdf956c4-dlf7m\" (UID: \"7de555d5-964f-4e85-a2b9-5dddf37e097e\") " pod="openstack-operators/openstack-operator-controller-manager-54bdf956c4-dlf7m" Dec 06 05:47:57 crc kubenswrapper[4958]: I1206 05:47:57.507604 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkzxp\" (UniqueName: \"kubernetes.io/projected/59dc6399-d2b4-437a-9521-4096ed7e924f-kube-api-access-hkzxp\") pod \"rabbitmq-cluster-operator-manager-668c99d594-jsjtj\" (UID: \"59dc6399-d2b4-437a-9521-4096ed7e924f\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jsjtj" Dec 06 05:47:57 crc kubenswrapper[4958]: E1206 05:47:57.507831 4958 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 06 05:47:57 crc kubenswrapper[4958]: E1206 05:47:57.507882 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7de555d5-964f-4e85-a2b9-5dddf37e097e-webhook-certs podName:7de555d5-964f-4e85-a2b9-5dddf37e097e nodeName:}" failed. No retries permitted until 2025-12-06 05:47:58.007865173 +0000 UTC m=+1188.541635936 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/7de555d5-964f-4e85-a2b9-5dddf37e097e-webhook-certs") pod "openstack-operator-controller-manager-54bdf956c4-dlf7m" (UID: "7de555d5-964f-4e85-a2b9-5dddf37e097e") : secret "webhook-server-cert" not found Dec 06 05:47:57 crc kubenswrapper[4958]: I1206 05:47:57.507674 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7de555d5-964f-4e85-a2b9-5dddf37e097e-webhook-certs\") pod \"openstack-operator-controller-manager-54bdf956c4-dlf7m\" (UID: \"7de555d5-964f-4e85-a2b9-5dddf37e097e\") " pod="openstack-operators/openstack-operator-controller-manager-54bdf956c4-dlf7m" Dec 06 05:47:57 crc kubenswrapper[4958]: I1206 05:47:57.508217 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7de555d5-964f-4e85-a2b9-5dddf37e097e-metrics-certs\") pod \"openstack-operator-controller-manager-54bdf956c4-dlf7m\" (UID: \"7de555d5-964f-4e85-a2b9-5dddf37e097e\") " pod="openstack-operators/openstack-operator-controller-manager-54bdf956c4-dlf7m" Dec 06 05:47:57 crc kubenswrapper[4958]: E1206 05:47:57.508289 4958 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 06 05:47:57 crc kubenswrapper[4958]: E1206 05:47:57.508311 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7de555d5-964f-4e85-a2b9-5dddf37e097e-metrics-certs podName:7de555d5-964f-4e85-a2b9-5dddf37e097e nodeName:}" failed. No retries permitted until 2025-12-06 05:47:58.008303794 +0000 UTC m=+1188.542074557 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7de555d5-964f-4e85-a2b9-5dddf37e097e-metrics-certs") pod "openstack-operator-controller-manager-54bdf956c4-dlf7m" (UID: "7de555d5-964f-4e85-a2b9-5dddf37e097e") : secret "metrics-server-cert" not found Dec 06 05:47:57 crc kubenswrapper[4958]: I1206 05:47:57.534272 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkzxp\" (UniqueName: \"kubernetes.io/projected/59dc6399-d2b4-437a-9521-4096ed7e924f-kube-api-access-hkzxp\") pod \"rabbitmq-cluster-operator-manager-668c99d594-jsjtj\" (UID: \"59dc6399-d2b4-437a-9521-4096ed7e924f\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jsjtj" Dec 06 05:47:57 crc kubenswrapper[4958]: I1206 05:47:57.539638 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jz2z\" (UniqueName: \"kubernetes.io/projected/7de555d5-964f-4e85-a2b9-5dddf37e097e-kube-api-access-4jz2z\") pod \"openstack-operator-controller-manager-54bdf956c4-dlf7m\" (UID: \"7de555d5-964f-4e85-a2b9-5dddf37e097e\") " pod="openstack-operators/openstack-operator-controller-manager-54bdf956c4-dlf7m" Dec 06 05:47:57 crc kubenswrapper[4958]: I1206 05:47:57.561788 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-ptqzr" Dec 06 05:47:57 crc kubenswrapper[4958]: I1206 05:47:57.653457 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5854674fcc-vm7kq" Dec 06 05:47:57 crc kubenswrapper[4958]: I1206 05:47:57.664803 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-78b4bc895b-khhwt"] Dec 06 05:47:57 crc kubenswrapper[4958]: I1206 05:47:57.695819 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-vwbhb" Dec 06 05:47:57 crc kubenswrapper[4958]: I1206 05:47:57.713838 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c6c9cc05-d00f-4f92-bd7e-13737952085b-cert\") pod \"openstack-baremetal-operator-controller-manager-55c85496f5nnvrk\" (UID: \"c6c9cc05-d00f-4f92-bd7e-13737952085b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-55c85496f5nnvrk" Dec 06 05:47:57 crc kubenswrapper[4958]: E1206 05:47:57.713983 4958 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 06 05:47:57 crc kubenswrapper[4958]: E1206 05:47:57.714026 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c6c9cc05-d00f-4f92-bd7e-13737952085b-cert podName:c6c9cc05-d00f-4f92-bd7e-13737952085b nodeName:}" failed. No retries permitted until 2025-12-06 05:47:58.714014036 +0000 UTC m=+1189.247784799 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c6c9cc05-d00f-4f92-bd7e-13737952085b-cert") pod "openstack-baremetal-operator-controller-manager-55c85496f5nnvrk" (UID: "c6c9cc05-d00f-4f92-bd7e-13737952085b") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 06 05:47:57 crc kubenswrapper[4958]: I1206 05:47:57.746245 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jsjtj" Dec 06 05:47:57 crc kubenswrapper[4958]: I1206 05:47:57.814365 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-859b6ccc6-kfk67"] Dec 06 05:47:57 crc kubenswrapper[4958]: I1206 05:47:57.814405 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-v2ktl"] Dec 06 05:47:57 crc kubenswrapper[4958]: I1206 05:47:57.814418 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-6c548fd776-hx8tj"] Dec 06 05:47:58 crc kubenswrapper[4958]: I1206 05:47:58.019979 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7de555d5-964f-4e85-a2b9-5dddf37e097e-webhook-certs\") pod \"openstack-operator-controller-manager-54bdf956c4-dlf7m\" (UID: \"7de555d5-964f-4e85-a2b9-5dddf37e097e\") " pod="openstack-operators/openstack-operator-controller-manager-54bdf956c4-dlf7m" Dec 06 05:47:58 crc kubenswrapper[4958]: I1206 05:47:58.020395 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7de555d5-964f-4e85-a2b9-5dddf37e097e-metrics-certs\") pod \"openstack-operator-controller-manager-54bdf956c4-dlf7m\" (UID: \"7de555d5-964f-4e85-a2b9-5dddf37e097e\") " pod="openstack-operators/openstack-operator-controller-manager-54bdf956c4-dlf7m" Dec 06 05:47:58 crc kubenswrapper[4958]: E1206 05:47:58.020608 4958 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 06 05:47:58 crc kubenswrapper[4958]: E1206 05:47:58.020653 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7de555d5-964f-4e85-a2b9-5dddf37e097e-metrics-certs podName:7de555d5-964f-4e85-a2b9-5dddf37e097e nodeName:}" failed. No retries permitted until 2025-12-06 05:47:59.020638012 +0000 UTC m=+1189.554408775 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7de555d5-964f-4e85-a2b9-5dddf37e097e-metrics-certs") pod "openstack-operator-controller-manager-54bdf956c4-dlf7m" (UID: "7de555d5-964f-4e85-a2b9-5dddf37e097e") : secret "metrics-server-cert" not found Dec 06 05:47:58 crc kubenswrapper[4958]: E1206 05:47:58.020972 4958 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 06 05:47:58 crc kubenswrapper[4958]: E1206 05:47:58.020996 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7de555d5-964f-4e85-a2b9-5dddf37e097e-webhook-certs podName:7de555d5-964f-4e85-a2b9-5dddf37e097e nodeName:}" failed. No retries permitted until 2025-12-06 05:47:59.020989021 +0000 UTC m=+1189.554759784 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/7de555d5-964f-4e85-a2b9-5dddf37e097e-webhook-certs") pod "openstack-operator-controller-manager-54bdf956c4-dlf7m" (UID: "7de555d5-964f-4e85-a2b9-5dddf37e097e") : secret "webhook-server-cert" not found Dec 06 05:47:58 crc kubenswrapper[4958]: I1206 05:47:58.092211 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-77987cd8cd-bg69j"] Dec 06 05:47:58 crc kubenswrapper[4958]: I1206 05:47:58.308633 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-h2s2k"] Dec 06 05:47:58 crc kubenswrapper[4958]: W1206 05:47:58.315139 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd42b2cbe_9f6a_4d29_bb05_0588e5e4cf8d.slice/crio-c015168b6cdc90a583a03a31c8b537c6ed6b25b6003d83241dea1a67572ceea9 WatchSource:0}: Error finding container c015168b6cdc90a583a03a31c8b537c6ed6b25b6003d83241dea1a67572ceea9: Status 404 returned error can't find the container with id c015168b6cdc90a583a03a31c8b537c6ed6b25b6003d83241dea1a67572ceea9 Dec 06 05:47:58 crc kubenswrapper[4958]: I1206 05:47:58.340895 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7c79b5df47-6vcd7"] Dec 06 05:47:58 crc kubenswrapper[4958]: W1206 05:47:58.345666 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1e47b3e1_d11a_4a15_8a92_24fe19661ee7.slice/crio-bc230de4986b956af68cfe1f4ce9d2b7d42e5e8c5e3f56aba716fa2bf2ab30de WatchSource:0}: Error finding container bc230de4986b956af68cfe1f4ce9d2b7d42e5e8c5e3f56aba716fa2bf2ab30de: Status 404 returned error can't find the container with id bc230de4986b956af68cfe1f4ce9d2b7d42e5e8c5e3f56aba716fa2bf2ab30de Dec 06 05:47:58 crc kubenswrapper[4958]: I1206 05:47:58.353525 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-n2s2h"] Dec 06 05:47:58 crc kubenswrapper[4958]: I1206 05:47:58.363019 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-7745k"] Dec 06 05:47:58 crc kubenswrapper[4958]: I1206 05:47:58.398356 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7765d96ddf-sq7fz"] Dec 06 05:47:58 crc kubenswrapper[4958]: I1206 05:47:58.428606 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d0b85019-5501-4f67-a136-f7798be67039-cert\") pod \"infra-operator-controller-manager-57548d458d-wv9rd\" (UID: \"d0b85019-5501-4f67-a136-f7798be67039\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-wv9rd" Dec 06 05:47:58 crc kubenswrapper[4958]: E1206 05:47:58.428769 4958 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 06 05:47:58 crc kubenswrapper[4958]: E1206 05:47:58.428870 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d0b85019-5501-4f67-a136-f7798be67039-cert podName:d0b85019-5501-4f67-a136-f7798be67039 nodeName:}" failed. 
No retries permitted until 2025-12-06 05:48:00.42883993 +0000 UTC m=+1190.962610763 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d0b85019-5501-4f67-a136-f7798be67039-cert") pod "infra-operator-controller-manager-57548d458d-wv9rd" (UID: "d0b85019-5501-4f67-a136-f7798be67039") : secret "infra-operator-webhook-server-cert" not found Dec 06 05:47:58 crc kubenswrapper[4958]: W1206 05:47:58.518629 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podebcc81d0_3595_480b_a886_1ec0e5da638d.slice/crio-d5b1e863a03cad97fc98d486338f3a174a5903b93f584a8c80112b1c901d6ecd WatchSource:0}: Error finding container d5b1e863a03cad97fc98d486338f3a174a5903b93f584a8c80112b1c901d6ecd: Status 404 returned error can't find the container with id d5b1e863a03cad97fc98d486338f3a174a5903b93f584a8c80112b1c901d6ecd Dec 06 05:47:58 crc kubenswrapper[4958]: I1206 05:47:58.534289 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-gnstm"] Dec 06 05:47:58 crc kubenswrapper[4958]: I1206 05:47:58.544701 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-jk9n8"] Dec 06 05:47:58 crc kubenswrapper[4958]: I1206 05:47:58.553550 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-26667"] Dec 06 05:47:58 crc kubenswrapper[4958]: I1206 05:47:58.559227 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-654tb"] Dec 06 05:47:58 crc kubenswrapper[4958]: I1206 05:47:58.611096 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-n2s2h" event={"ID":"1e47b3e1-d11a-4a15-8a92-24fe19661ee7","Type":"ContainerStarted","Data":"bc230de4986b956af68cfe1f4ce9d2b7d42e5e8c5e3f56aba716fa2bf2ab30de"} Dec 06 05:47:58 crc kubenswrapper[4958]: I1206 05:47:58.614674 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-bg69j" event={"ID":"ddaab3a6-6e32-480b-8ba8-3852feb6440f","Type":"ContainerStarted","Data":"71f5b1a7e66c1ac17eb47f5c00b54183728b6a1bab079471e4f883494bb7f068"} Dec 06 05:47:58 crc kubenswrapper[4958]: I1206 05:47:58.618071 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-khhwt" event={"ID":"a069026c-ab13-4593-9f99-71aa6fca2ecd","Type":"ContainerStarted","Data":"36fca5912595e07605ddfbfce615cf569066d7d12015b6297eb5b981915519a1"} Dec 06 05:47:58 crc kubenswrapper[4958]: I1206 05:47:58.620781 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-sq7fz" event={"ID":"137f3e2e-f835-48ca-873c-41fe38a6d7f2","Type":"ContainerStarted","Data":"dd329954ab8b5be14d751c97022d8bf5ebcb285e02e5cca9392c542bc5a1a5f8"} Dec 06 05:47:58 crc kubenswrapper[4958]: I1206 05:47:58.626756 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-kfk67" event={"ID":"6a934eab-3341-4e53-8317-eca91e0e9710","Type":"ContainerStarted","Data":"56f012540e58fa028bc98ecf7e0ad45ed2ade8aa56cce27575d1e8800b951286"} Dec 06 05:47:58 crc kubenswrapper[4958]: I1206 05:47:58.631124 4958 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-654tb" event={"ID":"f3792985-93a5-4b81-8ea2-ca63d1f659d8","Type":"ContainerStarted","Data":"92ed6e03926fa5b39df825ec5b9e12b23b1cfbcd2c8aa24674af38be177a6e64"} Dec 06 05:47:58 crc kubenswrapper[4958]: I1206 05:47:58.632306 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-gnstm" event={"ID":"ebcc81d0-3595-480b-a886-1ec0e5da638d","Type":"ContainerStarted","Data":"d5b1e863a03cad97fc98d486338f3a174a5903b93f584a8c80112b1c901d6ecd"} Dec 06 05:47:58 crc kubenswrapper[4958]: I1206 05:47:58.633255 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-7745k" event={"ID":"1a7e7d3c-d935-469c-8296-658d9b8542dc","Type":"ContainerStarted","Data":"1f04d32b8a5f5381fd9819dec83050264632d36388fdf2baf482e7559a4e1c0d"} Dec 06 05:47:58 crc kubenswrapper[4958]: I1206 05:47:58.634549 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-jk9n8" event={"ID":"d050dc99-5d0b-4a4f-974a-1b2bd5fb5a8c","Type":"ContainerStarted","Data":"ae7fa56c738b84957b02ed27c200218e8dcb7a32fa6f757f20640272aaf4aca4"} Dec 06 05:47:58 crc kubenswrapper[4958]: I1206 05:47:58.635564 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-hx8tj" event={"ID":"15c5254b-bcb1-45a3-a94b-21995bd4a143","Type":"ContainerStarted","Data":"cac3476771c626fc111559289eaa35d7e9f9a3918c7c1c0f1439bfd9249f7551"} Dec 06 05:47:58 crc kubenswrapper[4958]: I1206 05:47:58.636834 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-26667" event={"ID":"f3cf2219-d4b6-43cd-8ace-2852c808fe6e","Type":"ContainerStarted","Data":"d94b0a20f75bdd0b4045d07d4a77e1abcf982a5c0fcbc4277454d9a15d3fcf11"} Dec 06 05:47:58 crc kubenswrapper[4958]: I1206 05:47:58.639563 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-6vcd7" event={"ID":"6f35dbd3-cf3b-46f8-83cf-911ea6a88679","Type":"ContainerStarted","Data":"04b63da5a1df5e2ea8fad9c8abaf41488607578d8ddad960401e192505565a42"} Dec 06 05:47:58 crc kubenswrapper[4958]: I1206 05:47:58.640510 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-v2ktl" event={"ID":"17c0b87a-3ce4-434e-bbbc-cf06bd3c2833","Type":"ContainerStarted","Data":"2804b41c6ecf8ffc29b379910e5106c540579cb92e2b85327b34f89ec101b44c"} Dec 06 05:47:58 crc kubenswrapper[4958]: I1206 05:47:58.641525 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-h2s2k" event={"ID":"d42b2cbe-9f6a-4d29-bb05-0588e5e4cf8d","Type":"ContainerStarted","Data":"c015168b6cdc90a583a03a31c8b537c6ed6b25b6003d83241dea1a67572ceea9"} Dec 06 05:47:58 crc kubenswrapper[4958]: I1206 05:47:58.642327 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-5f8c65bbfc-nkrvh"] Dec 06 05:47:58 crc kubenswrapper[4958]: W1206 05:47:58.648632 4958 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod26415490_b3ca_4822_93b9_f7fb5efcf375.slice/crio-209d5e6c40b78308f07ecc31b8a0cdb161b8ac053956cb933c8725441e884f49 WatchSource:0}: Error finding container 209d5e6c40b78308f07ecc31b8a0cdb161b8ac053956cb933c8725441e884f49: Status 404 returned error can't find the container with id 209d5e6c40b78308f07ecc31b8a0cdb161b8ac053956cb933c8725441e884f49 Dec 06 05:47:58 crc kubenswrapper[4958]: E1206 05:47:58.652268 4958 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xm25v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-b6456fdb6-dn98s_openstack-operators(26415490-b3ca-4822-93b9-f7fb5efcf375): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 06 05:47:58 crc kubenswrapper[4958]: E1206 05:47:58.654160 4958 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} 
BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xm25v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-b6456fdb6-dn98s_openstack-operators(26415490-b3ca-4822-93b9-f7fb5efcf375): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 06 05:47:58 crc kubenswrapper[4958]: E1206 05:47:58.656255 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-dn98s" podUID="26415490-b3ca-4822-93b9-f7fb5efcf375" Dec 06 05:47:58 crc kubenswrapper[4958]: W1206 05:47:58.662503 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5c330abd_909c_44eb_a7ff_7cb5398fd736.slice/crio-7add90e2abeb34819c9d9ef16dc0c8f432e86ffd1904cd9c09ea166fd04c5044 WatchSource:0}: Error finding container 7add90e2abeb34819c9d9ef16dc0c8f432e86ffd1904cd9c09ea166fd04c5044: Status 404 returned error can't find the container with id 7add90e2abeb34819c9d9ef16dc0c8f432e86ffd1904cd9c09ea166fd04c5044 Dec 06 05:47:58 crc kubenswrapper[4958]: W1206 05:47:58.663112 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode6abb542_4bf4_4edf_b150_de3f6200e4de.slice/crio-909672774cabbf3e74c5f9b8194c57371bfa9d947f9353eec8e4f80c919979c6 WatchSource:0}: Error finding container 909672774cabbf3e74c5f9b8194c57371bfa9d947f9353eec8e4f80c919979c6: Status 404 returned error can't find the container with id 909672774cabbf3e74c5f9b8194c57371bfa9d947f9353eec8e4f80c919979c6 Dec 06 05:47:58 crc kubenswrapper[4958]: E1206 05:47:58.663198 4958 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:2a3d21728a8bfb4e64617e63e61e2d1cb70a383ea3e8f846e0c3c3c02d2b0a9d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-w65bh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-5f8c65bbfc-nkrvh_openstack-operators(930d98f5-bc89-466b-9876-ee5764f146f4): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 06 05:47:58 crc kubenswrapper[4958]: E1206 05:47:58.664699 4958 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:7d66757c0af67104f0389e851a7cc0daa44443ad202d157417bd86bbb57cc385,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mc2nf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-76cc84c6bb-ptqzr_openstack-operators(e6abb542-4bf4-4edf-b150-de3f6200e4de): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 06 05:47:58 crc kubenswrapper[4958]: E1206 05:47:58.664871 4958 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-w65bh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-5f8c65bbfc-nkrvh_openstack-operators(930d98f5-bc89-466b-9876-ee5764f146f4): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 06 05:47:58 crc kubenswrapper[4958]: E1206 05:47:58.666095 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-nkrvh" podUID="930d98f5-bc89-466b-9876-ee5764f146f4" Dec 06 05:47:58 crc kubenswrapper[4958]: E1206 05:47:58.666179 4958 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hdxp5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5854674fcc-vm7kq_openstack-operators(5c330abd-909c-44eb-a7ff-7cb5398fd736): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 06 05:47:58 crc kubenswrapper[4958]: I1206 05:47:58.666542 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-dn98s"] Dec 06 05:47:58 crc kubenswrapper[4958]: E1206 05:47:58.667390 4958 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mc2nf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-76cc84c6bb-ptqzr_openstack-operators(e6abb542-4bf4-4edf-b150-de3f6200e4de): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 06 05:47:58 crc kubenswrapper[4958]: E1206 05:47:58.668527 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-ptqzr" podUID="e6abb542-4bf4-4edf-b150-de3f6200e4de" Dec 06 05:47:58 crc kubenswrapper[4958]: E1206 05:47:58.670065 4958 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hdxp5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5854674fcc-vm7kq_openstack-operators(5c330abd-909c-44eb-a7ff-7cb5398fd736): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 06 05:47:58 crc kubenswrapper[4958]: E1206 05:47:58.670874 4958 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hkzxp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-jsjtj_openstack-operators(59dc6399-d2b4-437a-9521-4096ed7e924f): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 06 05:47:58 crc kubenswrapper[4958]: E1206 05:47:58.671675 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/test-operator-controller-manager-5854674fcc-vm7kq" podUID="5c330abd-909c-44eb-a7ff-7cb5398fd736" Dec 06 05:47:58 crc kubenswrapper[4958]: E1206 05:47:58.672084 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jsjtj" podUID="59dc6399-d2b4-437a-9521-4096ed7e924f" Dec 06 05:47:58 crc kubenswrapper[4958]: I1206 05:47:58.673629 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-ptqzr"] Dec 06 05:47:58 crc kubenswrapper[4958]: I1206 05:47:58.679116 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-vm7kq"] Dec 06 05:47:58 crc kubenswrapper[4958]: I1206 05:47:58.683262 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jsjtj"] Dec 06 05:47:58 crc kubenswrapper[4958]: I1206 05:47:58.731811 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" 
(UniqueName: \"kubernetes.io/secret/c6c9cc05-d00f-4f92-bd7e-13737952085b-cert\") pod \"openstack-baremetal-operator-controller-manager-55c85496f5nnvrk\" (UID: \"c6c9cc05-d00f-4f92-bd7e-13737952085b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-55c85496f5nnvrk" Dec 06 05:47:58 crc kubenswrapper[4958]: E1206 05:47:58.731991 4958 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 06 05:47:58 crc kubenswrapper[4958]: E1206 05:47:58.732058 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c6c9cc05-d00f-4f92-bd7e-13737952085b-cert podName:c6c9cc05-d00f-4f92-bd7e-13737952085b nodeName:}" failed. No retries permitted until 2025-12-06 05:48:00.732037413 +0000 UTC m=+1191.265808176 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c6c9cc05-d00f-4f92-bd7e-13737952085b-cert") pod "openstack-baremetal-operator-controller-manager-55c85496f5nnvrk" (UID: "c6c9cc05-d00f-4f92-bd7e-13737952085b") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 06 05:47:58 crc kubenswrapper[4958]: I1206 05:47:58.794855 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-769dc69bc-vwbhb"] Dec 06 05:47:58 crc kubenswrapper[4958]: W1206 05:47:58.798176 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod621f367e_fe95_4fb5_9ce7_966981c7b13a.slice/crio-e9e04adccad94fcf2514d84c3f96f48140dc990e881f7025e879eaf47b4261ad WatchSource:0}: Error finding container e9e04adccad94fcf2514d84c3f96f48140dc990e881f7025e879eaf47b4261ad: Status 404 returned error can't find the container with id e9e04adccad94fcf2514d84c3f96f48140dc990e881f7025e879eaf47b4261ad Dec 06 05:47:59 crc kubenswrapper[4958]: I1206 05:47:59.036486 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7de555d5-964f-4e85-a2b9-5dddf37e097e-webhook-certs\") pod \"openstack-operator-controller-manager-54bdf956c4-dlf7m\" (UID: \"7de555d5-964f-4e85-a2b9-5dddf37e097e\") " pod="openstack-operators/openstack-operator-controller-manager-54bdf956c4-dlf7m" Dec 06 05:47:59 crc kubenswrapper[4958]: I1206 05:47:59.036543 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7de555d5-964f-4e85-a2b9-5dddf37e097e-metrics-certs\") pod \"openstack-operator-controller-manager-54bdf956c4-dlf7m\" (UID: \"7de555d5-964f-4e85-a2b9-5dddf37e097e\") " pod="openstack-operators/openstack-operator-controller-manager-54bdf956c4-dlf7m" Dec 06 05:47:59 crc kubenswrapper[4958]: E1206 05:47:59.036740 4958 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 06 05:47:59 crc kubenswrapper[4958]: E1206 05:47:59.036766 4958 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 06 05:47:59 crc kubenswrapper[4958]: E1206 05:47:59.036804 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7de555d5-964f-4e85-a2b9-5dddf37e097e-metrics-certs podName:7de555d5-964f-4e85-a2b9-5dddf37e097e nodeName:}" failed. 
No retries permitted until 2025-12-06 05:48:01.036786729 +0000 UTC m=+1191.570557502 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7de555d5-964f-4e85-a2b9-5dddf37e097e-metrics-certs") pod "openstack-operator-controller-manager-54bdf956c4-dlf7m" (UID: "7de555d5-964f-4e85-a2b9-5dddf37e097e") : secret "metrics-server-cert" not found Dec 06 05:47:59 crc kubenswrapper[4958]: E1206 05:47:59.036878 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7de555d5-964f-4e85-a2b9-5dddf37e097e-webhook-certs podName:7de555d5-964f-4e85-a2b9-5dddf37e097e nodeName:}" failed. No retries permitted until 2025-12-06 05:48:01.036849351 +0000 UTC m=+1191.570620154 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/7de555d5-964f-4e85-a2b9-5dddf37e097e-webhook-certs") pod "openstack-operator-controller-manager-54bdf956c4-dlf7m" (UID: "7de555d5-964f-4e85-a2b9-5dddf37e097e") : secret "webhook-server-cert" not found Dec 06 05:47:59 crc kubenswrapper[4958]: I1206 05:47:59.650867 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jsjtj" event={"ID":"59dc6399-d2b4-437a-9521-4096ed7e924f","Type":"ContainerStarted","Data":"68ca0cbfd6ffb0c9ae94da708570eadc324cadd56c2cc7e9dd6bdc34efd899f1"} Dec 06 05:47:59 crc kubenswrapper[4958]: E1206 05:47:59.654147 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jsjtj" podUID="59dc6399-d2b4-437a-9521-4096ed7e924f" Dec 06 05:47:59 crc kubenswrapper[4958]: I1206 05:47:59.655211 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-dn98s" event={"ID":"26415490-b3ca-4822-93b9-f7fb5efcf375","Type":"ContainerStarted","Data":"209d5e6c40b78308f07ecc31b8a0cdb161b8ac053956cb933c8725441e884f49"} Dec 06 05:47:59 crc kubenswrapper[4958]: I1206 05:47:59.656499 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-vm7kq" event={"ID":"5c330abd-909c-44eb-a7ff-7cb5398fd736","Type":"ContainerStarted","Data":"7add90e2abeb34819c9d9ef16dc0c8f432e86ffd1904cd9c09ea166fd04c5044"} Dec 06 05:47:59 crc kubenswrapper[4958]: E1206 05:47:59.657551 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-dn98s" podUID="26415490-b3ca-4822-93b9-f7fb5efcf375" Dec 06 05:47:59 crc kubenswrapper[4958]: I1206 05:47:59.657826 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-ptqzr" 
event={"ID":"e6abb542-4bf4-4edf-b150-de3f6200e4de","Type":"ContainerStarted","Data":"909672774cabbf3e74c5f9b8194c57371bfa9d947f9353eec8e4f80c919979c6"} Dec 06 05:47:59 crc kubenswrapper[4958]: E1206 05:47:59.659502 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/test-operator-controller-manager-5854674fcc-vm7kq" podUID="5c330abd-909c-44eb-a7ff-7cb5398fd736" Dec 06 05:47:59 crc kubenswrapper[4958]: I1206 05:47:59.659702 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-nkrvh" event={"ID":"930d98f5-bc89-466b-9876-ee5764f146f4","Type":"ContainerStarted","Data":"f032ebbe7c2606f32a5a343702e4f6e4609085025b4546a48464bcaa6ab96d16"} Dec 06 05:47:59 crc kubenswrapper[4958]: E1206 05:47:59.660254 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:7d66757c0af67104f0389e851a7cc0daa44443ad202d157417bd86bbb57cc385\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-ptqzr" podUID="e6abb542-4bf4-4edf-b150-de3f6200e4de" Dec 06 05:47:59 crc kubenswrapper[4958]: I1206 05:47:59.661606 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-vwbhb" event={"ID":"621f367e-fe95-4fb5-9ce7-966981c7b13a","Type":"ContainerStarted","Data":"e9e04adccad94fcf2514d84c3f96f48140dc990e881f7025e879eaf47b4261ad"} Dec 06 05:47:59 crc kubenswrapper[4958]: E1206 05:47:59.661649 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:2a3d21728a8bfb4e64617e63e61e2d1cb70a383ea3e8f846e0c3c3c02d2b0a9d\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-nkrvh" podUID="930d98f5-bc89-466b-9876-ee5764f146f4" Dec 06 05:48:00 crc kubenswrapper[4958]: I1206 05:48:00.464450 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d0b85019-5501-4f67-a136-f7798be67039-cert\") pod \"infra-operator-controller-manager-57548d458d-wv9rd\" (UID: \"d0b85019-5501-4f67-a136-f7798be67039\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-wv9rd" Dec 06 05:48:00 crc kubenswrapper[4958]: E1206 05:48:00.464636 4958 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 06 05:48:00 crc kubenswrapper[4958]: E1206 05:48:00.464715 4958 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/d0b85019-5501-4f67-a136-f7798be67039-cert podName:d0b85019-5501-4f67-a136-f7798be67039 nodeName:}" failed. No retries permitted until 2025-12-06 05:48:04.464675248 +0000 UTC m=+1194.998446011 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d0b85019-5501-4f67-a136-f7798be67039-cert") pod "infra-operator-controller-manager-57548d458d-wv9rd" (UID: "d0b85019-5501-4f67-a136-f7798be67039") : secret "infra-operator-webhook-server-cert" not found Dec 06 05:48:00 crc kubenswrapper[4958]: E1206 05:48:00.671028 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jsjtj" podUID="59dc6399-d2b4-437a-9521-4096ed7e924f" Dec 06 05:48:00 crc kubenswrapper[4958]: E1206 05:48:00.684990 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:7d66757c0af67104f0389e851a7cc0daa44443ad202d157417bd86bbb57cc385\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-ptqzr" podUID="e6abb542-4bf4-4edf-b150-de3f6200e4de" Dec 06 05:48:00 crc kubenswrapper[4958]: E1206 05:48:00.685056 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:2a3d21728a8bfb4e64617e63e61e2d1cb70a383ea3e8f846e0c3c3c02d2b0a9d\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-nkrvh" podUID="930d98f5-bc89-466b-9876-ee5764f146f4" Dec 06 05:48:00 crc kubenswrapper[4958]: E1206 05:48:00.685094 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-dn98s" podUID="26415490-b3ca-4822-93b9-f7fb5efcf375" Dec 06 05:48:00 crc kubenswrapper[4958]: E1206 05:48:00.685127 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/test-operator-controller-manager-5854674fcc-vm7kq" 
podUID="5c330abd-909c-44eb-a7ff-7cb5398fd736" Dec 06 05:48:00 crc kubenswrapper[4958]: I1206 05:48:00.769264 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c6c9cc05-d00f-4f92-bd7e-13737952085b-cert\") pod \"openstack-baremetal-operator-controller-manager-55c85496f5nnvrk\" (UID: \"c6c9cc05-d00f-4f92-bd7e-13737952085b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-55c85496f5nnvrk" Dec 06 05:48:00 crc kubenswrapper[4958]: E1206 05:48:00.769404 4958 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 06 05:48:00 crc kubenswrapper[4958]: E1206 05:48:00.769450 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c6c9cc05-d00f-4f92-bd7e-13737952085b-cert podName:c6c9cc05-d00f-4f92-bd7e-13737952085b nodeName:}" failed. No retries permitted until 2025-12-06 05:48:04.769436474 +0000 UTC m=+1195.303207237 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c6c9cc05-d00f-4f92-bd7e-13737952085b-cert") pod "openstack-baremetal-operator-controller-manager-55c85496f5nnvrk" (UID: "c6c9cc05-d00f-4f92-bd7e-13737952085b") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 06 05:48:01 crc kubenswrapper[4958]: I1206 05:48:01.073928 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7de555d5-964f-4e85-a2b9-5dddf37e097e-webhook-certs\") pod \"openstack-operator-controller-manager-54bdf956c4-dlf7m\" (UID: \"7de555d5-964f-4e85-a2b9-5dddf37e097e\") " pod="openstack-operators/openstack-operator-controller-manager-54bdf956c4-dlf7m" Dec 06 05:48:01 crc kubenswrapper[4958]: I1206 05:48:01.073970 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7de555d5-964f-4e85-a2b9-5dddf37e097e-metrics-certs\") pod \"openstack-operator-controller-manager-54bdf956c4-dlf7m\" (UID: \"7de555d5-964f-4e85-a2b9-5dddf37e097e\") " pod="openstack-operators/openstack-operator-controller-manager-54bdf956c4-dlf7m" Dec 06 05:48:01 crc kubenswrapper[4958]: E1206 05:48:01.074103 4958 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 06 05:48:01 crc kubenswrapper[4958]: E1206 05:48:01.074137 4958 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 06 05:48:01 crc kubenswrapper[4958]: E1206 05:48:01.074153 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7de555d5-964f-4e85-a2b9-5dddf37e097e-metrics-certs podName:7de555d5-964f-4e85-a2b9-5dddf37e097e nodeName:}" failed. No retries permitted until 2025-12-06 05:48:05.074139528 +0000 UTC m=+1195.607910291 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7de555d5-964f-4e85-a2b9-5dddf37e097e-metrics-certs") pod "openstack-operator-controller-manager-54bdf956c4-dlf7m" (UID: "7de555d5-964f-4e85-a2b9-5dddf37e097e") : secret "metrics-server-cert" not found Dec 06 05:48:01 crc kubenswrapper[4958]: E1206 05:48:01.074223 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7de555d5-964f-4e85-a2b9-5dddf37e097e-webhook-certs podName:7de555d5-964f-4e85-a2b9-5dddf37e097e nodeName:}" failed. No retries permitted until 2025-12-06 05:48:05.07420612 +0000 UTC m=+1195.607976883 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/7de555d5-964f-4e85-a2b9-5dddf37e097e-webhook-certs") pod "openstack-operator-controller-manager-54bdf956c4-dlf7m" (UID: "7de555d5-964f-4e85-a2b9-5dddf37e097e") : secret "webhook-server-cert" not found Dec 06 05:48:04 crc kubenswrapper[4958]: I1206 05:48:04.525381 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d0b85019-5501-4f67-a136-f7798be67039-cert\") pod \"infra-operator-controller-manager-57548d458d-wv9rd\" (UID: \"d0b85019-5501-4f67-a136-f7798be67039\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-wv9rd" Dec 06 05:48:04 crc kubenswrapper[4958]: E1206 05:48:04.525555 4958 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 06 05:48:04 crc kubenswrapper[4958]: E1206 05:48:04.525835 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d0b85019-5501-4f67-a136-f7798be67039-cert podName:d0b85019-5501-4f67-a136-f7798be67039 nodeName:}" failed. No retries permitted until 2025-12-06 05:48:12.525820563 +0000 UTC m=+1203.059591326 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d0b85019-5501-4f67-a136-f7798be67039-cert") pod "infra-operator-controller-manager-57548d458d-wv9rd" (UID: "d0b85019-5501-4f67-a136-f7798be67039") : secret "infra-operator-webhook-server-cert" not found Dec 06 05:48:04 crc kubenswrapper[4958]: I1206 05:48:04.829331 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c6c9cc05-d00f-4f92-bd7e-13737952085b-cert\") pod \"openstack-baremetal-operator-controller-manager-55c85496f5nnvrk\" (UID: \"c6c9cc05-d00f-4f92-bd7e-13737952085b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-55c85496f5nnvrk" Dec 06 05:48:04 crc kubenswrapper[4958]: E1206 05:48:04.830713 4958 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 06 05:48:04 crc kubenswrapper[4958]: E1206 05:48:04.830788 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c6c9cc05-d00f-4f92-bd7e-13737952085b-cert podName:c6c9cc05-d00f-4f92-bd7e-13737952085b nodeName:}" failed. No retries permitted until 2025-12-06 05:48:12.830769494 +0000 UTC m=+1203.364540257 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c6c9cc05-d00f-4f92-bd7e-13737952085b-cert") pod "openstack-baremetal-operator-controller-manager-55c85496f5nnvrk" (UID: "c6c9cc05-d00f-4f92-bd7e-13737952085b") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 06 05:48:05 crc kubenswrapper[4958]: I1206 05:48:05.136919 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7de555d5-964f-4e85-a2b9-5dddf37e097e-webhook-certs\") pod \"openstack-operator-controller-manager-54bdf956c4-dlf7m\" (UID: \"7de555d5-964f-4e85-a2b9-5dddf37e097e\") " pod="openstack-operators/openstack-operator-controller-manager-54bdf956c4-dlf7m" Dec 06 05:48:05 crc kubenswrapper[4958]: I1206 05:48:05.136963 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7de555d5-964f-4e85-a2b9-5dddf37e097e-metrics-certs\") pod \"openstack-operator-controller-manager-54bdf956c4-dlf7m\" (UID: \"7de555d5-964f-4e85-a2b9-5dddf37e097e\") " pod="openstack-operators/openstack-operator-controller-manager-54bdf956c4-dlf7m" Dec 06 05:48:05 crc kubenswrapper[4958]: E1206 05:48:05.137027 4958 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 06 05:48:05 crc kubenswrapper[4958]: E1206 05:48:05.137044 4958 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 06 05:48:05 crc kubenswrapper[4958]: E1206 05:48:05.137080 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7de555d5-964f-4e85-a2b9-5dddf37e097e-webhook-certs podName:7de555d5-964f-4e85-a2b9-5dddf37e097e nodeName:}" failed. No retries permitted until 2025-12-06 05:48:13.137065481 +0000 UTC m=+1203.670836244 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/7de555d5-964f-4e85-a2b9-5dddf37e097e-webhook-certs") pod "openstack-operator-controller-manager-54bdf956c4-dlf7m" (UID: "7de555d5-964f-4e85-a2b9-5dddf37e097e") : secret "webhook-server-cert" not found Dec 06 05:48:05 crc kubenswrapper[4958]: E1206 05:48:05.137095 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7de555d5-964f-4e85-a2b9-5dddf37e097e-metrics-certs podName:7de555d5-964f-4e85-a2b9-5dddf37e097e nodeName:}" failed. No retries permitted until 2025-12-06 05:48:13.137090382 +0000 UTC m=+1203.670861145 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/7de555d5-964f-4e85-a2b9-5dddf37e097e-metrics-certs") pod "openstack-operator-controller-manager-54bdf956c4-dlf7m" (UID: "7de555d5-964f-4e85-a2b9-5dddf37e097e") : secret "metrics-server-cert" not found Dec 06 05:48:11 crc kubenswrapper[4958]: E1206 05:48:11.792095 4958 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:2e59cfbeefc3aff0bb0a6ae9ce2235129f5173c98dd5ee8dac229ad4895faea9" Dec 06 05:48:11 crc kubenswrapper[4958]: E1206 05:48:11.792924 4958 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:2e59cfbeefc3aff0bb0a6ae9ce2235129f5173c98dd5ee8dac229ad4895faea9,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mpvdg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-7c79b5df47-6vcd7_openstack-operators(6f35dbd3-cf3b-46f8-83cf-911ea6a88679): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 05:48:12 crc kubenswrapper[4958]: I1206 05:48:12.535872 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d0b85019-5501-4f67-a136-f7798be67039-cert\") pod \"infra-operator-controller-manager-57548d458d-wv9rd\" (UID: \"d0b85019-5501-4f67-a136-f7798be67039\") " 
pod="openstack-operators/infra-operator-controller-manager-57548d458d-wv9rd" Dec 06 05:48:12 crc kubenswrapper[4958]: I1206 05:48:12.554041 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d0b85019-5501-4f67-a136-f7798be67039-cert\") pod \"infra-operator-controller-manager-57548d458d-wv9rd\" (UID: \"d0b85019-5501-4f67-a136-f7798be67039\") " pod="openstack-operators/infra-operator-controller-manager-57548d458d-wv9rd" Dec 06 05:48:12 crc kubenswrapper[4958]: I1206 05:48:12.652559 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-jp6cl" Dec 06 05:48:12 crc kubenswrapper[4958]: I1206 05:48:12.661802 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-57548d458d-wv9rd" Dec 06 05:48:12 crc kubenswrapper[4958]: I1206 05:48:12.840146 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c6c9cc05-d00f-4f92-bd7e-13737952085b-cert\") pod \"openstack-baremetal-operator-controller-manager-55c85496f5nnvrk\" (UID: \"c6c9cc05-d00f-4f92-bd7e-13737952085b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-55c85496f5nnvrk" Dec 06 05:48:12 crc kubenswrapper[4958]: I1206 05:48:12.845402 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c6c9cc05-d00f-4f92-bd7e-13737952085b-cert\") pod \"openstack-baremetal-operator-controller-manager-55c85496f5nnvrk\" (UID: \"c6c9cc05-d00f-4f92-bd7e-13737952085b\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-55c85496f5nnvrk" Dec 06 05:48:12 crc kubenswrapper[4958]: I1206 05:48:12.989781 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-zsmkn" Dec 06 05:48:12 crc kubenswrapper[4958]: I1206 05:48:12.998306 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-55c85496f5nnvrk" Dec 06 05:48:13 crc kubenswrapper[4958]: I1206 05:48:13.145386 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7de555d5-964f-4e85-a2b9-5dddf37e097e-webhook-certs\") pod \"openstack-operator-controller-manager-54bdf956c4-dlf7m\" (UID: \"7de555d5-964f-4e85-a2b9-5dddf37e097e\") " pod="openstack-operators/openstack-operator-controller-manager-54bdf956c4-dlf7m" Dec 06 05:48:13 crc kubenswrapper[4958]: I1206 05:48:13.145436 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7de555d5-964f-4e85-a2b9-5dddf37e097e-metrics-certs\") pod \"openstack-operator-controller-manager-54bdf956c4-dlf7m\" (UID: \"7de555d5-964f-4e85-a2b9-5dddf37e097e\") " pod="openstack-operators/openstack-operator-controller-manager-54bdf956c4-dlf7m" Dec 06 05:48:13 crc kubenswrapper[4958]: I1206 05:48:13.148492 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7de555d5-964f-4e85-a2b9-5dddf37e097e-metrics-certs\") pod \"openstack-operator-controller-manager-54bdf956c4-dlf7m\" (UID: \"7de555d5-964f-4e85-a2b9-5dddf37e097e\") " pod="openstack-operators/openstack-operator-controller-manager-54bdf956c4-dlf7m" Dec 06 05:48:13 crc kubenswrapper[4958]: I1206 05:48:13.153135 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/7de555d5-964f-4e85-a2b9-5dddf37e097e-webhook-certs\") pod \"openstack-operator-controller-manager-54bdf956c4-dlf7m\" (UID: \"7de555d5-964f-4e85-a2b9-5dddf37e097e\") " pod="openstack-operators/openstack-operator-controller-manager-54bdf956c4-dlf7m" Dec 06 05:48:13 crc kubenswrapper[4958]: I1206 05:48:13.320717 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-7xmm4" Dec 06 05:48:13 crc kubenswrapper[4958]: I1206 05:48:13.329960 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-54bdf956c4-dlf7m" Dec 06 05:48:26 crc kubenswrapper[4958]: E1206 05:48:26.058353 4958 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670" Dec 06 05:48:26 crc kubenswrapper[4958]: E1206 05:48:26.059072 4958 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7fm6k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-697bc559fc-jk9n8_openstack-operators(d050dc99-5d0b-4a4f-974a-1b2bd5fb5a8c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 05:48:27 crc kubenswrapper[4958]: E1206 05:48:27.089914 4958 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:72ad6517987f674af0d0ae092cbb874aeae909c8b8b60188099c311762ebc8f7" Dec 06 05:48:27 crc kubenswrapper[4958]: E1206 05:48:27.090397 4958 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:72ad6517987f674af0d0ae092cbb874aeae909c8b8b60188099c311762ebc8f7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6gj7x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-7765d96ddf-sq7fz_openstack-operators(137f3e2e-f835-48ca-873c-41fe38a6d7f2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 05:48:31 crc kubenswrapper[4958]: I1206 05:48:31.030963 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-54bdf956c4-dlf7m"] Dec 06 05:48:32 crc kubenswrapper[4958]: I1206 05:48:32.961523 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-57548d458d-wv9rd"] Dec 06 05:48:33 crc kubenswrapper[4958]: I1206 05:48:33.356844 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-55c85496f5nnvrk"] Dec 06 05:48:33 crc kubenswrapper[4958]: W1206 05:48:33.854683 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7de555d5_964f_4e85_a2b9_5dddf37e097e.slice/crio-a700d21d2306b2bac25a41d9c4d721477b4ec6037183655603db021b79ca70d3 WatchSource:0}: Error finding container a700d21d2306b2bac25a41d9c4d721477b4ec6037183655603db021b79ca70d3: Status 404 returned error can't find the container with id 
a700d21d2306b2bac25a41d9c4d721477b4ec6037183655603db021b79ca70d3 Dec 06 05:48:33 crc kubenswrapper[4958]: I1206 05:48:33.944503 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57548d458d-wv9rd" event={"ID":"d0b85019-5501-4f67-a136-f7798be67039","Type":"ContainerStarted","Data":"32197ae737070609a5638a15338ea9d1bf389255f4c0166b2abb396993b52782"} Dec 06 05:48:33 crc kubenswrapper[4958]: I1206 05:48:33.945876 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-55c85496f5nnvrk" event={"ID":"c6c9cc05-d00f-4f92-bd7e-13737952085b","Type":"ContainerStarted","Data":"064f6b5d79d3da9b7454dc590f30655e3bed124e4c075f7fb950f3f0796d4afe"} Dec 06 05:48:33 crc kubenswrapper[4958]: I1206 05:48:33.947099 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-54bdf956c4-dlf7m" event={"ID":"7de555d5-964f-4e85-a2b9-5dddf37e097e","Type":"ContainerStarted","Data":"a700d21d2306b2bac25a41d9c4d721477b4ec6037183655603db021b79ca70d3"} Dec 06 05:48:34 crc kubenswrapper[4958]: I1206 05:48:34.971696 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-gnstm" event={"ID":"ebcc81d0-3595-480b-a886-1ec0e5da638d","Type":"ContainerStarted","Data":"aee1880d24d277836518e6e9ebc38e985454c1a8a69d5d0a4b7856854642c0fa"} Dec 06 05:48:34 crc kubenswrapper[4958]: I1206 05:48:34.975798 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-hx8tj" event={"ID":"15c5254b-bcb1-45a3-a94b-21995bd4a143","Type":"ContainerStarted","Data":"374bdcb61fab369e2ea835ee3a310ab308258c156232ad140ec00085bc2b0d9a"} Dec 06 05:48:34 crc kubenswrapper[4958]: I1206 05:48:34.977740 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-n2s2h" event={"ID":"1e47b3e1-d11a-4a15-8a92-24fe19661ee7","Type":"ContainerStarted","Data":"342ed7b783a964f69a474b901bd473b2a2774e7843f155e64a169b22d09e947e"} Dec 06 05:48:34 crc kubenswrapper[4958]: I1206 05:48:34.979048 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-bg69j" event={"ID":"ddaab3a6-6e32-480b-8ba8-3852feb6440f","Type":"ContainerStarted","Data":"f7e8ac1afb0159fdc1bc05e1b2dfa9f71050e1cc799dcfaecfee302afb1e3c24"} Dec 06 05:48:35 crc kubenswrapper[4958]: I1206 05:48:35.001326 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-kfk67" event={"ID":"6a934eab-3341-4e53-8317-eca91e0e9710","Type":"ContainerStarted","Data":"91ad98abf50bacf37ff6eff5af28d71e93f1b4178fb2c12225769c69fa700cd0"} Dec 06 05:48:35 crc kubenswrapper[4958]: I1206 05:48:35.008665 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-654tb" event={"ID":"f3792985-93a5-4b81-8ea2-ca63d1f659d8","Type":"ContainerStarted","Data":"71f0e378a9d0e9a2432984e07aa1403aa620409612a29efbe455caa4e48f8621"} Dec 06 05:48:35 crc kubenswrapper[4958]: I1206 05:48:35.024044 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-v2ktl" 
event={"ID":"17c0b87a-3ce4-434e-bbbc-cf06bd3c2833","Type":"ContainerStarted","Data":"8543641e0c5e617a9d85d6f695e9872169fd4d246fcdda1e6039efd5687c1e21"} Dec 06 05:48:35 crc kubenswrapper[4958]: I1206 05:48:35.035382 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-h2s2k" event={"ID":"d42b2cbe-9f6a-4d29-bb05-0588e5e4cf8d","Type":"ContainerStarted","Data":"a03275b26c43666cc25de7f8dbc1a074434461c5c84578a5ad847462b3fadfac"} Dec 06 05:48:36 crc kubenswrapper[4958]: I1206 05:48:36.052501 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-khhwt" event={"ID":"a069026c-ab13-4593-9f99-71aa6fca2ecd","Type":"ContainerStarted","Data":"de8bf0bd2183ac957569b9e6fef31d6232973f338ab29a5605a52b971834908f"} Dec 06 05:48:36 crc kubenswrapper[4958]: I1206 05:48:36.057407 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-54bdf956c4-dlf7m" event={"ID":"7de555d5-964f-4e85-a2b9-5dddf37e097e","Type":"ContainerStarted","Data":"c86157eaa3e84ac5c331eb4dd419fca25d867db9daebd200cba8ca998f3056fd"} Dec 06 05:48:36 crc kubenswrapper[4958]: I1206 05:48:36.057456 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-54bdf956c4-dlf7m" Dec 06 05:48:36 crc kubenswrapper[4958]: I1206 05:48:36.071122 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-vwbhb" event={"ID":"621f367e-fe95-4fb5-9ce7-966981c7b13a","Type":"ContainerStarted","Data":"9a4fb4bd7d0f4985122794578be9f656ee96aeba79360e486842e3386abaabc5"} Dec 06 05:48:36 crc kubenswrapper[4958]: I1206 05:48:36.085580 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-7745k" event={"ID":"1a7e7d3c-d935-469c-8296-658d9b8542dc","Type":"ContainerStarted","Data":"900f4c7e47d6568dba5b6b196704a92df0926512d8039e8d410ecdba77a6d165"} Dec 06 05:48:36 crc kubenswrapper[4958]: I1206 05:48:36.087099 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-26667" event={"ID":"f3cf2219-d4b6-43cd-8ace-2852c808fe6e","Type":"ContainerStarted","Data":"639d6de8c42278ea6da07ce7436abfa8a14c1843273af43b422ccdc988ab4916"} Dec 06 05:48:36 crc kubenswrapper[4958]: I1206 05:48:36.091876 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-54bdf956c4-dlf7m" podStartSLOduration=39.091863295 podStartE2EDuration="39.091863295s" podCreationTimestamp="2025-12-06 05:47:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:48:36.090578291 +0000 UTC m=+1226.624349054" watchObservedRunningTime="2025-12-06 05:48:36.091863295 +0000 UTC m=+1226.625634058" Dec 06 05:48:38 crc kubenswrapper[4958]: I1206 05:48:38.104516 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jsjtj" event={"ID":"59dc6399-d2b4-437a-9521-4096ed7e924f","Type":"ContainerStarted","Data":"de3eb9abb593281979f0e8e58242d62807832a1a9e80ba1029e8b542d81d66f9"} Dec 06 05:48:38 crc kubenswrapper[4958]: I1206 05:48:38.106625 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-dn98s" event={"ID":"26415490-b3ca-4822-93b9-f7fb5efcf375","Type":"ContainerStarted","Data":"517122eb52a407ef10d5069b90329fdb559451fec32588edfb7e55d3da3b9b0a"} Dec 06 05:48:38 crc kubenswrapper[4958]: I1206 05:48:38.108514 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-ptqzr" event={"ID":"e6abb542-4bf4-4edf-b150-de3f6200e4de","Type":"ContainerStarted","Data":"c24992614d418e095316e12c0bfdc361e03510f307e5630b19371cb2932e207f"} Dec 06 05:48:38 crc kubenswrapper[4958]: I1206 05:48:38.110603 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-nkrvh" event={"ID":"930d98f5-bc89-466b-9876-ee5764f146f4","Type":"ContainerStarted","Data":"aef3f392816217c3fb9d072c237976224ae57ecb6dd7130c8c1c8486880f0c0e"} Dec 06 05:48:38 crc kubenswrapper[4958]: I1206 05:48:38.136595 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-jsjtj" podStartSLOduration=5.34280975 podStartE2EDuration="41.136572801s" podCreationTimestamp="2025-12-06 05:47:57 +0000 UTC" firstStartedPulling="2025-12-06 05:47:58.670776962 +0000 UTC m=+1189.204547725" lastFinishedPulling="2025-12-06 05:48:34.464540013 +0000 UTC m=+1224.998310776" observedRunningTime="2025-12-06 05:48:38.121388594 +0000 UTC m=+1228.655159377" watchObservedRunningTime="2025-12-06 05:48:38.136572801 +0000 UTC m=+1228.670343584" Dec 06 05:48:39 crc kubenswrapper[4958]: I1206 05:48:39.866003 4958 patch_prober.go:28] interesting pod/machine-config-daemon-5ktnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 05:48:39 crc kubenswrapper[4958]: I1206 05:48:39.866367 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 05:48:40 crc kubenswrapper[4958]: I1206 05:48:40.129144 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-vm7kq" event={"ID":"5c330abd-909c-44eb-a7ff-7cb5398fd736","Type":"ContainerStarted","Data":"2137e1e621913084716eac0ff0d8fece46e75cdd135207633bfbc9b46c2a40f3"} Dec 06 05:48:43 crc kubenswrapper[4958]: I1206 05:48:43.337260 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-54bdf956c4-dlf7m" Dec 06 05:48:49 crc kubenswrapper[4958]: E1206 05:48:49.395083 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-jk9n8" podUID="d050dc99-5d0b-4a4f-974a-1b2bd5fb5a8c" Dec 06 05:48:49 crc kubenswrapper[4958]: E1206 05:48:49.414117 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-sq7fz" podUID="137f3e2e-f835-48ca-873c-41fe38a6d7f2" Dec 06 05:48:49 crc kubenswrapper[4958]: E1206 05:48:49.492187 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-6vcd7" podUID="6f35dbd3-cf3b-46f8-83cf-911ea6a88679" Dec 06 05:48:50 crc kubenswrapper[4958]: I1206 05:48:50.222203 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-v2ktl" event={"ID":"17c0b87a-3ce4-434e-bbbc-cf06bd3c2833","Type":"ContainerStarted","Data":"3fc0714f12d00d42669aa588220dd1586267e605dd073b4ca02f83657c7a0aa2"} Dec 06 05:48:50 crc kubenswrapper[4958]: I1206 05:48:50.223614 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-v2ktl" Dec 06 05:48:50 crc kubenswrapper[4958]: I1206 05:48:50.226303 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-v2ktl" Dec 06 05:48:50 crc kubenswrapper[4958]: I1206 05:48:50.226387 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-bg69j" event={"ID":"ddaab3a6-6e32-480b-8ba8-3852feb6440f","Type":"ContainerStarted","Data":"4ce1da664d3cc5c6599fdea3ef90c701798dec234539150a1f3178a4fe95aa35"} Dec 06 05:48:50 crc kubenswrapper[4958]: I1206 05:48:50.227195 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-bg69j" Dec 06 05:48:50 crc kubenswrapper[4958]: I1206 05:48:50.228410 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-vwbhb" event={"ID":"621f367e-fe95-4fb5-9ce7-966981c7b13a","Type":"ContainerStarted","Data":"31982d91cbdd4d599bbb1bc3fb22e967f629ddd616f8f2a832f4f8ce77adeb56"} Dec 06 05:48:50 crc kubenswrapper[4958]: I1206 05:48:50.229001 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-vwbhb" Dec 06 05:48:50 crc kubenswrapper[4958]: I1206 05:48:50.230486 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-bg69j" Dec 06 05:48:50 crc kubenswrapper[4958]: I1206 05:48:50.230596 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-sq7fz" event={"ID":"137f3e2e-f835-48ca-873c-41fe38a6d7f2","Type":"ContainerStarted","Data":"f880f6d50cc74eef51ae864219ea000d8c8dea9f37238a52f4ed11d68f19a91d"} Dec 06 05:48:50 crc kubenswrapper[4958]: I1206 05:48:50.234235 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57548d458d-wv9rd" event={"ID":"d0b85019-5501-4f67-a136-f7798be67039","Type":"ContainerStarted","Data":"a32addc253de60d853f359b9b8d03f8f4d181f49ca0927231ab64db530cbadfd"} Dec 06 05:48:50 crc kubenswrapper[4958]: I1206 05:48:50.234261 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57548d458d-wv9rd" 
event={"ID":"d0b85019-5501-4f67-a136-f7798be67039","Type":"ContainerStarted","Data":"2f3c37ce36cc54a3250ec03c34996ff8f16417e7ab390ef630cd5c19d0da9ff4"} Dec 06 05:48:50 crc kubenswrapper[4958]: I1206 05:48:50.234651 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-57548d458d-wv9rd" Dec 06 05:48:50 crc kubenswrapper[4958]: I1206 05:48:50.235963 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-vwbhb" Dec 06 05:48:50 crc kubenswrapper[4958]: I1206 05:48:50.237556 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-jk9n8" event={"ID":"d050dc99-5d0b-4a4f-974a-1b2bd5fb5a8c","Type":"ContainerStarted","Data":"50818594cc65a11968c74b49793f878d4950582e7795bba05f91d9e47334c618"} Dec 06 05:48:50 crc kubenswrapper[4958]: I1206 05:48:50.243960 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-h2s2k" event={"ID":"d42b2cbe-9f6a-4d29-bb05-0588e5e4cf8d","Type":"ContainerStarted","Data":"ea9246aba3f19a4fcd0097c729adfc8fe746494015de3d87c65198055d4a34b3"} Dec 06 05:48:50 crc kubenswrapper[4958]: I1206 05:48:50.245366 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-h2s2k" Dec 06 05:48:50 crc kubenswrapper[4958]: I1206 05:48:50.254793 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-h2s2k" Dec 06 05:48:50 crc kubenswrapper[4958]: I1206 05:48:50.259408 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-khhwt" event={"ID":"a069026c-ab13-4593-9f99-71aa6fca2ecd","Type":"ContainerStarted","Data":"a271448eb448cfe4665e8865120c3d89b71ceb9e24bb4ea4a87f4aa285df2505"} Dec 06 05:48:50 crc kubenswrapper[4958]: I1206 05:48:50.260401 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-khhwt" Dec 06 05:48:50 crc kubenswrapper[4958]: I1206 05:48:50.266715 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-khhwt" Dec 06 05:48:50 crc kubenswrapper[4958]: I1206 05:48:50.273707 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-v2ktl" podStartSLOduration=3.349418805 podStartE2EDuration="54.273686383s" podCreationTimestamp="2025-12-06 05:47:56 +0000 UTC" firstStartedPulling="2025-12-06 05:47:57.998369285 +0000 UTC m=+1188.532140048" lastFinishedPulling="2025-12-06 05:48:48.922636853 +0000 UTC m=+1239.456407626" observedRunningTime="2025-12-06 05:48:50.26200396 +0000 UTC m=+1240.795774723" watchObservedRunningTime="2025-12-06 05:48:50.273686383 +0000 UTC m=+1240.807457146" Dec 06 05:48:50 crc kubenswrapper[4958]: I1206 05:48:50.286654 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-654tb" event={"ID":"f3792985-93a5-4b81-8ea2-ca63d1f659d8","Type":"ContainerStarted","Data":"10ff4dfdb25a2d3e8fff6f8cdc70fd2ab683804a18b83c2f6f5beef26a5ed0bb"} Dec 06 05:48:50 crc kubenswrapper[4958]: I1206 05:48:50.287620 
4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-78f8948974-654tb" Dec 06 05:48:50 crc kubenswrapper[4958]: I1206 05:48:50.292847 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-78f8948974-654tb" Dec 06 05:48:50 crc kubenswrapper[4958]: I1206 05:48:50.309221 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-dn98s" event={"ID":"26415490-b3ca-4822-93b9-f7fb5efcf375","Type":"ContainerStarted","Data":"ab33f0c6d7af6dfad1cb8a22595c169ff0b9c5782c53538899acc0390af1c93b"} Dec 06 05:48:50 crc kubenswrapper[4958]: I1206 05:48:50.309693 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-dn98s" Dec 06 05:48:50 crc kubenswrapper[4958]: I1206 05:48:50.313780 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-dn98s" Dec 06 05:48:50 crc kubenswrapper[4958]: I1206 05:48:50.330422 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-769dc69bc-vwbhb" podStartSLOduration=3.225138868 podStartE2EDuration="53.33040593s" podCreationTimestamp="2025-12-06 05:47:57 +0000 UTC" firstStartedPulling="2025-12-06 05:47:58.799844531 +0000 UTC m=+1189.333615294" lastFinishedPulling="2025-12-06 05:48:48.905111593 +0000 UTC m=+1239.438882356" observedRunningTime="2025-12-06 05:48:50.32815377 +0000 UTC m=+1240.861924533" watchObservedRunningTime="2025-12-06 05:48:50.33040593 +0000 UTC m=+1240.864176693" Dec 06 05:48:50 crc kubenswrapper[4958]: I1206 05:48:50.361091 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-kfk67" event={"ID":"6a934eab-3341-4e53-8317-eca91e0e9710","Type":"ContainerStarted","Data":"fa491c1edc26f0415b11ad00315065263bf362ede98a99d22efbf7a16080b8d8"} Dec 06 05:48:50 crc kubenswrapper[4958]: I1206 05:48:50.362144 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-kfk67" Dec 06 05:48:50 crc kubenswrapper[4958]: I1206 05:48:50.373819 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-kfk67" Dec 06 05:48:50 crc kubenswrapper[4958]: I1206 05:48:50.404733 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-77987cd8cd-bg69j" podStartSLOduration=3.682175853 podStartE2EDuration="54.404712778s" podCreationTimestamp="2025-12-06 05:47:56 +0000 UTC" firstStartedPulling="2025-12-06 05:47:58.122536852 +0000 UTC m=+1188.656307615" lastFinishedPulling="2025-12-06 05:48:48.845073777 +0000 UTC m=+1239.378844540" observedRunningTime="2025-12-06 05:48:50.403006563 +0000 UTC m=+1240.936777326" watchObservedRunningTime="2025-12-06 05:48:50.404712778 +0000 UTC m=+1240.938483551" Dec 06 05:48:50 crc kubenswrapper[4958]: I1206 05:48:50.410101 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-n2s2h" 
event={"ID":"1e47b3e1-d11a-4a15-8a92-24fe19661ee7","Type":"ContainerStarted","Data":"d0035a0ad737d0e5844bea77d1e6c59d47fd7abfe72c39322412fe5db2815015"} Dec 06 05:48:50 crc kubenswrapper[4958]: I1206 05:48:50.410953 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-n2s2h" Dec 06 05:48:50 crc kubenswrapper[4958]: I1206 05:48:50.418761 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-n2s2h" Dec 06 05:48:50 crc kubenswrapper[4958]: I1206 05:48:50.437073 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-h2s2k" podStartSLOduration=4.387233524 podStartE2EDuration="54.437059163s" podCreationTimestamp="2025-12-06 05:47:56 +0000 UTC" firstStartedPulling="2025-12-06 05:47:58.316751005 +0000 UTC m=+1188.850521768" lastFinishedPulling="2025-12-06 05:48:48.366576644 +0000 UTC m=+1238.900347407" observedRunningTime="2025-12-06 05:48:50.435530153 +0000 UTC m=+1240.969300916" watchObservedRunningTime="2025-12-06 05:48:50.437059163 +0000 UTC m=+1240.970829926" Dec 06 05:48:50 crc kubenswrapper[4958]: I1206 05:48:50.437784 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-55c85496f5nnvrk" event={"ID":"c6c9cc05-d00f-4f92-bd7e-13737952085b","Type":"ContainerStarted","Data":"3add99095a81560f19586aa838f793896dd329b248e3bddfe0c13075644c6fb3"} Dec 06 05:48:50 crc kubenswrapper[4958]: I1206 05:48:50.437830 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-55c85496f5nnvrk" event={"ID":"c6c9cc05-d00f-4f92-bd7e-13737952085b","Type":"ContainerStarted","Data":"2309fca09685b86547b933e3ba7de627b060d7e6905a9dc76c87518ee0cca045"} Dec 06 05:48:50 crc kubenswrapper[4958]: I1206 05:48:50.438534 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-55c85496f5nnvrk" Dec 06 05:48:50 crc kubenswrapper[4958]: I1206 05:48:50.466130 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-6vcd7" event={"ID":"6f35dbd3-cf3b-46f8-83cf-911ea6a88679","Type":"ContainerStarted","Data":"43eb2735db9a1f2957739210d3716c6120662748667774292986e7a553fd0e93"} Dec 06 05:48:50 crc kubenswrapper[4958]: I1206 05:48:50.483773 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-nkrvh" event={"ID":"930d98f5-bc89-466b-9876-ee5764f146f4","Type":"ContainerStarted","Data":"bdc58e2f29058116545d7c67c6f597ca85ef9fdefa512bdb23b8b55e6e481bf6"} Dec 06 05:48:50 crc kubenswrapper[4958]: I1206 05:48:50.484320 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-nkrvh" Dec 06 05:48:50 crc kubenswrapper[4958]: I1206 05:48:50.497325 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-nkrvh" Dec 06 05:48:50 crc kubenswrapper[4958]: I1206 05:48:50.499316 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-78b4bc895b-khhwt" podStartSLOduration=3.380283491 
podStartE2EDuration="54.499293689s" podCreationTimestamp="2025-12-06 05:47:56 +0000 UTC" firstStartedPulling="2025-12-06 05:47:57.727836277 +0000 UTC m=+1188.261607050" lastFinishedPulling="2025-12-06 05:48:48.846846455 +0000 UTC m=+1239.380617248" observedRunningTime="2025-12-06 05:48:50.493751631 +0000 UTC m=+1241.027522384" watchObservedRunningTime="2025-12-06 05:48:50.499293689 +0000 UTC m=+1241.033064452" Dec 06 05:48:50 crc kubenswrapper[4958]: I1206 05:48:50.502012 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-gnstm" event={"ID":"ebcc81d0-3595-480b-a886-1ec0e5da638d","Type":"ContainerStarted","Data":"9fa585b137f22e1969ae3e6a7ece8986399ce22f71dd55efdb5de1d1bb320fad"} Dec 06 05:48:50 crc kubenswrapper[4958]: I1206 05:48:50.502712 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-gnstm" Dec 06 05:48:50 crc kubenswrapper[4958]: I1206 05:48:50.519442 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-gnstm" Dec 06 05:48:50 crc kubenswrapper[4958]: I1206 05:48:50.527862 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-7745k" event={"ID":"1a7e7d3c-d935-469c-8296-658d9b8542dc","Type":"ContainerStarted","Data":"844d7a5e001f9c2d351dc6bc2279df057440a41044ef22af3ecb1750b0361fa5"} Dec 06 05:48:50 crc kubenswrapper[4958]: I1206 05:48:50.528856 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-7745k" Dec 06 05:48:50 crc kubenswrapper[4958]: I1206 05:48:50.555367 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-vm7kq" event={"ID":"5c330abd-909c-44eb-a7ff-7cb5398fd736","Type":"ContainerStarted","Data":"30b3300598dfc246fbc341e2aecf86872832b8734a4474172cc98cc2f2c7d8ba"} Dec 06 05:48:50 crc kubenswrapper[4958]: I1206 05:48:50.555994 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-7745k" Dec 06 05:48:50 crc kubenswrapper[4958]: I1206 05:48:50.556159 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5854674fcc-vm7kq" Dec 06 05:48:50 crc kubenswrapper[4958]: I1206 05:48:50.568251 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5854674fcc-vm7kq" Dec 06 05:48:50 crc kubenswrapper[4958]: I1206 05:48:50.571180 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-57548d458d-wv9rd" podStartSLOduration=40.301670328 podStartE2EDuration="54.571150732s" podCreationTimestamp="2025-12-06 05:47:56 +0000 UTC" firstStartedPulling="2025-12-06 05:48:33.877033301 +0000 UTC m=+1224.410804104" lastFinishedPulling="2025-12-06 05:48:48.146513745 +0000 UTC m=+1238.680284508" observedRunningTime="2025-12-06 05:48:50.522697036 +0000 UTC m=+1241.056467819" watchObservedRunningTime="2025-12-06 05:48:50.571150732 +0000 UTC m=+1241.104921495" Dec 06 05:48:50 crc kubenswrapper[4958]: I1206 05:48:50.571862 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-dn98s" podStartSLOduration=4.30055425 podStartE2EDuration="54.571854011s" podCreationTimestamp="2025-12-06 05:47:56 +0000 UTC" firstStartedPulling="2025-12-06 05:47:58.652144303 +0000 UTC m=+1189.185915066" lastFinishedPulling="2025-12-06 05:48:48.923444044 +0000 UTC m=+1239.457214827" observedRunningTime="2025-12-06 05:48:50.5557687 +0000 UTC m=+1241.089539463" watchObservedRunningTime="2025-12-06 05:48:50.571854011 +0000 UTC m=+1241.105624774" Dec 06 05:48:50 crc kubenswrapper[4958]: I1206 05:48:50.572639 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-ptqzr" event={"ID":"e6abb542-4bf4-4edf-b150-de3f6200e4de","Type":"ContainerStarted","Data":"5091053ea182420427a1cc0784985d648b72aea0767700fa2ef5879f838ea989"} Dec 06 05:48:50 crc kubenswrapper[4958]: I1206 05:48:50.573109 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-ptqzr" Dec 06 05:48:50 crc kubenswrapper[4958]: I1206 05:48:50.585913 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-ptqzr" Dec 06 05:48:50 crc kubenswrapper[4958]: I1206 05:48:50.598862 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-hx8tj" event={"ID":"15c5254b-bcb1-45a3-a94b-21995bd4a143","Type":"ContainerStarted","Data":"16be26374cffee1c2a2b9fc3cd812ec44578541ca238472422e6551a9b009b56"} Dec 06 05:48:50 crc kubenswrapper[4958]: I1206 05:48:50.600249 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-hx8tj" Dec 06 05:48:50 crc kubenswrapper[4958]: I1206 05:48:50.610238 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-gnstm" podStartSLOduration=4.207573286 podStartE2EDuration="54.610218097s" podCreationTimestamp="2025-12-06 05:47:56 +0000 UTC" firstStartedPulling="2025-12-06 05:47:58.520916696 +0000 UTC m=+1189.054687459" lastFinishedPulling="2025-12-06 05:48:48.923561467 +0000 UTC m=+1239.457332270" observedRunningTime="2025-12-06 05:48:50.586758589 +0000 UTC m=+1241.120529352" watchObservedRunningTime="2025-12-06 05:48:50.610218097 +0000 UTC m=+1241.143988860" Dec 06 05:48:50 crc kubenswrapper[4958]: I1206 05:48:50.612887 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-78f8948974-654tb" podStartSLOduration=4.201324829 podStartE2EDuration="54.612877618s" podCreationTimestamp="2025-12-06 05:47:56 +0000 UTC" firstStartedPulling="2025-12-06 05:47:58.554541117 +0000 UTC m=+1189.088311880" lastFinishedPulling="2025-12-06 05:48:48.966093896 +0000 UTC m=+1239.499864669" observedRunningTime="2025-12-06 05:48:50.610868015 +0000 UTC m=+1241.144638778" watchObservedRunningTime="2025-12-06 05:48:50.612877618 +0000 UTC m=+1241.146648381" Dec 06 05:48:50 crc kubenswrapper[4958]: I1206 05:48:50.617589 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-hx8tj" Dec 06 05:48:50 crc kubenswrapper[4958]: I1206 05:48:50.637914 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/octavia-operator-controller-manager-998648c74-26667" event={"ID":"f3cf2219-d4b6-43cd-8ace-2852c808fe6e","Type":"ContainerStarted","Data":"2d8c36fd22c40846861edca84acf5a0f7fa733df7fb12eee5e375a55991f1b43"} Dec 06 05:48:50 crc kubenswrapper[4958]: I1206 05:48:50.639249 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-998648c74-26667" Dec 06 05:48:50 crc kubenswrapper[4958]: I1206 05:48:50.660226 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-998648c74-26667" Dec 06 05:48:50 crc kubenswrapper[4958]: I1206 05:48:50.710550 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-n2s2h" podStartSLOduration=4.922357306 podStartE2EDuration="54.710532391s" podCreationTimestamp="2025-12-06 05:47:56 +0000 UTC" firstStartedPulling="2025-12-06 05:47:58.356373718 +0000 UTC m=+1188.890144471" lastFinishedPulling="2025-12-06 05:48:48.144548783 +0000 UTC m=+1238.678319556" observedRunningTime="2025-12-06 05:48:50.639627604 +0000 UTC m=+1241.173398377" watchObservedRunningTime="2025-12-06 05:48:50.710532391 +0000 UTC m=+1241.244303154" Dec 06 05:48:50 crc kubenswrapper[4958]: I1206 05:48:50.721366 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-859b6ccc6-kfk67" podStartSLOduration=4.384586309 podStartE2EDuration="54.72134844s" podCreationTimestamp="2025-12-06 05:47:56 +0000 UTC" firstStartedPulling="2025-12-06 05:47:57.915581107 +0000 UTC m=+1188.449351870" lastFinishedPulling="2025-12-06 05:48:48.252343238 +0000 UTC m=+1238.786114001" observedRunningTime="2025-12-06 05:48:50.710197763 +0000 UTC m=+1241.243968526" watchObservedRunningTime="2025-12-06 05:48:50.72134844 +0000 UTC m=+1241.255119203" Dec 06 05:48:50 crc kubenswrapper[4958]: I1206 05:48:50.758862 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-55c85496f5nnvrk" podStartSLOduration=40.562820254 podStartE2EDuration="54.758846824s" podCreationTimestamp="2025-12-06 05:47:56 +0000 UTC" firstStartedPulling="2025-12-06 05:48:33.877224566 +0000 UTC m=+1224.410995369" lastFinishedPulling="2025-12-06 05:48:48.073251186 +0000 UTC m=+1238.607021939" observedRunningTime="2025-12-06 05:48:50.752275379 +0000 UTC m=+1241.286046132" watchObservedRunningTime="2025-12-06 05:48:50.758846824 +0000 UTC m=+1241.292617587" Dec 06 05:48:50 crc kubenswrapper[4958]: I1206 05:48:50.908593 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-5f8c65bbfc-nkrvh" podStartSLOduration=4.648095249 podStartE2EDuration="54.9085778s" podCreationTimestamp="2025-12-06 05:47:56 +0000 UTC" firstStartedPulling="2025-12-06 05:47:58.663088266 +0000 UTC m=+1189.196859029" lastFinishedPulling="2025-12-06 05:48:48.923570797 +0000 UTC m=+1239.457341580" observedRunningTime="2025-12-06 05:48:50.869798523 +0000 UTC m=+1241.403569286" watchObservedRunningTime="2025-12-06 05:48:50.9085778 +0000 UTC m=+1241.442348563" Dec 06 05:48:50 crc kubenswrapper[4958]: I1206 05:48:50.908755 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-998648c74-26667" podStartSLOduration=4.529416019 
podStartE2EDuration="54.908751435s" podCreationTimestamp="2025-12-06 05:47:56 +0000 UTC" firstStartedPulling="2025-12-06 05:47:58.544159489 +0000 UTC m=+1189.077930252" lastFinishedPulling="2025-12-06 05:48:48.923494885 +0000 UTC m=+1239.457265668" observedRunningTime="2025-12-06 05:48:50.898216253 +0000 UTC m=+1241.431987006" watchObservedRunningTime="2025-12-06 05:48:50.908751435 +0000 UTC m=+1241.442522198" Dec 06 05:48:50 crc kubenswrapper[4958]: I1206 05:48:50.933389 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5854674fcc-vm7kq" podStartSLOduration=4.790185231 podStartE2EDuration="54.933371164s" podCreationTimestamp="2025-12-06 05:47:56 +0000 UTC" firstStartedPulling="2025-12-06 05:47:58.666064576 +0000 UTC m=+1189.199835339" lastFinishedPulling="2025-12-06 05:48:48.809250489 +0000 UTC m=+1239.343021272" observedRunningTime="2025-12-06 05:48:50.927740223 +0000 UTC m=+1241.461510986" watchObservedRunningTime="2025-12-06 05:48:50.933371164 +0000 UTC m=+1241.467141927" Dec 06 05:48:50 crc kubenswrapper[4958]: I1206 05:48:50.960825 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-56bbcc9d85-7745k" podStartSLOduration=4.587461294 podStartE2EDuration="54.960804797s" podCreationTimestamp="2025-12-06 05:47:56 +0000 UTC" firstStartedPulling="2025-12-06 05:47:58.352777251 +0000 UTC m=+1188.886548024" lastFinishedPulling="2025-12-06 05:48:48.726120734 +0000 UTC m=+1239.259891527" observedRunningTime="2025-12-06 05:48:50.959617776 +0000 UTC m=+1241.493388539" watchObservedRunningTime="2025-12-06 05:48:50.960804797 +0000 UTC m=+1241.494575560" Dec 06 05:48:50 crc kubenswrapper[4958]: I1206 05:48:50.985754 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-6c548fd776-hx8tj" podStartSLOduration=4.090523466 podStartE2EDuration="54.985739885s" podCreationTimestamp="2025-12-06 05:47:56 +0000 UTC" firstStartedPulling="2025-12-06 05:47:58.024181627 +0000 UTC m=+1188.557952390" lastFinishedPulling="2025-12-06 05:48:48.919398036 +0000 UTC m=+1239.453168809" observedRunningTime="2025-12-06 05:48:50.982422846 +0000 UTC m=+1241.516193609" watchObservedRunningTime="2025-12-06 05:48:50.985739885 +0000 UTC m=+1241.519510648" Dec 06 05:48:51 crc kubenswrapper[4958]: I1206 05:48:51.020212 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-76cc84c6bb-ptqzr" podStartSLOduration=4.875252817 podStartE2EDuration="55.020197017s" podCreationTimestamp="2025-12-06 05:47:56 +0000 UTC" firstStartedPulling="2025-12-06 05:47:58.664623727 +0000 UTC m=+1189.198394490" lastFinishedPulling="2025-12-06 05:48:48.809567917 +0000 UTC m=+1239.343338690" observedRunningTime="2025-12-06 05:48:51.013416615 +0000 UTC m=+1241.547187378" watchObservedRunningTime="2025-12-06 05:48:51.020197017 +0000 UTC m=+1241.553967780" Dec 06 05:48:51 crc kubenswrapper[4958]: I1206 05:48:51.661349 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-6vcd7" event={"ID":"6f35dbd3-cf3b-46f8-83cf-911ea6a88679","Type":"ContainerStarted","Data":"4845b94214628c842a103c6b3f825fa255bf647f4960931664cdb331fcce547d"} Dec 06 05:48:51 crc kubenswrapper[4958]: I1206 05:48:51.661465 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-6vcd7" Dec 06 05:48:51 crc kubenswrapper[4958]: I1206 05:48:51.664630 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-sq7fz" event={"ID":"137f3e2e-f835-48ca-873c-41fe38a6d7f2","Type":"ContainerStarted","Data":"9d2bfa42149bd5b3ea68f78ad41bc92320aed852a34c24e1b2cdee790ddcc4c8"} Dec 06 05:48:51 crc kubenswrapper[4958]: I1206 05:48:51.664875 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-sq7fz" Dec 06 05:48:51 crc kubenswrapper[4958]: I1206 05:48:51.666905 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-jk9n8" event={"ID":"d050dc99-5d0b-4a4f-974a-1b2bd5fb5a8c","Type":"ContainerStarted","Data":"39110c2ee6aa15c3c65ec924038697ecb1677bd18ad152b5389a427948332626"} Dec 06 05:48:51 crc kubenswrapper[4958]: I1206 05:48:51.703074 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-6vcd7" podStartSLOduration=3.051806886 podStartE2EDuration="55.703051408s" podCreationTimestamp="2025-12-06 05:47:56 +0000 UTC" firstStartedPulling="2025-12-06 05:47:58.361276219 +0000 UTC m=+1188.895046982" lastFinishedPulling="2025-12-06 05:48:51.012520741 +0000 UTC m=+1241.546291504" observedRunningTime="2025-12-06 05:48:51.699051021 +0000 UTC m=+1242.232821794" watchObservedRunningTime="2025-12-06 05:48:51.703051408 +0000 UTC m=+1242.236822181" Dec 06 05:48:51 crc kubenswrapper[4958]: I1206 05:48:51.732955 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-jk9n8" podStartSLOduration=3.492467063 podStartE2EDuration="55.732932237s" podCreationTimestamp="2025-12-06 05:47:56 +0000 UTC" firstStartedPulling="2025-12-06 05:47:58.546278176 +0000 UTC m=+1189.080048939" lastFinishedPulling="2025-12-06 05:48:50.78674335 +0000 UTC m=+1241.320514113" observedRunningTime="2025-12-06 05:48:51.721365178 +0000 UTC m=+1242.255135981" watchObservedRunningTime="2025-12-06 05:48:51.732932237 +0000 UTC m=+1242.266703010" Dec 06 05:48:51 crc kubenswrapper[4958]: I1206 05:48:51.741605 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-sq7fz" podStartSLOduration=3.308798261 podStartE2EDuration="55.741586618s" podCreationTimestamp="2025-12-06 05:47:56 +0000 UTC" firstStartedPulling="2025-12-06 05:47:58.354078656 +0000 UTC m=+1188.887849419" lastFinishedPulling="2025-12-06 05:48:50.786867013 +0000 UTC m=+1241.320637776" observedRunningTime="2025-12-06 05:48:51.739940734 +0000 UTC m=+1242.273711507" watchObservedRunningTime="2025-12-06 05:48:51.741586618 +0000 UTC m=+1242.275357391" Dec 06 05:48:52 crc kubenswrapper[4958]: I1206 05:48:52.677214 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-jk9n8" Dec 06 05:48:57 crc kubenswrapper[4958]: I1206 05:48:57.102113 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-sq7fz" Dec 06 05:48:57 crc kubenswrapper[4958]: I1206 05:48:57.264323 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/manila-operator-controller-manager-7c79b5df47-6vcd7" Dec 06 05:48:57 crc kubenswrapper[4958]: I1206 05:48:57.335344 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-jk9n8" Dec 06 05:49:02 crc kubenswrapper[4958]: I1206 05:49:02.668129 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-57548d458d-wv9rd" Dec 06 05:49:03 crc kubenswrapper[4958]: I1206 05:49:03.008050 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-55c85496f5nnvrk" Dec 06 05:49:09 crc kubenswrapper[4958]: I1206 05:49:09.866111 4958 patch_prober.go:28] interesting pod/machine-config-daemon-5ktnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 05:49:09 crc kubenswrapper[4958]: I1206 05:49:09.867395 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 05:49:10 crc kubenswrapper[4958]: I1206 05:49:10.239611 4958 scope.go:117] "RemoveContainer" containerID="435e4d068ff07544448d63173a43f20b6334833cd845dab1d36116ff51cb840e" Dec 06 05:49:10 crc kubenswrapper[4958]: I1206 05:49:10.260820 4958 scope.go:117] "RemoveContainer" containerID="d7f2ecaafde73e75c955672f9bf666232a1b361434c578b5a62af4cc10981654" Dec 06 05:49:10 crc kubenswrapper[4958]: I1206 05:49:10.286485 4958 scope.go:117] "RemoveContainer" containerID="2317a91a98bccefc26f5566a42dcf6887c1ffe2f5bdafbed98a1d9859fe84017" Dec 06 05:49:19 crc kubenswrapper[4958]: I1206 05:49:19.982971 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8468885bfc-kqpbq"] Dec 06 05:49:19 crc kubenswrapper[4958]: I1206 05:49:19.984953 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8468885bfc-kqpbq" Dec 06 05:49:19 crc kubenswrapper[4958]: I1206 05:49:19.987249 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Dec 06 05:49:19 crc kubenswrapper[4958]: I1206 05:49:19.987552 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-nkcnp" Dec 06 05:49:19 crc kubenswrapper[4958]: I1206 05:49:19.987585 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Dec 06 05:49:19 crc kubenswrapper[4958]: I1206 05:49:19.988699 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Dec 06 05:49:20 crc kubenswrapper[4958]: I1206 05:49:20.042869 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8468885bfc-kqpbq"] Dec 06 05:49:20 crc kubenswrapper[4958]: I1206 05:49:20.070209 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-545d49fd5c-sv9vr"] Dec 06 05:49:20 crc kubenswrapper[4958]: I1206 05:49:20.071371 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-545d49fd5c-sv9vr" Dec 06 05:49:20 crc kubenswrapper[4958]: I1206 05:49:20.073300 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Dec 06 05:49:20 crc kubenswrapper[4958]: I1206 05:49:20.080321 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-545d49fd5c-sv9vr"] Dec 06 05:49:20 crc kubenswrapper[4958]: I1206 05:49:20.159112 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7z45s\" (UniqueName: \"kubernetes.io/projected/d945d51a-1515-419b-9fe8-1303b1e56ef4-kube-api-access-7z45s\") pod \"dnsmasq-dns-8468885bfc-kqpbq\" (UID: \"d945d51a-1515-419b-9fe8-1303b1e56ef4\") " pod="openstack/dnsmasq-dns-8468885bfc-kqpbq" Dec 06 05:49:20 crc kubenswrapper[4958]: I1206 05:49:20.159177 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d945d51a-1515-419b-9fe8-1303b1e56ef4-config\") pod \"dnsmasq-dns-8468885bfc-kqpbq\" (UID: \"d945d51a-1515-419b-9fe8-1303b1e56ef4\") " pod="openstack/dnsmasq-dns-8468885bfc-kqpbq" Dec 06 05:49:20 crc kubenswrapper[4958]: I1206 05:49:20.260446 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6cxv\" (UniqueName: \"kubernetes.io/projected/06826d6b-2bdb-47fa-8239-6ffe454d7ca2-kube-api-access-d6cxv\") pod \"dnsmasq-dns-545d49fd5c-sv9vr\" (UID: \"06826d6b-2bdb-47fa-8239-6ffe454d7ca2\") " pod="openstack/dnsmasq-dns-545d49fd5c-sv9vr" Dec 06 05:49:20 crc kubenswrapper[4958]: I1206 05:49:20.260557 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7z45s\" (UniqueName: \"kubernetes.io/projected/d945d51a-1515-419b-9fe8-1303b1e56ef4-kube-api-access-7z45s\") pod \"dnsmasq-dns-8468885bfc-kqpbq\" (UID: \"d945d51a-1515-419b-9fe8-1303b1e56ef4\") " pod="openstack/dnsmasq-dns-8468885bfc-kqpbq" Dec 06 05:49:20 crc kubenswrapper[4958]: I1206 05:49:20.260595 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d945d51a-1515-419b-9fe8-1303b1e56ef4-config\") pod \"dnsmasq-dns-8468885bfc-kqpbq\" (UID: \"d945d51a-1515-419b-9fe8-1303b1e56ef4\") " pod="openstack/dnsmasq-dns-8468885bfc-kqpbq" Dec 06 05:49:20 crc kubenswrapper[4958]: I1206 05:49:20.261264 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/06826d6b-2bdb-47fa-8239-6ffe454d7ca2-dns-svc\") pod \"dnsmasq-dns-545d49fd5c-sv9vr\" (UID: \"06826d6b-2bdb-47fa-8239-6ffe454d7ca2\") " pod="openstack/dnsmasq-dns-545d49fd5c-sv9vr" Dec 06 05:49:20 crc kubenswrapper[4958]: I1206 05:49:20.261709 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06826d6b-2bdb-47fa-8239-6ffe454d7ca2-config\") pod \"dnsmasq-dns-545d49fd5c-sv9vr\" (UID: \"06826d6b-2bdb-47fa-8239-6ffe454d7ca2\") " pod="openstack/dnsmasq-dns-545d49fd5c-sv9vr" Dec 06 05:49:20 crc kubenswrapper[4958]: I1206 05:49:20.262162 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d945d51a-1515-419b-9fe8-1303b1e56ef4-config\") pod \"dnsmasq-dns-8468885bfc-kqpbq\" (UID: \"d945d51a-1515-419b-9fe8-1303b1e56ef4\") " pod="openstack/dnsmasq-dns-8468885bfc-kqpbq" 
Dec 06 05:49:20 crc kubenswrapper[4958]: I1206 05:49:20.283074 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7z45s\" (UniqueName: \"kubernetes.io/projected/d945d51a-1515-419b-9fe8-1303b1e56ef4-kube-api-access-7z45s\") pod \"dnsmasq-dns-8468885bfc-kqpbq\" (UID: \"d945d51a-1515-419b-9fe8-1303b1e56ef4\") " pod="openstack/dnsmasq-dns-8468885bfc-kqpbq" Dec 06 05:49:20 crc kubenswrapper[4958]: I1206 05:49:20.310148 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8468885bfc-kqpbq" Dec 06 05:49:20 crc kubenswrapper[4958]: I1206 05:49:20.364832 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/06826d6b-2bdb-47fa-8239-6ffe454d7ca2-dns-svc\") pod \"dnsmasq-dns-545d49fd5c-sv9vr\" (UID: \"06826d6b-2bdb-47fa-8239-6ffe454d7ca2\") " pod="openstack/dnsmasq-dns-545d49fd5c-sv9vr" Dec 06 05:49:20 crc kubenswrapper[4958]: I1206 05:49:20.364885 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06826d6b-2bdb-47fa-8239-6ffe454d7ca2-config\") pod \"dnsmasq-dns-545d49fd5c-sv9vr\" (UID: \"06826d6b-2bdb-47fa-8239-6ffe454d7ca2\") " pod="openstack/dnsmasq-dns-545d49fd5c-sv9vr" Dec 06 05:49:20 crc kubenswrapper[4958]: I1206 05:49:20.364918 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6cxv\" (UniqueName: \"kubernetes.io/projected/06826d6b-2bdb-47fa-8239-6ffe454d7ca2-kube-api-access-d6cxv\") pod \"dnsmasq-dns-545d49fd5c-sv9vr\" (UID: \"06826d6b-2bdb-47fa-8239-6ffe454d7ca2\") " pod="openstack/dnsmasq-dns-545d49fd5c-sv9vr" Dec 06 05:49:20 crc kubenswrapper[4958]: I1206 05:49:20.365647 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/06826d6b-2bdb-47fa-8239-6ffe454d7ca2-dns-svc\") pod \"dnsmasq-dns-545d49fd5c-sv9vr\" (UID: \"06826d6b-2bdb-47fa-8239-6ffe454d7ca2\") " pod="openstack/dnsmasq-dns-545d49fd5c-sv9vr" Dec 06 05:49:20 crc kubenswrapper[4958]: I1206 05:49:20.366199 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06826d6b-2bdb-47fa-8239-6ffe454d7ca2-config\") pod \"dnsmasq-dns-545d49fd5c-sv9vr\" (UID: \"06826d6b-2bdb-47fa-8239-6ffe454d7ca2\") " pod="openstack/dnsmasq-dns-545d49fd5c-sv9vr" Dec 06 05:49:20 crc kubenswrapper[4958]: I1206 05:49:20.383745 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6cxv\" (UniqueName: \"kubernetes.io/projected/06826d6b-2bdb-47fa-8239-6ffe454d7ca2-kube-api-access-d6cxv\") pod \"dnsmasq-dns-545d49fd5c-sv9vr\" (UID: \"06826d6b-2bdb-47fa-8239-6ffe454d7ca2\") " pod="openstack/dnsmasq-dns-545d49fd5c-sv9vr" Dec 06 05:49:20 crc kubenswrapper[4958]: I1206 05:49:20.384060 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-545d49fd5c-sv9vr" Dec 06 05:49:20 crc kubenswrapper[4958]: I1206 05:49:20.764543 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8468885bfc-kqpbq"] Dec 06 05:49:20 crc kubenswrapper[4958]: I1206 05:49:20.834709 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-545d49fd5c-sv9vr"] Dec 06 05:49:20 crc kubenswrapper[4958]: W1206 05:49:20.840962 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod06826d6b_2bdb_47fa_8239_6ffe454d7ca2.slice/crio-d6522cce6073dad7c3e6aa97e6d9e78aa43f0904dadda49dc8363720cb987047 WatchSource:0}: Error finding container d6522cce6073dad7c3e6aa97e6d9e78aa43f0904dadda49dc8363720cb987047: Status 404 returned error can't find the container with id d6522cce6073dad7c3e6aa97e6d9e78aa43f0904dadda49dc8363720cb987047 Dec 06 05:49:20 crc kubenswrapper[4958]: I1206 05:49:20.892191 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8468885bfc-kqpbq" event={"ID":"d945d51a-1515-419b-9fe8-1303b1e56ef4","Type":"ContainerStarted","Data":"5c93258c2dc0c2aeb08f8a17395dfd9a163f4cfdc4614351c63667b6a1633b14"} Dec 06 05:49:20 crc kubenswrapper[4958]: I1206 05:49:20.893224 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-545d49fd5c-sv9vr" event={"ID":"06826d6b-2bdb-47fa-8239-6ffe454d7ca2","Type":"ContainerStarted","Data":"d6522cce6073dad7c3e6aa97e6d9e78aa43f0904dadda49dc8363720cb987047"} Dec 06 05:49:24 crc kubenswrapper[4958]: I1206 05:49:24.223043 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8468885bfc-kqpbq"] Dec 06 05:49:24 crc kubenswrapper[4958]: I1206 05:49:24.263462 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-9bd5d9d8c-st47d"] Dec 06 05:49:24 crc kubenswrapper[4958]: I1206 05:49:24.264965 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-9bd5d9d8c-st47d" Dec 06 05:49:24 crc kubenswrapper[4958]: I1206 05:49:24.277599 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9bd5d9d8c-st47d"] Dec 06 05:49:24 crc kubenswrapper[4958]: I1206 05:49:24.423947 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ls7hn\" (UniqueName: \"kubernetes.io/projected/d1f60d2d-8caa-4bf4-a86d-43e90b93ab8e-kube-api-access-ls7hn\") pod \"dnsmasq-dns-9bd5d9d8c-st47d\" (UID: \"d1f60d2d-8caa-4bf4-a86d-43e90b93ab8e\") " pod="openstack/dnsmasq-dns-9bd5d9d8c-st47d" Dec 06 05:49:24 crc kubenswrapper[4958]: I1206 05:49:24.424016 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1f60d2d-8caa-4bf4-a86d-43e90b93ab8e-config\") pod \"dnsmasq-dns-9bd5d9d8c-st47d\" (UID: \"d1f60d2d-8caa-4bf4-a86d-43e90b93ab8e\") " pod="openstack/dnsmasq-dns-9bd5d9d8c-st47d" Dec 06 05:49:24 crc kubenswrapper[4958]: I1206 05:49:24.424043 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d1f60d2d-8caa-4bf4-a86d-43e90b93ab8e-dns-svc\") pod \"dnsmasq-dns-9bd5d9d8c-st47d\" (UID: \"d1f60d2d-8caa-4bf4-a86d-43e90b93ab8e\") " pod="openstack/dnsmasq-dns-9bd5d9d8c-st47d" Dec 06 05:49:24 crc kubenswrapper[4958]: I1206 05:49:24.525372 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ls7hn\" (UniqueName: \"kubernetes.io/projected/d1f60d2d-8caa-4bf4-a86d-43e90b93ab8e-kube-api-access-ls7hn\") pod \"dnsmasq-dns-9bd5d9d8c-st47d\" (UID: \"d1f60d2d-8caa-4bf4-a86d-43e90b93ab8e\") " pod="openstack/dnsmasq-dns-9bd5d9d8c-st47d" Dec 06 05:49:24 crc kubenswrapper[4958]: I1206 05:49:24.525463 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1f60d2d-8caa-4bf4-a86d-43e90b93ab8e-config\") pod \"dnsmasq-dns-9bd5d9d8c-st47d\" (UID: \"d1f60d2d-8caa-4bf4-a86d-43e90b93ab8e\") " pod="openstack/dnsmasq-dns-9bd5d9d8c-st47d" Dec 06 05:49:24 crc kubenswrapper[4958]: I1206 05:49:24.525524 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d1f60d2d-8caa-4bf4-a86d-43e90b93ab8e-dns-svc\") pod \"dnsmasq-dns-9bd5d9d8c-st47d\" (UID: \"d1f60d2d-8caa-4bf4-a86d-43e90b93ab8e\") " pod="openstack/dnsmasq-dns-9bd5d9d8c-st47d" Dec 06 05:49:24 crc kubenswrapper[4958]: I1206 05:49:24.526455 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d1f60d2d-8caa-4bf4-a86d-43e90b93ab8e-dns-svc\") pod \"dnsmasq-dns-9bd5d9d8c-st47d\" (UID: \"d1f60d2d-8caa-4bf4-a86d-43e90b93ab8e\") " pod="openstack/dnsmasq-dns-9bd5d9d8c-st47d" Dec 06 05:49:24 crc kubenswrapper[4958]: I1206 05:49:24.527583 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1f60d2d-8caa-4bf4-a86d-43e90b93ab8e-config\") pod \"dnsmasq-dns-9bd5d9d8c-st47d\" (UID: \"d1f60d2d-8caa-4bf4-a86d-43e90b93ab8e\") " pod="openstack/dnsmasq-dns-9bd5d9d8c-st47d" Dec 06 05:49:24 crc kubenswrapper[4958]: I1206 05:49:24.551049 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ls7hn\" (UniqueName: 
\"kubernetes.io/projected/d1f60d2d-8caa-4bf4-a86d-43e90b93ab8e-kube-api-access-ls7hn\") pod \"dnsmasq-dns-9bd5d9d8c-st47d\" (UID: \"d1f60d2d-8caa-4bf4-a86d-43e90b93ab8e\") " pod="openstack/dnsmasq-dns-9bd5d9d8c-st47d" Dec 06 05:49:24 crc kubenswrapper[4958]: I1206 05:49:24.567419 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-545d49fd5c-sv9vr"] Dec 06 05:49:24 crc kubenswrapper[4958]: I1206 05:49:24.584898 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bcf47f659-fcmcv"] Dec 06 05:49:24 crc kubenswrapper[4958]: I1206 05:49:24.588734 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9bd5d9d8c-st47d" Dec 06 05:49:24 crc kubenswrapper[4958]: I1206 05:49:24.592805 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bcf47f659-fcmcv" Dec 06 05:49:24 crc kubenswrapper[4958]: I1206 05:49:24.610182 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bcf47f659-fcmcv"] Dec 06 05:49:24 crc kubenswrapper[4958]: I1206 05:49:24.730164 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwrzx\" (UniqueName: \"kubernetes.io/projected/c31aa749-ff33-4a9f-a9d6-de9575956925-kube-api-access-fwrzx\") pod \"dnsmasq-dns-bcf47f659-fcmcv\" (UID: \"c31aa749-ff33-4a9f-a9d6-de9575956925\") " pod="openstack/dnsmasq-dns-bcf47f659-fcmcv" Dec 06 05:49:24 crc kubenswrapper[4958]: I1206 05:49:24.730214 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c31aa749-ff33-4a9f-a9d6-de9575956925-dns-svc\") pod \"dnsmasq-dns-bcf47f659-fcmcv\" (UID: \"c31aa749-ff33-4a9f-a9d6-de9575956925\") " pod="openstack/dnsmasq-dns-bcf47f659-fcmcv" Dec 06 05:49:24 crc kubenswrapper[4958]: I1206 05:49:24.730289 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c31aa749-ff33-4a9f-a9d6-de9575956925-config\") pod \"dnsmasq-dns-bcf47f659-fcmcv\" (UID: \"c31aa749-ff33-4a9f-a9d6-de9575956925\") " pod="openstack/dnsmasq-dns-bcf47f659-fcmcv" Dec 06 05:49:24 crc kubenswrapper[4958]: I1206 05:49:24.832718 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9bd5d9d8c-st47d"] Dec 06 05:49:24 crc kubenswrapper[4958]: I1206 05:49:24.833760 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c31aa749-ff33-4a9f-a9d6-de9575956925-dns-svc\") pod \"dnsmasq-dns-bcf47f659-fcmcv\" (UID: \"c31aa749-ff33-4a9f-a9d6-de9575956925\") " pod="openstack/dnsmasq-dns-bcf47f659-fcmcv" Dec 06 05:49:24 crc kubenswrapper[4958]: I1206 05:49:24.833825 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c31aa749-ff33-4a9f-a9d6-de9575956925-config\") pod \"dnsmasq-dns-bcf47f659-fcmcv\" (UID: \"c31aa749-ff33-4a9f-a9d6-de9575956925\") " pod="openstack/dnsmasq-dns-bcf47f659-fcmcv" Dec 06 05:49:24 crc kubenswrapper[4958]: I1206 05:49:24.833895 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwrzx\" (UniqueName: \"kubernetes.io/projected/c31aa749-ff33-4a9f-a9d6-de9575956925-kube-api-access-fwrzx\") pod \"dnsmasq-dns-bcf47f659-fcmcv\" (UID: \"c31aa749-ff33-4a9f-a9d6-de9575956925\") " 
pod="openstack/dnsmasq-dns-bcf47f659-fcmcv" Dec 06 05:49:24 crc kubenswrapper[4958]: I1206 05:49:24.834924 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c31aa749-ff33-4a9f-a9d6-de9575956925-dns-svc\") pod \"dnsmasq-dns-bcf47f659-fcmcv\" (UID: \"c31aa749-ff33-4a9f-a9d6-de9575956925\") " pod="openstack/dnsmasq-dns-bcf47f659-fcmcv" Dec 06 05:49:24 crc kubenswrapper[4958]: I1206 05:49:24.835442 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c31aa749-ff33-4a9f-a9d6-de9575956925-config\") pod \"dnsmasq-dns-bcf47f659-fcmcv\" (UID: \"c31aa749-ff33-4a9f-a9d6-de9575956925\") " pod="openstack/dnsmasq-dns-bcf47f659-fcmcv" Dec 06 05:49:24 crc kubenswrapper[4958]: I1206 05:49:24.859701 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5449989c59-qwcjk"] Dec 06 05:49:24 crc kubenswrapper[4958]: I1206 05:49:24.860922 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5449989c59-qwcjk" Dec 06 05:49:24 crc kubenswrapper[4958]: I1206 05:49:24.879385 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5449989c59-qwcjk"] Dec 06 05:49:24 crc kubenswrapper[4958]: I1206 05:49:24.891379 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwrzx\" (UniqueName: \"kubernetes.io/projected/c31aa749-ff33-4a9f-a9d6-de9575956925-kube-api-access-fwrzx\") pod \"dnsmasq-dns-bcf47f659-fcmcv\" (UID: \"c31aa749-ff33-4a9f-a9d6-de9575956925\") " pod="openstack/dnsmasq-dns-bcf47f659-fcmcv" Dec 06 05:49:24 crc kubenswrapper[4958]: I1206 05:49:24.914930 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bcf47f659-fcmcv" Dec 06 05:49:25 crc kubenswrapper[4958]: I1206 05:49:25.040231 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64da7fa6-1a07-438b-acb7-6eae565a3aa5-config\") pod \"dnsmasq-dns-5449989c59-qwcjk\" (UID: \"64da7fa6-1a07-438b-acb7-6eae565a3aa5\") " pod="openstack/dnsmasq-dns-5449989c59-qwcjk" Dec 06 05:49:25 crc kubenswrapper[4958]: I1206 05:49:25.040288 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64da7fa6-1a07-438b-acb7-6eae565a3aa5-dns-svc\") pod \"dnsmasq-dns-5449989c59-qwcjk\" (UID: \"64da7fa6-1a07-438b-acb7-6eae565a3aa5\") " pod="openstack/dnsmasq-dns-5449989c59-qwcjk" Dec 06 05:49:25 crc kubenswrapper[4958]: I1206 05:49:25.040321 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbkbp\" (UniqueName: \"kubernetes.io/projected/64da7fa6-1a07-438b-acb7-6eae565a3aa5-kube-api-access-cbkbp\") pod \"dnsmasq-dns-5449989c59-qwcjk\" (UID: \"64da7fa6-1a07-438b-acb7-6eae565a3aa5\") " pod="openstack/dnsmasq-dns-5449989c59-qwcjk" Dec 06 05:49:25 crc kubenswrapper[4958]: I1206 05:49:25.141521 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64da7fa6-1a07-438b-acb7-6eae565a3aa5-config\") pod \"dnsmasq-dns-5449989c59-qwcjk\" (UID: \"64da7fa6-1a07-438b-acb7-6eae565a3aa5\") " pod="openstack/dnsmasq-dns-5449989c59-qwcjk" Dec 06 05:49:25 crc kubenswrapper[4958]: I1206 05:49:25.141592 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64da7fa6-1a07-438b-acb7-6eae565a3aa5-dns-svc\") pod \"dnsmasq-dns-5449989c59-qwcjk\" (UID: \"64da7fa6-1a07-438b-acb7-6eae565a3aa5\") " pod="openstack/dnsmasq-dns-5449989c59-qwcjk" Dec 06 05:49:25 crc kubenswrapper[4958]: I1206 05:49:25.141640 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbkbp\" (UniqueName: \"kubernetes.io/projected/64da7fa6-1a07-438b-acb7-6eae565a3aa5-kube-api-access-cbkbp\") pod \"dnsmasq-dns-5449989c59-qwcjk\" (UID: \"64da7fa6-1a07-438b-acb7-6eae565a3aa5\") " pod="openstack/dnsmasq-dns-5449989c59-qwcjk" Dec 06 05:49:25 crc kubenswrapper[4958]: I1206 05:49:25.142866 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64da7fa6-1a07-438b-acb7-6eae565a3aa5-config\") pod \"dnsmasq-dns-5449989c59-qwcjk\" (UID: \"64da7fa6-1a07-438b-acb7-6eae565a3aa5\") " pod="openstack/dnsmasq-dns-5449989c59-qwcjk" Dec 06 05:49:25 crc kubenswrapper[4958]: I1206 05:49:25.143661 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64da7fa6-1a07-438b-acb7-6eae565a3aa5-dns-svc\") pod \"dnsmasq-dns-5449989c59-qwcjk\" (UID: \"64da7fa6-1a07-438b-acb7-6eae565a3aa5\") " pod="openstack/dnsmasq-dns-5449989c59-qwcjk" Dec 06 05:49:25 crc kubenswrapper[4958]: I1206 05:49:25.179273 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbkbp\" (UniqueName: \"kubernetes.io/projected/64da7fa6-1a07-438b-acb7-6eae565a3aa5-kube-api-access-cbkbp\") pod \"dnsmasq-dns-5449989c59-qwcjk\" (UID: \"64da7fa6-1a07-438b-acb7-6eae565a3aa5\") " pod="openstack/dnsmasq-dns-5449989c59-qwcjk" 
Dec 06 05:49:25 crc kubenswrapper[4958]: I1206 05:49:25.233016 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5449989c59-qwcjk"
Dec 06 05:49:25 crc kubenswrapper[4958]: I1206 05:49:25.237146 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9bd5d9d8c-st47d"]
Dec 06 05:49:25 crc kubenswrapper[4958]: I1206 05:49:25.410191 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Dec 06 05:49:25 crc kubenswrapper[4958]: I1206 05:49:25.411751 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Dec 06 05:49:25 crc kubenswrapper[4958]: I1206 05:49:25.418791 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Dec 06 05:49:25 crc kubenswrapper[4958]: I1206 05:49:25.418993 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Dec 06 05:49:25 crc kubenswrapper[4958]: I1206 05:49:25.419102 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Dec 06 05:49:25 crc kubenswrapper[4958]: I1206 05:49:25.419223 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Dec 06 05:49:25 crc kubenswrapper[4958]: I1206 05:49:25.419627 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-b5ghc"
Dec 06 05:49:25 crc kubenswrapper[4958]: I1206 05:49:25.419727 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data"
Dec 06 05:49:25 crc kubenswrapper[4958]: I1206 05:49:25.425869 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Dec 06 05:49:25 crc kubenswrapper[4958]: I1206 05:49:25.462698 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Dec 06 05:49:25 crc kubenswrapper[4958]: I1206 05:49:25.507761 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bcf47f659-fcmcv"]
Dec 06 05:49:25 crc kubenswrapper[4958]: W1206 05:49:25.522051 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc31aa749_ff33_4a9f_a9d6_de9575956925.slice/crio-d3f934eb075a4f9df71272c7cfe03993836c38ce4da9333e7c1ea78ab4724d1a WatchSource:0}: Error finding container d3f934eb075a4f9df71272c7cfe03993836c38ce4da9333e7c1ea78ab4724d1a: Status 404 returned error can't find the container with id d3f934eb075a4f9df71272c7cfe03993836c38ce4da9333e7c1ea78ab4724d1a
Dec 06 05:49:25 crc kubenswrapper[4958]: I1206 05:49:25.552058 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3141e77c-a73b-400b-b607-21be8537cca4-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"3141e77c-a73b-400b-b607-21be8537cca4\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 06 05:49:25 crc kubenswrapper[4958]: I1206 05:49:25.552103 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3141e77c-a73b-400b-b607-21be8537cca4-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"3141e77c-a73b-400b-b607-21be8537cca4\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 06 05:49:25 crc kubenswrapper[4958]: I1206 05:49:25.552124 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3141e77c-a73b-400b-b607-21be8537cca4-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"3141e77c-a73b-400b-b607-21be8537cca4\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 06 05:49:25 crc kubenswrapper[4958]: I1206 05:49:25.552146 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3141e77c-a73b-400b-b607-21be8537cca4-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"3141e77c-a73b-400b-b607-21be8537cca4\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 06 05:49:25 crc kubenswrapper[4958]: I1206 05:49:25.552167 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3141e77c-a73b-400b-b607-21be8537cca4-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"3141e77c-a73b-400b-b607-21be8537cca4\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 06 05:49:25 crc kubenswrapper[4958]: I1206 05:49:25.552188 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"3141e77c-a73b-400b-b607-21be8537cca4\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 06 05:49:25 crc kubenswrapper[4958]: I1206 05:49:25.552202 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2b94v\" (UniqueName: \"kubernetes.io/projected/3141e77c-a73b-400b-b607-21be8537cca4-kube-api-access-2b94v\") pod \"rabbitmq-cell1-server-0\" (UID: \"3141e77c-a73b-400b-b607-21be8537cca4\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 06 05:49:25 crc kubenswrapper[4958]: I1206 05:49:25.552232 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3141e77c-a73b-400b-b607-21be8537cca4-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"3141e77c-a73b-400b-b607-21be8537cca4\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 06 05:49:25 crc kubenswrapper[4958]: I1206 05:49:25.552258 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3141e77c-a73b-400b-b607-21be8537cca4-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"3141e77c-a73b-400b-b607-21be8537cca4\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 06 05:49:25 crc kubenswrapper[4958]: I1206 05:49:25.552295 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3141e77c-a73b-400b-b607-21be8537cca4-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"3141e77c-a73b-400b-b607-21be8537cca4\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 06 05:49:25 crc kubenswrapper[4958]: I1206 05:49:25.552353 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3141e77c-a73b-400b-b607-21be8537cca4-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"3141e77c-a73b-400b-b607-21be8537cca4\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 06 05:49:25 crc kubenswrapper[4958]: I1206 05:49:25.653284 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3141e77c-a73b-400b-b607-21be8537cca4-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"3141e77c-a73b-400b-b607-21be8537cca4\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 06 05:49:25 crc kubenswrapper[4958]: I1206 05:49:25.653350 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3141e77c-a73b-400b-b607-21be8537cca4-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"3141e77c-a73b-400b-b607-21be8537cca4\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 06 05:49:25 crc kubenswrapper[4958]: I1206 05:49:25.653415 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3141e77c-a73b-400b-b607-21be8537cca4-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"3141e77c-a73b-400b-b607-21be8537cca4\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 06 05:49:25 crc kubenswrapper[4958]: I1206 05:49:25.653489 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3141e77c-a73b-400b-b607-21be8537cca4-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"3141e77c-a73b-400b-b607-21be8537cca4\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 06 05:49:25 crc kubenswrapper[4958]: I1206 05:49:25.653527 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3141e77c-a73b-400b-b607-21be8537cca4-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"3141e77c-a73b-400b-b607-21be8537cca4\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 06 05:49:25 crc kubenswrapper[4958]: I1206 05:49:25.653548 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3141e77c-a73b-400b-b607-21be8537cca4-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"3141e77c-a73b-400b-b607-21be8537cca4\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 06 05:49:25 crc kubenswrapper[4958]: I1206 05:49:25.653567 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3141e77c-a73b-400b-b607-21be8537cca4-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"3141e77c-a73b-400b-b607-21be8537cca4\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 06 05:49:25 crc kubenswrapper[4958]: I1206 05:49:25.653591 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3141e77c-a73b-400b-b607-21be8537cca4-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"3141e77c-a73b-400b-b607-21be8537cca4\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 06 05:49:25 crc kubenswrapper[4958]: I1206 05:49:25.653617 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3141e77c-a73b-400b-b607-21be8537cca4-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"3141e77c-a73b-400b-b607-21be8537cca4\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 06 05:49:25 crc kubenswrapper[4958]: I1206 05:49:25.653641 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"3141e77c-a73b-400b-b607-21be8537cca4\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 06 05:49:25 crc kubenswrapper[4958]: I1206 05:49:25.653661 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2b94v\" (UniqueName: \"kubernetes.io/projected/3141e77c-a73b-400b-b607-21be8537cca4-kube-api-access-2b94v\") pod \"rabbitmq-cell1-server-0\" (UID: \"3141e77c-a73b-400b-b607-21be8537cca4\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 06 05:49:25 crc kubenswrapper[4958]: I1206 05:49:25.655919 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3141e77c-a73b-400b-b607-21be8537cca4-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"3141e77c-a73b-400b-b607-21be8537cca4\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 06 05:49:25 crc kubenswrapper[4958]: I1206 05:49:25.659828 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3141e77c-a73b-400b-b607-21be8537cca4-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"3141e77c-a73b-400b-b607-21be8537cca4\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 06 05:49:25 crc kubenswrapper[4958]: I1206 05:49:25.660183 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3141e77c-a73b-400b-b607-21be8537cca4-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"3141e77c-a73b-400b-b607-21be8537cca4\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 06 05:49:25 crc kubenswrapper[4958]: I1206 05:49:25.660380 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3141e77c-a73b-400b-b607-21be8537cca4-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"3141e77c-a73b-400b-b607-21be8537cca4\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 06 05:49:25 crc kubenswrapper[4958]: I1206 05:49:25.663319 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3141e77c-a73b-400b-b607-21be8537cca4-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"3141e77c-a73b-400b-b607-21be8537cca4\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 06 05:49:25 crc kubenswrapper[4958]: I1206 05:49:25.663746 4958 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"3141e77c-a73b-400b-b607-21be8537cca4\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/rabbitmq-cell1-server-0"
Dec 06 05:49:25 crc kubenswrapper[4958]: I1206 05:49:25.668730 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3141e77c-a73b-400b-b607-21be8537cca4-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"3141e77c-a73b-400b-b607-21be8537cca4\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 06 05:49:25 crc kubenswrapper[4958]: I1206 05:49:25.669779 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3141e77c-a73b-400b-b607-21be8537cca4-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"3141e77c-a73b-400b-b607-21be8537cca4\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 06 05:49:25 crc kubenswrapper[4958]: I1206 05:49:25.670240 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3141e77c-a73b-400b-b607-21be8537cca4-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"3141e77c-a73b-400b-b607-21be8537cca4\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 06 05:49:25 crc kubenswrapper[4958]: I1206 05:49:25.681923 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2b94v\" (UniqueName: \"kubernetes.io/projected/3141e77c-a73b-400b-b607-21be8537cca4-kube-api-access-2b94v\") pod \"rabbitmq-cell1-server-0\" (UID: \"3141e77c-a73b-400b-b607-21be8537cca4\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 06 05:49:25 crc kubenswrapper[4958]: I1206 05:49:25.684210 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3141e77c-a73b-400b-b607-21be8537cca4-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"3141e77c-a73b-400b-b607-21be8537cca4\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 06 05:49:25 crc kubenswrapper[4958]: I1206 05:49:25.684498 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-notifications-server-0"]
Dec 06 05:49:25 crc kubenswrapper[4958]: I1206 05:49:25.685720 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-notifications-server-0"
Dec 06 05:49:25 crc kubenswrapper[4958]: I1206 05:49:25.688340 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-notifications-config-data"
Dec 06 05:49:25 crc kubenswrapper[4958]: I1206 05:49:25.688410 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-notifications-server-conf"
Dec 06 05:49:25 crc kubenswrapper[4958]: I1206 05:49:25.688619 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-notifications-erlang-cookie"
Dec 06 05:49:25 crc kubenswrapper[4958]: I1206 05:49:25.688729 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-notifications-svc"
Dec 06 05:49:25 crc kubenswrapper[4958]: I1206 05:49:25.688832 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-notifications-plugins-conf"
Dec 06 05:49:25 crc kubenswrapper[4958]: I1206 05:49:25.689131 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-notifications-server-dockercfg-p9lgl"
Dec 06 05:49:25 crc kubenswrapper[4958]: I1206 05:49:25.690700 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-notifications-default-user"
Dec 06 05:49:25 crc kubenswrapper[4958]: I1206 05:49:25.709564 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"3141e77c-a73b-400b-b607-21be8537cca4\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 06 05:49:25 crc kubenswrapper[4958]: I1206 05:49:25.727261 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-notifications-server-0"]
Dec 06 05:49:25 crc kubenswrapper[4958]: I1206 05:49:25.751027 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Dec 06 05:49:25 crc kubenswrapper[4958]: I1206 05:49:25.819189 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5449989c59-qwcjk"]
Dec 06 05:49:25 crc kubenswrapper[4958]: I1206 05:49:25.855880 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/74d63159-9580-4b70-ba89-74d4d9eeb7b8-config-data\") pod \"rabbitmq-notifications-server-0\" (UID: \"74d63159-9580-4b70-ba89-74d4d9eeb7b8\") " pod="openstack/rabbitmq-notifications-server-0"
Dec 06 05:49:25 crc kubenswrapper[4958]: I1206 05:49:25.855937 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/74d63159-9580-4b70-ba89-74d4d9eeb7b8-rabbitmq-confd\") pod \"rabbitmq-notifications-server-0\" (UID: \"74d63159-9580-4b70-ba89-74d4d9eeb7b8\") " pod="openstack/rabbitmq-notifications-server-0"
Dec 06 05:49:25 crc kubenswrapper[4958]: I1206 05:49:25.855962 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/74d63159-9580-4b70-ba89-74d4d9eeb7b8-plugins-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"74d63159-9580-4b70-ba89-74d4d9eeb7b8\") " pod="openstack/rabbitmq-notifications-server-0"
Dec 06 05:49:25 crc kubenswrapper[4958]: I1206 05:49:25.855982 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/74d63159-9580-4b70-ba89-74d4d9eeb7b8-server-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"74d63159-9580-4b70-ba89-74d4d9eeb7b8\") " pod="openstack/rabbitmq-notifications-server-0"
Dec 06 05:49:25 crc kubenswrapper[4958]: I1206 05:49:25.856021 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/74d63159-9580-4b70-ba89-74d4d9eeb7b8-rabbitmq-plugins\") pod \"rabbitmq-notifications-server-0\" (UID: \"74d63159-9580-4b70-ba89-74d4d9eeb7b8\") " pod="openstack/rabbitmq-notifications-server-0"
Dec 06 05:49:25 crc kubenswrapper[4958]: I1206 05:49:25.856049 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/74d63159-9580-4b70-ba89-74d4d9eeb7b8-rabbitmq-tls\") pod \"rabbitmq-notifications-server-0\" (UID: \"74d63159-9580-4b70-ba89-74d4d9eeb7b8\") " pod="openstack/rabbitmq-notifications-server-0"
Dec 06 05:49:25 crc kubenswrapper[4958]: I1206 05:49:25.856087 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kz9rr\" (UniqueName: \"kubernetes.io/projected/74d63159-9580-4b70-ba89-74d4d9eeb7b8-kube-api-access-kz9rr\") pod \"rabbitmq-notifications-server-0\" (UID: \"74d63159-9580-4b70-ba89-74d4d9eeb7b8\") " pod="openstack/rabbitmq-notifications-server-0"
Dec 06 05:49:25 crc kubenswrapper[4958]: I1206 05:49:25.856120 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/74d63159-9580-4b70-ba89-74d4d9eeb7b8-pod-info\") pod \"rabbitmq-notifications-server-0\" (UID: \"74d63159-9580-4b70-ba89-74d4d9eeb7b8\") " pod="openstack/rabbitmq-notifications-server-0"
Dec 06 05:49:25 crc kubenswrapper[4958]: I1206 05:49:25.856147 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-notifications-server-0\" (UID: \"74d63159-9580-4b70-ba89-74d4d9eeb7b8\") " pod="openstack/rabbitmq-notifications-server-0"
Dec 06 05:49:25 crc kubenswrapper[4958]: I1206 05:49:25.856181 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/74d63159-9580-4b70-ba89-74d4d9eeb7b8-erlang-cookie-secret\") pod \"rabbitmq-notifications-server-0\" (UID: \"74d63159-9580-4b70-ba89-74d4d9eeb7b8\") " pod="openstack/rabbitmq-notifications-server-0"
Dec 06 05:49:25 crc kubenswrapper[4958]: I1206 05:49:25.856214 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/74d63159-9580-4b70-ba89-74d4d9eeb7b8-rabbitmq-erlang-cookie\") pod \"rabbitmq-notifications-server-0\" (UID: \"74d63159-9580-4b70-ba89-74d4d9eeb7b8\") " pod="openstack/rabbitmq-notifications-server-0"
Dec 06 05:49:25 crc kubenswrapper[4958]: I1206 05:49:25.957134 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/74d63159-9580-4b70-ba89-74d4d9eeb7b8-rabbitmq-plugins\") pod \"rabbitmq-notifications-server-0\" (UID: \"74d63159-9580-4b70-ba89-74d4d9eeb7b8\") " pod="openstack/rabbitmq-notifications-server-0"
Dec 06 05:49:25 crc kubenswrapper[4958]: I1206 05:49:25.957181 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/74d63159-9580-4b70-ba89-74d4d9eeb7b8-rabbitmq-tls\") pod \"rabbitmq-notifications-server-0\" (UID: \"74d63159-9580-4b70-ba89-74d4d9eeb7b8\") " pod="openstack/rabbitmq-notifications-server-0"
Dec 06 05:49:25 crc kubenswrapper[4958]: I1206 05:49:25.957209 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kz9rr\" (UniqueName: \"kubernetes.io/projected/74d63159-9580-4b70-ba89-74d4d9eeb7b8-kube-api-access-kz9rr\") pod \"rabbitmq-notifications-server-0\" (UID: \"74d63159-9580-4b70-ba89-74d4d9eeb7b8\") " pod="openstack/rabbitmq-notifications-server-0"
Dec 06 05:49:25 crc kubenswrapper[4958]: I1206 05:49:25.957236 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/74d63159-9580-4b70-ba89-74d4d9eeb7b8-pod-info\") pod \"rabbitmq-notifications-server-0\" (UID: \"74d63159-9580-4b70-ba89-74d4d9eeb7b8\") " pod="openstack/rabbitmq-notifications-server-0"
Dec 06 05:49:25 crc kubenswrapper[4958]: I1206 05:49:25.957258 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-notifications-server-0\" (UID: \"74d63159-9580-4b70-ba89-74d4d9eeb7b8\") " pod="openstack/rabbitmq-notifications-server-0"
Dec 06 05:49:25 crc kubenswrapper[4958]: I1206 05:49:25.957283 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/74d63159-9580-4b70-ba89-74d4d9eeb7b8-erlang-cookie-secret\") pod \"rabbitmq-notifications-server-0\" (UID: \"74d63159-9580-4b70-ba89-74d4d9eeb7b8\") " pod="openstack/rabbitmq-notifications-server-0"
Dec 06 05:49:25 crc kubenswrapper[4958]: I1206 05:49:25.957305 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/74d63159-9580-4b70-ba89-74d4d9eeb7b8-rabbitmq-erlang-cookie\") pod \"rabbitmq-notifications-server-0\" (UID: \"74d63159-9580-4b70-ba89-74d4d9eeb7b8\") " pod="openstack/rabbitmq-notifications-server-0"
Dec 06 05:49:25 crc kubenswrapper[4958]: I1206 05:49:25.957338 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/74d63159-9580-4b70-ba89-74d4d9eeb7b8-config-data\") pod \"rabbitmq-notifications-server-0\" (UID: \"74d63159-9580-4b70-ba89-74d4d9eeb7b8\") " pod="openstack/rabbitmq-notifications-server-0"
Dec 06 05:49:25 crc kubenswrapper[4958]: I1206 05:49:25.957364 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/74d63159-9580-4b70-ba89-74d4d9eeb7b8-rabbitmq-confd\") pod \"rabbitmq-notifications-server-0\" (UID: \"74d63159-9580-4b70-ba89-74d4d9eeb7b8\") " pod="openstack/rabbitmq-notifications-server-0"
Dec 06 05:49:25 crc kubenswrapper[4958]: I1206 05:49:25.957383 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/74d63159-9580-4b70-ba89-74d4d9eeb7b8-plugins-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"74d63159-9580-4b70-ba89-74d4d9eeb7b8\") " pod="openstack/rabbitmq-notifications-server-0"
Dec 06 05:49:25 crc kubenswrapper[4958]: I1206 05:49:25.957401 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/74d63159-9580-4b70-ba89-74d4d9eeb7b8-server-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"74d63159-9580-4b70-ba89-74d4d9eeb7b8\") " pod="openstack/rabbitmq-notifications-server-0"
Dec 06 05:49:25 crc kubenswrapper[4958]: I1206 05:49:25.958195 4958 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-notifications-server-0\" (UID: \"74d63159-9580-4b70-ba89-74d4d9eeb7b8\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/rabbitmq-notifications-server-0"
Dec 06 05:49:25 crc kubenswrapper[4958]: I1206 05:49:25.958710 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/74d63159-9580-4b70-ba89-74d4d9eeb7b8-rabbitmq-plugins\") pod \"rabbitmq-notifications-server-0\" (UID: \"74d63159-9580-4b70-ba89-74d4d9eeb7b8\") " pod="openstack/rabbitmq-notifications-server-0"
Dec 06 05:49:25 crc kubenswrapper[4958]: I1206 05:49:25.959137 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/74d63159-9580-4b70-ba89-74d4d9eeb7b8-plugins-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"74d63159-9580-4b70-ba89-74d4d9eeb7b8\") " pod="openstack/rabbitmq-notifications-server-0"
Dec 06 05:49:25 crc kubenswrapper[4958]: I1206 05:49:25.959378 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/74d63159-9580-4b70-ba89-74d4d9eeb7b8-rabbitmq-erlang-cookie\") pod \"rabbitmq-notifications-server-0\" (UID: \"74d63159-9580-4b70-ba89-74d4d9eeb7b8\") " pod="openstack/rabbitmq-notifications-server-0"
Dec 06 05:49:25 crc kubenswrapper[4958]: I1206 05:49:25.959516 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/74d63159-9580-4b70-ba89-74d4d9eeb7b8-server-conf\") pod \"rabbitmq-notifications-server-0\" (UID: \"74d63159-9580-4b70-ba89-74d4d9eeb7b8\") " pod="openstack/rabbitmq-notifications-server-0"
Dec 06 05:49:25 crc kubenswrapper[4958]: I1206 05:49:25.959942 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/74d63159-9580-4b70-ba89-74d4d9eeb7b8-config-data\") pod \"rabbitmq-notifications-server-0\" (UID: \"74d63159-9580-4b70-ba89-74d4d9eeb7b8\") " pod="openstack/rabbitmq-notifications-server-0"
Dec 06 05:49:25 crc kubenswrapper[4958]: I1206 05:49:25.961535 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/74d63159-9580-4b70-ba89-74d4d9eeb7b8-rabbitmq-tls\") pod \"rabbitmq-notifications-server-0\" (UID: \"74d63159-9580-4b70-ba89-74d4d9eeb7b8\") " pod="openstack/rabbitmq-notifications-server-0"
Dec 06 05:49:25 crc kubenswrapper[4958]: I1206 05:49:25.961621 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/74d63159-9580-4b70-ba89-74d4d9eeb7b8-pod-info\") pod \"rabbitmq-notifications-server-0\" (UID: \"74d63159-9580-4b70-ba89-74d4d9eeb7b8\") " pod="openstack/rabbitmq-notifications-server-0"
Dec 06 05:49:25 crc kubenswrapper[4958]: I1206 05:49:25.964409 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/74d63159-9580-4b70-ba89-74d4d9eeb7b8-rabbitmq-confd\") pod \"rabbitmq-notifications-server-0\" (UID: \"74d63159-9580-4b70-ba89-74d4d9eeb7b8\") " pod="openstack/rabbitmq-notifications-server-0"
Dec 06 05:49:25 crc kubenswrapper[4958]: I1206 05:49:25.964531 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9bd5d9d8c-st47d" event={"ID":"d1f60d2d-8caa-4bf4-a86d-43e90b93ab8e","Type":"ContainerStarted","Data":"5056483f44d5c51b92b0eadc84f04e1a2364be2b46302b171e45122718232326"}
Dec 06 05:49:25 crc kubenswrapper[4958]: I1206 05:49:25.964870 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/74d63159-9580-4b70-ba89-74d4d9eeb7b8-erlang-cookie-secret\") pod \"rabbitmq-notifications-server-0\" (UID: \"74d63159-9580-4b70-ba89-74d4d9eeb7b8\") " pod="openstack/rabbitmq-notifications-server-0"
Dec 06 05:49:25 crc kubenswrapper[4958]: I1206 05:49:25.967854 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bcf47f659-fcmcv" event={"ID":"c31aa749-ff33-4a9f-a9d6-de9575956925","Type":"ContainerStarted","Data":"d3f934eb075a4f9df71272c7cfe03993836c38ce4da9333e7c1ea78ab4724d1a"}
Dec 06 05:49:25 crc kubenswrapper[4958]: I1206 05:49:25.969748 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5449989c59-qwcjk" event={"ID":"64da7fa6-1a07-438b-acb7-6eae565a3aa5","Type":"ContainerStarted","Data":"f36ef116341c4bfc018ca67342969ef4c73177179a2bb463513a3c2751eef7ea"}
Dec 06 05:49:25 crc kubenswrapper[4958]: I1206 05:49:25.979220 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kz9rr\" (UniqueName: \"kubernetes.io/projected/74d63159-9580-4b70-ba89-74d4d9eeb7b8-kube-api-access-kz9rr\") pod \"rabbitmq-notifications-server-0\" (UID: \"74d63159-9580-4b70-ba89-74d4d9eeb7b8\") " pod="openstack/rabbitmq-notifications-server-0"
Dec 06 05:49:25 crc kubenswrapper[4958]: I1206 05:49:25.994669 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"rabbitmq-notifications-server-0\" (UID: \"74d63159-9580-4b70-ba89-74d4d9eeb7b8\") " pod="openstack/rabbitmq-notifications-server-0"
Dec 06 05:49:26 crc kubenswrapper[4958]: I1206 05:49:26.001776 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Dec 06 05:49:26 crc kubenswrapper[4958]: I1206 05:49:26.003567 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Dec 06 05:49:26 crc kubenswrapper[4958]: I1206 05:49:26.008430 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Dec 06 05:49:26 crc kubenswrapper[4958]: I1206 05:49:26.008553 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-mtf4d"
Dec 06 05:49:26 crc kubenswrapper[4958]: I1206 05:49:26.008784 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Dec 06 05:49:26 crc kubenswrapper[4958]: I1206 05:49:26.008879 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Dec 06 05:49:26 crc kubenswrapper[4958]: I1206 05:49:26.008789 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data"
Dec 06 05:49:26 crc kubenswrapper[4958]: I1206 05:49:26.009107 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Dec 06 05:49:26 crc kubenswrapper[4958]: I1206 05:49:26.009237 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc"
Dec 06 05:49:26 crc kubenswrapper[4958]: I1206 05:49:26.021726 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Dec 06 05:49:26 crc kubenswrapper[4958]: I1206 05:49:26.060553 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-notifications-server-0"
Dec 06 05:49:26 crc kubenswrapper[4958]: I1206 05:49:26.160961 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4bc06a17-7bdb-4ee9-bad3-7996be041e54-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"4bc06a17-7bdb-4ee9-bad3-7996be041e54\") " pod="openstack/rabbitmq-server-0"
Dec 06 05:49:26 crc kubenswrapper[4958]: I1206 05:49:26.161044 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4bc06a17-7bdb-4ee9-bad3-7996be041e54-server-conf\") pod \"rabbitmq-server-0\" (UID: \"4bc06a17-7bdb-4ee9-bad3-7996be041e54\") " pod="openstack/rabbitmq-server-0"
Dec 06 05:49:26 crc kubenswrapper[4958]: I1206 05:49:26.161081 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4bc06a17-7bdb-4ee9-bad3-7996be041e54-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"4bc06a17-7bdb-4ee9-bad3-7996be041e54\") " pod="openstack/rabbitmq-server-0"
Dec 06 05:49:26 crc kubenswrapper[4958]: I1206 05:49:26.161294 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4bc06a17-7bdb-4ee9-bad3-7996be041e54-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"4bc06a17-7bdb-4ee9-bad3-7996be041e54\") " pod="openstack/rabbitmq-server-0"
Dec 06 05:49:26 crc kubenswrapper[4958]: I1206 05:49:26.161340 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4bc06a17-7bdb-4ee9-bad3-7996be041e54-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"4bc06a17-7bdb-4ee9-bad3-7996be041e54\") " pod="openstack/rabbitmq-server-0"
Dec 06 05:49:26 crc kubenswrapper[4958]: I1206 05:49:26.161432 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4bc06a17-7bdb-4ee9-bad3-7996be041e54-pod-info\") pod \"rabbitmq-server-0\" (UID: \"4bc06a17-7bdb-4ee9-bad3-7996be041e54\") " pod="openstack/rabbitmq-server-0"
Dec 06 05:49:26 crc kubenswrapper[4958]: I1206 05:49:26.161676 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4bc06a17-7bdb-4ee9-bad3-7996be041e54-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"4bc06a17-7bdb-4ee9-bad3-7996be041e54\") " pod="openstack/rabbitmq-server-0"
Dec 06 05:49:26 crc kubenswrapper[4958]: I1206 05:49:26.161865 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4bc06a17-7bdb-4ee9-bad3-7996be041e54-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"4bc06a17-7bdb-4ee9-bad3-7996be041e54\") " pod="openstack/rabbitmq-server-0"
Dec 06 05:49:26 crc kubenswrapper[4958]: I1206 05:49:26.161933 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"4bc06a17-7bdb-4ee9-bad3-7996be041e54\") " pod="openstack/rabbitmq-server-0"
Dec 06 05:49:26 crc kubenswrapper[4958]: I1206 05:49:26.161966 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4bc06a17-7bdb-4ee9-bad3-7996be041e54-config-data\") pod \"rabbitmq-server-0\" (UID: \"4bc06a17-7bdb-4ee9-bad3-7996be041e54\") " pod="openstack/rabbitmq-server-0"
Dec 06 05:49:26 crc kubenswrapper[4958]: I1206 05:49:26.162147 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxxfp\" (UniqueName: \"kubernetes.io/projected/4bc06a17-7bdb-4ee9-bad3-7996be041e54-kube-api-access-mxxfp\") pod \"rabbitmq-server-0\" (UID: \"4bc06a17-7bdb-4ee9-bad3-7996be041e54\") " pod="openstack/rabbitmq-server-0"
Dec 06 05:49:26 crc kubenswrapper[4958]: I1206 05:49:26.211415 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Dec 06 05:49:26 crc kubenswrapper[4958]: W1206 05:49:26.228784 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3141e77c_a73b_400b_b607_21be8537cca4.slice/crio-0769ff1780c2d2933a7fd285c9ba748d3a32329376a6f34692df6bdcd911b316 WatchSource:0}: Error finding container 0769ff1780c2d2933a7fd285c9ba748d3a32329376a6f34692df6bdcd911b316: Status 404 returned error can't find the container with id 0769ff1780c2d2933a7fd285c9ba748d3a32329376a6f34692df6bdcd911b316
Dec 06 05:49:26 crc kubenswrapper[4958]: I1206 05:49:26.264224 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4bc06a17-7bdb-4ee9-bad3-7996be041e54-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"4bc06a17-7bdb-4ee9-bad3-7996be041e54\") " pod="openstack/rabbitmq-server-0"
Dec 06 05:49:26 crc kubenswrapper[4958]: I1206 05:49:26.264260 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4bc06a17-7bdb-4ee9-bad3-7996be041e54-pod-info\") pod \"rabbitmq-server-0\" (UID: \"4bc06a17-7bdb-4ee9-bad3-7996be041e54\") " pod="openstack/rabbitmq-server-0"
Dec 06 05:49:26 crc kubenswrapper[4958]: I1206 05:49:26.264279 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4bc06a17-7bdb-4ee9-bad3-7996be041e54-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"4bc06a17-7bdb-4ee9-bad3-7996be041e54\") " pod="openstack/rabbitmq-server-0"
Dec 06 05:49:26 crc kubenswrapper[4958]: I1206 05:49:26.264299 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4bc06a17-7bdb-4ee9-bad3-7996be041e54-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"4bc06a17-7bdb-4ee9-bad3-7996be041e54\") " pod="openstack/rabbitmq-server-0"
Dec 06 05:49:26 crc kubenswrapper[4958]: I1206 05:49:26.264324 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"4bc06a17-7bdb-4ee9-bad3-7996be041e54\") " pod="openstack/rabbitmq-server-0"
Dec 06 05:49:26 crc kubenswrapper[4958]: I1206 05:49:26.264349 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4bc06a17-7bdb-4ee9-bad3-7996be041e54-config-data\") pod \"rabbitmq-server-0\" (UID: \"4bc06a17-7bdb-4ee9-bad3-7996be041e54\") " pod="openstack/rabbitmq-server-0"
Dec 06 05:49:26 crc kubenswrapper[4958]: I1206 05:49:26.264405 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxxfp\" (UniqueName: \"kubernetes.io/projected/4bc06a17-7bdb-4ee9-bad3-7996be041e54-kube-api-access-mxxfp\") pod \"rabbitmq-server-0\" (UID: \"4bc06a17-7bdb-4ee9-bad3-7996be041e54\") " pod="openstack/rabbitmq-server-0"
Dec 06 05:49:26 crc kubenswrapper[4958]: I1206 05:49:26.264433 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4bc06a17-7bdb-4ee9-bad3-7996be041e54-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"4bc06a17-7bdb-4ee9-bad3-7996be041e54\") " pod="openstack/rabbitmq-server-0"
Dec 06 05:49:26 crc kubenswrapper[4958]: I1206 05:49:26.264451 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4bc06a17-7bdb-4ee9-bad3-7996be041e54-server-conf\") pod \"rabbitmq-server-0\" (UID: \"4bc06a17-7bdb-4ee9-bad3-7996be041e54\") " pod="openstack/rabbitmq-server-0"
Dec 06 05:49:26 crc kubenswrapper[4958]: I1206 05:49:26.264487 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4bc06a17-7bdb-4ee9-bad3-7996be041e54-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"4bc06a17-7bdb-4ee9-bad3-7996be041e54\") " pod="openstack/rabbitmq-server-0"
Dec 06 05:49:26 crc kubenswrapper[4958]: I1206 05:49:26.264528 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4bc06a17-7bdb-4ee9-bad3-7996be041e54-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"4bc06a17-7bdb-4ee9-bad3-7996be041e54\") " pod="openstack/rabbitmq-server-0"
Dec 06 05:49:26 crc kubenswrapper[4958]: I1206 05:49:26.265595 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4bc06a17-7bdb-4ee9-bad3-7996be041e54-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"4bc06a17-7bdb-4ee9-bad3-7996be041e54\") " pod="openstack/rabbitmq-server-0"
Dec 06 05:49:26 crc kubenswrapper[4958]: I1206 05:49:26.265680 4958 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"4bc06a17-7bdb-4ee9-bad3-7996be041e54\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/rabbitmq-server-0"
Dec 06 05:49:26 crc kubenswrapper[4958]: I1206 05:49:26.266573 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4bc06a17-7bdb-4ee9-bad3-7996be041e54-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"4bc06a17-7bdb-4ee9-bad3-7996be041e54\") " pod="openstack/rabbitmq-server-0"
Dec 06 05:49:26 crc kubenswrapper[4958]: I1206 05:49:26.267873 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4bc06a17-7bdb-4ee9-bad3-7996be041e54-server-conf\") pod \"rabbitmq-server-0\" (UID: \"4bc06a17-7bdb-4ee9-bad3-7996be041e54\") " pod="openstack/rabbitmq-server-0"
Dec 06 05:49:26 crc kubenswrapper[4958]: I1206 05:49:26.268386 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4bc06a17-7bdb-4ee9-bad3-7996be041e54-config-data\") pod \"rabbitmq-server-0\" (UID: \"4bc06a17-7bdb-4ee9-bad3-7996be041e54\") " pod="openstack/rabbitmq-server-0"
Dec 06 05:49:26 crc kubenswrapper[4958]: I1206 05:49:26.277956 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4bc06a17-7bdb-4ee9-bad3-7996be041e54-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"4bc06a17-7bdb-4ee9-bad3-7996be041e54\") " pod="openstack/rabbitmq-server-0"
Dec 06 05:49:26 crc kubenswrapper[4958]: I1206 05:49:26.278418 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4bc06a17-7bdb-4ee9-bad3-7996be041e54-pod-info\") pod \"rabbitmq-server-0\" (UID: \"4bc06a17-7bdb-4ee9-bad3-7996be041e54\") " pod="openstack/rabbitmq-server-0"
Dec 06 05:49:26 crc kubenswrapper[4958]: I1206 05:49:26.278931 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4bc06a17-7bdb-4ee9-bad3-7996be041e54-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"4bc06a17-7bdb-4ee9-bad3-7996be041e54\") " pod="openstack/rabbitmq-server-0"
Dec 06 05:49:26 crc kubenswrapper[4958]: I1206 05:49:26.279031 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4bc06a17-7bdb-4ee9-bad3-7996be041e54-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"4bc06a17-7bdb-4ee9-bad3-7996be041e54\") " pod="openstack/rabbitmq-server-0"
Dec 06 05:49:26 crc kubenswrapper[4958]: I1206 05:49:26.279486 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4bc06a17-7bdb-4ee9-bad3-7996be041e54-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"4bc06a17-7bdb-4ee9-bad3-7996be041e54\") " pod="openstack/rabbitmq-server-0"
Dec 06 05:49:26 crc kubenswrapper[4958]: I1206 05:49:26.286574 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxxfp\" (UniqueName: \"kubernetes.io/projected/4bc06a17-7bdb-4ee9-bad3-7996be041e54-kube-api-access-mxxfp\") pod \"rabbitmq-server-0\" (UID: \"4bc06a17-7bdb-4ee9-bad3-7996be041e54\") " pod="openstack/rabbitmq-server-0"
Dec 06 05:49:26 crc kubenswrapper[4958]: I1206 05:49:26.313742 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"4bc06a17-7bdb-4ee9-bad3-7996be041e54\") " pod="openstack/rabbitmq-server-0"
Dec 06 05:49:26 crc kubenswrapper[4958]: I1206 05:49:26.334188 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Dec 06 05:49:26 crc kubenswrapper[4958]: I1206 05:49:26.615332 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-notifications-server-0"]
Dec 06 05:49:26 crc kubenswrapper[4958]: I1206 05:49:26.915008 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Dec 06 05:49:26 crc kubenswrapper[4958]: W1206 05:49:26.963913 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4bc06a17_7bdb_4ee9_bad3_7996be041e54.slice/crio-dfda01df8c1a79e729ec331515447f2c5c29bcc8de1be89743c760b8d9a3d6d8 WatchSource:0}: Error finding container dfda01df8c1a79e729ec331515447f2c5c29bcc8de1be89743c760b8d9a3d6d8: Status 404 returned error can't find the container with id dfda01df8c1a79e729ec331515447f2c5c29bcc8de1be89743c760b8d9a3d6d8
Dec 06 05:49:26 crc kubenswrapper[4958]: I1206 05:49:26.991002 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-notifications-server-0" event={"ID":"74d63159-9580-4b70-ba89-74d4d9eeb7b8","Type":"ContainerStarted","Data":"605a24809c17c01add134d845c66c4b3ab281dd96b9924aacf5f74f0fabf800d"}
Dec 06 05:49:26 crc kubenswrapper[4958]: I1206 05:49:26.997931 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"4bc06a17-7bdb-4ee9-bad3-7996be041e54","Type":"ContainerStarted","Data":"dfda01df8c1a79e729ec331515447f2c5c29bcc8de1be89743c760b8d9a3d6d8"}
Dec 06 05:49:27 crc kubenswrapper[4958]: I1206 05:49:27.001502 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"3141e77c-a73b-400b-b607-21be8537cca4","Type":"ContainerStarted","Data":"0769ff1780c2d2933a7fd285c9ba748d3a32329376a6f34692df6bdcd911b316"}
Dec 06 05:49:27 crc kubenswrapper[4958]: I1206 05:49:27.688662 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"]
Dec 06 05:49:27 crc kubenswrapper[4958]: I1206 05:49:27.691879 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Dec 06 05:49:27 crc kubenswrapper[4958]: I1206 05:49:27.695848 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Dec 06 05:49:27 crc kubenswrapper[4958]: I1206 05:49:27.695956 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-ptmzs"
Dec 06 05:49:27 crc kubenswrapper[4958]: I1206 05:49:27.696005 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data"
Dec 06 05:49:27 crc kubenswrapper[4958]: I1206 05:49:27.697624 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc"
Dec 06 05:49:27 crc kubenswrapper[4958]: I1206 05:49:27.699013 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts"
Dec 06 05:49:27 crc kubenswrapper[4958]: I1206 05:49:27.720183 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle"
Dec 06 05:49:27 crc kubenswrapper[4958]: I1206 05:49:27.796562 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/0523eb0f-9fe1-49d4-a3b4-6a872317c136-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"0523eb0f-9fe1-49d4-a3b4-6a872317c136\") " pod="openstack/openstack-galera-0"
Dec 06 05:49:27 crc kubenswrapper[4958]: I1206 05:49:27.796619 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-galera-0\" (UID: \"0523eb0f-9fe1-49d4-a3b4-6a872317c136\") " pod="openstack/openstack-galera-0"
Dec 06 05:49:27 crc kubenswrapper[4958]: I1206 05:49:27.796646 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0523eb0f-9fe1-49d4-a3b4-6a872317c136-kolla-config\") pod \"openstack-galera-0\" (UID: \"0523eb0f-9fe1-49d4-a3b4-6a872317c136\") " pod="openstack/openstack-galera-0"
Dec 06 05:49:27 crc kubenswrapper[4958]: I1206 05:49:27.796667 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/0523eb0f-9fe1-49d4-a3b4-6a872317c136-config-data-generated\") pod \"openstack-galera-0\" (UID: \"0523eb0f-9fe1-49d4-a3b4-6a872317c136\") " pod="openstack/openstack-galera-0"
Dec 06 05:49:27 crc kubenswrapper[4958]: I1206 05:49:27.796696 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wszn\" (UniqueName: \"kubernetes.io/projected/0523eb0f-9fe1-49d4-a3b4-6a872317c136-kube-api-access-7wszn\") pod \"openstack-galera-0\" (UID: \"0523eb0f-9fe1-49d4-a3b4-6a872317c136\") " pod="openstack/openstack-galera-0"
Dec 06 05:49:27 crc kubenswrapper[4958]: I1206 05:49:27.796713 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/0523eb0f-9fe1-49d4-a3b4-6a872317c136-config-data-default\") pod \"openstack-galera-0\" (UID: \"0523eb0f-9fe1-49d4-a3b4-6a872317c136\") " pod="openstack/openstack-galera-0"
Dec 06 05:49:27 crc kubenswrapper[4958]: I1206 05:49:27.796760 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0523eb0f-9fe1-49d4-a3b4-6a872317c136-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"0523eb0f-9fe1-49d4-a3b4-6a872317c136\") " pod="openstack/openstack-galera-0"
Dec 06 05:49:27 crc kubenswrapper[4958]: I1206 05:49:27.796796 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0523eb0f-9fe1-49d4-a3b4-6a872317c136-operator-scripts\") pod \"openstack-galera-0\" (UID: \"0523eb0f-9fe1-49d4-a3b4-6a872317c136\") " pod="openstack/openstack-galera-0"
Dec 06 05:49:27 crc kubenswrapper[4958]: I1206 05:49:27.898167 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wszn\" (UniqueName: \"kubernetes.io/projected/0523eb0f-9fe1-49d4-a3b4-6a872317c136-kube-api-access-7wszn\") pod \"openstack-galera-0\" (UID: \"0523eb0f-9fe1-49d4-a3b4-6a872317c136\") " pod="openstack/openstack-galera-0"
Dec 06 05:49:27 crc kubenswrapper[4958]: I1206 05:49:27.898564 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/0523eb0f-9fe1-49d4-a3b4-6a872317c136-config-data-default\") pod \"openstack-galera-0\" (UID: \"0523eb0f-9fe1-49d4-a3b4-6a872317c136\") " pod="openstack/openstack-galera-0"
Dec 06 05:49:27 crc kubenswrapper[4958]: I1206 05:49:27.898630 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0523eb0f-9fe1-49d4-a3b4-6a872317c136-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"0523eb0f-9fe1-49d4-a3b4-6a872317c136\") " pod="openstack/openstack-galera-0"
Dec 06 05:49:27 crc kubenswrapper[4958]: I1206 05:49:27.898684 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0523eb0f-9fe1-49d4-a3b4-6a872317c136-operator-scripts\") pod \"openstack-galera-0\" (UID: \"0523eb0f-9fe1-49d4-a3b4-6a872317c136\") " pod="openstack/openstack-galera-0"
Dec 06 05:49:27 crc kubenswrapper[4958]: I1206 05:49:27.898732 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/0523eb0f-9fe1-49d4-a3b4-6a872317c136-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"0523eb0f-9fe1-49d4-a3b4-6a872317c136\") " pod="openstack/openstack-galera-0"
Dec 06 05:49:27 crc kubenswrapper[4958]: I1206 05:49:27.898885 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-galera-0\" (UID: \"0523eb0f-9fe1-49d4-a3b4-6a872317c136\") " pod="openstack/openstack-galera-0"
Dec 06 05:49:27 crc kubenswrapper[4958]: I1206 05:49:27.898922 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0523eb0f-9fe1-49d4-a3b4-6a872317c136-kolla-config\") pod \"openstack-galera-0\" (UID: \"0523eb0f-9fe1-49d4-a3b4-6a872317c136\") " pod="openstack/openstack-galera-0"
Dec 06 05:49:27 crc kubenswrapper[4958]: I1206 05:49:27.898961 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/0523eb0f-9fe1-49d4-a3b4-6a872317c136-config-data-generated\") pod \"openstack-galera-0\" (UID:
\"0523eb0f-9fe1-49d4-a3b4-6a872317c136\") " pod="openstack/openstack-galera-0" Dec 06 05:49:27 crc kubenswrapper[4958]: I1206 05:49:27.899280 4958 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-galera-0\" (UID: \"0523eb0f-9fe1-49d4-a3b4-6a872317c136\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/openstack-galera-0" Dec 06 05:49:27 crc kubenswrapper[4958]: I1206 05:49:27.900367 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/0523eb0f-9fe1-49d4-a3b4-6a872317c136-config-data-default\") pod \"openstack-galera-0\" (UID: \"0523eb0f-9fe1-49d4-a3b4-6a872317c136\") " pod="openstack/openstack-galera-0" Dec 06 05:49:27 crc kubenswrapper[4958]: I1206 05:49:27.900946 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0523eb0f-9fe1-49d4-a3b4-6a872317c136-operator-scripts\") pod \"openstack-galera-0\" (UID: \"0523eb0f-9fe1-49d4-a3b4-6a872317c136\") " pod="openstack/openstack-galera-0" Dec 06 05:49:27 crc kubenswrapper[4958]: I1206 05:49:27.904337 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0523eb0f-9fe1-49d4-a3b4-6a872317c136-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"0523eb0f-9fe1-49d4-a3b4-6a872317c136\") " pod="openstack/openstack-galera-0" Dec 06 05:49:27 crc kubenswrapper[4958]: I1206 05:49:27.908082 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/0523eb0f-9fe1-49d4-a3b4-6a872317c136-config-data-generated\") pod \"openstack-galera-0\" (UID: \"0523eb0f-9fe1-49d4-a3b4-6a872317c136\") " pod="openstack/openstack-galera-0" Dec 06 05:49:27 crc kubenswrapper[4958]: I1206 05:49:27.908262 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0523eb0f-9fe1-49d4-a3b4-6a872317c136-kolla-config\") pod \"openstack-galera-0\" (UID: \"0523eb0f-9fe1-49d4-a3b4-6a872317c136\") " pod="openstack/openstack-galera-0" Dec 06 05:49:27 crc kubenswrapper[4958]: I1206 05:49:27.917156 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/0523eb0f-9fe1-49d4-a3b4-6a872317c136-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"0523eb0f-9fe1-49d4-a3b4-6a872317c136\") " pod="openstack/openstack-galera-0" Dec 06 05:49:27 crc kubenswrapper[4958]: I1206 05:49:27.918383 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wszn\" (UniqueName: \"kubernetes.io/projected/0523eb0f-9fe1-49d4-a3b4-6a872317c136-kube-api-access-7wszn\") pod \"openstack-galera-0\" (UID: \"0523eb0f-9fe1-49d4-a3b4-6a872317c136\") " pod="openstack/openstack-galera-0" Dec 06 05:49:27 crc kubenswrapper[4958]: I1206 05:49:27.929643 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-galera-0\" (UID: \"0523eb0f-9fe1-49d4-a3b4-6a872317c136\") " pod="openstack/openstack-galera-0" Dec 06 05:49:28 crc kubenswrapper[4958]: I1206 05:49:28.011597 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Dec 06 05:49:28 crc kubenswrapper[4958]: I1206 05:49:28.574108 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Dec 06 05:49:28 crc kubenswrapper[4958]: W1206 05:49:28.606894 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0523eb0f_9fe1_49d4_a3b4_6a872317c136.slice/crio-75bb35711ce5570e3a73c49e1ef2d55d9073166c8c09d1c559f0ef4c634151f1 WatchSource:0}: Error finding container 75bb35711ce5570e3a73c49e1ef2d55d9073166c8c09d1c559f0ef4c634151f1: Status 404 returned error can't find the container with id 75bb35711ce5570e3a73c49e1ef2d55d9073166c8c09d1c559f0ef4c634151f1 Dec 06 05:49:29 crc kubenswrapper[4958]: I1206 05:49:29.114434 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 06 05:49:29 crc kubenswrapper[4958]: I1206 05:49:29.116386 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"0523eb0f-9fe1-49d4-a3b4-6a872317c136","Type":"ContainerStarted","Data":"75bb35711ce5570e3a73c49e1ef2d55d9073166c8c09d1c559f0ef4c634151f1"} Dec 06 05:49:29 crc kubenswrapper[4958]: I1206 05:49:29.116527 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Dec 06 05:49:29 crc kubenswrapper[4958]: I1206 05:49:29.121810 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Dec 06 05:49:29 crc kubenswrapper[4958]: I1206 05:49:29.122393 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Dec 06 05:49:29 crc kubenswrapper[4958]: I1206 05:49:29.122711 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Dec 06 05:49:29 crc kubenswrapper[4958]: I1206 05:49:29.129791 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-tp8js" Dec 06 05:49:29 crc kubenswrapper[4958]: I1206 05:49:29.131360 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 06 05:49:29 crc kubenswrapper[4958]: I1206 05:49:29.234389 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e2701d0b-9691-44fb-a540-796260e0f2c1-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"e2701d0b-9691-44fb-a540-796260e0f2c1\") " pod="openstack/openstack-cell1-galera-0" Dec 06 05:49:29 crc kubenswrapper[4958]: I1206 05:49:29.234523 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e2701d0b-9691-44fb-a540-796260e0f2c1-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"e2701d0b-9691-44fb-a540-796260e0f2c1\") " pod="openstack/openstack-cell1-galera-0" Dec 06 05:49:29 crc kubenswrapper[4958]: I1206 05:49:29.234548 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2701d0b-9691-44fb-a540-796260e0f2c1-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"e2701d0b-9691-44fb-a540-796260e0f2c1\") " pod="openstack/openstack-cell1-galera-0" Dec 06 05:49:29 crc kubenswrapper[4958]: I1206 05:49:29.234577 4958 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2701d0b-9691-44fb-a540-796260e0f2c1-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"e2701d0b-9691-44fb-a540-796260e0f2c1\") " pod="openstack/openstack-cell1-galera-0" Dec 06 05:49:29 crc kubenswrapper[4958]: I1206 05:49:29.234606 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-cell1-galera-0\" (UID: \"e2701d0b-9691-44fb-a540-796260e0f2c1\") " pod="openstack/openstack-cell1-galera-0" Dec 06 05:49:29 crc kubenswrapper[4958]: I1206 05:49:29.234695 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2701d0b-9691-44fb-a540-796260e0f2c1-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"e2701d0b-9691-44fb-a540-796260e0f2c1\") " pod="openstack/openstack-cell1-galera-0" Dec 06 05:49:29 crc kubenswrapper[4958]: I1206 05:49:29.234722 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e2701d0b-9691-44fb-a540-796260e0f2c1-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"e2701d0b-9691-44fb-a540-796260e0f2c1\") " pod="openstack/openstack-cell1-galera-0" Dec 06 05:49:29 crc kubenswrapper[4958]: I1206 05:49:29.234756 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hd47d\" (UniqueName: \"kubernetes.io/projected/e2701d0b-9691-44fb-a540-796260e0f2c1-kube-api-access-hd47d\") pod \"openstack-cell1-galera-0\" (UID: \"e2701d0b-9691-44fb-a540-796260e0f2c1\") " pod="openstack/openstack-cell1-galera-0" Dec 06 05:49:29 crc kubenswrapper[4958]: I1206 05:49:29.335734 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2701d0b-9691-44fb-a540-796260e0f2c1-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"e2701d0b-9691-44fb-a540-796260e0f2c1\") " pod="openstack/openstack-cell1-galera-0" Dec 06 05:49:29 crc kubenswrapper[4958]: I1206 05:49:29.335777 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e2701d0b-9691-44fb-a540-796260e0f2c1-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"e2701d0b-9691-44fb-a540-796260e0f2c1\") " pod="openstack/openstack-cell1-galera-0" Dec 06 05:49:29 crc kubenswrapper[4958]: I1206 05:49:29.335810 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hd47d\" (UniqueName: \"kubernetes.io/projected/e2701d0b-9691-44fb-a540-796260e0f2c1-kube-api-access-hd47d\") pod \"openstack-cell1-galera-0\" (UID: \"e2701d0b-9691-44fb-a540-796260e0f2c1\") " pod="openstack/openstack-cell1-galera-0" Dec 06 05:49:29 crc kubenswrapper[4958]: I1206 05:49:29.335842 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e2701d0b-9691-44fb-a540-796260e0f2c1-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"e2701d0b-9691-44fb-a540-796260e0f2c1\") " pod="openstack/openstack-cell1-galera-0" Dec 06 05:49:29 crc kubenswrapper[4958]: 
I1206 05:49:29.335893 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e2701d0b-9691-44fb-a540-796260e0f2c1-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"e2701d0b-9691-44fb-a540-796260e0f2c1\") " pod="openstack/openstack-cell1-galera-0" Dec 06 05:49:29 crc kubenswrapper[4958]: I1206 05:49:29.335918 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2701d0b-9691-44fb-a540-796260e0f2c1-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"e2701d0b-9691-44fb-a540-796260e0f2c1\") " pod="openstack/openstack-cell1-galera-0" Dec 06 05:49:29 crc kubenswrapper[4958]: I1206 05:49:29.335948 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2701d0b-9691-44fb-a540-796260e0f2c1-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"e2701d0b-9691-44fb-a540-796260e0f2c1\") " pod="openstack/openstack-cell1-galera-0" Dec 06 05:49:29 crc kubenswrapper[4958]: I1206 05:49:29.335977 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-cell1-galera-0\" (UID: \"e2701d0b-9691-44fb-a540-796260e0f2c1\") " pod="openstack/openstack-cell1-galera-0" Dec 06 05:49:29 crc kubenswrapper[4958]: I1206 05:49:29.336229 4958 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-cell1-galera-0\" (UID: \"e2701d0b-9691-44fb-a540-796260e0f2c1\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/openstack-cell1-galera-0" Dec 06 05:49:29 crc kubenswrapper[4958]: I1206 05:49:29.337208 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e2701d0b-9691-44fb-a540-796260e0f2c1-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"e2701d0b-9691-44fb-a540-796260e0f2c1\") " pod="openstack/openstack-cell1-galera-0" Dec 06 05:49:29 crc kubenswrapper[4958]: I1206 05:49:29.337761 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e2701d0b-9691-44fb-a540-796260e0f2c1-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"e2701d0b-9691-44fb-a540-796260e0f2c1\") " pod="openstack/openstack-cell1-galera-0" Dec 06 05:49:29 crc kubenswrapper[4958]: I1206 05:49:29.342748 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e2701d0b-9691-44fb-a540-796260e0f2c1-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"e2701d0b-9691-44fb-a540-796260e0f2c1\") " pod="openstack/openstack-cell1-galera-0" Dec 06 05:49:29 crc kubenswrapper[4958]: I1206 05:49:29.345006 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e2701d0b-9691-44fb-a540-796260e0f2c1-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"e2701d0b-9691-44fb-a540-796260e0f2c1\") " pod="openstack/openstack-cell1-galera-0" Dec 06 05:49:29 crc kubenswrapper[4958]: I1206 05:49:29.354953 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e2701d0b-9691-44fb-a540-796260e0f2c1-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"e2701d0b-9691-44fb-a540-796260e0f2c1\") " pod="openstack/openstack-cell1-galera-0" Dec 06 05:49:29 crc kubenswrapper[4958]: I1206 05:49:29.359157 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hd47d\" (UniqueName: \"kubernetes.io/projected/e2701d0b-9691-44fb-a540-796260e0f2c1-kube-api-access-hd47d\") pod \"openstack-cell1-galera-0\" (UID: \"e2701d0b-9691-44fb-a540-796260e0f2c1\") " pod="openstack/openstack-cell1-galera-0" Dec 06 05:49:29 crc kubenswrapper[4958]: I1206 05:49:29.372783 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-cell1-galera-0\" (UID: \"e2701d0b-9691-44fb-a540-796260e0f2c1\") " pod="openstack/openstack-cell1-galera-0" Dec 06 05:49:29 crc kubenswrapper[4958]: I1206 05:49:29.373788 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2701d0b-9691-44fb-a540-796260e0f2c1-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"e2701d0b-9691-44fb-a540-796260e0f2c1\") " pod="openstack/openstack-cell1-galera-0" Dec 06 05:49:29 crc kubenswrapper[4958]: I1206 05:49:29.437803 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Dec 06 05:49:29 crc kubenswrapper[4958]: I1206 05:49:29.463821 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Dec 06 05:49:29 crc kubenswrapper[4958]: I1206 05:49:29.467744 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Dec 06 05:49:29 crc kubenswrapper[4958]: I1206 05:49:29.470347 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Dec 06 05:49:29 crc kubenswrapper[4958]: I1206 05:49:29.470433 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Dec 06 05:49:29 crc kubenswrapper[4958]: I1206 05:49:29.470446 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-sg8wd" Dec 06 05:49:29 crc kubenswrapper[4958]: I1206 05:49:29.480175 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Dec 06 05:49:29 crc kubenswrapper[4958]: I1206 05:49:29.543594 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d1076143-5994-4717-9d22-a56c404bc73b-config-data\") pod \"memcached-0\" (UID: \"d1076143-5994-4717-9d22-a56c404bc73b\") " pod="openstack/memcached-0" Dec 06 05:49:29 crc kubenswrapper[4958]: I1206 05:49:29.543685 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82m8j\" (UniqueName: \"kubernetes.io/projected/d1076143-5994-4717-9d22-a56c404bc73b-kube-api-access-82m8j\") pod \"memcached-0\" (UID: \"d1076143-5994-4717-9d22-a56c404bc73b\") " pod="openstack/memcached-0" Dec 06 05:49:29 crc kubenswrapper[4958]: I1206 05:49:29.543794 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1076143-5994-4717-9d22-a56c404bc73b-combined-ca-bundle\") pod \"memcached-0\" (UID: 
\"d1076143-5994-4717-9d22-a56c404bc73b\") " pod="openstack/memcached-0" Dec 06 05:49:29 crc kubenswrapper[4958]: I1206 05:49:29.543877 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1076143-5994-4717-9d22-a56c404bc73b-memcached-tls-certs\") pod \"memcached-0\" (UID: \"d1076143-5994-4717-9d22-a56c404bc73b\") " pod="openstack/memcached-0" Dec 06 05:49:29 crc kubenswrapper[4958]: I1206 05:49:29.546440 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d1076143-5994-4717-9d22-a56c404bc73b-kolla-config\") pod \"memcached-0\" (UID: \"d1076143-5994-4717-9d22-a56c404bc73b\") " pod="openstack/memcached-0" Dec 06 05:49:29 crc kubenswrapper[4958]: I1206 05:49:29.647975 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d1076143-5994-4717-9d22-a56c404bc73b-kolla-config\") pod \"memcached-0\" (UID: \"d1076143-5994-4717-9d22-a56c404bc73b\") " pod="openstack/memcached-0" Dec 06 05:49:29 crc kubenswrapper[4958]: I1206 05:49:29.648053 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d1076143-5994-4717-9d22-a56c404bc73b-config-data\") pod \"memcached-0\" (UID: \"d1076143-5994-4717-9d22-a56c404bc73b\") " pod="openstack/memcached-0" Dec 06 05:49:29 crc kubenswrapper[4958]: I1206 05:49:29.648103 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-82m8j\" (UniqueName: \"kubernetes.io/projected/d1076143-5994-4717-9d22-a56c404bc73b-kube-api-access-82m8j\") pod \"memcached-0\" (UID: \"d1076143-5994-4717-9d22-a56c404bc73b\") " pod="openstack/memcached-0" Dec 06 05:49:29 crc kubenswrapper[4958]: I1206 05:49:29.648130 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1076143-5994-4717-9d22-a56c404bc73b-combined-ca-bundle\") pod \"memcached-0\" (UID: \"d1076143-5994-4717-9d22-a56c404bc73b\") " pod="openstack/memcached-0" Dec 06 05:49:29 crc kubenswrapper[4958]: I1206 05:49:29.648159 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1076143-5994-4717-9d22-a56c404bc73b-memcached-tls-certs\") pod \"memcached-0\" (UID: \"d1076143-5994-4717-9d22-a56c404bc73b\") " pod="openstack/memcached-0" Dec 06 05:49:29 crc kubenswrapper[4958]: I1206 05:49:29.649146 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d1076143-5994-4717-9d22-a56c404bc73b-kolla-config\") pod \"memcached-0\" (UID: \"d1076143-5994-4717-9d22-a56c404bc73b\") " pod="openstack/memcached-0" Dec 06 05:49:29 crc kubenswrapper[4958]: I1206 05:49:29.649696 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d1076143-5994-4717-9d22-a56c404bc73b-config-data\") pod \"memcached-0\" (UID: \"d1076143-5994-4717-9d22-a56c404bc73b\") " pod="openstack/memcached-0" Dec 06 05:49:29 crc kubenswrapper[4958]: I1206 05:49:29.655314 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1076143-5994-4717-9d22-a56c404bc73b-combined-ca-bundle\") 
pod \"memcached-0\" (UID: \"d1076143-5994-4717-9d22-a56c404bc73b\") " pod="openstack/memcached-0" Dec 06 05:49:29 crc kubenswrapper[4958]: I1206 05:49:29.656929 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/d1076143-5994-4717-9d22-a56c404bc73b-memcached-tls-certs\") pod \"memcached-0\" (UID: \"d1076143-5994-4717-9d22-a56c404bc73b\") " pod="openstack/memcached-0" Dec 06 05:49:29 crc kubenswrapper[4958]: I1206 05:49:29.681634 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-82m8j\" (UniqueName: \"kubernetes.io/projected/d1076143-5994-4717-9d22-a56c404bc73b-kube-api-access-82m8j\") pod \"memcached-0\" (UID: \"d1076143-5994-4717-9d22-a56c404bc73b\") " pod="openstack/memcached-0" Dec 06 05:49:29 crc kubenswrapper[4958]: I1206 05:49:29.805510 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Dec 06 05:49:30 crc kubenswrapper[4958]: I1206 05:49:30.225672 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 06 05:49:30 crc kubenswrapper[4958]: W1206 05:49:30.232025 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode2701d0b_9691_44fb_a540_796260e0f2c1.slice/crio-d44530f81efcb8b81ad193d2a91788eb78f087e29feb7eddbd2ec663b462bb64 WatchSource:0}: Error finding container d44530f81efcb8b81ad193d2a91788eb78f087e29feb7eddbd2ec663b462bb64: Status 404 returned error can't find the container with id d44530f81efcb8b81ad193d2a91788eb78f087e29feb7eddbd2ec663b462bb64 Dec 06 05:49:30 crc kubenswrapper[4958]: I1206 05:49:30.503867 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Dec 06 05:49:31 crc kubenswrapper[4958]: I1206 05:49:31.216889 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"d1076143-5994-4717-9d22-a56c404bc73b","Type":"ContainerStarted","Data":"2b25afc32f7808335d965b13d170deed4cfa5cc90b38942e2ab7f788d9b24b6a"} Dec 06 05:49:31 crc kubenswrapper[4958]: I1206 05:49:31.220330 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"e2701d0b-9691-44fb-a540-796260e0f2c1","Type":"ContainerStarted","Data":"d44530f81efcb8b81ad193d2a91788eb78f087e29feb7eddbd2ec663b462bb64"} Dec 06 05:49:31 crc kubenswrapper[4958]: I1206 05:49:31.346797 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Dec 06 05:49:31 crc kubenswrapper[4958]: I1206 05:49:31.353629 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 06 05:49:31 crc kubenswrapper[4958]: I1206 05:49:31.357166 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-h7dxg" Dec 06 05:49:31 crc kubenswrapper[4958]: I1206 05:49:31.366544 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 06 05:49:31 crc kubenswrapper[4958]: I1206 05:49:31.385047 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvt45\" (UniqueName: \"kubernetes.io/projected/636ba8d3-ffe7-42ba-85eb-0cd2da08036d-kube-api-access-mvt45\") pod \"kube-state-metrics-0\" (UID: \"636ba8d3-ffe7-42ba-85eb-0cd2da08036d\") " pod="openstack/kube-state-metrics-0" Dec 06 05:49:31 crc kubenswrapper[4958]: I1206 05:49:31.489990 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvt45\" (UniqueName: \"kubernetes.io/projected/636ba8d3-ffe7-42ba-85eb-0cd2da08036d-kube-api-access-mvt45\") pod \"kube-state-metrics-0\" (UID: \"636ba8d3-ffe7-42ba-85eb-0cd2da08036d\") " pod="openstack/kube-state-metrics-0" Dec 06 05:49:31 crc kubenswrapper[4958]: I1206 05:49:31.551678 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvt45\" (UniqueName: \"kubernetes.io/projected/636ba8d3-ffe7-42ba-85eb-0cd2da08036d-kube-api-access-mvt45\") pod \"kube-state-metrics-0\" (UID: \"636ba8d3-ffe7-42ba-85eb-0cd2da08036d\") " pod="openstack/kube-state-metrics-0" Dec 06 05:49:31 crc kubenswrapper[4958]: I1206 05:49:31.692084 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 06 05:49:32 crc kubenswrapper[4958]: I1206 05:49:32.732238 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 06 05:49:32 crc kubenswrapper[4958]: I1206 05:49:32.735004 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Dec 06 05:49:32 crc kubenswrapper[4958]: I1206 05:49:32.741494 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Dec 06 05:49:32 crc kubenswrapper[4958]: I1206 05:49:32.741531 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Dec 06 05:49:32 crc kubenswrapper[4958]: I1206 05:49:32.741767 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Dec 06 05:49:32 crc kubenswrapper[4958]: I1206 05:49:32.742035 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Dec 06 05:49:32 crc kubenswrapper[4958]: I1206 05:49:32.744039 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-bvqpz" Dec 06 05:49:32 crc kubenswrapper[4958]: I1206 05:49:32.745954 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Dec 06 05:49:32 crc kubenswrapper[4958]: I1206 05:49:32.763193 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 06 05:49:32 crc kubenswrapper[4958]: I1206 05:49:32.821757 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/cb13a191-c616-4f16-82bd-138a1cd46032-config\") pod \"prometheus-metric-storage-0\" (UID: \"cb13a191-c616-4f16-82bd-138a1cd46032\") " pod="openstack/prometheus-metric-storage-0" Dec 06 05:49:32 crc kubenswrapper[4958]: I1206 05:49:32.821869 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/cb13a191-c616-4f16-82bd-138a1cd46032-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"cb13a191-c616-4f16-82bd-138a1cd46032\") " pod="openstack/prometheus-metric-storage-0" Dec 06 05:49:32 crc kubenswrapper[4958]: I1206 05:49:32.821928 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/cb13a191-c616-4f16-82bd-138a1cd46032-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"cb13a191-c616-4f16-82bd-138a1cd46032\") " pod="openstack/prometheus-metric-storage-0" Dec 06 05:49:32 crc kubenswrapper[4958]: I1206 05:49:32.821988 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqggf\" (UniqueName: \"kubernetes.io/projected/cb13a191-c616-4f16-82bd-138a1cd46032-kube-api-access-tqggf\") pod \"prometheus-metric-storage-0\" (UID: \"cb13a191-c616-4f16-82bd-138a1cd46032\") " pod="openstack/prometheus-metric-storage-0" Dec 06 05:49:32 crc kubenswrapper[4958]: I1206 05:49:32.822036 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/cb13a191-c616-4f16-82bd-138a1cd46032-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"cb13a191-c616-4f16-82bd-138a1cd46032\") " pod="openstack/prometheus-metric-storage-0" Dec 06 05:49:32 crc kubenswrapper[4958]: I1206 05:49:32.822058 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"web-config\" (UniqueName: \"kubernetes.io/secret/cb13a191-c616-4f16-82bd-138a1cd46032-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"cb13a191-c616-4f16-82bd-138a1cd46032\") " pod="openstack/prometheus-metric-storage-0" Dec 06 05:49:32 crc kubenswrapper[4958]: I1206 05:49:32.822093 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/cb13a191-c616-4f16-82bd-138a1cd46032-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"cb13a191-c616-4f16-82bd-138a1cd46032\") " pod="openstack/prometheus-metric-storage-0" Dec 06 05:49:32 crc kubenswrapper[4958]: I1206 05:49:32.822129 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-89ffbb74-1bfb-49be-9c5f-526e0d1d0f82\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-89ffbb74-1bfb-49be-9c5f-526e0d1d0f82\") pod \"prometheus-metric-storage-0\" (UID: \"cb13a191-c616-4f16-82bd-138a1cd46032\") " pod="openstack/prometheus-metric-storage-0" Dec 06 05:49:32 crc kubenswrapper[4958]: I1206 05:49:32.923991 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqggf\" (UniqueName: \"kubernetes.io/projected/cb13a191-c616-4f16-82bd-138a1cd46032-kube-api-access-tqggf\") pod \"prometheus-metric-storage-0\" (UID: \"cb13a191-c616-4f16-82bd-138a1cd46032\") " pod="openstack/prometheus-metric-storage-0" Dec 06 05:49:32 crc kubenswrapper[4958]: I1206 05:49:32.924056 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/cb13a191-c616-4f16-82bd-138a1cd46032-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"cb13a191-c616-4f16-82bd-138a1cd46032\") " pod="openstack/prometheus-metric-storage-0" Dec 06 05:49:32 crc kubenswrapper[4958]: I1206 05:49:32.924079 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/cb13a191-c616-4f16-82bd-138a1cd46032-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"cb13a191-c616-4f16-82bd-138a1cd46032\") " pod="openstack/prometheus-metric-storage-0" Dec 06 05:49:32 crc kubenswrapper[4958]: I1206 05:49:32.924115 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/cb13a191-c616-4f16-82bd-138a1cd46032-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"cb13a191-c616-4f16-82bd-138a1cd46032\") " pod="openstack/prometheus-metric-storage-0" Dec 06 05:49:32 crc kubenswrapper[4958]: I1206 05:49:32.924140 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-89ffbb74-1bfb-49be-9c5f-526e0d1d0f82\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-89ffbb74-1bfb-49be-9c5f-526e0d1d0f82\") pod \"prometheus-metric-storage-0\" (UID: \"cb13a191-c616-4f16-82bd-138a1cd46032\") " pod="openstack/prometheus-metric-storage-0" Dec 06 05:49:32 crc kubenswrapper[4958]: I1206 05:49:32.924229 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/cb13a191-c616-4f16-82bd-138a1cd46032-config\") pod \"prometheus-metric-storage-0\" (UID: \"cb13a191-c616-4f16-82bd-138a1cd46032\") " pod="openstack/prometheus-metric-storage-0" Dec 06 05:49:32 crc kubenswrapper[4958]: I1206 
05:49:32.924250 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/cb13a191-c616-4f16-82bd-138a1cd46032-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"cb13a191-c616-4f16-82bd-138a1cd46032\") " pod="openstack/prometheus-metric-storage-0" Dec 06 05:49:32 crc kubenswrapper[4958]: I1206 05:49:32.924289 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/cb13a191-c616-4f16-82bd-138a1cd46032-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"cb13a191-c616-4f16-82bd-138a1cd46032\") " pod="openstack/prometheus-metric-storage-0" Dec 06 05:49:32 crc kubenswrapper[4958]: I1206 05:49:32.928417 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/cb13a191-c616-4f16-82bd-138a1cd46032-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"cb13a191-c616-4f16-82bd-138a1cd46032\") " pod="openstack/prometheus-metric-storage-0" Dec 06 05:49:32 crc kubenswrapper[4958]: I1206 05:49:32.928569 4958 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 06 05:49:32 crc kubenswrapper[4958]: I1206 05:49:32.928839 4958 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-89ffbb74-1bfb-49be-9c5f-526e0d1d0f82\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-89ffbb74-1bfb-49be-9c5f-526e0d1d0f82\") pod \"prometheus-metric-storage-0\" (UID: \"cb13a191-c616-4f16-82bd-138a1cd46032\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/7e3431a5196bc2909b8ac76f6bdd967b074e179639976d5fb690f175bc86873e/globalmount\"" pod="openstack/prometheus-metric-storage-0" Dec 06 05:49:32 crc kubenswrapper[4958]: I1206 05:49:32.933208 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/cb13a191-c616-4f16-82bd-138a1cd46032-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"cb13a191-c616-4f16-82bd-138a1cd46032\") " pod="openstack/prometheus-metric-storage-0" Dec 06 05:49:32 crc kubenswrapper[4958]: I1206 05:49:32.935562 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/cb13a191-c616-4f16-82bd-138a1cd46032-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"cb13a191-c616-4f16-82bd-138a1cd46032\") " pod="openstack/prometheus-metric-storage-0" Dec 06 05:49:32 crc kubenswrapper[4958]: I1206 05:49:32.940639 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/cb13a191-c616-4f16-82bd-138a1cd46032-config\") pod \"prometheus-metric-storage-0\" (UID: \"cb13a191-c616-4f16-82bd-138a1cd46032\") " pod="openstack/prometheus-metric-storage-0" Dec 06 05:49:32 crc kubenswrapper[4958]: I1206 05:49:32.942052 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/cb13a191-c616-4f16-82bd-138a1cd46032-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"cb13a191-c616-4f16-82bd-138a1cd46032\") " pod="openstack/prometheus-metric-storage-0" Dec 06 05:49:32 crc kubenswrapper[4958]: I1206 
05:49:32.949267 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/cb13a191-c616-4f16-82bd-138a1cd46032-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"cb13a191-c616-4f16-82bd-138a1cd46032\") " pod="openstack/prometheus-metric-storage-0" Dec 06 05:49:32 crc kubenswrapper[4958]: I1206 05:49:32.957417 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqggf\" (UniqueName: \"kubernetes.io/projected/cb13a191-c616-4f16-82bd-138a1cd46032-kube-api-access-tqggf\") pod \"prometheus-metric-storage-0\" (UID: \"cb13a191-c616-4f16-82bd-138a1cd46032\") " pod="openstack/prometheus-metric-storage-0" Dec 06 05:49:32 crc kubenswrapper[4958]: I1206 05:49:32.983342 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-89ffbb74-1bfb-49be-9c5f-526e0d1d0f82\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-89ffbb74-1bfb-49be-9c5f-526e0d1d0f82\") pod \"prometheus-metric-storage-0\" (UID: \"cb13a191-c616-4f16-82bd-138a1cd46032\") " pod="openstack/prometheus-metric-storage-0" Dec 06 05:49:32 crc kubenswrapper[4958]: I1206 05:49:32.995986 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 06 05:49:33 crc kubenswrapper[4958]: I1206 05:49:33.063140 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Dec 06 05:49:33 crc kubenswrapper[4958]: I1206 05:49:33.253952 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"636ba8d3-ffe7-42ba-85eb-0cd2da08036d","Type":"ContainerStarted","Data":"4a74ee9c5f5b84f5bac2c68a782f4f67f79194d4192462ca2e55b1b90883af13"} Dec 06 05:49:33 crc kubenswrapper[4958]: I1206 05:49:33.525971 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 06 05:49:34 crc kubenswrapper[4958]: I1206 05:49:34.286792 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"cb13a191-c616-4f16-82bd-138a1cd46032","Type":"ContainerStarted","Data":"5116d0f79179dcb89c87456d6085279504593b4364fdecd27adc5c860f6ff932"} Dec 06 05:49:35 crc kubenswrapper[4958]: I1206 05:49:35.084927 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 06 05:49:35 crc kubenswrapper[4958]: I1206 05:49:35.086439 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Dec 06 05:49:35 crc kubenswrapper[4958]: I1206 05:49:35.093518 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Dec 06 05:49:35 crc kubenswrapper[4958]: I1206 05:49:35.094941 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Dec 06 05:49:35 crc kubenswrapper[4958]: I1206 05:49:35.095105 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Dec 06 05:49:35 crc kubenswrapper[4958]: I1206 05:49:35.095895 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-npgjv" Dec 06 05:49:35 crc kubenswrapper[4958]: I1206 05:49:35.097121 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Dec 06 05:49:35 crc kubenswrapper[4958]: I1206 05:49:35.106252 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 06 05:49:35 crc kubenswrapper[4958]: I1206 05:49:35.174236 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7ff9bb68-a6f7-4ed7-b27a-da119f13b8a7-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"7ff9bb68-a6f7-4ed7-b27a-da119f13b8a7\") " pod="openstack/ovsdbserver-nb-0" Dec 06 05:49:35 crc kubenswrapper[4958]: I1206 05:49:35.174313 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ff9bb68-a6f7-4ed7-b27a-da119f13b8a7-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"7ff9bb68-a6f7-4ed7-b27a-da119f13b8a7\") " pod="openstack/ovsdbserver-nb-0" Dec 06 05:49:35 crc kubenswrapper[4958]: I1206 05:49:35.174338 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ff9bb68-a6f7-4ed7-b27a-da119f13b8a7-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"7ff9bb68-a6f7-4ed7-b27a-da119f13b8a7\") " pod="openstack/ovsdbserver-nb-0" Dec 06 05:49:35 crc kubenswrapper[4958]: I1206 05:49:35.174566 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"7ff9bb68-a6f7-4ed7-b27a-da119f13b8a7\") " pod="openstack/ovsdbserver-nb-0" Dec 06 05:49:35 crc kubenswrapper[4958]: I1206 05:49:35.174676 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svczs\" (UniqueName: \"kubernetes.io/projected/7ff9bb68-a6f7-4ed7-b27a-da119f13b8a7-kube-api-access-svczs\") pod \"ovsdbserver-nb-0\" (UID: \"7ff9bb68-a6f7-4ed7-b27a-da119f13b8a7\") " pod="openstack/ovsdbserver-nb-0" Dec 06 05:49:35 crc kubenswrapper[4958]: I1206 05:49:35.174755 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7ff9bb68-a6f7-4ed7-b27a-da119f13b8a7-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"7ff9bb68-a6f7-4ed7-b27a-da119f13b8a7\") " pod="openstack/ovsdbserver-nb-0" Dec 06 05:49:35 crc kubenswrapper[4958]: I1206 05:49:35.174777 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ff9bb68-a6f7-4ed7-b27a-da119f13b8a7-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"7ff9bb68-a6f7-4ed7-b27a-da119f13b8a7\") " pod="openstack/ovsdbserver-nb-0" Dec 06 05:49:35 crc kubenswrapper[4958]: I1206 05:49:35.174842 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ff9bb68-a6f7-4ed7-b27a-da119f13b8a7-config\") pod \"ovsdbserver-nb-0\" (UID: \"7ff9bb68-a6f7-4ed7-b27a-da119f13b8a7\") " pod="openstack/ovsdbserver-nb-0" Dec 06 05:49:35 crc kubenswrapper[4958]: I1206 05:49:35.279044 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ff9bb68-a6f7-4ed7-b27a-da119f13b8a7-config\") pod \"ovsdbserver-nb-0\" (UID: \"7ff9bb68-a6f7-4ed7-b27a-da119f13b8a7\") " pod="openstack/ovsdbserver-nb-0" Dec 06 05:49:35 crc kubenswrapper[4958]: I1206 05:49:35.279124 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7ff9bb68-a6f7-4ed7-b27a-da119f13b8a7-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"7ff9bb68-a6f7-4ed7-b27a-da119f13b8a7\") " pod="openstack/ovsdbserver-nb-0" Dec 06 05:49:35 crc kubenswrapper[4958]: I1206 05:49:35.279170 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ff9bb68-a6f7-4ed7-b27a-da119f13b8a7-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"7ff9bb68-a6f7-4ed7-b27a-da119f13b8a7\") " pod="openstack/ovsdbserver-nb-0" Dec 06 05:49:35 crc kubenswrapper[4958]: I1206 05:49:35.279185 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ff9bb68-a6f7-4ed7-b27a-da119f13b8a7-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"7ff9bb68-a6f7-4ed7-b27a-da119f13b8a7\") " pod="openstack/ovsdbserver-nb-0" Dec 06 05:49:35 crc kubenswrapper[4958]: I1206 05:49:35.279235 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"7ff9bb68-a6f7-4ed7-b27a-da119f13b8a7\") " pod="openstack/ovsdbserver-nb-0" Dec 06 05:49:35 crc kubenswrapper[4958]: I1206 05:49:35.279266 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-svczs\" (UniqueName: \"kubernetes.io/projected/7ff9bb68-a6f7-4ed7-b27a-da119f13b8a7-kube-api-access-svczs\") pod \"ovsdbserver-nb-0\" (UID: \"7ff9bb68-a6f7-4ed7-b27a-da119f13b8a7\") " pod="openstack/ovsdbserver-nb-0" Dec 06 05:49:35 crc kubenswrapper[4958]: I1206 05:49:35.279321 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7ff9bb68-a6f7-4ed7-b27a-da119f13b8a7-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"7ff9bb68-a6f7-4ed7-b27a-da119f13b8a7\") " pod="openstack/ovsdbserver-nb-0" Dec 06 05:49:35 crc kubenswrapper[4958]: I1206 05:49:35.279339 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ff9bb68-a6f7-4ed7-b27a-da119f13b8a7-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"7ff9bb68-a6f7-4ed7-b27a-da119f13b8a7\") " pod="openstack/ovsdbserver-nb-0" Dec 06 05:49:35 crc kubenswrapper[4958]: I1206 
05:49:35.280368 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7ff9bb68-a6f7-4ed7-b27a-da119f13b8a7-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"7ff9bb68-a6f7-4ed7-b27a-da119f13b8a7\") " pod="openstack/ovsdbserver-nb-0"
Dec 06 05:49:35 crc kubenswrapper[4958]: I1206 05:49:35.280403 4958 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"7ff9bb68-a6f7-4ed7-b27a-da119f13b8a7\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/ovsdbserver-nb-0"
Dec 06 05:49:35 crc kubenswrapper[4958]: I1206 05:49:35.281180 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ff9bb68-a6f7-4ed7-b27a-da119f13b8a7-config\") pod \"ovsdbserver-nb-0\" (UID: \"7ff9bb68-a6f7-4ed7-b27a-da119f13b8a7\") " pod="openstack/ovsdbserver-nb-0"
Dec 06 05:49:35 crc kubenswrapper[4958]: I1206 05:49:35.284716 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7ff9bb68-a6f7-4ed7-b27a-da119f13b8a7-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"7ff9bb68-a6f7-4ed7-b27a-da119f13b8a7\") " pod="openstack/ovsdbserver-nb-0"
Dec 06 05:49:35 crc kubenswrapper[4958]: I1206 05:49:35.289784 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ff9bb68-a6f7-4ed7-b27a-da119f13b8a7-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"7ff9bb68-a6f7-4ed7-b27a-da119f13b8a7\") " pod="openstack/ovsdbserver-nb-0"
Dec 06 05:49:35 crc kubenswrapper[4958]: I1206 05:49:35.289810 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ff9bb68-a6f7-4ed7-b27a-da119f13b8a7-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"7ff9bb68-a6f7-4ed7-b27a-da119f13b8a7\") " pod="openstack/ovsdbserver-nb-0"
Dec 06 05:49:35 crc kubenswrapper[4958]: I1206 05:49:35.289787 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ff9bb68-a6f7-4ed7-b27a-da119f13b8a7-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"7ff9bb68-a6f7-4ed7-b27a-da119f13b8a7\") " pod="openstack/ovsdbserver-nb-0"
Dec 06 05:49:35 crc kubenswrapper[4958]: I1206 05:49:35.297649 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-svczs\" (UniqueName: \"kubernetes.io/projected/7ff9bb68-a6f7-4ed7-b27a-da119f13b8a7-kube-api-access-svczs\") pod \"ovsdbserver-nb-0\" (UID: \"7ff9bb68-a6f7-4ed7-b27a-da119f13b8a7\") " pod="openstack/ovsdbserver-nb-0"
Dec 06 05:49:35 crc kubenswrapper[4958]: I1206 05:49:35.319452 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"ovsdbserver-nb-0\" (UID: \"7ff9bb68-a6f7-4ed7-b27a-da119f13b8a7\") " pod="openstack/ovsdbserver-nb-0"
Dec 06 05:49:35 crc kubenswrapper[4958]: I1206 05:49:35.423155 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Dec 06 05:49:36 crc kubenswrapper[4958]: I1206 05:49:36.052144 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Dec 06 05:49:36 crc kubenswrapper[4958]: I1206 05:49:36.345504 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"7ff9bb68-a6f7-4ed7-b27a-da119f13b8a7","Type":"ContainerStarted","Data":"176fd06b2b95b1b8fcf653d3aa088d0b878ee79e5edf5e30ef47531138bd8918"}
Dec 06 05:49:36 crc kubenswrapper[4958]: I1206 05:49:36.362875 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5449989c59-qwcjk"]
Dec 06 05:49:36 crc kubenswrapper[4958]: I1206 05:49:36.386909 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6fb75c485f-bln6l"]
Dec 06 05:49:36 crc kubenswrapper[4958]: I1206 05:49:36.394071 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6fb75c485f-bln6l"
Dec 06 05:49:36 crc kubenswrapper[4958]: I1206 05:49:36.401187 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb"
Dec 06 05:49:36 crc kubenswrapper[4958]: I1206 05:49:36.401343 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6g69\" (UniqueName: \"kubernetes.io/projected/c5f5f30f-ce7d-4002-b649-96fad9a67c3a-kube-api-access-r6g69\") pod \"dnsmasq-dns-6fb75c485f-bln6l\" (UID: \"c5f5f30f-ce7d-4002-b649-96fad9a67c3a\") " pod="openstack/dnsmasq-dns-6fb75c485f-bln6l"
Dec 06 05:49:36 crc kubenswrapper[4958]: I1206 05:49:36.401392 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c5f5f30f-ce7d-4002-b649-96fad9a67c3a-dns-svc\") pod \"dnsmasq-dns-6fb75c485f-bln6l\" (UID: \"c5f5f30f-ce7d-4002-b649-96fad9a67c3a\") " pod="openstack/dnsmasq-dns-6fb75c485f-bln6l"
Dec 06 05:49:36 crc kubenswrapper[4958]: I1206 05:49:36.401430 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c5f5f30f-ce7d-4002-b649-96fad9a67c3a-ovsdbserver-nb\") pod \"dnsmasq-dns-6fb75c485f-bln6l\" (UID: \"c5f5f30f-ce7d-4002-b649-96fad9a67c3a\") " pod="openstack/dnsmasq-dns-6fb75c485f-bln6l"
Dec 06 05:49:36 crc kubenswrapper[4958]: I1206 05:49:36.401518 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f5f30f-ce7d-4002-b649-96fad9a67c3a-config\") pod \"dnsmasq-dns-6fb75c485f-bln6l\" (UID: \"c5f5f30f-ce7d-4002-b649-96fad9a67c3a\") " pod="openstack/dnsmasq-dns-6fb75c485f-bln6l"
Dec 06 05:49:36 crc kubenswrapper[4958]: I1206 05:49:36.424864 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6fb75c485f-bln6l"]
Dec 06 05:49:36 crc kubenswrapper[4958]: I1206 05:49:36.480555 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-rsngm"]
Dec 06 05:49:36 crc kubenswrapper[4958]: I1206 05:49:36.481667 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-rsngm"
Dec 06 05:49:36 crc kubenswrapper[4958]: I1206 05:49:36.494974 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs"
Dec 06 05:49:36 crc kubenswrapper[4958]: I1206 05:49:36.498266 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-rsngm"]
Dec 06 05:49:36 crc kubenswrapper[4958]: I1206 05:49:36.497649 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-vgxxv"
Dec 06 05:49:36 crc kubenswrapper[4958]: I1206 05:49:36.498244 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts"
Dec 06 05:49:36 crc kubenswrapper[4958]: I1206 05:49:36.502884 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e72d0843-3802-4dbf-b292-8f37386cdeb5-scripts\") pod \"ovn-controller-rsngm\" (UID: \"e72d0843-3802-4dbf-b292-8f37386cdeb5\") " pod="openstack/ovn-controller-rsngm"
Dec 06 05:49:36 crc kubenswrapper[4958]: I1206 05:49:36.502920 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e72d0843-3802-4dbf-b292-8f37386cdeb5-combined-ca-bundle\") pod \"ovn-controller-rsngm\" (UID: \"e72d0843-3802-4dbf-b292-8f37386cdeb5\") " pod="openstack/ovn-controller-rsngm"
Dec 06 05:49:36 crc kubenswrapper[4958]: I1206 05:49:36.502956 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f5f30f-ce7d-4002-b649-96fad9a67c3a-config\") pod \"dnsmasq-dns-6fb75c485f-bln6l\" (UID: \"c5f5f30f-ce7d-4002-b649-96fad9a67c3a\") " pod="openstack/dnsmasq-dns-6fb75c485f-bln6l"
Dec 06 05:49:36 crc kubenswrapper[4958]: I1206 05:49:36.502976 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qw2vl\" (UniqueName: \"kubernetes.io/projected/e72d0843-3802-4dbf-b292-8f37386cdeb5-kube-api-access-qw2vl\") pod \"ovn-controller-rsngm\" (UID: \"e72d0843-3802-4dbf-b292-8f37386cdeb5\") " pod="openstack/ovn-controller-rsngm"
Dec 06 05:49:36 crc kubenswrapper[4958]: I1206 05:49:36.503034 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6g69\" (UniqueName: \"kubernetes.io/projected/c5f5f30f-ce7d-4002-b649-96fad9a67c3a-kube-api-access-r6g69\") pod \"dnsmasq-dns-6fb75c485f-bln6l\" (UID: \"c5f5f30f-ce7d-4002-b649-96fad9a67c3a\") " pod="openstack/dnsmasq-dns-6fb75c485f-bln6l"
Dec 06 05:49:36 crc kubenswrapper[4958]: I1206 05:49:36.503069 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c5f5f30f-ce7d-4002-b649-96fad9a67c3a-dns-svc\") pod \"dnsmasq-dns-6fb75c485f-bln6l\" (UID: \"c5f5f30f-ce7d-4002-b649-96fad9a67c3a\") " pod="openstack/dnsmasq-dns-6fb75c485f-bln6l"
Dec 06 05:49:36 crc kubenswrapper[4958]: I1206 05:49:36.503103 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e72d0843-3802-4dbf-b292-8f37386cdeb5-var-log-ovn\") pod \"ovn-controller-rsngm\" (UID: \"e72d0843-3802-4dbf-b292-8f37386cdeb5\") " pod="openstack/ovn-controller-rsngm"
Dec 06 05:49:36 crc kubenswrapper[4958]: I1206 05:49:36.503134 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c5f5f30f-ce7d-4002-b649-96fad9a67c3a-ovsdbserver-nb\") pod \"dnsmasq-dns-6fb75c485f-bln6l\" (UID: \"c5f5f30f-ce7d-4002-b649-96fad9a67c3a\") " pod="openstack/dnsmasq-dns-6fb75c485f-bln6l"
Dec 06 05:49:36 crc kubenswrapper[4958]: I1206 05:49:36.503158 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e72d0843-3802-4dbf-b292-8f37386cdeb5-var-run-ovn\") pod \"ovn-controller-rsngm\" (UID: \"e72d0843-3802-4dbf-b292-8f37386cdeb5\") " pod="openstack/ovn-controller-rsngm"
Dec 06 05:49:36 crc kubenswrapper[4958]: I1206 05:49:36.503184 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e72d0843-3802-4dbf-b292-8f37386cdeb5-var-run\") pod \"ovn-controller-rsngm\" (UID: \"e72d0843-3802-4dbf-b292-8f37386cdeb5\") " pod="openstack/ovn-controller-rsngm"
Dec 06 05:49:36 crc kubenswrapper[4958]: I1206 05:49:36.503203 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/e72d0843-3802-4dbf-b292-8f37386cdeb5-ovn-controller-tls-certs\") pod \"ovn-controller-rsngm\" (UID: \"e72d0843-3802-4dbf-b292-8f37386cdeb5\") " pod="openstack/ovn-controller-rsngm"
Dec 06 05:49:36 crc kubenswrapper[4958]: I1206 05:49:36.503955 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f5f30f-ce7d-4002-b649-96fad9a67c3a-config\") pod \"dnsmasq-dns-6fb75c485f-bln6l\" (UID: \"c5f5f30f-ce7d-4002-b649-96fad9a67c3a\") " pod="openstack/dnsmasq-dns-6fb75c485f-bln6l"
Dec 06 05:49:36 crc kubenswrapper[4958]: I1206 05:49:36.505162 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c5f5f30f-ce7d-4002-b649-96fad9a67c3a-ovsdbserver-nb\") pod \"dnsmasq-dns-6fb75c485f-bln6l\" (UID: \"c5f5f30f-ce7d-4002-b649-96fad9a67c3a\") " pod="openstack/dnsmasq-dns-6fb75c485f-bln6l"
Dec 06 05:49:36 crc kubenswrapper[4958]: I1206 05:49:36.505267 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c5f5f30f-ce7d-4002-b649-96fad9a67c3a-dns-svc\") pod \"dnsmasq-dns-6fb75c485f-bln6l\" (UID: \"c5f5f30f-ce7d-4002-b649-96fad9a67c3a\") " pod="openstack/dnsmasq-dns-6fb75c485f-bln6l"
Dec 06 05:49:36 crc kubenswrapper[4958]: I1206 05:49:36.538410 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-glklh"]
Dec 06 05:49:36 crc kubenswrapper[4958]: I1206 05:49:36.540012 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-glklh"
Dec 06 05:49:36 crc kubenswrapper[4958]: I1206 05:49:36.540356 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6g69\" (UniqueName: \"kubernetes.io/projected/c5f5f30f-ce7d-4002-b649-96fad9a67c3a-kube-api-access-r6g69\") pod \"dnsmasq-dns-6fb75c485f-bln6l\" (UID: \"c5f5f30f-ce7d-4002-b649-96fad9a67c3a\") " pod="openstack/dnsmasq-dns-6fb75c485f-bln6l"
Dec 06 05:49:36 crc kubenswrapper[4958]: I1206 05:49:36.567752 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-glklh"]
Dec 06 05:49:36 crc kubenswrapper[4958]: I1206 05:49:36.603987 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e72d0843-3802-4dbf-b292-8f37386cdeb5-scripts\") pod \"ovn-controller-rsngm\" (UID: \"e72d0843-3802-4dbf-b292-8f37386cdeb5\") " pod="openstack/ovn-controller-rsngm"
Dec 06 05:49:36 crc kubenswrapper[4958]: I1206 05:49:36.604030 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e72d0843-3802-4dbf-b292-8f37386cdeb5-combined-ca-bundle\") pod \"ovn-controller-rsngm\" (UID: \"e72d0843-3802-4dbf-b292-8f37386cdeb5\") " pod="openstack/ovn-controller-rsngm"
Dec 06 05:49:36 crc kubenswrapper[4958]: I1206 05:49:36.604055 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/ad5ddd17-e280-4547-8d9a-afd3764a5f76-var-log\") pod \"ovn-controller-ovs-glklh\" (UID: \"ad5ddd17-e280-4547-8d9a-afd3764a5f76\") " pod="openstack/ovn-controller-ovs-glklh"
Dec 06 05:49:36 crc kubenswrapper[4958]: I1206 05:49:36.604079 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qw2vl\" (UniqueName: \"kubernetes.io/projected/e72d0843-3802-4dbf-b292-8f37386cdeb5-kube-api-access-qw2vl\") pod \"ovn-controller-rsngm\" (UID: \"e72d0843-3802-4dbf-b292-8f37386cdeb5\") " pod="openstack/ovn-controller-rsngm"
Dec 06 05:49:36 crc kubenswrapper[4958]: I1206 05:49:36.604098 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ad5ddd17-e280-4547-8d9a-afd3764a5f76-scripts\") pod \"ovn-controller-ovs-glklh\" (UID: \"ad5ddd17-e280-4547-8d9a-afd3764a5f76\") " pod="openstack/ovn-controller-ovs-glklh"
Dec 06 05:49:36 crc kubenswrapper[4958]: I1206 05:49:36.604124 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fv6z8\" (UniqueName: \"kubernetes.io/projected/ad5ddd17-e280-4547-8d9a-afd3764a5f76-kube-api-access-fv6z8\") pod \"ovn-controller-ovs-glklh\" (UID: \"ad5ddd17-e280-4547-8d9a-afd3764a5f76\") " pod="openstack/ovn-controller-ovs-glklh"
Dec 06 05:49:36 crc kubenswrapper[4958]: I1206 05:49:36.604152 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ad5ddd17-e280-4547-8d9a-afd3764a5f76-var-run\") pod \"ovn-controller-ovs-glklh\" (UID: \"ad5ddd17-e280-4547-8d9a-afd3764a5f76\") " pod="openstack/ovn-controller-ovs-glklh"
Dec 06 05:49:36 crc kubenswrapper[4958]: I1206 05:49:36.604177 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e72d0843-3802-4dbf-b292-8f37386cdeb5-var-log-ovn\") pod \"ovn-controller-rsngm\" (UID: \"e72d0843-3802-4dbf-b292-8f37386cdeb5\") " pod="openstack/ovn-controller-rsngm"
Dec 06 05:49:36 crc kubenswrapper[4958]: I1206 05:49:36.604201 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e72d0843-3802-4dbf-b292-8f37386cdeb5-var-run-ovn\") pod \"ovn-controller-rsngm\" (UID: \"e72d0843-3802-4dbf-b292-8f37386cdeb5\") " pod="openstack/ovn-controller-rsngm"
Dec 06 05:49:36 crc kubenswrapper[4958]: I1206 05:49:36.604229 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e72d0843-3802-4dbf-b292-8f37386cdeb5-var-run\") pod \"ovn-controller-rsngm\" (UID: \"e72d0843-3802-4dbf-b292-8f37386cdeb5\") " pod="openstack/ovn-controller-rsngm"
Dec 06 05:49:36 crc kubenswrapper[4958]: I1206 05:49:36.604253 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/ad5ddd17-e280-4547-8d9a-afd3764a5f76-var-lib\") pod \"ovn-controller-ovs-glklh\" (UID: \"ad5ddd17-e280-4547-8d9a-afd3764a5f76\") " pod="openstack/ovn-controller-ovs-glklh"
Dec 06 05:49:36 crc kubenswrapper[4958]: I1206 05:49:36.604277 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/e72d0843-3802-4dbf-b292-8f37386cdeb5-ovn-controller-tls-certs\") pod \"ovn-controller-rsngm\" (UID: \"e72d0843-3802-4dbf-b292-8f37386cdeb5\") " pod="openstack/ovn-controller-rsngm"
Dec 06 05:49:36 crc kubenswrapper[4958]: I1206 05:49:36.604300 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/ad5ddd17-e280-4547-8d9a-afd3764a5f76-etc-ovs\") pod \"ovn-controller-ovs-glklh\" (UID: \"ad5ddd17-e280-4547-8d9a-afd3764a5f76\") " pod="openstack/ovn-controller-ovs-glklh"
Dec 06 05:49:36 crc kubenswrapper[4958]: I1206 05:49:36.605930 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/e72d0843-3802-4dbf-b292-8f37386cdeb5-var-log-ovn\") pod \"ovn-controller-rsngm\" (UID: \"e72d0843-3802-4dbf-b292-8f37386cdeb5\") " pod="openstack/ovn-controller-rsngm"
Dec 06 05:49:36 crc kubenswrapper[4958]: I1206 05:49:36.606065 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/e72d0843-3802-4dbf-b292-8f37386cdeb5-var-run-ovn\") pod \"ovn-controller-rsngm\" (UID: \"e72d0843-3802-4dbf-b292-8f37386cdeb5\") " pod="openstack/ovn-controller-rsngm"
Dec 06 05:49:36 crc kubenswrapper[4958]: I1206 05:49:36.606149 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/e72d0843-3802-4dbf-b292-8f37386cdeb5-var-run\") pod \"ovn-controller-rsngm\" (UID: \"e72d0843-3802-4dbf-b292-8f37386cdeb5\") " pod="openstack/ovn-controller-rsngm"
Dec 06 05:49:36 crc kubenswrapper[4958]: I1206 05:49:36.606893 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e72d0843-3802-4dbf-b292-8f37386cdeb5-scripts\") pod \"ovn-controller-rsngm\" (UID: \"e72d0843-3802-4dbf-b292-8f37386cdeb5\") " pod="openstack/ovn-controller-rsngm"
Dec 06 05:49:36 crc kubenswrapper[4958]: I1206 05:49:36.630266 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e72d0843-3802-4dbf-b292-8f37386cdeb5-combined-ca-bundle\") pod \"ovn-controller-rsngm\" (UID: \"e72d0843-3802-4dbf-b292-8f37386cdeb5\") " pod="openstack/ovn-controller-rsngm"
Dec 06 05:49:36 crc kubenswrapper[4958]: I1206 05:49:36.635134 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/e72d0843-3802-4dbf-b292-8f37386cdeb5-ovn-controller-tls-certs\") pod \"ovn-controller-rsngm\" (UID: \"e72d0843-3802-4dbf-b292-8f37386cdeb5\") " pod="openstack/ovn-controller-rsngm"
Dec 06 05:49:36 crc kubenswrapper[4958]: I1206 05:49:36.651976 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qw2vl\" (UniqueName: \"kubernetes.io/projected/e72d0843-3802-4dbf-b292-8f37386cdeb5-kube-api-access-qw2vl\") pod \"ovn-controller-rsngm\" (UID: \"e72d0843-3802-4dbf-b292-8f37386cdeb5\") " pod="openstack/ovn-controller-rsngm"
Dec 06 05:49:36 crc kubenswrapper[4958]: I1206 05:49:36.710639 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ad5ddd17-e280-4547-8d9a-afd3764a5f76-var-run\") pod \"ovn-controller-ovs-glklh\" (UID: \"ad5ddd17-e280-4547-8d9a-afd3764a5f76\") " pod="openstack/ovn-controller-ovs-glklh"
Dec 06 05:49:36 crc kubenswrapper[4958]: I1206 05:49:36.710723 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/ad5ddd17-e280-4547-8d9a-afd3764a5f76-var-lib\") pod \"ovn-controller-ovs-glklh\" (UID: \"ad5ddd17-e280-4547-8d9a-afd3764a5f76\") " pod="openstack/ovn-controller-ovs-glklh"
Dec 06 05:49:36 crc kubenswrapper[4958]: I1206 05:49:36.710741 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/ad5ddd17-e280-4547-8d9a-afd3764a5f76-etc-ovs\") pod \"ovn-controller-ovs-glklh\" (UID: \"ad5ddd17-e280-4547-8d9a-afd3764a5f76\") " pod="openstack/ovn-controller-ovs-glklh"
Dec 06 05:49:36 crc kubenswrapper[4958]: I1206 05:49:36.710804 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/ad5ddd17-e280-4547-8d9a-afd3764a5f76-var-log\") pod \"ovn-controller-ovs-glklh\" (UID: \"ad5ddd17-e280-4547-8d9a-afd3764a5f76\") " pod="openstack/ovn-controller-ovs-glklh"
Dec 06 05:49:36 crc kubenswrapper[4958]: I1206 05:49:36.710831 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ad5ddd17-e280-4547-8d9a-afd3764a5f76-scripts\") pod \"ovn-controller-ovs-glklh\" (UID: \"ad5ddd17-e280-4547-8d9a-afd3764a5f76\") " pod="openstack/ovn-controller-ovs-glklh"
Dec 06 05:49:36 crc kubenswrapper[4958]: I1206 05:49:36.710860 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fv6z8\" (UniqueName: \"kubernetes.io/projected/ad5ddd17-e280-4547-8d9a-afd3764a5f76-kube-api-access-fv6z8\") pod \"ovn-controller-ovs-glklh\" (UID: \"ad5ddd17-e280-4547-8d9a-afd3764a5f76\") " pod="openstack/ovn-controller-ovs-glklh"
Dec 06 05:49:36 crc kubenswrapper[4958]: I1206 05:49:36.711309 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ad5ddd17-e280-4547-8d9a-afd3764a5f76-var-run\") pod \"ovn-controller-ovs-glklh\" (UID: \"ad5ddd17-e280-4547-8d9a-afd3764a5f76\") " pod="openstack/ovn-controller-ovs-glklh"
Dec 06 05:49:36 crc kubenswrapper[4958]: I1206 05:49:36.711486 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/ad5ddd17-e280-4547-8d9a-afd3764a5f76-var-lib\") pod \"ovn-controller-ovs-glklh\" (UID: \"ad5ddd17-e280-4547-8d9a-afd3764a5f76\") " pod="openstack/ovn-controller-ovs-glklh"
Dec 06 05:49:36 crc kubenswrapper[4958]: I1206 05:49:36.711583 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/ad5ddd17-e280-4547-8d9a-afd3764a5f76-etc-ovs\") pod \"ovn-controller-ovs-glklh\" (UID: \"ad5ddd17-e280-4547-8d9a-afd3764a5f76\") " pod="openstack/ovn-controller-ovs-glklh"
Dec 06 05:49:36 crc kubenswrapper[4958]: I1206 05:49:36.711652 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/ad5ddd17-e280-4547-8d9a-afd3764a5f76-var-log\") pod \"ovn-controller-ovs-glklh\" (UID: \"ad5ddd17-e280-4547-8d9a-afd3764a5f76\") " pod="openstack/ovn-controller-ovs-glklh"
Dec 06 05:49:36 crc kubenswrapper[4958]: I1206 05:49:36.713242 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ad5ddd17-e280-4547-8d9a-afd3764a5f76-scripts\") pod \"ovn-controller-ovs-glklh\" (UID: \"ad5ddd17-e280-4547-8d9a-afd3764a5f76\") " pod="openstack/ovn-controller-ovs-glklh"
Dec 06 05:49:36 crc kubenswrapper[4958]: I1206 05:49:36.731857 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6fb75c485f-bln6l"
Dec 06 05:49:36 crc kubenswrapper[4958]: I1206 05:49:36.768918 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fv6z8\" (UniqueName: \"kubernetes.io/projected/ad5ddd17-e280-4547-8d9a-afd3764a5f76-kube-api-access-fv6z8\") pod \"ovn-controller-ovs-glklh\" (UID: \"ad5ddd17-e280-4547-8d9a-afd3764a5f76\") " pod="openstack/ovn-controller-ovs-glklh"
Dec 06 05:49:36 crc kubenswrapper[4958]: I1206 05:49:36.816322 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-rsngm"
Dec 06 05:49:36 crc kubenswrapper[4958]: I1206 05:49:36.905087 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-glklh"
Dec 06 05:49:37 crc kubenswrapper[4958]: I1206 05:49:37.311261 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6fb75c485f-bln6l"]
Dec 06 05:49:37 crc kubenswrapper[4958]: W1206 05:49:37.393549 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc5f5f30f_ce7d_4002_b649_96fad9a67c3a.slice/crio-cd97ef17f7a8cf58a3d2d0da7786b82e64ce17226871fa601353cfddbe0b7ea5 WatchSource:0}: Error finding container cd97ef17f7a8cf58a3d2d0da7786b82e64ce17226871fa601353cfddbe0b7ea5: Status 404 returned error can't find the container with id cd97ef17f7a8cf58a3d2d0da7786b82e64ce17226871fa601353cfddbe0b7ea5
Dec 06 05:49:37 crc kubenswrapper[4958]: I1206 05:49:37.436363 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-rsngm"]
Dec 06 05:49:38 crc kubenswrapper[4958]: I1206 05:49:38.069290 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-glklh"]
Dec 06 05:49:38 crc kubenswrapper[4958]: W1206 05:49:38.091905 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podad5ddd17_e280_4547_8d9a_afd3764a5f76.slice/crio-fed9b05a1a2a39d4203277723da1c02ccdc49828f21289ef6a55a820fce8b291 WatchSource:0}: Error finding container fed9b05a1a2a39d4203277723da1c02ccdc49828f21289ef6a55a820fce8b291: Status 404 returned error can't find the container with id fed9b05a1a2a39d4203277723da1c02ccdc49828f21289ef6a55a820fce8b291
Dec 06 05:49:38 crc kubenswrapper[4958]: I1206 05:49:38.331803 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-2njn2"]
Dec 06 05:49:38 crc kubenswrapper[4958]: I1206 05:49:38.332794 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-2njn2"
Dec 06 05:49:38 crc kubenswrapper[4958]: I1206 05:49:38.345407 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config"
Dec 06 05:49:38 crc kubenswrapper[4958]: I1206 05:49:38.354249 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-2njn2"]
Dec 06 05:49:38 crc kubenswrapper[4958]: I1206 05:49:38.371327 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-glklh" event={"ID":"ad5ddd17-e280-4547-8d9a-afd3764a5f76","Type":"ContainerStarted","Data":"fed9b05a1a2a39d4203277723da1c02ccdc49828f21289ef6a55a820fce8b291"}
Dec 06 05:49:38 crc kubenswrapper[4958]: I1206 05:49:38.373259 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-rsngm" event={"ID":"e72d0843-3802-4dbf-b292-8f37386cdeb5","Type":"ContainerStarted","Data":"ce15d7fa12c23b766d6a6097aaf9fa65f892c3eecef37f6c5316f638b1e0703c"}
Dec 06 05:49:38 crc kubenswrapper[4958]: I1206 05:49:38.393295 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6fb75c485f-bln6l" event={"ID":"c5f5f30f-ce7d-4002-b649-96fad9a67c3a","Type":"ContainerStarted","Data":"cd97ef17f7a8cf58a3d2d0da7786b82e64ce17226871fa601353cfddbe0b7ea5"}
Dec 06 05:49:38 crc kubenswrapper[4958]: I1206 05:49:38.470396 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e197012d-062b-4bac-90c1-63600d220add-config\") pod \"ovn-controller-metrics-2njn2\" (UID: \"e197012d-062b-4bac-90c1-63600d220add\") " pod="openstack/ovn-controller-metrics-2njn2"
Dec 06 05:49:38 crc kubenswrapper[4958]: I1206 05:49:38.470464 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e197012d-062b-4bac-90c1-63600d220add-combined-ca-bundle\") pod \"ovn-controller-metrics-2njn2\" (UID: \"e197012d-062b-4bac-90c1-63600d220add\") " pod="openstack/ovn-controller-metrics-2njn2"
Dec 06 05:49:38 crc kubenswrapper[4958]: I1206 05:49:38.470500 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e197012d-062b-4bac-90c1-63600d220add-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-2njn2\" (UID: \"e197012d-062b-4bac-90c1-63600d220add\") " pod="openstack/ovn-controller-metrics-2njn2"
Dec 06 05:49:38 crc kubenswrapper[4958]: I1206 05:49:38.470769 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/e197012d-062b-4bac-90c1-63600d220add-ovn-rundir\") pod \"ovn-controller-metrics-2njn2\" (UID: \"e197012d-062b-4bac-90c1-63600d220add\") " pod="openstack/ovn-controller-metrics-2njn2"
Dec 06 05:49:38 crc kubenswrapper[4958]: I1206 05:49:38.470901 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/e197012d-062b-4bac-90c1-63600d220add-ovs-rundir\") pod \"ovn-controller-metrics-2njn2\" (UID: \"e197012d-062b-4bac-90c1-63600d220add\") " pod="openstack/ovn-controller-metrics-2njn2"
Dec 06 05:49:38 crc kubenswrapper[4958]: I1206 05:49:38.471113 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbzqw\" (UniqueName: \"kubernetes.io/projected/e197012d-062b-4bac-90c1-63600d220add-kube-api-access-vbzqw\") pod \"ovn-controller-metrics-2njn2\" (UID: \"e197012d-062b-4bac-90c1-63600d220add\") " pod="openstack/ovn-controller-metrics-2njn2"
Dec 06 05:49:38 crc kubenswrapper[4958]: I1206 05:49:38.572357 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/e197012d-062b-4bac-90c1-63600d220add-ovn-rundir\") pod \"ovn-controller-metrics-2njn2\" (UID: \"e197012d-062b-4bac-90c1-63600d220add\") " pod="openstack/ovn-controller-metrics-2njn2"
Dec 06 05:49:38 crc kubenswrapper[4958]: I1206 05:49:38.572413 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/e197012d-062b-4bac-90c1-63600d220add-ovs-rundir\") pod \"ovn-controller-metrics-2njn2\" (UID: \"e197012d-062b-4bac-90c1-63600d220add\") " pod="openstack/ovn-controller-metrics-2njn2"
Dec 06 05:49:38 crc kubenswrapper[4958]: I1206 05:49:38.572432 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbzqw\" (UniqueName: \"kubernetes.io/projected/e197012d-062b-4bac-90c1-63600d220add-kube-api-access-vbzqw\") pod \"ovn-controller-metrics-2njn2\" (UID: \"e197012d-062b-4bac-90c1-63600d220add\") " pod="openstack/ovn-controller-metrics-2njn2"
Dec 06 05:49:38 crc kubenswrapper[4958]: I1206 05:49:38.572458 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e197012d-062b-4bac-90c1-63600d220add-config\") pod \"ovn-controller-metrics-2njn2\" (UID: \"e197012d-062b-4bac-90c1-63600d220add\") " pod="openstack/ovn-controller-metrics-2njn2"
Dec 06 05:49:38 crc kubenswrapper[4958]: I1206 05:49:38.572512 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e197012d-062b-4bac-90c1-63600d220add-combined-ca-bundle\") pod \"ovn-controller-metrics-2njn2\" (UID: \"e197012d-062b-4bac-90c1-63600d220add\") " pod="openstack/ovn-controller-metrics-2njn2"
Dec 06 05:49:38 crc kubenswrapper[4958]: I1206 05:49:38.572533 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e197012d-062b-4bac-90c1-63600d220add-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-2njn2\" (UID: \"e197012d-062b-4bac-90c1-63600d220add\") " pod="openstack/ovn-controller-metrics-2njn2"
Dec 06 05:49:38 crc kubenswrapper[4958]: I1206 05:49:38.574070 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/e197012d-062b-4bac-90c1-63600d220add-ovs-rundir\") pod \"ovn-controller-metrics-2njn2\" (UID: \"e197012d-062b-4bac-90c1-63600d220add\") " pod="openstack/ovn-controller-metrics-2njn2"
Dec 06 05:49:38 crc kubenswrapper[4958]: I1206 05:49:38.574082 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/e197012d-062b-4bac-90c1-63600d220add-ovn-rundir\") pod \"ovn-controller-metrics-2njn2\" (UID: \"e197012d-062b-4bac-90c1-63600d220add\") " pod="openstack/ovn-controller-metrics-2njn2"
Dec 06 05:49:38 crc kubenswrapper[4958]: I1206 05:49:38.574944 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e197012d-062b-4bac-90c1-63600d220add-config\") pod \"ovn-controller-metrics-2njn2\" (UID: \"e197012d-062b-4bac-90c1-63600d220add\") " pod="openstack/ovn-controller-metrics-2njn2"
Dec 06 05:49:38 crc kubenswrapper[4958]: I1206 05:49:38.580173 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e197012d-062b-4bac-90c1-63600d220add-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-2njn2\" (UID: \"e197012d-062b-4bac-90c1-63600d220add\") " pod="openstack/ovn-controller-metrics-2njn2"
Dec 06 05:49:38 crc kubenswrapper[4958]: I1206 05:49:38.585880 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e197012d-062b-4bac-90c1-63600d220add-combined-ca-bundle\") pod \"ovn-controller-metrics-2njn2\" (UID: \"e197012d-062b-4bac-90c1-63600d220add\") " pod="openstack/ovn-controller-metrics-2njn2"
Dec 06 05:49:38 crc kubenswrapper[4958]: I1206 05:49:38.590246 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbzqw\" (UniqueName: \"kubernetes.io/projected/e197012d-062b-4bac-90c1-63600d220add-kube-api-access-vbzqw\") pod \"ovn-controller-metrics-2njn2\" (UID: \"e197012d-062b-4bac-90c1-63600d220add\") " pod="openstack/ovn-controller-metrics-2njn2"
Dec 06 05:49:38 crc kubenswrapper[4958]: I1206 05:49:38.675668 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-2njn2"
Dec 06 05:49:39 crc kubenswrapper[4958]: I1206 05:49:39.181125 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-2njn2"]
Dec 06 05:49:39 crc kubenswrapper[4958]: I1206 05:49:39.404087 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-2njn2" event={"ID":"e197012d-062b-4bac-90c1-63600d220add","Type":"ContainerStarted","Data":"465b40fd583a5c76c0504214d8c6665ba526d15adb3e7bf67e15aeb6fd3ed65d"}
Dec 06 05:49:39 crc kubenswrapper[4958]: I1206 05:49:39.460787 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"]
Dec 06 05:49:39 crc kubenswrapper[4958]: I1206 05:49:39.462056 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Dec 06 05:49:39 crc kubenswrapper[4958]: I1206 05:49:39.464877 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs"
Dec 06 05:49:39 crc kubenswrapper[4958]: I1206 05:49:39.465099 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts"
Dec 06 05:49:39 crc kubenswrapper[4958]: I1206 05:49:39.465288 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config"
Dec 06 05:49:39 crc kubenswrapper[4958]: I1206 05:49:39.465519 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-nnjqw"
Dec 06 05:49:39 crc kubenswrapper[4958]: I1206 05:49:39.495457 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Dec 06 05:49:39 crc kubenswrapper[4958]: I1206 05:49:39.587339 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7719819d-5798-4ab7-bee0-cd8b736f92a2-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"7719819d-5798-4ab7-bee0-cd8b736f92a2\") " pod="openstack/ovsdbserver-sb-0"
Dec 06 05:49:39 crc kubenswrapper[4958]: I1206 05:49:39.587663 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7719819d-5798-4ab7-bee0-cd8b736f92a2-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"7719819d-5798-4ab7-bee0-cd8b736f92a2\") " pod="openstack/ovsdbserver-sb-0"
Dec 06 05:49:39 crc kubenswrapper[4958]: I1206 05:49:39.587803 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-sb-0\" (UID: \"7719819d-5798-4ab7-bee0-cd8b736f92a2\") " pod="openstack/ovsdbserver-sb-0"
Dec 06 05:49:39 crc kubenswrapper[4958]: I1206 05:49:39.587872 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7719819d-5798-4ab7-bee0-cd8b736f92a2-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"7719819d-5798-4ab7-bee0-cd8b736f92a2\") " pod="openstack/ovsdbserver-sb-0"
Dec 06 05:49:39 crc kubenswrapper[4958]: I1206 05:49:39.587957 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7719819d-5798-4ab7-bee0-cd8b736f92a2-config\") pod \"ovsdbserver-sb-0\" (UID: \"7719819d-5798-4ab7-bee0-cd8b736f92a2\") " pod="openstack/ovsdbserver-sb-0"
Dec 06 05:49:39 crc kubenswrapper[4958]: I1206 05:49:39.587986 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsdgg\" (UniqueName: \"kubernetes.io/projected/7719819d-5798-4ab7-bee0-cd8b736f92a2-kube-api-access-xsdgg\") pod \"ovsdbserver-sb-0\" (UID: \"7719819d-5798-4ab7-bee0-cd8b736f92a2\") " pod="openstack/ovsdbserver-sb-0"
Dec 06 05:49:39 crc kubenswrapper[4958]: I1206 05:49:39.588130 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7719819d-5798-4ab7-bee0-cd8b736f92a2-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"7719819d-5798-4ab7-bee0-cd8b736f92a2\") " pod="openstack/ovsdbserver-sb-0"
Dec 06 05:49:39 crc kubenswrapper[4958]: I1206 05:49:39.588165 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7719819d-5798-4ab7-bee0-cd8b736f92a2-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"7719819d-5798-4ab7-bee0-cd8b736f92a2\") " pod="openstack/ovsdbserver-sb-0"
Dec 06 05:49:39 crc kubenswrapper[4958]: I1206 05:49:39.690549 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7719819d-5798-4ab7-bee0-cd8b736f92a2-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"7719819d-5798-4ab7-bee0-cd8b736f92a2\") " pod="openstack/ovsdbserver-sb-0"
Dec 06 05:49:39 crc kubenswrapper[4958]: I1206 05:49:39.690637 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-sb-0\" (UID: \"7719819d-5798-4ab7-bee0-cd8b736f92a2\") " pod="openstack/ovsdbserver-sb-0"
Dec 06 05:49:39 crc kubenswrapper[4958]: I1206 05:49:39.690666 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7719819d-5798-4ab7-bee0-cd8b736f92a2-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"7719819d-5798-4ab7-bee0-cd8b736f92a2\") " pod="openstack/ovsdbserver-sb-0"
Dec 06 05:49:39 crc kubenswrapper[4958]: I1206 05:49:39.690742 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7719819d-5798-4ab7-bee0-cd8b736f92a2-config\") pod \"ovsdbserver-sb-0\" (UID: \"7719819d-5798-4ab7-bee0-cd8b736f92a2\") " pod="openstack/ovsdbserver-sb-0"
Dec 06 05:49:39 crc kubenswrapper[4958]: I1206 05:49:39.690887 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xsdgg\" (UniqueName: \"kubernetes.io/projected/7719819d-5798-4ab7-bee0-cd8b736f92a2-kube-api-access-xsdgg\") pod \"ovsdbserver-sb-0\" (UID: \"7719819d-5798-4ab7-bee0-cd8b736f92a2\") " pod="openstack/ovsdbserver-sb-0"
Dec 06 05:49:39 crc kubenswrapper[4958]: I1206 05:49:39.691422 4958 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-sb-0\" (UID: \"7719819d-5798-4ab7-bee0-cd8b736f92a2\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/ovsdbserver-sb-0"
Dec 06 05:49:39 crc kubenswrapper[4958]: I1206 05:49:39.692162 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7719819d-5798-4ab7-bee0-cd8b736f92a2-config\") pod \"ovsdbserver-sb-0\" (UID: \"7719819d-5798-4ab7-bee0-cd8b736f92a2\") " pod="openstack/ovsdbserver-sb-0"
Dec 06 05:49:39 crc kubenswrapper[4958]: I1206 05:49:39.692199 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/7719819d-5798-4ab7-bee0-cd8b736f92a2-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"7719819d-5798-4ab7-bee0-cd8b736f92a2\") " pod="openstack/ovsdbserver-sb-0"
Dec 06 05:49:39 crc kubenswrapper[4958]: I1206 05:49:39.692358 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7719819d-5798-4ab7-bee0-cd8b736f92a2-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"7719819d-5798-4ab7-bee0-cd8b736f92a2\") " pod="openstack/ovsdbserver-sb-0"
Dec 06 05:49:39 crc kubenswrapper[4958]: I1206 05:49:39.692398 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7719819d-5798-4ab7-bee0-cd8b736f92a2-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"7719819d-5798-4ab7-bee0-cd8b736f92a2\") " pod="openstack/ovsdbserver-sb-0"
Dec 06 05:49:39 crc kubenswrapper[4958]: I1206 05:49:39.692454 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7719819d-5798-4ab7-bee0-cd8b736f92a2-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"7719819d-5798-4ab7-bee0-cd8b736f92a2\") " pod="openstack/ovsdbserver-sb-0"
Dec 06 05:49:39 crc kubenswrapper[4958]: I1206 05:49:39.692880 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/7719819d-5798-4ab7-bee0-cd8b736f92a2-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"7719819d-5798-4ab7-bee0-cd8b736f92a2\") " pod="openstack/ovsdbserver-sb-0"
Dec 06 05:49:39 crc kubenswrapper[4958]: I1206 05:49:39.697366 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7719819d-5798-4ab7-bee0-cd8b736f92a2-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"7719819d-5798-4ab7-bee0-cd8b736f92a2\") " pod="openstack/ovsdbserver-sb-0"
Dec 06 05:49:39 crc kubenswrapper[4958]: I1206 05:49:39.698708 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7719819d-5798-4ab7-bee0-cd8b736f92a2-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"7719819d-5798-4ab7-bee0-cd8b736f92a2\") " pod="openstack/ovsdbserver-sb-0"
Dec 06 05:49:39 crc kubenswrapper[4958]: I1206 05:49:39.701208 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7719819d-5798-4ab7-bee0-cd8b736f92a2-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"7719819d-5798-4ab7-bee0-cd8b736f92a2\") " pod="openstack/ovsdbserver-sb-0"
Dec 06 05:49:39 crc kubenswrapper[4958]: I1206 05:49:39.707259 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xsdgg\" (UniqueName: \"kubernetes.io/projected/7719819d-5798-4ab7-bee0-cd8b736f92a2-kube-api-access-xsdgg\") pod \"ovsdbserver-sb-0\" (UID: \"7719819d-5798-4ab7-bee0-cd8b736f92a2\") " pod="openstack/ovsdbserver-sb-0"
Dec 06 05:49:39 crc kubenswrapper[4958]: I1206 05:49:39.725261 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-sb-0\" (UID: \"7719819d-5798-4ab7-bee0-cd8b736f92a2\") " pod="openstack/ovsdbserver-sb-0"
Dec 06 05:49:39 crc kubenswrapper[4958]: I1206 05:49:39.786546 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Dec 06 05:49:39 crc kubenswrapper[4958]: I1206 05:49:39.866622 4958 patch_prober.go:28] interesting pod/machine-config-daemon-5ktnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 06 05:49:39 crc kubenswrapper[4958]: I1206 05:49:39.866672 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 06 05:49:39 crc kubenswrapper[4958]: I1206 05:49:39.866707 4958 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh"
Dec 06 05:49:39 crc kubenswrapper[4958]: I1206 05:49:39.867256 4958 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"302a14bf1e4711bf21e8ab7165ce5c1b79633fb07014ab098243520b48862bd0"} pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 06 05:49:39 crc kubenswrapper[4958]: I1206 05:49:39.867305 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" containerName="machine-config-daemon" containerID="cri-o://302a14bf1e4711bf21e8ab7165ce5c1b79633fb07014ab098243520b48862bd0" gracePeriod=600
Dec 06 05:49:40 crc kubenswrapper[4958]: I1206 05:49:40.372066 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Dec 06 05:49:40 crc kubenswrapper[4958]: W1206 05:49:40.415988 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7719819d_5798_4ab7_bee0_cd8b736f92a2.slice/crio-ff79b4c26a2350f170dc94660f936db20e7eddeba6a0f55a65885278ada56c8c WatchSource:0}: Error finding container ff79b4c26a2350f170dc94660f936db20e7eddeba6a0f55a65885278ada56c8c: Status 404 returned error can't find the container with id ff79b4c26a2350f170dc94660f936db20e7eddeba6a0f55a65885278ada56c8c
Dec 06 05:49:40 crc kubenswrapper[4958]: I1206 05:49:40.416990 4958 generic.go:334] "Generic (PLEG): container finished" podID="c13528c0-da5d-4d55-9155-2c29c33edfc4" containerID="302a14bf1e4711bf21e8ab7165ce5c1b79633fb07014ab098243520b48862bd0" exitCode=0
Dec 06 05:49:40 crc kubenswrapper[4958]: I1206 05:49:40.417093 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" event={"ID":"c13528c0-da5d-4d55-9155-2c29c33edfc4","Type":"ContainerDied","Data":"302a14bf1e4711bf21e8ab7165ce5c1b79633fb07014ab098243520b48862bd0"}
Dec 06 05:49:40 crc kubenswrapper[4958]: I1206 05:49:40.417806 4958 scope.go:117] "RemoveContainer" containerID="447872a5a977e9a4540447295b9e8d682cfa59e938b820b5b11dc85cbe8a56f7"
Dec 06 05:49:41 crc kubenswrapper[4958]: I1206 05:49:41.450855 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"7719819d-5798-4ab7-bee0-cd8b736f92a2","Type":"ContainerStarted","Data":"ff79b4c26a2350f170dc94660f936db20e7eddeba6a0f55a65885278ada56c8c"}
Dec 06 05:49:42 crc kubenswrapper[4958]: I1206 05:49:42.467043 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" event={"ID":"c13528c0-da5d-4d55-9155-2c29c33edfc4","Type":"ContainerStarted","Data":"5ef9857418407037240c969f5ea76d6cce28ae131bd31b325e367615bc600d5d"}
Dec 06 05:49:46 crc kubenswrapper[4958]: I1206 05:49:46.516860 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"cb13a191-c616-4f16-82bd-138a1cd46032","Type":"ContainerStarted","Data":"f282fffb05612709f58ea71b01582777eac971c0ce664dcfef9c958a92498c70"}
Dec 06 05:50:04 crc kubenswrapper[4958]: I1206 05:50:04.672179 4958 generic.go:334] "Generic (PLEG): container finished" podID="cb13a191-c616-4f16-82bd-138a1cd46032" containerID="f282fffb05612709f58ea71b01582777eac971c0ce664dcfef9c958a92498c70" exitCode=0
Dec 06 05:50:04 crc kubenswrapper[4958]: I1206 05:50:04.672283 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"cb13a191-c616-4f16-82bd-138a1cd46032","Type":"ContainerDied","Data":"f282fffb05612709f58ea71b01582777eac971c0ce664dcfef9c958a92498c70"}
Dec 06 05:50:17 crc kubenswrapper[4958]: E1206 05:50:17.621703 4958 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = reading blob sha256:1e02d32990adc4dad7c8927f91cca33a1baba746105504093311eb3b0b691fa0: Get \"https://quay.io/v2/openstack-k8s-operators/openstack-network-exporter/blobs/sha256:1e02d32990adc4dad7c8927f91cca33a1baba746105504093311eb3b0b691fa0\": context canceled" image="quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7"
Dec 06 05:50:17 crc kubenswrapper[4958]: E1206 05:50:17.622307 4958 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:openstack-network-exporter,Image:quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7,Command:[/app/openstack-network-exporter],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:OPENSTACK_NETWORK_EXPORTER_YAML,Value:/etc/config/openstack-network-exporter.yaml,ValueFrom:nil,},EnvVar{Name:CONFIG_HASH,Value:n65bh5c7h85h55h55fh665h5ffh68h5dbh5ddh58ch65fh95hcbh657h5f7h57dh644h94h5h656h5dh686h84h68h5f9h66dh5b5h68h7fh677h5f6q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovs-rundir,ReadOnly:true,MountPath:/var/run/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-rundir,ReadOnly:true,MountPath:/var/run/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovnmetrics.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovnmetrics.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vbzqw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN SYS_ADMIN SYS_NICE],Drop:[],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-controller-metrics-2njn2_openstack(e197012d-062b-4bac-90c1-63600d220add): ErrImagePull: rpc error: code = Canceled desc = reading blob sha256:1e02d32990adc4dad7c8927f91cca33a1baba746105504093311eb3b0b691fa0: Get \"https://quay.io/v2/openstack-k8s-operators/openstack-network-exporter/blobs/sha256:1e02d32990adc4dad7c8927f91cca33a1baba746105504093311eb3b0b691fa0\": context canceled" logger="UnhandledError"
Dec 06 05:50:17 crc kubenswrapper[4958]: E1206 05:50:17.623594 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ErrImagePull: \"rpc error: code = Canceled desc = reading blob sha256:1e02d32990adc4dad7c8927f91cca33a1baba746105504093311eb3b0b691fa0: Get \\\"https://quay.io/v2/openstack-k8s-operators/openstack-network-exporter/blobs/sha256:1e02d32990adc4dad7c8927f91cca33a1baba746105504093311eb3b0b691fa0\\\": context canceled\"" pod="openstack/ovn-controller-metrics-2njn2" podUID="e197012d-062b-4bac-90c1-63600d220add"
Dec 06 05:50:17 crc kubenswrapper[4958]: E1206 05:50:17.802867 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter@sha256:ecd56e6733c475f2d441344fd98f288c3eac0261ba113695fec7520a954ccbc7\\\"\"" pod="openstack/ovn-controller-metrics-2njn2" podUID="e197012d-062b-4bac-90c1-63600d220add"
Dec 06 05:50:21 crc kubenswrapper[4958]: E1206 05:50:21.498799 4958 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-rabbitmq:current"
Dec 06 05:50:21 crc kubenswrapper[4958]: E1206 05:50:21.499104 4958 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-rabbitmq:current"
Dec 06 05:50:21 crc kubenswrapper[4958]: E1206 05:50:21.499333 4958 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.rdoproject.org/podified-master-centos10/openstack-rabbitmq:current,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kz9rr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-notifications-server-0_openstack(74d63159-9580-4b70-ba89-74d4d9eeb7b8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Dec 06 05:50:21 crc kubenswrapper[4958]: E1206 05:50:21.500536 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-notifications-server-0" podUID="74d63159-9580-4b70-ba89-74d4d9eeb7b8"
Dec 06 05:50:21 crc kubenswrapper[4958]: E1206 05:50:21.516187 4958 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-rabbitmq:current"
Dec 06 05:50:21 crc kubenswrapper[4958]: E1206 05:50:21.516241 4958 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-rabbitmq:current"
Dec 06 05:50:21 crc kubenswrapper[4958]: E1206 05:50:21.516377 4958 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.rdoproject.org/podified-master-centos10/openstack-rabbitmq:current,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2b94v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(3141e77c-a73b-400b-b607-21be8537cca4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Dec 06 05:50:21 crc kubenswrapper[4958]: E1206 05:50:21.517554 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="3141e77c-a73b-400b-b607-21be8537cca4"
Dec 06 05:50:21 crc kubenswrapper[4958]: E1206 05:50:21.835338 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-rabbitmq:current\\\"\"" pod="openstack/rabbitmq-notifications-server-0" podUID="74d63159-9580-4b70-ba89-74d4d9eeb7b8"
Dec 06 05:50:21 crc kubenswrapper[4958]: E1206 05:50:21.835774 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-rabbitmq:current\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="3141e77c-a73b-400b-b607-21be8537cca4"
Dec 06 05:50:27 crc kubenswrapper[4958]: E1206 05:50:27.064180 4958 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-ovn-base:current"
Dec 06 05:50:27 crc kubenswrapper[4958]: E1206 05:50:27.064598 4958 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-ovn-base:current"
Dec 06 05:50:27 crc kubenswrapper[4958]: E1206 05:50:27.064750 4958 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:ovsdb-server-init,Image:quay.rdoproject.org/podified-master-centos10/openstack-ovn-base:current,Command:[/usr/local/bin/container-scripts/init-ovsdb-server.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n97h688h674h5cfh5chcbh587h697h686h69hbfh56bhdbh56fhd7hbdh5b6h667hcch674h579h54fh5cdh659h6h67dh5dbh66dh89h8ch68dh566q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-ovs,ReadOnly:false,MountPath:/etc/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run,ReadOnly:false,MountPath:/var/run/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-log,ReadOnly:false,MountPath:/var/log/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-lib,ReadOnly:false,MountPath:/var/lib/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fv6z8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN SYS_ADMIN SYS_NICE],Drop:[],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-controller-ovs-glklh_openstack(ad5ddd17-e280-4547-8d9a-afd3764a5f76): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Dec 06 05:50:27 crc kubenswrapper[4958]: E1206 05:50:27.067171 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdb-server-init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovn-controller-ovs-glklh" podUID="ad5ddd17-e280-4547-8d9a-afd3764a5f76"
Dec 06 05:50:27 crc kubenswrapper[4958]: E1206 05:50:27.069784 4958 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-rabbitmq:current"
Dec 06 05:50:27 crc kubenswrapper[4958]: E1206 05:50:27.069860 4958 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-rabbitmq:current"
Dec 06 05:50:27 crc kubenswrapper[4958]: E1206 05:50:27.070021 4958 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.rdoproject.org/podified-master-centos10/openstack-rabbitmq:current,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mxxfp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(4bc06a17-7bdb-4ee9-bad3-7996be041e54): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Dec 06 05:50:27 crc kubenswrapper[4958]: E1206 05:50:27.071285 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="4bc06a17-7bdb-4ee9-bad3-7996be041e54"
Dec 06 05:50:27 crc kubenswrapper[4958]: E1206 05:50:27.889279 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-rabbitmq:current\\\"\"" pod="openstack/rabbitmq-server-0" podUID="4bc06a17-7bdb-4ee9-bad3-7996be041e54" Dec 06 05:50:27 crc kubenswrapper[4958]: E1206 05:50:27.889632 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdb-server-init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ovn-base:current\\\"\"" pod="openstack/ovn-controller-ovs-glklh" podUID="ad5ddd17-e280-4547-8d9a-afd3764a5f76" Dec 06 05:50:28 crc kubenswrapper[4958]: E1206 05:50:28.505521 4958 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-mariadb:current" Dec 06 05:50:28 crc kubenswrapper[4958]: E1206 05:50:28.505576 4958 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-mariadb:current" Dec 06 05:50:28 crc kubenswrapper[4958]: E1206 05:50:28.505705 4958 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.rdoproject.org/podified-master-centos10/openstack-mariadb:current,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hd47d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-cell1-galera-0_openstack(e2701d0b-9691-44fb-a540-796260e0f2c1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" 
logger="UnhandledError" Dec 06 05:50:28 crc kubenswrapper[4958]: E1206 05:50:28.505932 4958 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-mariadb:current" Dec 06 05:50:28 crc kubenswrapper[4958]: E1206 05:50:28.505971 4958 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-mariadb:current" Dec 06 05:50:28 crc kubenswrapper[4958]: E1206 05:50:28.506081 4958 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.rdoproject.org/podified-master-centos10/openstack-mariadb:current,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7wszn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(0523eb0f-9fe1-49d4-a3b4-6a872317c136): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 05:50:28 crc kubenswrapper[4958]: E1206 05:50:28.506992 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-cell1-galera-0" podUID="e2701d0b-9691-44fb-a540-796260e0f2c1" Dec 06 05:50:28 crc kubenswrapper[4958]: E1206 05:50:28.508536 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/openstack-galera-0" podUID="0523eb0f-9fe1-49d4-a3b4-6a872317c136" Dec 06 05:50:28 crc kubenswrapper[4958]: E1206 05:50:28.895716 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-mariadb:current\\\"\"" pod="openstack/openstack-galera-0" podUID="0523eb0f-9fe1-49d4-a3b4-6a872317c136" Dec 06 05:50:28 crc kubenswrapper[4958]: E1206 05:50:28.896048 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-mariadb:current\\\"\"" pod="openstack/openstack-cell1-galera-0" podUID="e2701d0b-9691-44fb-a540-796260e0f2c1" Dec 06 05:50:29 crc kubenswrapper[4958]: E1206 05:50:29.208565 4958 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-memcached:current" Dec 06 05:50:29 crc kubenswrapper[4958]: E1206 05:50:29.208632 4958 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-memcached:current" Dec 06 05:50:29 crc kubenswrapper[4958]: E1206 05:50:29.208794 4958 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:memcached,Image:quay.rdoproject.org/podified-master-centos10/openstack-memcached:current,Command:[/usr/bin/dumb-init -- /usr/local/bin/kolla_start],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:memcached,HostPort:0,ContainerPort:11211,Protocol:TCP,HostIP:,},ContainerPort{Name:memcached-tls,HostPort:0,ContainerPort:11212,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:POD_IPS,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIPs,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:CONFIG_HASH,Value:n66h57h675h55dhbh5d4h59fh67chd4h5bh87hcbh564h674h6h548h5f9h9bhc9h555h5dbhdh9ch54bh57dh678hfdh587h644h5d5h5b4h5f4q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/src,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/certs/memcached.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:memcached-tls-certs,ReadOnly:true,MountPath:/var/lib/config-data/tls/private/memcached.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-82m8j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec
:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 11211 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42457,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42457,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod memcached-0_openstack(d1076143-5994-4717-9d22-a56c404bc73b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 05:50:29 crc kubenswrapper[4958]: E1206 05:50:29.210104 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/memcached-0" podUID="d1076143-5994-4717-9d22-a56c404bc73b" Dec 06 05:50:29 crc kubenswrapper[4958]: E1206 05:50:29.902257 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"memcached\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-memcached:current\\\"\"" pod="openstack/memcached-0" podUID="d1076143-5994-4717-9d22-a56c404bc73b" Dec 06 05:50:31 crc kubenswrapper[4958]: E1206 05:50:31.814135 4958 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-neutron-server:current" Dec 06 05:50:31 crc kubenswrapper[4958]: E1206 05:50:31.814849 4958 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-neutron-server:current" Dec 06 05:50:31 crc kubenswrapper[4958]: E1206 05:50:31.814998 4958 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.rdoproject.org/podified-master-centos10/openstack-neutron-server:current,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n59dh59h578h67chf9h6h5cch694h9ch677h67fh657h5bfh65dh67fhb8h68dh5dfhf9h55bhcfh84h698h549h5b9h59bh5c8h647h557h9dh57bh5d5q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-nb,SubPath:ovsdbserver-nb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r6g69,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-6fb75c485f-bln6l_openstack(c5f5f30f-ce7d-4002-b649-96fad9a67c3a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 05:50:31 crc kubenswrapper[4958]: E1206 05:50:31.816335 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-6fb75c485f-bln6l" podUID="c5f5f30f-ce7d-4002-b649-96fad9a67c3a" Dec 06 05:50:31 crc kubenswrapper[4958]: E1206 05:50:31.861223 4958 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-neutron-server:current" Dec 06 05:50:31 crc kubenswrapper[4958]: E1206 05:50:31.861280 4958 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-neutron-server:current" Dec 06 05:50:31 crc kubenswrapper[4958]: E1206 05:50:31.861395 4958 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.rdoproject.org/podified-master-centos10/openstack-neutron-server:current,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5c7h56dh5cfh8bh54fhbbhf4h5b9hdch67fhd7h55fh55fh6ch9h548h54ch665h647h6h8fhd6h5dfh5cdh58bh577h66fh695h5fbh55h77h5fcq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cbkbp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-5449989c59-qwcjk_openstack(64da7fa6-1a07-438b-acb7-6eae565a3aa5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 05:50:31 crc kubenswrapper[4958]: E1206 05:50:31.862642 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-5449989c59-qwcjk" podUID="64da7fa6-1a07-438b-acb7-6eae565a3aa5" Dec 06 05:50:31 crc kubenswrapper[4958]: E1206 05:50:31.864352 4958 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-neutron-server:current" Dec 06 05:50:31 crc kubenswrapper[4958]: E1206 05:50:31.864393 4958 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-neutron-server:current" Dec 06 05:50:31 crc kubenswrapper[4958]: E1206 05:50:31.864500 4958 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.rdoproject.org/podified-master-centos10/openstack-neutron-server:current,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d6cxv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-545d49fd5c-sv9vr_openstack(06826d6b-2bdb-47fa-8239-6ffe454d7ca2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 05:50:31 crc kubenswrapper[4958]: E1206 05:50:31.864925 4958 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-neutron-server:current" Dec 06 05:50:31 crc kubenswrapper[4958]: E1206 05:50:31.864952 4958 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-neutron-server:current" Dec 06 05:50:31 crc kubenswrapper[4958]: E1206 05:50:31.865045 4958 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.rdoproject.org/podified-master-centos10/openstack-neutron-server:current,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7z45s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-8468885bfc-kqpbq_openstack(d945d51a-1515-419b-9fe8-1303b1e56ef4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 05:50:31 crc kubenswrapper[4958]: E1206 05:50:31.866071 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-545d49fd5c-sv9vr" podUID="06826d6b-2bdb-47fa-8239-6ffe454d7ca2" Dec 06 05:50:31 crc kubenswrapper[4958]: E1206 05:50:31.866120 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-8468885bfc-kqpbq" podUID="d945d51a-1515-419b-9fe8-1303b1e56ef4" Dec 06 05:50:31 crc kubenswrapper[4958]: E1206 05:50:31.882455 4958 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-neutron-server:current" Dec 06 05:50:31 crc kubenswrapper[4958]: E1206 05:50:31.882529 4958 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-neutron-server:current" Dec 06 05:50:31 crc kubenswrapper[4958]: E1206 05:50:31.882663 4958 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.rdoproject.org/podified-master-centos10/openstack-neutron-server:current,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nfdh5dfhb6h64h676hc4h78h97h669h54chfbh696hb5h54bh5d4h6bh64h644h677h584h5cbh698h9dh5bbh5f8h5b8hcdh644h5c7h694hbfh589q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ls7hn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-9bd5d9d8c-st47d_openstack(d1f60d2d-8caa-4bf4-a86d-43e90b93ab8e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 05:50:31 crc kubenswrapper[4958]: E1206 05:50:31.883837 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-9bd5d9d8c-st47d" podUID="d1f60d2d-8caa-4bf4-a86d-43e90b93ab8e" Dec 06 05:50:31 crc kubenswrapper[4958]: E1206 05:50:31.916857 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-neutron-server:current\\\"\"" pod="openstack/dnsmasq-dns-6fb75c485f-bln6l" podUID="c5f5f30f-ce7d-4002-b649-96fad9a67c3a" Dec 06 05:50:34 crc kubenswrapper[4958]: E1206 05:50:34.386815 4958 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-ovn-nb-db-server:current" Dec 06 05:50:34 crc kubenswrapper[4958]: E1206 05:50:34.387486 4958 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-ovn-nb-db-server:current" Dec 06 05:50:34 crc kubenswrapper[4958]: E1206 05:50:34.387680 4958 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:ovsdbserver-nb,Image:quay.rdoproject.org/podified-master-centos10/openstack-ovn-nb-db-server:current,Command:[/usr/bin/dumb-init],Args:[/usr/local/bin/container-scripts/setup.sh],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nddh57bh575h59dh696hb9h8bh698hd5h68ch7ch6bh694hcdh55bh5dchb9h5fdhbdhbfh5cbh5c5h5f4h688h679hf7hd6h58ch75h595h7dh54cq,ValueFrom:nil,},EnvVar{Name:OVN_LOGDIR,Value:/tmp,ValueFrom:nil,},EnvVar{Name:OVN_RUNDIR,Value:/tmp,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovndbcluster-nb-etc-ovn,ReadOnly:false,MountPath:/etc/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdb-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-svczs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/cleanup.sh],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:20,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
ovsdbserver-nb-0_openstack(7ff9bb68-a6f7-4ed7-b27a-da119f13b8a7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 05:50:34 crc kubenswrapper[4958]: I1206 05:50:34.414094 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5449989c59-qwcjk" Dec 06 05:50:34 crc kubenswrapper[4958]: I1206 05:50:34.420540 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8468885bfc-kqpbq" Dec 06 05:50:34 crc kubenswrapper[4958]: I1206 05:50:34.517364 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64da7fa6-1a07-438b-acb7-6eae565a3aa5-dns-svc\") pod \"64da7fa6-1a07-438b-acb7-6eae565a3aa5\" (UID: \"64da7fa6-1a07-438b-acb7-6eae565a3aa5\") " Dec 06 05:50:34 crc kubenswrapper[4958]: I1206 05:50:34.517436 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7z45s\" (UniqueName: \"kubernetes.io/projected/d945d51a-1515-419b-9fe8-1303b1e56ef4-kube-api-access-7z45s\") pod \"d945d51a-1515-419b-9fe8-1303b1e56ef4\" (UID: \"d945d51a-1515-419b-9fe8-1303b1e56ef4\") " Dec 06 05:50:34 crc kubenswrapper[4958]: I1206 05:50:34.517550 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64da7fa6-1a07-438b-acb7-6eae565a3aa5-config\") pod \"64da7fa6-1a07-438b-acb7-6eae565a3aa5\" (UID: \"64da7fa6-1a07-438b-acb7-6eae565a3aa5\") " Dec 06 05:50:34 crc kubenswrapper[4958]: I1206 05:50:34.517626 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d945d51a-1515-419b-9fe8-1303b1e56ef4-config\") pod \"d945d51a-1515-419b-9fe8-1303b1e56ef4\" (UID: \"d945d51a-1515-419b-9fe8-1303b1e56ef4\") " Dec 06 05:50:34 crc kubenswrapper[4958]: I1206 05:50:34.517708 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cbkbp\" (UniqueName: \"kubernetes.io/projected/64da7fa6-1a07-438b-acb7-6eae565a3aa5-kube-api-access-cbkbp\") pod \"64da7fa6-1a07-438b-acb7-6eae565a3aa5\" (UID: \"64da7fa6-1a07-438b-acb7-6eae565a3aa5\") " Dec 06 05:50:34 crc kubenswrapper[4958]: I1206 05:50:34.518069 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64da7fa6-1a07-438b-acb7-6eae565a3aa5-config" (OuterVolumeSpecName: "config") pod "64da7fa6-1a07-438b-acb7-6eae565a3aa5" (UID: "64da7fa6-1a07-438b-acb7-6eae565a3aa5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:50:34 crc kubenswrapper[4958]: I1206 05:50:34.518152 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/64da7fa6-1a07-438b-acb7-6eae565a3aa5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "64da7fa6-1a07-438b-acb7-6eae565a3aa5" (UID: "64da7fa6-1a07-438b-acb7-6eae565a3aa5"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:50:34 crc kubenswrapper[4958]: I1206 05:50:34.518258 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/64da7fa6-1a07-438b-acb7-6eae565a3aa5-config\") on node \"crc\" DevicePath \"\"" Dec 06 05:50:34 crc kubenswrapper[4958]: I1206 05:50:34.518499 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d945d51a-1515-419b-9fe8-1303b1e56ef4-config" (OuterVolumeSpecName: "config") pod "d945d51a-1515-419b-9fe8-1303b1e56ef4" (UID: "d945d51a-1515-419b-9fe8-1303b1e56ef4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:50:34 crc kubenswrapper[4958]: I1206 05:50:34.524038 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d945d51a-1515-419b-9fe8-1303b1e56ef4-kube-api-access-7z45s" (OuterVolumeSpecName: "kube-api-access-7z45s") pod "d945d51a-1515-419b-9fe8-1303b1e56ef4" (UID: "d945d51a-1515-419b-9fe8-1303b1e56ef4"). InnerVolumeSpecName "kube-api-access-7z45s". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:50:34 crc kubenswrapper[4958]: I1206 05:50:34.525097 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64da7fa6-1a07-438b-acb7-6eae565a3aa5-kube-api-access-cbkbp" (OuterVolumeSpecName: "kube-api-access-cbkbp") pod "64da7fa6-1a07-438b-acb7-6eae565a3aa5" (UID: "64da7fa6-1a07-438b-acb7-6eae565a3aa5"). InnerVolumeSpecName "kube-api-access-cbkbp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:50:34 crc kubenswrapper[4958]: I1206 05:50:34.620031 4958 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/64da7fa6-1a07-438b-acb7-6eae565a3aa5-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 06 05:50:34 crc kubenswrapper[4958]: I1206 05:50:34.620065 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7z45s\" (UniqueName: \"kubernetes.io/projected/d945d51a-1515-419b-9fe8-1303b1e56ef4-kube-api-access-7z45s\") on node \"crc\" DevicePath \"\"" Dec 06 05:50:34 crc kubenswrapper[4958]: I1206 05:50:34.620077 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d945d51a-1515-419b-9fe8-1303b1e56ef4-config\") on node \"crc\" DevicePath \"\"" Dec 06 05:50:34 crc kubenswrapper[4958]: I1206 05:50:34.620087 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cbkbp\" (UniqueName: \"kubernetes.io/projected/64da7fa6-1a07-438b-acb7-6eae565a3aa5-kube-api-access-cbkbp\") on node \"crc\" DevicePath \"\"" Dec 06 05:50:34 crc kubenswrapper[4958]: E1206 05:50:34.652170 4958 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-ovn-controller:current" Dec 06 05:50:34 crc kubenswrapper[4958]: E1206 05:50:34.652236 4958 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-ovn-controller:current" Dec 06 05:50:34 crc kubenswrapper[4958]: E1206 05:50:34.652671 4958 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ovn-controller,Image:quay.rdoproject.org/podified-master-centos10/openstack-ovn-controller:current,Command:[ovn-controller --pidfile 
unix:/run/openvswitch/db.sock --certificate=/etc/pki/tls/certs/ovndb.crt --private-key=/etc/pki/tls/private/ovndb.key --ca-cert=/etc/pki/tls/certs/ovndbca.crt],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n97h688h674h5cfh5chcbh587h697h686h69hbfh56bhdbh56fhd7hbdh5b6h667hcch674h579h54fh5cdh659h6h67dh5dbh66dh89h8ch68dh566q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:var-run,ReadOnly:false,MountPath:/var/run/openvswitch,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run-ovn,ReadOnly:false,MountPath:/var/run/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-log-ovn,ReadOnly:false,MountPath:/var/log/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovn-controller-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qw2vl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/ovn_controller_liveness.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/ovn_controller_readiness.sh],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/share/ovn/scripts/ovn-ctl stop_controller],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_ADMIN SYS_ADMIN SYS_NICE],Drop:[],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-controller-rsngm_openstack(e72d0843-3802-4dbf-b292-8f37386cdeb5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 05:50:34 
crc kubenswrapper[4958]: E1206 05:50:34.653890 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-controller\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovn-controller-rsngm" podUID="e72d0843-3802-4dbf-b292-8f37386cdeb5" Dec 06 05:50:34 crc kubenswrapper[4958]: E1206 05:50:34.875318 4958 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-ovn-sb-db-server:current" Dec 06 05:50:34 crc kubenswrapper[4958]: E1206 05:50:34.875360 4958 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-ovn-sb-db-server:current" Dec 06 05:50:34 crc kubenswrapper[4958]: E1206 05:50:34.875517 4958 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ovsdbserver-sb,Image:quay.rdoproject.org/podified-master-centos10/openstack-ovn-sb-db-server:current,Command:[/usr/bin/dumb-init],Args:[/usr/local/bin/container-scripts/setup.sh],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n58fh5fch596h584h598h648h67chd6hdbh57h86h556h58ch66dhb4h5c6h4h586h678h57ch56chf5h587h85h95h88hfch678h56dhc4h698h59q,ValueFrom:nil,},EnvVar{Name:OVN_LOGDIR,Value:/tmp,ValueFrom:nil,},EnvVar{Name:OVN_RUNDIR,Value:/tmp,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovndbcluster-sb-etc-ovn,ReadOnly:false,MountPath:/etc/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdb-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xsdgg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof 
ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/cleanup.sh],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:20,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovsdbserver-sb-0_openstack(7719819d-5798-4ab7-bee0-cd8b736f92a2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 05:50:34 crc kubenswrapper[4958]: I1206 05:50:34.883072 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9bd5d9d8c-st47d" Dec 06 05:50:34 crc kubenswrapper[4958]: I1206 05:50:34.900811 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-545d49fd5c-sv9vr" Dec 06 05:50:34 crc kubenswrapper[4958]: I1206 05:50:34.925023 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1f60d2d-8caa-4bf4-a86d-43e90b93ab8e-config\") pod \"d1f60d2d-8caa-4bf4-a86d-43e90b93ab8e\" (UID: \"d1f60d2d-8caa-4bf4-a86d-43e90b93ab8e\") " Dec 06 05:50:34 crc kubenswrapper[4958]: I1206 05:50:34.925081 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ls7hn\" (UniqueName: \"kubernetes.io/projected/d1f60d2d-8caa-4bf4-a86d-43e90b93ab8e-kube-api-access-ls7hn\") pod \"d1f60d2d-8caa-4bf4-a86d-43e90b93ab8e\" (UID: \"d1f60d2d-8caa-4bf4-a86d-43e90b93ab8e\") " Dec 06 05:50:34 crc kubenswrapper[4958]: I1206 05:50:34.925562 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d1f60d2d-8caa-4bf4-a86d-43e90b93ab8e-dns-svc\") pod \"d1f60d2d-8caa-4bf4-a86d-43e90b93ab8e\" (UID: \"d1f60d2d-8caa-4bf4-a86d-43e90b93ab8e\") " Dec 06 05:50:34 crc kubenswrapper[4958]: I1206 05:50:34.926265 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1f60d2d-8caa-4bf4-a86d-43e90b93ab8e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d1f60d2d-8caa-4bf4-a86d-43e90b93ab8e" (UID: "d1f60d2d-8caa-4bf4-a86d-43e90b93ab8e"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:50:34 crc kubenswrapper[4958]: I1206 05:50:34.926668 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1f60d2d-8caa-4bf4-a86d-43e90b93ab8e-config" (OuterVolumeSpecName: "config") pod "d1f60d2d-8caa-4bf4-a86d-43e90b93ab8e" (UID: "d1f60d2d-8caa-4bf4-a86d-43e90b93ab8e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:50:34 crc kubenswrapper[4958]: E1206 05:50:34.930728 4958 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-neutron-server:current" Dec 06 05:50:34 crc kubenswrapper[4958]: E1206 05:50:34.930770 4958 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-neutron-server:current" Dec 06 05:50:34 crc kubenswrapper[4958]: E1206 05:50:34.930875 4958 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.rdoproject.org/podified-master-centos10/openstack-neutron-server:current,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n66ch5f4h665h54fh68bh89h5cdh5ddh678h4h87h78h5cbhc9h9fh679hd4h554hbdh546hdh5dchb8h57fh5h54h56h74hb5h5b7h5fch66q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fwrzx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-bcf47f659-fcmcv_openstack(c31aa749-ff33-4a9f-a9d6-de9575956925): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 05:50:34 crc kubenswrapper[4958]: I1206 05:50:34.931219 4958 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/projected/d1f60d2d-8caa-4bf4-a86d-43e90b93ab8e-kube-api-access-ls7hn" (OuterVolumeSpecName: "kube-api-access-ls7hn") pod "d1f60d2d-8caa-4bf4-a86d-43e90b93ab8e" (UID: "d1f60d2d-8caa-4bf4-a86d-43e90b93ab8e"). InnerVolumeSpecName "kube-api-access-ls7hn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:50:34 crc kubenswrapper[4958]: E1206 05:50:34.932488 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-bcf47f659-fcmcv" podUID="c31aa749-ff33-4a9f-a9d6-de9575956925" Dec 06 05:50:34 crc kubenswrapper[4958]: I1206 05:50:34.939566 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-545d49fd5c-sv9vr" event={"ID":"06826d6b-2bdb-47fa-8239-6ffe454d7ca2","Type":"ContainerDied","Data":"d6522cce6073dad7c3e6aa97e6d9e78aa43f0904dadda49dc8363720cb987047"} Dec 06 05:50:34 crc kubenswrapper[4958]: I1206 05:50:34.939658 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-545d49fd5c-sv9vr" Dec 06 05:50:34 crc kubenswrapper[4958]: I1206 05:50:34.940489 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8468885bfc-kqpbq" event={"ID":"d945d51a-1515-419b-9fe8-1303b1e56ef4","Type":"ContainerDied","Data":"5c93258c2dc0c2aeb08f8a17395dfd9a163f4cfdc4614351c63667b6a1633b14"} Dec 06 05:50:34 crc kubenswrapper[4958]: I1206 05:50:34.940536 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8468885bfc-kqpbq" Dec 06 05:50:34 crc kubenswrapper[4958]: I1206 05:50:34.942497 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9bd5d9d8c-st47d" event={"ID":"d1f60d2d-8caa-4bf4-a86d-43e90b93ab8e","Type":"ContainerDied","Data":"5056483f44d5c51b92b0eadc84f04e1a2364be2b46302b171e45122718232326"} Dec 06 05:50:34 crc kubenswrapper[4958]: I1206 05:50:34.942532 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9bd5d9d8c-st47d" Dec 06 05:50:34 crc kubenswrapper[4958]: I1206 05:50:34.944054 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5449989c59-qwcjk" Dec 06 05:50:34 crc kubenswrapper[4958]: I1206 05:50:34.944048 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5449989c59-qwcjk" event={"ID":"64da7fa6-1a07-438b-acb7-6eae565a3aa5","Type":"ContainerDied","Data":"f36ef116341c4bfc018ca67342969ef4c73177179a2bb463513a3c2751eef7ea"} Dec 06 05:50:34 crc kubenswrapper[4958]: E1206 05:50:34.947813 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovn-controller\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ovn-controller:current\\\"\"" pod="openstack/ovn-controller-rsngm" podUID="e72d0843-3802-4dbf-b292-8f37386cdeb5" Dec 06 05:50:35 crc kubenswrapper[4958]: I1206 05:50:35.026309 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9bd5d9d8c-st47d"] Dec 06 05:50:35 crc kubenswrapper[4958]: I1206 05:50:35.026901 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06826d6b-2bdb-47fa-8239-6ffe454d7ca2-config\") pod \"06826d6b-2bdb-47fa-8239-6ffe454d7ca2\" (UID: \"06826d6b-2bdb-47fa-8239-6ffe454d7ca2\") " Dec 06 05:50:35 crc kubenswrapper[4958]: I1206 05:50:35.026999 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6cxv\" (UniqueName: \"kubernetes.io/projected/06826d6b-2bdb-47fa-8239-6ffe454d7ca2-kube-api-access-d6cxv\") pod \"06826d6b-2bdb-47fa-8239-6ffe454d7ca2\" (UID: \"06826d6b-2bdb-47fa-8239-6ffe454d7ca2\") " Dec 06 05:50:35 crc kubenswrapper[4958]: I1206 05:50:35.027034 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/06826d6b-2bdb-47fa-8239-6ffe454d7ca2-dns-svc\") pod \"06826d6b-2bdb-47fa-8239-6ffe454d7ca2\" (UID: \"06826d6b-2bdb-47fa-8239-6ffe454d7ca2\") " Dec 06 05:50:35 crc kubenswrapper[4958]: I1206 05:50:35.027674 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d1f60d2d-8caa-4bf4-a86d-43e90b93ab8e-config\") on node \"crc\" DevicePath \"\"" Dec 06 05:50:35 crc kubenswrapper[4958]: I1206 05:50:35.027691 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ls7hn\" (UniqueName: \"kubernetes.io/projected/d1f60d2d-8caa-4bf4-a86d-43e90b93ab8e-kube-api-access-ls7hn\") on node \"crc\" DevicePath \"\"" Dec 06 05:50:35 crc kubenswrapper[4958]: I1206 05:50:35.027701 4958 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d1f60d2d-8caa-4bf4-a86d-43e90b93ab8e-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 06 05:50:35 crc kubenswrapper[4958]: I1206 05:50:35.028186 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06826d6b-2bdb-47fa-8239-6ffe454d7ca2-config" (OuterVolumeSpecName: "config") pod "06826d6b-2bdb-47fa-8239-6ffe454d7ca2" (UID: "06826d6b-2bdb-47fa-8239-6ffe454d7ca2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:50:35 crc kubenswrapper[4958]: I1206 05:50:35.029057 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06826d6b-2bdb-47fa-8239-6ffe454d7ca2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "06826d6b-2bdb-47fa-8239-6ffe454d7ca2" (UID: "06826d6b-2bdb-47fa-8239-6ffe454d7ca2"). 
InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:50:35 crc kubenswrapper[4958]: I1206 05:50:35.040016 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-9bd5d9d8c-st47d"] Dec 06 05:50:35 crc kubenswrapper[4958]: I1206 05:50:35.042838 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06826d6b-2bdb-47fa-8239-6ffe454d7ca2-kube-api-access-d6cxv" (OuterVolumeSpecName: "kube-api-access-d6cxv") pod "06826d6b-2bdb-47fa-8239-6ffe454d7ca2" (UID: "06826d6b-2bdb-47fa-8239-6ffe454d7ca2"). InnerVolumeSpecName "kube-api-access-d6cxv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:50:35 crc kubenswrapper[4958]: I1206 05:50:35.059629 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5449989c59-qwcjk"] Dec 06 05:50:35 crc kubenswrapper[4958]: I1206 05:50:35.077485 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5449989c59-qwcjk"] Dec 06 05:50:35 crc kubenswrapper[4958]: I1206 05:50:35.097590 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8468885bfc-kqpbq"] Dec 06 05:50:35 crc kubenswrapper[4958]: I1206 05:50:35.107216 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8468885bfc-kqpbq"] Dec 06 05:50:35 crc kubenswrapper[4958]: I1206 05:50:35.129438 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/06826d6b-2bdb-47fa-8239-6ffe454d7ca2-config\") on node \"crc\" DevicePath \"\"" Dec 06 05:50:35 crc kubenswrapper[4958]: I1206 05:50:35.129487 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6cxv\" (UniqueName: \"kubernetes.io/projected/06826d6b-2bdb-47fa-8239-6ffe454d7ca2-kube-api-access-d6cxv\") on node \"crc\" DevicePath \"\"" Dec 06 05:50:35 crc kubenswrapper[4958]: I1206 05:50:35.129497 4958 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/06826d6b-2bdb-47fa-8239-6ffe454d7ca2-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 06 05:50:35 crc kubenswrapper[4958]: I1206 05:50:35.312414 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-545d49fd5c-sv9vr"] Dec 06 05:50:35 crc kubenswrapper[4958]: I1206 05:50:35.322012 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-545d49fd5c-sv9vr"] Dec 06 05:50:35 crc kubenswrapper[4958]: E1206 05:50:35.628926 4958 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics@sha256:db384bf43222b066c378e77027a675d4cd9911107adba46c2922b3a55e10d6fb" Dec 06 05:50:35 crc kubenswrapper[4958]: E1206 05:50:35.628970 4958 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics@sha256:db384bf43222b066c378e77027a675d4cd9911107adba46c2922b3a55e10d6fb" Dec 06 05:50:35 crc kubenswrapper[4958]: E1206 05:50:35.629092 4958 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics@sha256:db384bf43222b066c378e77027a675d4cd9911107adba46c2922b3a55e10d6fb,Command:[],Args:[--resources=pods 
Dec 06 05:50:35 crc kubenswrapper[4958]: E1206 05:50:35.629092 4958 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics@sha256:db384bf43222b066c378e77027a675d4cd9911107adba46c2922b3a55e10d6fb,Command:[],Args:[--resources=pods --namespaces=openstack],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:telemetry,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mvt45,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-state-metrics-0_openstack(636ba8d3-ffe7-42ba-85eb-0cd2da08036d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Dec 06 05:50:35 crc kubenswrapper[4958]: E1206 05:50:35.630164 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/kube-state-metrics-0" podUID="636ba8d3-ffe7-42ba-85eb-0cd2da08036d"
Dec 06 05:50:35 crc kubenswrapper[4958]: I1206 05:50:35.771959 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06826d6b-2bdb-47fa-8239-6ffe454d7ca2" path="/var/lib/kubelet/pods/06826d6b-2bdb-47fa-8239-6ffe454d7ca2/volumes"
Dec 06 05:50:35 crc kubenswrapper[4958]: I1206 05:50:35.772569 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64da7fa6-1a07-438b-acb7-6eae565a3aa5" path="/var/lib/kubelet/pods/64da7fa6-1a07-438b-acb7-6eae565a3aa5/volumes"
Dec 06 05:50:35 crc kubenswrapper[4958]: I1206 05:50:35.772896 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1f60d2d-8caa-4bf4-a86d-43e90b93ab8e" path="/var/lib/kubelet/pods/d1f60d2d-8caa-4bf4-a86d-43e90b93ab8e/volumes"
Dec 06 05:50:35 crc kubenswrapper[4958]: I1206 05:50:35.773346 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d945d51a-1515-419b-9fe8-1303b1e56ef4" path="/var/lib/kubelet/pods/d945d51a-1515-419b-9fe8-1303b1e56ef4/volumes"
skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/kube-state-metrics/kube-state-metrics@sha256:db384bf43222b066c378e77027a675d4cd9911107adba46c2922b3a55e10d6fb\\\"\"" pod="openstack/kube-state-metrics-0" podUID="636ba8d3-ffe7-42ba-85eb-0cd2da08036d" Dec 06 05:50:35 crc kubenswrapper[4958]: E1206 05:50:35.955283 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-neutron-server:current\\\"\"" pod="openstack/dnsmasq-dns-bcf47f659-fcmcv" podUID="c31aa749-ff33-4a9f-a9d6-de9575956925" Dec 06 05:50:36 crc kubenswrapper[4958]: I1206 05:50:36.978074 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"cb13a191-c616-4f16-82bd-138a1cd46032","Type":"ContainerStarted","Data":"205186b1485d10489f76be2c99797b21418d2198b72a3a63173ff7871aa7c422"} Dec 06 05:50:37 crc kubenswrapper[4958]: I1206 05:50:37.985598 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"3141e77c-a73b-400b-b607-21be8537cca4","Type":"ContainerStarted","Data":"5447173ef5fee4c34e757e67557ea57f1af67957b37f37d702d14e1373aa852f"} Dec 06 05:50:37 crc kubenswrapper[4958]: I1206 05:50:37.990653 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-notifications-server-0" event={"ID":"74d63159-9580-4b70-ba89-74d4d9eeb7b8","Type":"ContainerStarted","Data":"e293fe4c558ae3d2c625d5386e44fa18ab0d16667732f5493c5d33f29013b20a"} Dec 06 05:50:38 crc kubenswrapper[4958]: E1206 05:50:38.420228 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-sb\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovsdbserver-sb-0" podUID="7719819d-5798-4ab7-bee0-cd8b736f92a2" Dec 06 05:50:38 crc kubenswrapper[4958]: E1206 05:50:38.554132 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-nb\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ovsdbserver-nb-0" podUID="7ff9bb68-a6f7-4ed7-b27a-da119f13b8a7" Dec 06 05:50:39 crc kubenswrapper[4958]: I1206 05:50:39.003421 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"7ff9bb68-a6f7-4ed7-b27a-da119f13b8a7","Type":"ContainerStarted","Data":"b090063cb8b78501fbeabd4ebc1498b24ad8f82a666efc79464c5cffe3ab3374"} Dec 06 05:50:39 crc kubenswrapper[4958]: E1206 05:50:39.006441 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-nb\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ovn-nb-db-server:current\\\"\"" pod="openstack/ovsdbserver-nb-0" podUID="7ff9bb68-a6f7-4ed7-b27a-da119f13b8a7" Dec 06 05:50:39 crc kubenswrapper[4958]: I1206 05:50:39.009185 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-2njn2" event={"ID":"e197012d-062b-4bac-90c1-63600d220add","Type":"ContainerStarted","Data":"605d586eedc7b6ba105faeaf54c011b3a25c9d82dff5ecad2d6867296df05ced"} Dec 06 05:50:39 crc kubenswrapper[4958]: I1206 05:50:39.013988 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" 
event={"ID":"cb13a191-c616-4f16-82bd-138a1cd46032","Type":"ContainerStarted","Data":"a2a4a0c97d15da14cfe482790a6839d80fd4cf54822cac49544b1a59f8f0f0ca"} Dec 06 05:50:39 crc kubenswrapper[4958]: I1206 05:50:39.016386 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"7719819d-5798-4ab7-bee0-cd8b736f92a2","Type":"ContainerStarted","Data":"40b0eb9639a0bab8807a70c38b0a2e7d8da4b757fb9ccb95cc9953c6ec7b11f6"} Dec 06 05:50:39 crc kubenswrapper[4958]: E1206 05:50:39.018690 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-sb\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ovn-sb-db-server:current\\\"\"" pod="openstack/ovsdbserver-sb-0" podUID="7719819d-5798-4ab7-bee0-cd8b736f92a2" Dec 06 05:50:39 crc kubenswrapper[4958]: I1206 05:50:39.050360 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-2njn2" podStartSLOduration=2.415757562 podStartE2EDuration="1m1.050342929s" podCreationTimestamp="2025-12-06 05:49:38 +0000 UTC" firstStartedPulling="2025-12-06 05:49:39.228361274 +0000 UTC m=+1289.762132037" lastFinishedPulling="2025-12-06 05:50:37.862946641 +0000 UTC m=+1348.396717404" observedRunningTime="2025-12-06 05:50:39.049133306 +0000 UTC m=+1349.582904079" watchObservedRunningTime="2025-12-06 05:50:39.050342929 +0000 UTC m=+1349.584113702" Dec 06 05:50:39 crc kubenswrapper[4958]: I1206 05:50:39.335599 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bcf47f659-fcmcv"] Dec 06 05:50:39 crc kubenswrapper[4958]: I1206 05:50:39.376274 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6dbf544cc9-q6cs8"] Dec 06 05:50:39 crc kubenswrapper[4958]: I1206 05:50:39.377611 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6dbf544cc9-q6cs8" Dec 06 05:50:39 crc kubenswrapper[4958]: I1206 05:50:39.381978 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Dec 06 05:50:39 crc kubenswrapper[4958]: I1206 05:50:39.392412 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6dbf544cc9-q6cs8"] Dec 06 05:50:39 crc kubenswrapper[4958]: I1206 05:50:39.419241 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64l8j\" (UniqueName: \"kubernetes.io/projected/50541cab-c7c9-4995-af23-be9f98383190-kube-api-access-64l8j\") pod \"dnsmasq-dns-6dbf544cc9-q6cs8\" (UID: \"50541cab-c7c9-4995-af23-be9f98383190\") " pod="openstack/dnsmasq-dns-6dbf544cc9-q6cs8" Dec 06 05:50:39 crc kubenswrapper[4958]: I1206 05:50:39.419296 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/50541cab-c7c9-4995-af23-be9f98383190-ovsdbserver-nb\") pod \"dnsmasq-dns-6dbf544cc9-q6cs8\" (UID: \"50541cab-c7c9-4995-af23-be9f98383190\") " pod="openstack/dnsmasq-dns-6dbf544cc9-q6cs8" Dec 06 05:50:39 crc kubenswrapper[4958]: I1206 05:50:39.419331 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/50541cab-c7c9-4995-af23-be9f98383190-dns-svc\") pod \"dnsmasq-dns-6dbf544cc9-q6cs8\" (UID: \"50541cab-c7c9-4995-af23-be9f98383190\") " pod="openstack/dnsmasq-dns-6dbf544cc9-q6cs8" Dec 06 05:50:39 crc kubenswrapper[4958]: I1206 05:50:39.419529 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/50541cab-c7c9-4995-af23-be9f98383190-ovsdbserver-sb\") pod \"dnsmasq-dns-6dbf544cc9-q6cs8\" (UID: \"50541cab-c7c9-4995-af23-be9f98383190\") " pod="openstack/dnsmasq-dns-6dbf544cc9-q6cs8" Dec 06 05:50:39 crc kubenswrapper[4958]: I1206 05:50:39.419603 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50541cab-c7c9-4995-af23-be9f98383190-config\") pod \"dnsmasq-dns-6dbf544cc9-q6cs8\" (UID: \"50541cab-c7c9-4995-af23-be9f98383190\") " pod="openstack/dnsmasq-dns-6dbf544cc9-q6cs8" Dec 06 05:50:39 crc kubenswrapper[4958]: I1206 05:50:39.521321 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50541cab-c7c9-4995-af23-be9f98383190-config\") pod \"dnsmasq-dns-6dbf544cc9-q6cs8\" (UID: \"50541cab-c7c9-4995-af23-be9f98383190\") " pod="openstack/dnsmasq-dns-6dbf544cc9-q6cs8" Dec 06 05:50:39 crc kubenswrapper[4958]: I1206 05:50:39.521415 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64l8j\" (UniqueName: \"kubernetes.io/projected/50541cab-c7c9-4995-af23-be9f98383190-kube-api-access-64l8j\") pod \"dnsmasq-dns-6dbf544cc9-q6cs8\" (UID: \"50541cab-c7c9-4995-af23-be9f98383190\") " pod="openstack/dnsmasq-dns-6dbf544cc9-q6cs8" Dec 06 05:50:39 crc kubenswrapper[4958]: I1206 05:50:39.521460 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/50541cab-c7c9-4995-af23-be9f98383190-ovsdbserver-nb\") pod \"dnsmasq-dns-6dbf544cc9-q6cs8\" (UID: \"50541cab-c7c9-4995-af23-be9f98383190\") " 
pod="openstack/dnsmasq-dns-6dbf544cc9-q6cs8" Dec 06 05:50:39 crc kubenswrapper[4958]: I1206 05:50:39.521527 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/50541cab-c7c9-4995-af23-be9f98383190-dns-svc\") pod \"dnsmasq-dns-6dbf544cc9-q6cs8\" (UID: \"50541cab-c7c9-4995-af23-be9f98383190\") " pod="openstack/dnsmasq-dns-6dbf544cc9-q6cs8" Dec 06 05:50:39 crc kubenswrapper[4958]: I1206 05:50:39.521573 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/50541cab-c7c9-4995-af23-be9f98383190-ovsdbserver-sb\") pod \"dnsmasq-dns-6dbf544cc9-q6cs8\" (UID: \"50541cab-c7c9-4995-af23-be9f98383190\") " pod="openstack/dnsmasq-dns-6dbf544cc9-q6cs8" Dec 06 05:50:39 crc kubenswrapper[4958]: I1206 05:50:39.522340 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50541cab-c7c9-4995-af23-be9f98383190-config\") pod \"dnsmasq-dns-6dbf544cc9-q6cs8\" (UID: \"50541cab-c7c9-4995-af23-be9f98383190\") " pod="openstack/dnsmasq-dns-6dbf544cc9-q6cs8" Dec 06 05:50:39 crc kubenswrapper[4958]: I1206 05:50:39.522374 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/50541cab-c7c9-4995-af23-be9f98383190-ovsdbserver-sb\") pod \"dnsmasq-dns-6dbf544cc9-q6cs8\" (UID: \"50541cab-c7c9-4995-af23-be9f98383190\") " pod="openstack/dnsmasq-dns-6dbf544cc9-q6cs8" Dec 06 05:50:39 crc kubenswrapper[4958]: I1206 05:50:39.522400 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/50541cab-c7c9-4995-af23-be9f98383190-dns-svc\") pod \"dnsmasq-dns-6dbf544cc9-q6cs8\" (UID: \"50541cab-c7c9-4995-af23-be9f98383190\") " pod="openstack/dnsmasq-dns-6dbf544cc9-q6cs8" Dec 06 05:50:39 crc kubenswrapper[4958]: I1206 05:50:39.522680 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/50541cab-c7c9-4995-af23-be9f98383190-ovsdbserver-nb\") pod \"dnsmasq-dns-6dbf544cc9-q6cs8\" (UID: \"50541cab-c7c9-4995-af23-be9f98383190\") " pod="openstack/dnsmasq-dns-6dbf544cc9-q6cs8" Dec 06 05:50:39 crc kubenswrapper[4958]: I1206 05:50:39.558590 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64l8j\" (UniqueName: \"kubernetes.io/projected/50541cab-c7c9-4995-af23-be9f98383190-kube-api-access-64l8j\") pod \"dnsmasq-dns-6dbf544cc9-q6cs8\" (UID: \"50541cab-c7c9-4995-af23-be9f98383190\") " pod="openstack/dnsmasq-dns-6dbf544cc9-q6cs8" Dec 06 05:50:39 crc kubenswrapper[4958]: I1206 05:50:39.659809 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bcf47f659-fcmcv" Dec 06 05:50:39 crc kubenswrapper[4958]: I1206 05:50:39.702194 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6dbf544cc9-q6cs8" Dec 06 05:50:39 crc kubenswrapper[4958]: I1206 05:50:39.723773 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c31aa749-ff33-4a9f-a9d6-de9575956925-config\") pod \"c31aa749-ff33-4a9f-a9d6-de9575956925\" (UID: \"c31aa749-ff33-4a9f-a9d6-de9575956925\") " Dec 06 05:50:39 crc kubenswrapper[4958]: I1206 05:50:39.723885 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c31aa749-ff33-4a9f-a9d6-de9575956925-dns-svc\") pod \"c31aa749-ff33-4a9f-a9d6-de9575956925\" (UID: \"c31aa749-ff33-4a9f-a9d6-de9575956925\") " Dec 06 05:50:39 crc kubenswrapper[4958]: I1206 05:50:39.724307 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c31aa749-ff33-4a9f-a9d6-de9575956925-config" (OuterVolumeSpecName: "config") pod "c31aa749-ff33-4a9f-a9d6-de9575956925" (UID: "c31aa749-ff33-4a9f-a9d6-de9575956925"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:50:39 crc kubenswrapper[4958]: I1206 05:50:39.724328 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c31aa749-ff33-4a9f-a9d6-de9575956925-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c31aa749-ff33-4a9f-a9d6-de9575956925" (UID: "c31aa749-ff33-4a9f-a9d6-de9575956925"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:50:39 crc kubenswrapper[4958]: I1206 05:50:39.724445 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fwrzx\" (UniqueName: \"kubernetes.io/projected/c31aa749-ff33-4a9f-a9d6-de9575956925-kube-api-access-fwrzx\") pod \"c31aa749-ff33-4a9f-a9d6-de9575956925\" (UID: \"c31aa749-ff33-4a9f-a9d6-de9575956925\") " Dec 06 05:50:39 crc kubenswrapper[4958]: I1206 05:50:39.725480 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c31aa749-ff33-4a9f-a9d6-de9575956925-config\") on node \"crc\" DevicePath \"\"" Dec 06 05:50:39 crc kubenswrapper[4958]: I1206 05:50:39.725498 4958 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c31aa749-ff33-4a9f-a9d6-de9575956925-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 06 05:50:39 crc kubenswrapper[4958]: I1206 05:50:39.730586 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c31aa749-ff33-4a9f-a9d6-de9575956925-kube-api-access-fwrzx" (OuterVolumeSpecName: "kube-api-access-fwrzx") pod "c31aa749-ff33-4a9f-a9d6-de9575956925" (UID: "c31aa749-ff33-4a9f-a9d6-de9575956925"). InnerVolumeSpecName "kube-api-access-fwrzx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:50:39 crc kubenswrapper[4958]: I1206 05:50:39.827519 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fwrzx\" (UniqueName: \"kubernetes.io/projected/c31aa749-ff33-4a9f-a9d6-de9575956925-kube-api-access-fwrzx\") on node \"crc\" DevicePath \"\"" Dec 06 05:50:40 crc kubenswrapper[4958]: I1206 05:50:40.037435 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bcf47f659-fcmcv" Dec 06 05:50:40 crc kubenswrapper[4958]: I1206 05:50:40.037625 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bcf47f659-fcmcv" event={"ID":"c31aa749-ff33-4a9f-a9d6-de9575956925","Type":"ContainerDied","Data":"d3f934eb075a4f9df71272c7cfe03993836c38ce4da9333e7c1ea78ab4724d1a"} Dec 06 05:50:40 crc kubenswrapper[4958]: E1206 05:50:40.041846 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-nb\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ovn-nb-db-server:current\\\"\"" pod="openstack/ovsdbserver-nb-0" podUID="7ff9bb68-a6f7-4ed7-b27a-da119f13b8a7" Dec 06 05:50:40 crc kubenswrapper[4958]: E1206 05:50:40.041847 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovsdbserver-sb\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ovn-sb-db-server:current\\\"\"" pod="openstack/ovsdbserver-sb-0" podUID="7719819d-5798-4ab7-bee0-cd8b736f92a2" Dec 06 05:50:40 crc kubenswrapper[4958]: I1206 05:50:40.109516 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bcf47f659-fcmcv"] Dec 06 05:50:40 crc kubenswrapper[4958]: I1206 05:50:40.142463 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bcf47f659-fcmcv"] Dec 06 05:50:40 crc kubenswrapper[4958]: I1206 05:50:40.368336 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6dbf544cc9-q6cs8"] Dec 06 05:50:40 crc kubenswrapper[4958]: W1206 05:50:40.376294 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod50541cab_c7c9_4995_af23_be9f98383190.slice/crio-96cf3b9c9edfa9682c265d79c169772561fe997419cca3e5e44098ce134756c3 WatchSource:0}: Error finding container 96cf3b9c9edfa9682c265d79c169772561fe997419cca3e5e44098ce134756c3: Status 404 returned error can't find the container with id 96cf3b9c9edfa9682c265d79c169772561fe997419cca3e5e44098ce134756c3 Dec 06 05:50:41 crc kubenswrapper[4958]: I1206 05:50:41.050712 4958 generic.go:334] "Generic (PLEG): container finished" podID="50541cab-c7c9-4995-af23-be9f98383190" containerID="b6d8411f568e9eb9be7d38836a6e46a8c3da66c43bab607ecc91490a2fdd5730" exitCode=0 Dec 06 05:50:41 crc kubenswrapper[4958]: I1206 05:50:41.050832 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6dbf544cc9-q6cs8" event={"ID":"50541cab-c7c9-4995-af23-be9f98383190","Type":"ContainerDied","Data":"b6d8411f568e9eb9be7d38836a6e46a8c3da66c43bab607ecc91490a2fdd5730"} Dec 06 05:50:41 crc kubenswrapper[4958]: I1206 05:50:41.051299 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6dbf544cc9-q6cs8" event={"ID":"50541cab-c7c9-4995-af23-be9f98383190","Type":"ContainerStarted","Data":"96cf3b9c9edfa9682c265d79c169772561fe997419cca3e5e44098ce134756c3"} Dec 06 05:50:41 crc kubenswrapper[4958]: I1206 05:50:41.773815 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c31aa749-ff33-4a9f-a9d6-de9575956925" path="/var/lib/kubelet/pods/c31aa749-ff33-4a9f-a9d6-de9575956925/volumes" Dec 06 05:50:42 crc kubenswrapper[4958]: I1206 05:50:42.061670 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" 
event={"ID":"e2701d0b-9691-44fb-a540-796260e0f2c1","Type":"ContainerStarted","Data":"aa507fd7f627316e73cc29156fe26bf86cf50a7b0c334c2185009a532be722a5"} Dec 06 05:50:42 crc kubenswrapper[4958]: I1206 05:50:42.065565 4958 generic.go:334] "Generic (PLEG): container finished" podID="ad5ddd17-e280-4547-8d9a-afd3764a5f76" containerID="832ff712b845874f2bed9b2ff7303c150376aa3a63326774954f757f59a0df8f" exitCode=0 Dec 06 05:50:42 crc kubenswrapper[4958]: I1206 05:50:42.065611 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-glklh" event={"ID":"ad5ddd17-e280-4547-8d9a-afd3764a5f76","Type":"ContainerDied","Data":"832ff712b845874f2bed9b2ff7303c150376aa3a63326774954f757f59a0df8f"} Dec 06 05:50:43 crc kubenswrapper[4958]: I1206 05:50:43.074830 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6dbf544cc9-q6cs8" event={"ID":"50541cab-c7c9-4995-af23-be9f98383190","Type":"ContainerStarted","Data":"8953855222bc97c678673cf2311b4fe6056f31482a095b9c45773cf3d5019b74"} Dec 06 05:50:43 crc kubenswrapper[4958]: I1206 05:50:43.075698 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6dbf544cc9-q6cs8" Dec 06 05:50:43 crc kubenswrapper[4958]: I1206 05:50:43.077732 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"0523eb0f-9fe1-49d4-a3b4-6a872317c136","Type":"ContainerStarted","Data":"2ae3ef81eb9be8865f9e11a68e63a2f3f524b5e2ac821ba338d946e78d16e47d"} Dec 06 05:50:43 crc kubenswrapper[4958]: I1206 05:50:43.080886 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-glklh" event={"ID":"ad5ddd17-e280-4547-8d9a-afd3764a5f76","Type":"ContainerStarted","Data":"979f85aee474d361f65e5afad9b1bb8c2e2f982601ca1e9453a55a66c4155759"} Dec 06 05:50:43 crc kubenswrapper[4958]: I1206 05:50:43.080910 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-glklh" event={"ID":"ad5ddd17-e280-4547-8d9a-afd3764a5f76","Type":"ContainerStarted","Data":"b89b91b38e06e8664c9db0f6b7d1242bd9a80e2c1f345aa7bffef85140779249"} Dec 06 05:50:43 crc kubenswrapper[4958]: I1206 05:50:43.081547 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-glklh" Dec 06 05:50:43 crc kubenswrapper[4958]: I1206 05:50:43.081581 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-glklh" Dec 06 05:50:43 crc kubenswrapper[4958]: I1206 05:50:43.083900 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"cb13a191-c616-4f16-82bd-138a1cd46032","Type":"ContainerStarted","Data":"39b81d558e96b6f8c8157e7141bfc4ad6b6aa1b87734554923bc5ffa5bfc3841"} Dec 06 05:50:43 crc kubenswrapper[4958]: I1206 05:50:43.095968 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6dbf544cc9-q6cs8" podStartSLOduration=3.8486750130000003 podStartE2EDuration="4.09595126s" podCreationTimestamp="2025-12-06 05:50:39 +0000 UTC" firstStartedPulling="2025-12-06 05:50:40.378385497 +0000 UTC m=+1350.912156260" lastFinishedPulling="2025-12-06 05:50:40.625661744 +0000 UTC m=+1351.159432507" observedRunningTime="2025-12-06 05:50:43.091079179 +0000 UTC m=+1353.624849942" watchObservedRunningTime="2025-12-06 05:50:43.09595126 +0000 UTC m=+1353.629722023" Dec 06 05:50:43 crc kubenswrapper[4958]: I1206 05:50:43.148598 4958 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/ovn-controller-ovs-glklh" podStartSLOduration=4.250668839 podStartE2EDuration="1m7.1485789s" podCreationTimestamp="2025-12-06 05:49:36 +0000 UTC" firstStartedPulling="2025-12-06 05:49:38.09619515 +0000 UTC m=+1288.629965913" lastFinishedPulling="2025-12-06 05:50:40.994105201 +0000 UTC m=+1351.527875974" observedRunningTime="2025-12-06 05:50:43.127560817 +0000 UTC m=+1353.661331580" watchObservedRunningTime="2025-12-06 05:50:43.1485789 +0000 UTC m=+1353.682349663" Dec 06 05:50:43 crc kubenswrapper[4958]: I1206 05:50:43.154159 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=3.811008386 podStartE2EDuration="1m12.154144959s" podCreationTimestamp="2025-12-06 05:49:31 +0000 UTC" firstStartedPulling="2025-12-06 05:49:33.56692733 +0000 UTC m=+1284.100698093" lastFinishedPulling="2025-12-06 05:50:41.910063903 +0000 UTC m=+1352.443834666" observedRunningTime="2025-12-06 05:50:43.147993995 +0000 UTC m=+1353.681764758" watchObservedRunningTime="2025-12-06 05:50:43.154144959 +0000 UTC m=+1353.687915722" Dec 06 05:50:45 crc kubenswrapper[4958]: I1206 05:50:45.114363 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"4bc06a17-7bdb-4ee9-bad3-7996be041e54","Type":"ContainerStarted","Data":"76adae9e6bc33af24601db302a5fbb54e7eca5cc92840c07b0d0761eed07ff77"} Dec 06 05:50:46 crc kubenswrapper[4958]: I1206 05:50:46.129184 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"d1076143-5994-4717-9d22-a56c404bc73b","Type":"ContainerStarted","Data":"d313acd039c7dd58755e6b4869db1a049ad1d3487cca37c4fb99db94b98cf2d9"} Dec 06 05:50:46 crc kubenswrapper[4958]: I1206 05:50:46.130123 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Dec 06 05:50:46 crc kubenswrapper[4958]: I1206 05:50:46.131502 4958 generic.go:334] "Generic (PLEG): container finished" podID="c5f5f30f-ce7d-4002-b649-96fad9a67c3a" containerID="70bae99faa78f4415b2236696c5b7ba8fdb933eea5397b34e01609a00f77679a" exitCode=0 Dec 06 05:50:46 crc kubenswrapper[4958]: I1206 05:50:46.131558 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6fb75c485f-bln6l" event={"ID":"c5f5f30f-ce7d-4002-b649-96fad9a67c3a","Type":"ContainerDied","Data":"70bae99faa78f4415b2236696c5b7ba8fdb933eea5397b34e01609a00f77679a"} Dec 06 05:50:46 crc kubenswrapper[4958]: I1206 05:50:46.161040 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=2.680176402 podStartE2EDuration="1m17.161020158s" podCreationTimestamp="2025-12-06 05:49:29 +0000 UTC" firstStartedPulling="2025-12-06 05:49:30.506597965 +0000 UTC m=+1281.040368728" lastFinishedPulling="2025-12-06 05:50:44.987441721 +0000 UTC m=+1355.521212484" observedRunningTime="2025-12-06 05:50:46.153413694 +0000 UTC m=+1356.687184457" watchObservedRunningTime="2025-12-06 05:50:46.161020158 +0000 UTC m=+1356.694790921" Dec 06 05:50:47 crc kubenswrapper[4958]: I1206 05:50:47.143054 4958 generic.go:334] "Generic (PLEG): container finished" podID="e2701d0b-9691-44fb-a540-796260e0f2c1" containerID="aa507fd7f627316e73cc29156fe26bf86cf50a7b0c334c2185009a532be722a5" exitCode=0 Dec 06 05:50:47 crc kubenswrapper[4958]: I1206 05:50:47.143123 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" 
event={"ID":"e2701d0b-9691-44fb-a540-796260e0f2c1","Type":"ContainerDied","Data":"aa507fd7f627316e73cc29156fe26bf86cf50a7b0c334c2185009a532be722a5"} Dec 06 05:50:47 crc kubenswrapper[4958]: I1206 05:50:47.148316 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6fb75c485f-bln6l" event={"ID":"c5f5f30f-ce7d-4002-b649-96fad9a67c3a","Type":"ContainerStarted","Data":"811a7f2f94f5edfdaeb8d18a3d6cdd1603a51f51e69268fcfacb218b453dd73a"} Dec 06 05:50:47 crc kubenswrapper[4958]: I1206 05:50:47.148656 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6fb75c485f-bln6l" Dec 06 05:50:47 crc kubenswrapper[4958]: I1206 05:50:47.199857 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6fb75c485f-bln6l" podStartSLOduration=-9223371965.654942 podStartE2EDuration="1m11.199834162s" podCreationTimestamp="2025-12-06 05:49:36 +0000 UTC" firstStartedPulling="2025-12-06 05:49:37.39881097 +0000 UTC m=+1287.932581723" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:50:47.19230035 +0000 UTC m=+1357.726071123" watchObservedRunningTime="2025-12-06 05:50:47.199834162 +0000 UTC m=+1357.733604935" Dec 06 05:50:48 crc kubenswrapper[4958]: I1206 05:50:48.063681 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Dec 06 05:50:48 crc kubenswrapper[4958]: I1206 05:50:48.064123 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Dec 06 05:50:48 crc kubenswrapper[4958]: I1206 05:50:48.067848 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Dec 06 05:50:48 crc kubenswrapper[4958]: I1206 05:50:48.173408 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-rsngm" event={"ID":"e72d0843-3802-4dbf-b292-8f37386cdeb5","Type":"ContainerStarted","Data":"eb8b1303e4871a919c69e35f7b31ee78bc8126644dcb00570a8d2613df601702"} Dec 06 05:50:48 crc kubenswrapper[4958]: I1206 05:50:48.173789 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-rsngm" Dec 06 05:50:48 crc kubenswrapper[4958]: I1206 05:50:48.178290 4958 generic.go:334] "Generic (PLEG): container finished" podID="0523eb0f-9fe1-49d4-a3b4-6a872317c136" containerID="2ae3ef81eb9be8865f9e11a68e63a2f3f524b5e2ac821ba338d946e78d16e47d" exitCode=0 Dec 06 05:50:48 crc kubenswrapper[4958]: I1206 05:50:48.178440 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"0523eb0f-9fe1-49d4-a3b4-6a872317c136","Type":"ContainerDied","Data":"2ae3ef81eb9be8865f9e11a68e63a2f3f524b5e2ac821ba338d946e78d16e47d"} Dec 06 05:50:48 crc kubenswrapper[4958]: I1206 05:50:48.181339 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"e2701d0b-9691-44fb-a540-796260e0f2c1","Type":"ContainerStarted","Data":"e8d59a8a10e188aace7efbda9bc14a2927aef5929a08e103d83cb56f73afe186"} Dec 06 05:50:48 crc kubenswrapper[4958]: I1206 05:50:48.184016 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Dec 06 05:50:48 crc kubenswrapper[4958]: I1206 05:50:48.206002 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-rsngm" podStartSLOduration=2.687751576 
podStartE2EDuration="1m12.205973711s" podCreationTimestamp="2025-12-06 05:49:36 +0000 UTC" firstStartedPulling="2025-12-06 05:49:37.460233224 +0000 UTC m=+1287.994003987" lastFinishedPulling="2025-12-06 05:50:46.978455369 +0000 UTC m=+1357.512226122" observedRunningTime="2025-12-06 05:50:48.201782329 +0000 UTC m=+1358.735553092" watchObservedRunningTime="2025-12-06 05:50:48.205973711 +0000 UTC m=+1358.739744494" Dec 06 05:50:48 crc kubenswrapper[4958]: I1206 05:50:48.225013 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=9.465683713 podStartE2EDuration="1m20.224988661s" podCreationTimestamp="2025-12-06 05:49:28 +0000 UTC" firstStartedPulling="2025-12-06 05:49:30.236864868 +0000 UTC m=+1280.770635631" lastFinishedPulling="2025-12-06 05:50:40.996169806 +0000 UTC m=+1351.529940579" observedRunningTime="2025-12-06 05:50:48.223896622 +0000 UTC m=+1358.757667395" watchObservedRunningTime="2025-12-06 05:50:48.224988661 +0000 UTC m=+1358.758759424" Dec 06 05:50:49 crc kubenswrapper[4958]: I1206 05:50:49.205588 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"0523eb0f-9fe1-49d4-a3b4-6a872317c136","Type":"ContainerStarted","Data":"6c6666557daa9adf731d0be88b070e2ff8dc57ec8ccb453b3ce00dd609ccd3a1"} Dec 06 05:50:49 crc kubenswrapper[4958]: I1206 05:50:49.438746 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Dec 06 05:50:49 crc kubenswrapper[4958]: I1206 05:50:49.438798 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Dec 06 05:50:49 crc kubenswrapper[4958]: I1206 05:50:49.703752 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6dbf544cc9-q6cs8" Dec 06 05:50:49 crc kubenswrapper[4958]: I1206 05:50:49.733114 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=-9223371953.121687 podStartE2EDuration="1m23.733088595s" podCreationTimestamp="2025-12-06 05:49:26 +0000 UTC" firstStartedPulling="2025-12-06 05:49:28.610017048 +0000 UTC m=+1279.143787811" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:50:49.244736885 +0000 UTC m=+1359.778507658" watchObservedRunningTime="2025-12-06 05:50:49.733088595 +0000 UTC m=+1360.266859368" Dec 06 05:50:49 crc kubenswrapper[4958]: I1206 05:50:49.790267 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6fb75c485f-bln6l"] Dec 06 05:50:49 crc kubenswrapper[4958]: I1206 05:50:49.790541 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6fb75c485f-bln6l" podUID="c5f5f30f-ce7d-4002-b649-96fad9a67c3a" containerName="dnsmasq-dns" containerID="cri-o://811a7f2f94f5edfdaeb8d18a3d6cdd1603a51f51e69268fcfacb218b453dd73a" gracePeriod=10 Dec 06 05:50:50 crc kubenswrapper[4958]: I1206 05:50:50.215195 4958 generic.go:334] "Generic (PLEG): container finished" podID="c5f5f30f-ce7d-4002-b649-96fad9a67c3a" containerID="811a7f2f94f5edfdaeb8d18a3d6cdd1603a51f51e69268fcfacb218b453dd73a" exitCode=0 Dec 06 05:50:50 crc kubenswrapper[4958]: I1206 05:50:50.215461 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6fb75c485f-bln6l" 
event={"ID":"c5f5f30f-ce7d-4002-b649-96fad9a67c3a","Type":"ContainerDied","Data":"811a7f2f94f5edfdaeb8d18a3d6cdd1603a51f51e69268fcfacb218b453dd73a"} Dec 06 05:50:50 crc kubenswrapper[4958]: I1206 05:50:50.216690 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"636ba8d3-ffe7-42ba-85eb-0cd2da08036d","Type":"ContainerStarted","Data":"910ac913a946f9fd571fc99ac615a3001fd13d7cfec7fbc2803e8d7e4fbf42d2"} Dec 06 05:50:50 crc kubenswrapper[4958]: I1206 05:50:50.217627 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Dec 06 05:50:50 crc kubenswrapper[4958]: I1206 05:50:50.235300 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=3.080521154 podStartE2EDuration="1m19.235285557s" podCreationTimestamp="2025-12-06 05:49:31 +0000 UTC" firstStartedPulling="2025-12-06 05:49:33.001224713 +0000 UTC m=+1283.534995486" lastFinishedPulling="2025-12-06 05:50:49.155989106 +0000 UTC m=+1359.689759889" observedRunningTime="2025-12-06 05:50:50.230826177 +0000 UTC m=+1360.764596940" watchObservedRunningTime="2025-12-06 05:50:50.235285557 +0000 UTC m=+1360.769056320" Dec 06 05:50:50 crc kubenswrapper[4958]: I1206 05:50:50.321879 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6fb75c485f-bln6l" Dec 06 05:50:50 crc kubenswrapper[4958]: I1206 05:50:50.391557 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6g69\" (UniqueName: \"kubernetes.io/projected/c5f5f30f-ce7d-4002-b649-96fad9a67c3a-kube-api-access-r6g69\") pod \"c5f5f30f-ce7d-4002-b649-96fad9a67c3a\" (UID: \"c5f5f30f-ce7d-4002-b649-96fad9a67c3a\") " Dec 06 05:50:50 crc kubenswrapper[4958]: I1206 05:50:50.391609 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c5f5f30f-ce7d-4002-b649-96fad9a67c3a-ovsdbserver-nb\") pod \"c5f5f30f-ce7d-4002-b649-96fad9a67c3a\" (UID: \"c5f5f30f-ce7d-4002-b649-96fad9a67c3a\") " Dec 06 05:50:50 crc kubenswrapper[4958]: I1206 05:50:50.391645 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c5f5f30f-ce7d-4002-b649-96fad9a67c3a-dns-svc\") pod \"c5f5f30f-ce7d-4002-b649-96fad9a67c3a\" (UID: \"c5f5f30f-ce7d-4002-b649-96fad9a67c3a\") " Dec 06 05:50:50 crc kubenswrapper[4958]: I1206 05:50:50.391729 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f5f30f-ce7d-4002-b649-96fad9a67c3a-config\") pod \"c5f5f30f-ce7d-4002-b649-96fad9a67c3a\" (UID: \"c5f5f30f-ce7d-4002-b649-96fad9a67c3a\") " Dec 06 05:50:50 crc kubenswrapper[4958]: I1206 05:50:50.397289 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5f5f30f-ce7d-4002-b649-96fad9a67c3a-kube-api-access-r6g69" (OuterVolumeSpecName: "kube-api-access-r6g69") pod "c5f5f30f-ce7d-4002-b649-96fad9a67c3a" (UID: "c5f5f30f-ce7d-4002-b649-96fad9a67c3a"). InnerVolumeSpecName "kube-api-access-r6g69". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:50:50 crc kubenswrapper[4958]: I1206 05:50:50.432305 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5f5f30f-ce7d-4002-b649-96fad9a67c3a-config" (OuterVolumeSpecName: "config") pod "c5f5f30f-ce7d-4002-b649-96fad9a67c3a" (UID: "c5f5f30f-ce7d-4002-b649-96fad9a67c3a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:50:50 crc kubenswrapper[4958]: I1206 05:50:50.433362 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5f5f30f-ce7d-4002-b649-96fad9a67c3a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c5f5f30f-ce7d-4002-b649-96fad9a67c3a" (UID: "c5f5f30f-ce7d-4002-b649-96fad9a67c3a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:50:50 crc kubenswrapper[4958]: I1206 05:50:50.435402 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5f5f30f-ce7d-4002-b649-96fad9a67c3a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c5f5f30f-ce7d-4002-b649-96fad9a67c3a" (UID: "c5f5f30f-ce7d-4002-b649-96fad9a67c3a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:50:50 crc kubenswrapper[4958]: I1206 05:50:50.466710 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 06 05:50:50 crc kubenswrapper[4958]: I1206 05:50:50.466960 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="cb13a191-c616-4f16-82bd-138a1cd46032" containerName="prometheus" containerID="cri-o://205186b1485d10489f76be2c99797b21418d2198b72a3a63173ff7871aa7c422" gracePeriod=600 Dec 06 05:50:50 crc kubenswrapper[4958]: I1206 05:50:50.467293 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="cb13a191-c616-4f16-82bd-138a1cd46032" containerName="thanos-sidecar" containerID="cri-o://39b81d558e96b6f8c8157e7141bfc4ad6b6aa1b87734554923bc5ffa5bfc3841" gracePeriod=600 Dec 06 05:50:50 crc kubenswrapper[4958]: I1206 05:50:50.467345 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="cb13a191-c616-4f16-82bd-138a1cd46032" containerName="config-reloader" containerID="cri-o://a2a4a0c97d15da14cfe482790a6839d80fd4cf54822cac49544b1a59f8f0f0ca" gracePeriod=600 Dec 06 05:50:50 crc kubenswrapper[4958]: I1206 05:50:50.493819 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r6g69\" (UniqueName: \"kubernetes.io/projected/c5f5f30f-ce7d-4002-b649-96fad9a67c3a-kube-api-access-r6g69\") on node \"crc\" DevicePath \"\"" Dec 06 05:50:50 crc kubenswrapper[4958]: I1206 05:50:50.493854 4958 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c5f5f30f-ce7d-4002-b649-96fad9a67c3a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 06 05:50:50 crc kubenswrapper[4958]: I1206 05:50:50.493864 4958 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c5f5f30f-ce7d-4002-b649-96fad9a67c3a-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 06 05:50:50 crc kubenswrapper[4958]: I1206 05:50:50.493899 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/c5f5f30f-ce7d-4002-b649-96fad9a67c3a-config\") on node \"crc\" DevicePath \"\"" Dec 06 05:50:51 crc kubenswrapper[4958]: I1206 05:50:51.243312 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6fb75c485f-bln6l" event={"ID":"c5f5f30f-ce7d-4002-b649-96fad9a67c3a","Type":"ContainerDied","Data":"cd97ef17f7a8cf58a3d2d0da7786b82e64ce17226871fa601353cfddbe0b7ea5"} Dec 06 05:50:51 crc kubenswrapper[4958]: I1206 05:50:51.243709 4958 scope.go:117] "RemoveContainer" containerID="811a7f2f94f5edfdaeb8d18a3d6cdd1603a51f51e69268fcfacb218b453dd73a" Dec 06 05:50:51 crc kubenswrapper[4958]: I1206 05:50:51.243327 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6fb75c485f-bln6l" Dec 06 05:50:51 crc kubenswrapper[4958]: I1206 05:50:51.253934 4958 generic.go:334] "Generic (PLEG): container finished" podID="cb13a191-c616-4f16-82bd-138a1cd46032" containerID="39b81d558e96b6f8c8157e7141bfc4ad6b6aa1b87734554923bc5ffa5bfc3841" exitCode=0 Dec 06 05:50:51 crc kubenswrapper[4958]: I1206 05:50:51.253953 4958 generic.go:334] "Generic (PLEG): container finished" podID="cb13a191-c616-4f16-82bd-138a1cd46032" containerID="a2a4a0c97d15da14cfe482790a6839d80fd4cf54822cac49544b1a59f8f0f0ca" exitCode=0 Dec 06 05:50:51 crc kubenswrapper[4958]: I1206 05:50:51.253960 4958 generic.go:334] "Generic (PLEG): container finished" podID="cb13a191-c616-4f16-82bd-138a1cd46032" containerID="205186b1485d10489f76be2c99797b21418d2198b72a3a63173ff7871aa7c422" exitCode=0 Dec 06 05:50:51 crc kubenswrapper[4958]: I1206 05:50:51.254012 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"cb13a191-c616-4f16-82bd-138a1cd46032","Type":"ContainerDied","Data":"39b81d558e96b6f8c8157e7141bfc4ad6b6aa1b87734554923bc5ffa5bfc3841"} Dec 06 05:50:51 crc kubenswrapper[4958]: I1206 05:50:51.254064 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"cb13a191-c616-4f16-82bd-138a1cd46032","Type":"ContainerDied","Data":"a2a4a0c97d15da14cfe482790a6839d80fd4cf54822cac49544b1a59f8f0f0ca"} Dec 06 05:50:51 crc kubenswrapper[4958]: I1206 05:50:51.254078 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"cb13a191-c616-4f16-82bd-138a1cd46032","Type":"ContainerDied","Data":"205186b1485d10489f76be2c99797b21418d2198b72a3a63173ff7871aa7c422"} Dec 06 05:50:51 crc kubenswrapper[4958]: I1206 05:50:51.269968 4958 scope.go:117] "RemoveContainer" containerID="70bae99faa78f4415b2236696c5b7ba8fdb933eea5397b34e01609a00f77679a" Dec 06 05:50:51 crc kubenswrapper[4958]: I1206 05:50:51.313422 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6fb75c485f-bln6l"] Dec 06 05:50:51 crc kubenswrapper[4958]: I1206 05:50:51.332116 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6fb75c485f-bln6l"] Dec 06 05:50:51 crc kubenswrapper[4958]: I1206 05:50:51.479411 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Dec 06 05:50:51 crc kubenswrapper[4958]: I1206 05:50:51.510242 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/cb13a191-c616-4f16-82bd-138a1cd46032-config-out\") pod \"cb13a191-c616-4f16-82bd-138a1cd46032\" (UID: \"cb13a191-c616-4f16-82bd-138a1cd46032\") " Dec 06 05:50:51 crc kubenswrapper[4958]: I1206 05:50:51.510296 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/cb13a191-c616-4f16-82bd-138a1cd46032-thanos-prometheus-http-client-file\") pod \"cb13a191-c616-4f16-82bd-138a1cd46032\" (UID: \"cb13a191-c616-4f16-82bd-138a1cd46032\") " Dec 06 05:50:51 crc kubenswrapper[4958]: I1206 05:50:51.510327 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tqggf\" (UniqueName: \"kubernetes.io/projected/cb13a191-c616-4f16-82bd-138a1cd46032-kube-api-access-tqggf\") pod \"cb13a191-c616-4f16-82bd-138a1cd46032\" (UID: \"cb13a191-c616-4f16-82bd-138a1cd46032\") " Dec 06 05:50:51 crc kubenswrapper[4958]: I1206 05:50:51.510366 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/cb13a191-c616-4f16-82bd-138a1cd46032-tls-assets\") pod \"cb13a191-c616-4f16-82bd-138a1cd46032\" (UID: \"cb13a191-c616-4f16-82bd-138a1cd46032\") " Dec 06 05:50:51 crc kubenswrapper[4958]: I1206 05:50:51.510484 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/cb13a191-c616-4f16-82bd-138a1cd46032-web-config\") pod \"cb13a191-c616-4f16-82bd-138a1cd46032\" (UID: \"cb13a191-c616-4f16-82bd-138a1cd46032\") " Dec 06 05:50:51 crc kubenswrapper[4958]: I1206 05:50:51.510525 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/cb13a191-c616-4f16-82bd-138a1cd46032-prometheus-metric-storage-rulefiles-0\") pod \"cb13a191-c616-4f16-82bd-138a1cd46032\" (UID: \"cb13a191-c616-4f16-82bd-138a1cd46032\") " Dec 06 05:50:51 crc kubenswrapper[4958]: I1206 05:50:51.510960 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb13a191-c616-4f16-82bd-138a1cd46032-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "cb13a191-c616-4f16-82bd-138a1cd46032" (UID: "cb13a191-c616-4f16-82bd-138a1cd46032"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:50:51 crc kubenswrapper[4958]: I1206 05:50:51.511018 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/cb13a191-c616-4f16-82bd-138a1cd46032-config\") pod \"cb13a191-c616-4f16-82bd-138a1cd46032\" (UID: \"cb13a191-c616-4f16-82bd-138a1cd46032\") " Dec 06 05:50:51 crc kubenswrapper[4958]: I1206 05:50:51.511207 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-89ffbb74-1bfb-49be-9c5f-526e0d1d0f82\") pod \"cb13a191-c616-4f16-82bd-138a1cd46032\" (UID: \"cb13a191-c616-4f16-82bd-138a1cd46032\") " Dec 06 05:50:51 crc kubenswrapper[4958]: I1206 05:50:51.511796 4958 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/cb13a191-c616-4f16-82bd-138a1cd46032-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Dec 06 05:50:51 crc kubenswrapper[4958]: I1206 05:50:51.515211 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb13a191-c616-4f16-82bd-138a1cd46032-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "cb13a191-c616-4f16-82bd-138a1cd46032" (UID: "cb13a191-c616-4f16-82bd-138a1cd46032"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:50:51 crc kubenswrapper[4958]: I1206 05:50:51.515676 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb13a191-c616-4f16-82bd-138a1cd46032-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "cb13a191-c616-4f16-82bd-138a1cd46032" (UID: "cb13a191-c616-4f16-82bd-138a1cd46032"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:50:51 crc kubenswrapper[4958]: I1206 05:50:51.518173 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb13a191-c616-4f16-82bd-138a1cd46032-config" (OuterVolumeSpecName: "config") pod "cb13a191-c616-4f16-82bd-138a1cd46032" (UID: "cb13a191-c616-4f16-82bd-138a1cd46032"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:50:51 crc kubenswrapper[4958]: I1206 05:50:51.518356 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb13a191-c616-4f16-82bd-138a1cd46032-kube-api-access-tqggf" (OuterVolumeSpecName: "kube-api-access-tqggf") pod "cb13a191-c616-4f16-82bd-138a1cd46032" (UID: "cb13a191-c616-4f16-82bd-138a1cd46032"). InnerVolumeSpecName "kube-api-access-tqggf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:50:51 crc kubenswrapper[4958]: I1206 05:50:51.521828 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb13a191-c616-4f16-82bd-138a1cd46032-config-out" (OuterVolumeSpecName: "config-out") pod "cb13a191-c616-4f16-82bd-138a1cd46032" (UID: "cb13a191-c616-4f16-82bd-138a1cd46032"). InnerVolumeSpecName "config-out". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 05:50:51 crc kubenswrapper[4958]: I1206 05:50:51.554163 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-89ffbb74-1bfb-49be-9c5f-526e0d1d0f82" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "cb13a191-c616-4f16-82bd-138a1cd46032" (UID: "cb13a191-c616-4f16-82bd-138a1cd46032"). InnerVolumeSpecName "pvc-89ffbb74-1bfb-49be-9c5f-526e0d1d0f82". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 06 05:50:51 crc kubenswrapper[4958]: I1206 05:50:51.557698 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb13a191-c616-4f16-82bd-138a1cd46032-web-config" (OuterVolumeSpecName: "web-config") pod "cb13a191-c616-4f16-82bd-138a1cd46032" (UID: "cb13a191-c616-4f16-82bd-138a1cd46032"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:50:51 crc kubenswrapper[4958]: I1206 05:50:51.613187 4958 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/cb13a191-c616-4f16-82bd-138a1cd46032-web-config\") on node \"crc\" DevicePath \"\"" Dec 06 05:50:51 crc kubenswrapper[4958]: I1206 05:50:51.613212 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/cb13a191-c616-4f16-82bd-138a1cd46032-config\") on node \"crc\" DevicePath \"\"" Dec 06 05:50:51 crc kubenswrapper[4958]: I1206 05:50:51.613242 4958 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-89ffbb74-1bfb-49be-9c5f-526e0d1d0f82\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-89ffbb74-1bfb-49be-9c5f-526e0d1d0f82\") on node \"crc\" " Dec 06 05:50:51 crc kubenswrapper[4958]: I1206 05:50:51.613272 4958 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/cb13a191-c616-4f16-82bd-138a1cd46032-config-out\") on node \"crc\" DevicePath \"\"" Dec 06 05:50:51 crc kubenswrapper[4958]: I1206 05:50:51.613284 4958 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/cb13a191-c616-4f16-82bd-138a1cd46032-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Dec 06 05:50:51 crc kubenswrapper[4958]: I1206 05:50:51.613292 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tqggf\" (UniqueName: \"kubernetes.io/projected/cb13a191-c616-4f16-82bd-138a1cd46032-kube-api-access-tqggf\") on node \"crc\" DevicePath \"\"" Dec 06 05:50:51 crc kubenswrapper[4958]: I1206 05:50:51.613301 4958 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/cb13a191-c616-4f16-82bd-138a1cd46032-tls-assets\") on node \"crc\" DevicePath \"\"" Dec 06 05:50:51 crc kubenswrapper[4958]: I1206 05:50:51.631523 4958 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Dec 06 05:50:51 crc kubenswrapper[4958]: I1206 05:50:51.631943 4958 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-89ffbb74-1bfb-49be-9c5f-526e0d1d0f82" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-89ffbb74-1bfb-49be-9c5f-526e0d1d0f82") on node "crc" Dec 06 05:50:51 crc kubenswrapper[4958]: I1206 05:50:51.714355 4958 reconciler_common.go:293] "Volume detached for volume \"pvc-89ffbb74-1bfb-49be-9c5f-526e0d1d0f82\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-89ffbb74-1bfb-49be-9c5f-526e0d1d0f82\") on node \"crc\" DevicePath \"\"" Dec 06 05:50:51 crc kubenswrapper[4958]: I1206 05:50:51.773056 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5f5f30f-ce7d-4002-b649-96fad9a67c3a" path="/var/lib/kubelet/pods/c5f5f30f-ce7d-4002-b649-96fad9a67c3a/volumes" Dec 06 05:50:52 crc kubenswrapper[4958]: I1206 05:50:52.265617 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"cb13a191-c616-4f16-82bd-138a1cd46032","Type":"ContainerDied","Data":"5116d0f79179dcb89c87456d6085279504593b4364fdecd27adc5c860f6ff932"} Dec 06 05:50:52 crc kubenswrapper[4958]: I1206 05:50:52.265673 4958 scope.go:117] "RemoveContainer" containerID="39b81d558e96b6f8c8157e7141bfc4ad6b6aa1b87734554923bc5ffa5bfc3841" Dec 06 05:50:52 crc kubenswrapper[4958]: I1206 05:50:52.265711 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Dec 06 05:50:52 crc kubenswrapper[4958]: I1206 05:50:52.303989 4958 scope.go:117] "RemoveContainer" containerID="a2a4a0c97d15da14cfe482790a6839d80fd4cf54822cac49544b1a59f8f0f0ca" Dec 06 05:50:52 crc kubenswrapper[4958]: I1206 05:50:52.305120 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 06 05:50:52 crc kubenswrapper[4958]: I1206 05:50:52.310819 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 06 05:50:52 crc kubenswrapper[4958]: I1206 05:50:52.324597 4958 scope.go:117] "RemoveContainer" containerID="205186b1485d10489f76be2c99797b21418d2198b72a3a63173ff7871aa7c422" Dec 06 05:50:52 crc kubenswrapper[4958]: I1206 05:50:52.337658 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 06 05:50:52 crc kubenswrapper[4958]: E1206 05:50:52.338868 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb13a191-c616-4f16-82bd-138a1cd46032" containerName="init-config-reloader" Dec 06 05:50:52 crc kubenswrapper[4958]: I1206 05:50:52.338889 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb13a191-c616-4f16-82bd-138a1cd46032" containerName="init-config-reloader" Dec 06 05:50:52 crc kubenswrapper[4958]: E1206 05:50:52.338901 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb13a191-c616-4f16-82bd-138a1cd46032" containerName="thanos-sidecar" Dec 06 05:50:52 crc kubenswrapper[4958]: I1206 05:50:52.338994 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb13a191-c616-4f16-82bd-138a1cd46032" containerName="thanos-sidecar" Dec 06 05:50:52 crc kubenswrapper[4958]: E1206 05:50:52.339004 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5f5f30f-ce7d-4002-b649-96fad9a67c3a" containerName="dnsmasq-dns" Dec 06 05:50:52 crc kubenswrapper[4958]: I1206 05:50:52.339010 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5f5f30f-ce7d-4002-b649-96fad9a67c3a" 
containerName="dnsmasq-dns" Dec 06 05:50:52 crc kubenswrapper[4958]: E1206 05:50:52.339028 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb13a191-c616-4f16-82bd-138a1cd46032" containerName="prometheus" Dec 06 05:50:52 crc kubenswrapper[4958]: I1206 05:50:52.339034 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb13a191-c616-4f16-82bd-138a1cd46032" containerName="prometheus" Dec 06 05:50:52 crc kubenswrapper[4958]: E1206 05:50:52.339045 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb13a191-c616-4f16-82bd-138a1cd46032" containerName="config-reloader" Dec 06 05:50:52 crc kubenswrapper[4958]: I1206 05:50:52.339051 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb13a191-c616-4f16-82bd-138a1cd46032" containerName="config-reloader" Dec 06 05:50:52 crc kubenswrapper[4958]: E1206 05:50:52.339058 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5f5f30f-ce7d-4002-b649-96fad9a67c3a" containerName="init" Dec 06 05:50:52 crc kubenswrapper[4958]: I1206 05:50:52.339064 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5f5f30f-ce7d-4002-b649-96fad9a67c3a" containerName="init" Dec 06 05:50:52 crc kubenswrapper[4958]: I1206 05:50:52.339296 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5f5f30f-ce7d-4002-b649-96fad9a67c3a" containerName="dnsmasq-dns" Dec 06 05:50:52 crc kubenswrapper[4958]: I1206 05:50:52.339312 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb13a191-c616-4f16-82bd-138a1cd46032" containerName="prometheus" Dec 06 05:50:52 crc kubenswrapper[4958]: I1206 05:50:52.339333 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb13a191-c616-4f16-82bd-138a1cd46032" containerName="config-reloader" Dec 06 05:50:52 crc kubenswrapper[4958]: I1206 05:50:52.339344 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb13a191-c616-4f16-82bd-138a1cd46032" containerName="thanos-sidecar" Dec 06 05:50:52 crc kubenswrapper[4958]: I1206 05:50:52.343283 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Dec 06 05:50:52 crc kubenswrapper[4958]: I1206 05:50:52.346309 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Dec 06 05:50:52 crc kubenswrapper[4958]: I1206 05:50:52.346347 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Dec 06 05:50:52 crc kubenswrapper[4958]: I1206 05:50:52.346309 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-bvqpz" Dec 06 05:50:52 crc kubenswrapper[4958]: I1206 05:50:52.346506 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Dec 06 05:50:52 crc kubenswrapper[4958]: I1206 05:50:52.346665 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Dec 06 05:50:52 crc kubenswrapper[4958]: I1206 05:50:52.350545 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Dec 06 05:50:52 crc kubenswrapper[4958]: I1206 05:50:52.355815 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Dec 06 05:50:52 crc kubenswrapper[4958]: I1206 05:50:52.358277 4958 scope.go:117] "RemoveContainer" containerID="f282fffb05612709f58ea71b01582777eac971c0ce664dcfef9c958a92498c70" Dec 06 05:50:52 crc kubenswrapper[4958]: I1206 05:50:52.360797 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 06 05:50:52 crc kubenswrapper[4958]: I1206 05:50:52.425248 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9711939-7159-4a5a-970f-426286de1f36-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"e9711939-7159-4a5a-970f-426286de1f36\") " pod="openstack/prometheus-metric-storage-0" Dec 06 05:50:52 crc kubenswrapper[4958]: I1206 05:50:52.425291 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-89ffbb74-1bfb-49be-9c5f-526e0d1d0f82\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-89ffbb74-1bfb-49be-9c5f-526e0d1d0f82\") pod \"prometheus-metric-storage-0\" (UID: \"e9711939-7159-4a5a-970f-426286de1f36\") " pod="openstack/prometheus-metric-storage-0" Dec 06 05:50:52 crc kubenswrapper[4958]: I1206 05:50:52.425317 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e9711939-7159-4a5a-970f-426286de1f36-config\") pod \"prometheus-metric-storage-0\" (UID: \"e9711939-7159-4a5a-970f-426286de1f36\") " pod="openstack/prometheus-metric-storage-0" Dec 06 05:50:52 crc kubenswrapper[4958]: I1206 05:50:52.425342 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/e9711939-7159-4a5a-970f-426286de1f36-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"e9711939-7159-4a5a-970f-426286de1f36\") " pod="openstack/prometheus-metric-storage-0" Dec 06 05:50:52 crc kubenswrapper[4958]: I1206 05:50:52.425363 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/e9711939-7159-4a5a-970f-426286de1f36-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"e9711939-7159-4a5a-970f-426286de1f36\") " pod="openstack/prometheus-metric-storage-0" Dec 06 05:50:52 crc kubenswrapper[4958]: I1206 05:50:52.425390 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/e9711939-7159-4a5a-970f-426286de1f36-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"e9711939-7159-4a5a-970f-426286de1f36\") " pod="openstack/prometheus-metric-storage-0" Dec 06 05:50:52 crc kubenswrapper[4958]: I1206 05:50:52.425625 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/e9711939-7159-4a5a-970f-426286de1f36-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"e9711939-7159-4a5a-970f-426286de1f36\") " pod="openstack/prometheus-metric-storage-0" Dec 06 05:50:52 crc kubenswrapper[4958]: I1206 05:50:52.425672 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/e9711939-7159-4a5a-970f-426286de1f36-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"e9711939-7159-4a5a-970f-426286de1f36\") " pod="openstack/prometheus-metric-storage-0" Dec 06 05:50:52 crc kubenswrapper[4958]: I1206 05:50:52.425752 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/e9711939-7159-4a5a-970f-426286de1f36-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"e9711939-7159-4a5a-970f-426286de1f36\") " pod="openstack/prometheus-metric-storage-0" Dec 06 05:50:52 crc kubenswrapper[4958]: I1206 05:50:52.425776 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvmqx\" (UniqueName: \"kubernetes.io/projected/e9711939-7159-4a5a-970f-426286de1f36-kube-api-access-cvmqx\") pod \"prometheus-metric-storage-0\" (UID: \"e9711939-7159-4a5a-970f-426286de1f36\") " pod="openstack/prometheus-metric-storage-0" Dec 06 05:50:52 crc kubenswrapper[4958]: I1206 05:50:52.425832 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/e9711939-7159-4a5a-970f-426286de1f36-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"e9711939-7159-4a5a-970f-426286de1f36\") " pod="openstack/prometheus-metric-storage-0" Dec 06 05:50:52 crc kubenswrapper[4958]: I1206 05:50:52.526911 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/e9711939-7159-4a5a-970f-426286de1f36-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"e9711939-7159-4a5a-970f-426286de1f36\") " pod="openstack/prometheus-metric-storage-0" Dec 06 05:50:52 crc kubenswrapper[4958]: I1206 05:50:52.526962 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/e9711939-7159-4a5a-970f-426286de1f36-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"e9711939-7159-4a5a-970f-426286de1f36\") " pod="openstack/prometheus-metric-storage-0" Dec 06 05:50:52 crc kubenswrapper[4958]: I1206 05:50:52.527018 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/e9711939-7159-4a5a-970f-426286de1f36-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"e9711939-7159-4a5a-970f-426286de1f36\") " pod="openstack/prometheus-metric-storage-0" Dec 06 05:50:52 crc kubenswrapper[4958]: I1206 05:50:52.527042 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/e9711939-7159-4a5a-970f-426286de1f36-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"e9711939-7159-4a5a-970f-426286de1f36\") " pod="openstack/prometheus-metric-storage-0" Dec 06 05:50:52 crc kubenswrapper[4958]: I1206 05:50:52.527075 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/e9711939-7159-4a5a-970f-426286de1f36-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"e9711939-7159-4a5a-970f-426286de1f36\") " pod="openstack/prometheus-metric-storage-0" Dec 06 05:50:52 crc kubenswrapper[4958]: I1206 05:50:52.527094 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvmqx\" (UniqueName: \"kubernetes.io/projected/e9711939-7159-4a5a-970f-426286de1f36-kube-api-access-cvmqx\") pod \"prometheus-metric-storage-0\" (UID: \"e9711939-7159-4a5a-970f-426286de1f36\") " pod="openstack/prometheus-metric-storage-0" Dec 06 05:50:52 crc kubenswrapper[4958]: I1206 05:50:52.527125 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/e9711939-7159-4a5a-970f-426286de1f36-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"e9711939-7159-4a5a-970f-426286de1f36\") " pod="openstack/prometheus-metric-storage-0" Dec 06 05:50:52 crc kubenswrapper[4958]: I1206 05:50:52.527195 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9711939-7159-4a5a-970f-426286de1f36-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"e9711939-7159-4a5a-970f-426286de1f36\") " pod="openstack/prometheus-metric-storage-0" Dec 06 05:50:52 crc kubenswrapper[4958]: I1206 05:50:52.527236 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-89ffbb74-1bfb-49be-9c5f-526e0d1d0f82\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-89ffbb74-1bfb-49be-9c5f-526e0d1d0f82\") pod \"prometheus-metric-storage-0\" (UID: \"e9711939-7159-4a5a-970f-426286de1f36\") " pod="openstack/prometheus-metric-storage-0" Dec 06 05:50:52 crc kubenswrapper[4958]: I1206 05:50:52.527262 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e9711939-7159-4a5a-970f-426286de1f36-config\") pod \"prometheus-metric-storage-0\" (UID: 
\"e9711939-7159-4a5a-970f-426286de1f36\") " pod="openstack/prometheus-metric-storage-0" Dec 06 05:50:52 crc kubenswrapper[4958]: I1206 05:50:52.527292 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/e9711939-7159-4a5a-970f-426286de1f36-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"e9711939-7159-4a5a-970f-426286de1f36\") " pod="openstack/prometheus-metric-storage-0" Dec 06 05:50:52 crc kubenswrapper[4958]: I1206 05:50:52.528456 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/e9711939-7159-4a5a-970f-426286de1f36-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"e9711939-7159-4a5a-970f-426286de1f36\") " pod="openstack/prometheus-metric-storage-0" Dec 06 05:50:52 crc kubenswrapper[4958]: I1206 05:50:52.532460 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/e9711939-7159-4a5a-970f-426286de1f36-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"e9711939-7159-4a5a-970f-426286de1f36\") " pod="openstack/prometheus-metric-storage-0" Dec 06 05:50:52 crc kubenswrapper[4958]: I1206 05:50:52.534349 4958 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Dec 06 05:50:52 crc kubenswrapper[4958]: I1206 05:50:52.534391 4958 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-89ffbb74-1bfb-49be-9c5f-526e0d1d0f82\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-89ffbb74-1bfb-49be-9c5f-526e0d1d0f82\") pod \"prometheus-metric-storage-0\" (UID: \"e9711939-7159-4a5a-970f-426286de1f36\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/7e3431a5196bc2909b8ac76f6bdd967b074e179639976d5fb690f175bc86873e/globalmount\"" pod="openstack/prometheus-metric-storage-0" Dec 06 05:50:52 crc kubenswrapper[4958]: I1206 05:50:52.535394 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/e9711939-7159-4a5a-970f-426286de1f36-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"e9711939-7159-4a5a-970f-426286de1f36\") " pod="openstack/prometheus-metric-storage-0" Dec 06 05:50:52 crc kubenswrapper[4958]: I1206 05:50:52.537608 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/e9711939-7159-4a5a-970f-426286de1f36-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"e9711939-7159-4a5a-970f-426286de1f36\") " pod="openstack/prometheus-metric-storage-0" Dec 06 05:50:52 crc kubenswrapper[4958]: I1206 05:50:52.538965 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/e9711939-7159-4a5a-970f-426286de1f36-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"e9711939-7159-4a5a-970f-426286de1f36\") " pod="openstack/prometheus-metric-storage-0" Dec 06 05:50:52 crc kubenswrapper[4958]: I1206 05:50:52.540396 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: 
\"kubernetes.io/secret/e9711939-7159-4a5a-970f-426286de1f36-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"e9711939-7159-4a5a-970f-426286de1f36\") " pod="openstack/prometheus-metric-storage-0" Dec 06 05:50:52 crc kubenswrapper[4958]: I1206 05:50:52.541368 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9711939-7159-4a5a-970f-426286de1f36-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"e9711939-7159-4a5a-970f-426286de1f36\") " pod="openstack/prometheus-metric-storage-0" Dec 06 05:50:52 crc kubenswrapper[4958]: I1206 05:50:52.541889 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/e9711939-7159-4a5a-970f-426286de1f36-config\") pod \"prometheus-metric-storage-0\" (UID: \"e9711939-7159-4a5a-970f-426286de1f36\") " pod="openstack/prometheus-metric-storage-0" Dec 06 05:50:52 crc kubenswrapper[4958]: I1206 05:50:52.547908 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/e9711939-7159-4a5a-970f-426286de1f36-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"e9711939-7159-4a5a-970f-426286de1f36\") " pod="openstack/prometheus-metric-storage-0" Dec 06 05:50:52 crc kubenswrapper[4958]: I1206 05:50:52.552338 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvmqx\" (UniqueName: \"kubernetes.io/projected/e9711939-7159-4a5a-970f-426286de1f36-kube-api-access-cvmqx\") pod \"prometheus-metric-storage-0\" (UID: \"e9711939-7159-4a5a-970f-426286de1f36\") " pod="openstack/prometheus-metric-storage-0" Dec 06 05:50:52 crc kubenswrapper[4958]: I1206 05:50:52.573315 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-89ffbb74-1bfb-49be-9c5f-526e0d1d0f82\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-89ffbb74-1bfb-49be-9c5f-526e0d1d0f82\") pod \"prometheus-metric-storage-0\" (UID: \"e9711939-7159-4a5a-970f-426286de1f36\") " pod="openstack/prometheus-metric-storage-0" Dec 06 05:50:52 crc kubenswrapper[4958]: I1206 05:50:52.749557 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Dec 06 05:50:53 crc kubenswrapper[4958]: I1206 05:50:53.213576 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 06 05:50:53 crc kubenswrapper[4958]: W1206 05:50:53.214854 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode9711939_7159_4a5a_970f_426286de1f36.slice/crio-7487cdadfce1e19f5b0795cb2ddb1764507bc6b05a71d5ffa674378a488f1aff WatchSource:0}: Error finding container 7487cdadfce1e19f5b0795cb2ddb1764507bc6b05a71d5ffa674378a488f1aff: Status 404 returned error can't find the container with id 7487cdadfce1e19f5b0795cb2ddb1764507bc6b05a71d5ffa674378a488f1aff Dec 06 05:50:53 crc kubenswrapper[4958]: I1206 05:50:53.283277 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"e9711939-7159-4a5a-970f-426286de1f36","Type":"ContainerStarted","Data":"7487cdadfce1e19f5b0795cb2ddb1764507bc6b05a71d5ffa674378a488f1aff"} Dec 06 05:50:53 crc kubenswrapper[4958]: I1206 05:50:53.772263 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb13a191-c616-4f16-82bd-138a1cd46032" path="/var/lib/kubelet/pods/cb13a191-c616-4f16-82bd-138a1cd46032/volumes" Dec 06 05:50:54 crc kubenswrapper[4958]: I1206 05:50:54.807373 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Dec 06 05:50:55 crc kubenswrapper[4958]: I1206 05:50:55.649807 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Dec 06 05:50:55 crc kubenswrapper[4958]: I1206 05:50:55.789264 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="e2701d0b-9691-44fb-a540-796260e0f2c1" containerName="galera" probeResult="failure" output=< Dec 06 05:50:55 crc kubenswrapper[4958]: wsrep_local_state_comment (Joined) differs from Synced Dec 06 05:50:55 crc kubenswrapper[4958]: > Dec 06 05:50:56 crc kubenswrapper[4958]: I1206 05:50:56.309148 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"7719819d-5798-4ab7-bee0-cd8b736f92a2","Type":"ContainerStarted","Data":"66ff0ce534bb0d035173bd83efa9bc9cd1c4dc6ee3bf869f2015b387d63c41e3"} Dec 06 05:50:56 crc kubenswrapper[4958]: I1206 05:50:56.311968 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"7ff9bb68-a6f7-4ed7-b27a-da119f13b8a7","Type":"ContainerStarted","Data":"87c20496ec2c2e08e4934b23ab2c9d5d73a863fe55d19e028b7633bab28b3938"} Dec 06 05:50:56 crc kubenswrapper[4958]: I1206 05:50:56.357439 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=3.176813433 podStartE2EDuration="1m18.357420067s" podCreationTimestamp="2025-12-06 05:49:38 +0000 UTC" firstStartedPulling="2025-12-06 05:49:40.42230094 +0000 UTC m=+1290.956071703" lastFinishedPulling="2025-12-06 05:50:55.602907574 +0000 UTC m=+1366.136678337" observedRunningTime="2025-12-06 05:50:56.336057384 +0000 UTC m=+1366.869828137" watchObservedRunningTime="2025-12-06 05:50:56.357420067 +0000 UTC m=+1366.891190830" Dec 06 05:50:56 crc kubenswrapper[4958]: I1206 05:50:56.359021 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=2.944864016 podStartE2EDuration="1m22.359015179s" 
podCreationTimestamp="2025-12-06 05:49:34 +0000 UTC" firstStartedPulling="2025-12-06 05:49:36.071817543 +0000 UTC m=+1286.605588306" lastFinishedPulling="2025-12-06 05:50:55.485968716 +0000 UTC m=+1366.019739469" observedRunningTime="2025-12-06 05:50:56.35529198 +0000 UTC m=+1366.889062743" watchObservedRunningTime="2025-12-06 05:50:56.359015179 +0000 UTC m=+1366.892785942" Dec 06 05:50:56 crc kubenswrapper[4958]: I1206 05:50:56.424350 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Dec 06 05:50:57 crc kubenswrapper[4958]: I1206 05:50:57.322259 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"e9711939-7159-4a5a-970f-426286de1f36","Type":"ContainerStarted","Data":"4de2f9851b0cd869a3b1e38b11656b9615a5705b96987fd08d1fb5de2e134abe"} Dec 06 05:50:57 crc kubenswrapper[4958]: I1206 05:50:57.787513 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Dec 06 05:50:58 crc kubenswrapper[4958]: I1206 05:50:58.012832 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Dec 06 05:50:58 crc kubenswrapper[4958]: I1206 05:50:58.012893 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Dec 06 05:50:58 crc kubenswrapper[4958]: I1206 05:50:58.143944 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Dec 06 05:50:58 crc kubenswrapper[4958]: I1206 05:50:58.435309 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Dec 06 05:50:59 crc kubenswrapper[4958]: I1206 05:50:59.471414 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-65dd-account-create-update-qtz4w"] Dec 06 05:50:59 crc kubenswrapper[4958]: I1206 05:50:59.472450 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-65dd-account-create-update-qtz4w" Dec 06 05:50:59 crc kubenswrapper[4958]: I1206 05:50:59.474299 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Dec 06 05:50:59 crc kubenswrapper[4958]: I1206 05:50:59.480527 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-65dd-account-create-update-qtz4w"] Dec 06 05:50:59 crc kubenswrapper[4958]: I1206 05:50:59.496594 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-mdznl"] Dec 06 05:50:59 crc kubenswrapper[4958]: I1206 05:50:59.498319 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-mdznl" Dec 06 05:50:59 crc kubenswrapper[4958]: I1206 05:50:59.509926 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Dec 06 05:50:59 crc kubenswrapper[4958]: I1206 05:50:59.510964 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Dec 06 05:50:59 crc kubenswrapper[4958]: I1206 05:50:59.521754 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-mdznl"] Dec 06 05:50:59 crc kubenswrapper[4958]: I1206 05:50:59.582602 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88549045-eefd-497a-b779-8689cec8daa9-operator-scripts\") pod \"keystone-db-create-mdznl\" (UID: \"88549045-eefd-497a-b779-8689cec8daa9\") " pod="openstack/keystone-db-create-mdznl" Dec 06 05:50:59 crc kubenswrapper[4958]: I1206 05:50:59.582690 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/674bc2ce-08e0-49b2-850c-2a00e8e38faa-operator-scripts\") pod \"keystone-65dd-account-create-update-qtz4w\" (UID: \"674bc2ce-08e0-49b2-850c-2a00e8e38faa\") " pod="openstack/keystone-65dd-account-create-update-qtz4w" Dec 06 05:50:59 crc kubenswrapper[4958]: I1206 05:50:59.582833 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnsvl\" (UniqueName: \"kubernetes.io/projected/88549045-eefd-497a-b779-8689cec8daa9-kube-api-access-tnsvl\") pod \"keystone-db-create-mdznl\" (UID: \"88549045-eefd-497a-b779-8689cec8daa9\") " pod="openstack/keystone-db-create-mdznl" Dec 06 05:50:59 crc kubenswrapper[4958]: I1206 05:50:59.583024 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nv6p2\" (UniqueName: \"kubernetes.io/projected/674bc2ce-08e0-49b2-850c-2a00e8e38faa-kube-api-access-nv6p2\") pod \"keystone-65dd-account-create-update-qtz4w\" (UID: \"674bc2ce-08e0-49b2-850c-2a00e8e38faa\") " pod="openstack/keystone-65dd-account-create-update-qtz4w" Dec 06 05:50:59 crc kubenswrapper[4958]: I1206 05:50:59.606854 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Dec 06 05:50:59 crc kubenswrapper[4958]: I1206 05:50:59.684219 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nv6p2\" (UniqueName: \"kubernetes.io/projected/674bc2ce-08e0-49b2-850c-2a00e8e38faa-kube-api-access-nv6p2\") pod \"keystone-65dd-account-create-update-qtz4w\" (UID: \"674bc2ce-08e0-49b2-850c-2a00e8e38faa\") " pod="openstack/keystone-65dd-account-create-update-qtz4w" Dec 06 05:50:59 crc kubenswrapper[4958]: I1206 05:50:59.684662 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88549045-eefd-497a-b779-8689cec8daa9-operator-scripts\") pod \"keystone-db-create-mdznl\" (UID: \"88549045-eefd-497a-b779-8689cec8daa9\") " pod="openstack/keystone-db-create-mdznl" Dec 06 05:50:59 crc kubenswrapper[4958]: I1206 05:50:59.684786 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/674bc2ce-08e0-49b2-850c-2a00e8e38faa-operator-scripts\") pod 
\"keystone-65dd-account-create-update-qtz4w\" (UID: \"674bc2ce-08e0-49b2-850c-2a00e8e38faa\") " pod="openstack/keystone-65dd-account-create-update-qtz4w" Dec 06 05:50:59 crc kubenswrapper[4958]: I1206 05:50:59.684934 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tnsvl\" (UniqueName: \"kubernetes.io/projected/88549045-eefd-497a-b779-8689cec8daa9-kube-api-access-tnsvl\") pod \"keystone-db-create-mdznl\" (UID: \"88549045-eefd-497a-b779-8689cec8daa9\") " pod="openstack/keystone-db-create-mdznl" Dec 06 05:50:59 crc kubenswrapper[4958]: I1206 05:50:59.685658 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88549045-eefd-497a-b779-8689cec8daa9-operator-scripts\") pod \"keystone-db-create-mdznl\" (UID: \"88549045-eefd-497a-b779-8689cec8daa9\") " pod="openstack/keystone-db-create-mdznl" Dec 06 05:50:59 crc kubenswrapper[4958]: I1206 05:50:59.709408 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tnsvl\" (UniqueName: \"kubernetes.io/projected/88549045-eefd-497a-b779-8689cec8daa9-kube-api-access-tnsvl\") pod \"keystone-db-create-mdznl\" (UID: \"88549045-eefd-497a-b779-8689cec8daa9\") " pod="openstack/keystone-db-create-mdznl" Dec 06 05:50:59 crc kubenswrapper[4958]: I1206 05:50:59.710195 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nv6p2\" (UniqueName: \"kubernetes.io/projected/674bc2ce-08e0-49b2-850c-2a00e8e38faa-kube-api-access-nv6p2\") pod \"keystone-65dd-account-create-update-qtz4w\" (UID: \"674bc2ce-08e0-49b2-850c-2a00e8e38faa\") " pod="openstack/keystone-65dd-account-create-update-qtz4w" Dec 06 05:50:59 crc kubenswrapper[4958]: I1206 05:50:59.779933 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-gl9kl"] Dec 06 05:50:59 crc kubenswrapper[4958]: I1206 05:50:59.781695 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-gl9kl" Dec 06 05:50:59 crc kubenswrapper[4958]: I1206 05:50:59.787096 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Dec 06 05:50:59 crc kubenswrapper[4958]: I1206 05:50:59.796704 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-4069-account-create-update-b92cg"] Dec 06 05:50:59 crc kubenswrapper[4958]: I1206 05:50:59.797839 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-4069-account-create-update-b92cg" Dec 06 05:50:59 crc kubenswrapper[4958]: I1206 05:50:59.800309 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Dec 06 05:50:59 crc kubenswrapper[4958]: I1206 05:50:59.811016 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-gl9kl"] Dec 06 05:50:59 crc kubenswrapper[4958]: I1206 05:50:59.816943 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-4069-account-create-update-b92cg"] Dec 06 05:50:59 crc kubenswrapper[4958]: I1206 05:50:59.845505 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-mdznl" Dec 06 05:50:59 crc kubenswrapper[4958]: I1206 05:50:59.888311 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5eddd3fa-04bf-46c5-984a-c5227d951195-operator-scripts\") pod \"placement-4069-account-create-update-b92cg\" (UID: \"5eddd3fa-04bf-46c5-984a-c5227d951195\") " pod="openstack/placement-4069-account-create-update-b92cg" Dec 06 05:50:59 crc kubenswrapper[4958]: I1206 05:50:59.888360 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptdj8\" (UniqueName: \"kubernetes.io/projected/a573deaf-5c61-4ca3-97a6-c29d9ea40c29-kube-api-access-ptdj8\") pod \"placement-db-create-gl9kl\" (UID: \"a573deaf-5c61-4ca3-97a6-c29d9ea40c29\") " pod="openstack/placement-db-create-gl9kl" Dec 06 05:50:59 crc kubenswrapper[4958]: I1206 05:50:59.888413 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a573deaf-5c61-4ca3-97a6-c29d9ea40c29-operator-scripts\") pod \"placement-db-create-gl9kl\" (UID: \"a573deaf-5c61-4ca3-97a6-c29d9ea40c29\") " pod="openstack/placement-db-create-gl9kl" Dec 06 05:50:59 crc kubenswrapper[4958]: I1206 05:50:59.888678 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plk5p\" (UniqueName: \"kubernetes.io/projected/5eddd3fa-04bf-46c5-984a-c5227d951195-kube-api-access-plk5p\") pod \"placement-4069-account-create-update-b92cg\" (UID: \"5eddd3fa-04bf-46c5-984a-c5227d951195\") " pod="openstack/placement-4069-account-create-update-b92cg" Dec 06 05:50:59 crc kubenswrapper[4958]: I1206 05:50:59.990448 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5eddd3fa-04bf-46c5-984a-c5227d951195-operator-scripts\") pod \"placement-4069-account-create-update-b92cg\" (UID: \"5eddd3fa-04bf-46c5-984a-c5227d951195\") " pod="openstack/placement-4069-account-create-update-b92cg" Dec 06 05:50:59 crc kubenswrapper[4958]: I1206 05:50:59.991634 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptdj8\" (UniqueName: \"kubernetes.io/projected/a573deaf-5c61-4ca3-97a6-c29d9ea40c29-kube-api-access-ptdj8\") pod \"placement-db-create-gl9kl\" (UID: \"a573deaf-5c61-4ca3-97a6-c29d9ea40c29\") " pod="openstack/placement-db-create-gl9kl" Dec 06 05:50:59 crc kubenswrapper[4958]: I1206 05:50:59.991688 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a573deaf-5c61-4ca3-97a6-c29d9ea40c29-operator-scripts\") pod \"placement-db-create-gl9kl\" (UID: \"a573deaf-5c61-4ca3-97a6-c29d9ea40c29\") " pod="openstack/placement-db-create-gl9kl" Dec 06 05:50:59 crc kubenswrapper[4958]: I1206 05:50:59.991859 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-plk5p\" (UniqueName: \"kubernetes.io/projected/5eddd3fa-04bf-46c5-984a-c5227d951195-kube-api-access-plk5p\") pod \"placement-4069-account-create-update-b92cg\" (UID: \"5eddd3fa-04bf-46c5-984a-c5227d951195\") " pod="openstack/placement-4069-account-create-update-b92cg" Dec 06 05:50:59 crc kubenswrapper[4958]: I1206 05:50:59.991569 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5eddd3fa-04bf-46c5-984a-c5227d951195-operator-scripts\") pod \"placement-4069-account-create-update-b92cg\" (UID: \"5eddd3fa-04bf-46c5-984a-c5227d951195\") " pod="openstack/placement-4069-account-create-update-b92cg" Dec 06 05:50:59 crc kubenswrapper[4958]: I1206 05:50:59.992896 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a573deaf-5c61-4ca3-97a6-c29d9ea40c29-operator-scripts\") pod \"placement-db-create-gl9kl\" (UID: \"a573deaf-5c61-4ca3-97a6-c29d9ea40c29\") " pod="openstack/placement-db-create-gl9kl" Dec 06 05:51:00 crc kubenswrapper[4958]: I1206 05:51:00.008029 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-plk5p\" (UniqueName: \"kubernetes.io/projected/5eddd3fa-04bf-46c5-984a-c5227d951195-kube-api-access-plk5p\") pod \"placement-4069-account-create-update-b92cg\" (UID: \"5eddd3fa-04bf-46c5-984a-c5227d951195\") " pod="openstack/placement-4069-account-create-update-b92cg" Dec 06 05:51:00 crc kubenswrapper[4958]: I1206 05:51:00.008229 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptdj8\" (UniqueName: \"kubernetes.io/projected/a573deaf-5c61-4ca3-97a6-c29d9ea40c29-kube-api-access-ptdj8\") pod \"placement-db-create-gl9kl\" (UID: \"a573deaf-5c61-4ca3-97a6-c29d9ea40c29\") " pod="openstack/placement-db-create-gl9kl" Dec 06 05:51:00 crc kubenswrapper[4958]: I1206 05:51:00.101520 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-gl9kl" Dec 06 05:51:00 crc kubenswrapper[4958]: I1206 05:51:00.114521 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-4069-account-create-update-b92cg" Dec 06 05:51:00 crc kubenswrapper[4958]: I1206 05:51:00.413103 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Dec 06 05:51:00 crc kubenswrapper[4958]: I1206 05:51:00.778019 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/674bc2ce-08e0-49b2-850c-2a00e8e38faa-operator-scripts\") pod \"keystone-65dd-account-create-update-qtz4w\" (UID: \"674bc2ce-08e0-49b2-850c-2a00e8e38faa\") " pod="openstack/keystone-65dd-account-create-update-qtz4w" Dec 06 05:51:00 crc kubenswrapper[4958]: I1206 05:51:00.836814 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Dec 06 05:51:00 crc kubenswrapper[4958]: I1206 05:51:00.904462 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Dec 06 05:51:01 crc kubenswrapper[4958]: I1206 05:51:01.033096 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-65dd-account-create-update-qtz4w" Dec 06 05:51:01 crc kubenswrapper[4958]: I1206 05:51:01.144786 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Dec 06 05:51:01 crc kubenswrapper[4958]: I1206 05:51:01.147645 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Dec 06 05:51:01 crc kubenswrapper[4958]: I1206 05:51:01.151294 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Dec 06 05:51:01 crc kubenswrapper[4958]: I1206 05:51:01.151559 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Dec 06 05:51:01 crc kubenswrapper[4958]: I1206 05:51:01.152457 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Dec 06 05:51:01 crc kubenswrapper[4958]: I1206 05:51:01.152612 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-x8dcp" Dec 06 05:51:01 crc kubenswrapper[4958]: I1206 05:51:01.159226 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Dec 06 05:51:01 crc kubenswrapper[4958]: I1206 05:51:01.214863 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/16a5cde7-0ad2-4f04-9643-d6ceca21fe3c-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"16a5cde7-0ad2-4f04-9643-d6ceca21fe3c\") " pod="openstack/ovn-northd-0" Dec 06 05:51:01 crc kubenswrapper[4958]: I1206 05:51:01.214940 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgwmf\" (UniqueName: \"kubernetes.io/projected/16a5cde7-0ad2-4f04-9643-d6ceca21fe3c-kube-api-access-hgwmf\") pod \"ovn-northd-0\" (UID: \"16a5cde7-0ad2-4f04-9643-d6ceca21fe3c\") " pod="openstack/ovn-northd-0" Dec 06 05:51:01 crc kubenswrapper[4958]: I1206 05:51:01.214969 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/16a5cde7-0ad2-4f04-9643-d6ceca21fe3c-scripts\") pod \"ovn-northd-0\" (UID: \"16a5cde7-0ad2-4f04-9643-d6ceca21fe3c\") " pod="openstack/ovn-northd-0" Dec 06 05:51:01 crc kubenswrapper[4958]: I1206 05:51:01.214988 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/16a5cde7-0ad2-4f04-9643-d6ceca21fe3c-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"16a5cde7-0ad2-4f04-9643-d6ceca21fe3c\") " pod="openstack/ovn-northd-0" Dec 06 05:51:01 crc kubenswrapper[4958]: I1206 05:51:01.215040 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16a5cde7-0ad2-4f04-9643-d6ceca21fe3c-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"16a5cde7-0ad2-4f04-9643-d6ceca21fe3c\") " pod="openstack/ovn-northd-0" Dec 06 05:51:01 crc kubenswrapper[4958]: I1206 05:51:01.215078 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/16a5cde7-0ad2-4f04-9643-d6ceca21fe3c-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"16a5cde7-0ad2-4f04-9643-d6ceca21fe3c\") " pod="openstack/ovn-northd-0" Dec 06 05:51:01 crc kubenswrapper[4958]: I1206 05:51:01.215123 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16a5cde7-0ad2-4f04-9643-d6ceca21fe3c-config\") pod \"ovn-northd-0\" (UID: \"16a5cde7-0ad2-4f04-9643-d6ceca21fe3c\") " pod="openstack/ovn-northd-0" Dec 06 05:51:01 crc kubenswrapper[4958]: 
I1206 05:51:01.316903 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/16a5cde7-0ad2-4f04-9643-d6ceca21fe3c-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"16a5cde7-0ad2-4f04-9643-d6ceca21fe3c\") " pod="openstack/ovn-northd-0" Dec 06 05:51:01 crc kubenswrapper[4958]: I1206 05:51:01.317191 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16a5cde7-0ad2-4f04-9643-d6ceca21fe3c-config\") pod \"ovn-northd-0\" (UID: \"16a5cde7-0ad2-4f04-9643-d6ceca21fe3c\") " pod="openstack/ovn-northd-0" Dec 06 05:51:01 crc kubenswrapper[4958]: I1206 05:51:01.317239 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/16a5cde7-0ad2-4f04-9643-d6ceca21fe3c-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"16a5cde7-0ad2-4f04-9643-d6ceca21fe3c\") " pod="openstack/ovn-northd-0" Dec 06 05:51:01 crc kubenswrapper[4958]: I1206 05:51:01.317268 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgwmf\" (UniqueName: \"kubernetes.io/projected/16a5cde7-0ad2-4f04-9643-d6ceca21fe3c-kube-api-access-hgwmf\") pod \"ovn-northd-0\" (UID: \"16a5cde7-0ad2-4f04-9643-d6ceca21fe3c\") " pod="openstack/ovn-northd-0" Dec 06 05:51:01 crc kubenswrapper[4958]: I1206 05:51:01.317293 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/16a5cde7-0ad2-4f04-9643-d6ceca21fe3c-scripts\") pod \"ovn-northd-0\" (UID: \"16a5cde7-0ad2-4f04-9643-d6ceca21fe3c\") " pod="openstack/ovn-northd-0" Dec 06 05:51:01 crc kubenswrapper[4958]: I1206 05:51:01.317311 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/16a5cde7-0ad2-4f04-9643-d6ceca21fe3c-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"16a5cde7-0ad2-4f04-9643-d6ceca21fe3c\") " pod="openstack/ovn-northd-0" Dec 06 05:51:01 crc kubenswrapper[4958]: I1206 05:51:01.317350 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16a5cde7-0ad2-4f04-9643-d6ceca21fe3c-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"16a5cde7-0ad2-4f04-9643-d6ceca21fe3c\") " pod="openstack/ovn-northd-0" Dec 06 05:51:01 crc kubenswrapper[4958]: I1206 05:51:01.317429 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/16a5cde7-0ad2-4f04-9643-d6ceca21fe3c-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"16a5cde7-0ad2-4f04-9643-d6ceca21fe3c\") " pod="openstack/ovn-northd-0" Dec 06 05:51:01 crc kubenswrapper[4958]: I1206 05:51:01.318255 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/16a5cde7-0ad2-4f04-9643-d6ceca21fe3c-scripts\") pod \"ovn-northd-0\" (UID: \"16a5cde7-0ad2-4f04-9643-d6ceca21fe3c\") " pod="openstack/ovn-northd-0" Dec 06 05:51:01 crc kubenswrapper[4958]: I1206 05:51:01.319159 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16a5cde7-0ad2-4f04-9643-d6ceca21fe3c-config\") pod \"ovn-northd-0\" (UID: \"16a5cde7-0ad2-4f04-9643-d6ceca21fe3c\") " pod="openstack/ovn-northd-0" Dec 06 05:51:01 crc kubenswrapper[4958]: I1206 05:51:01.325760 4958 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/16a5cde7-0ad2-4f04-9643-d6ceca21fe3c-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"16a5cde7-0ad2-4f04-9643-d6ceca21fe3c\") " pod="openstack/ovn-northd-0" Dec 06 05:51:01 crc kubenswrapper[4958]: I1206 05:51:01.326239 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/16a5cde7-0ad2-4f04-9643-d6ceca21fe3c-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"16a5cde7-0ad2-4f04-9643-d6ceca21fe3c\") " pod="openstack/ovn-northd-0" Dec 06 05:51:01 crc kubenswrapper[4958]: I1206 05:51:01.327116 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/16a5cde7-0ad2-4f04-9643-d6ceca21fe3c-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"16a5cde7-0ad2-4f04-9643-d6ceca21fe3c\") " pod="openstack/ovn-northd-0" Dec 06 05:51:01 crc kubenswrapper[4958]: I1206 05:51:01.335018 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-4069-account-create-update-b92cg"] Dec 06 05:51:01 crc kubenswrapper[4958]: I1206 05:51:01.344218 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgwmf\" (UniqueName: \"kubernetes.io/projected/16a5cde7-0ad2-4f04-9643-d6ceca21fe3c-kube-api-access-hgwmf\") pod \"ovn-northd-0\" (UID: \"16a5cde7-0ad2-4f04-9643-d6ceca21fe3c\") " pod="openstack/ovn-northd-0" Dec 06 05:51:01 crc kubenswrapper[4958]: W1206 05:51:01.359071 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda573deaf_5c61_4ca3_97a6_c29d9ea40c29.slice/crio-2aa8da0efd9878b360d7c27cb0672b97b5289dd2b067720887bc7520eb4ca589 WatchSource:0}: Error finding container 2aa8da0efd9878b360d7c27cb0672b97b5289dd2b067720887bc7520eb4ca589: Status 404 returned error can't find the container with id 2aa8da0efd9878b360d7c27cb0672b97b5289dd2b067720887bc7520eb4ca589 Dec 06 05:51:01 crc kubenswrapper[4958]: I1206 05:51:01.359626 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-gl9kl"] Dec 06 05:51:01 crc kubenswrapper[4958]: I1206 05:51:01.374925 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-4069-account-create-update-b92cg" event={"ID":"5eddd3fa-04bf-46c5-984a-c5227d951195","Type":"ContainerStarted","Data":"f4603bbde3431e59684854fc06ebc9268cf9367ab0213e2f595f1165aade452e"} Dec 06 05:51:01 crc kubenswrapper[4958]: I1206 05:51:01.392087 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-mdznl"] Dec 06 05:51:01 crc kubenswrapper[4958]: I1206 05:51:01.488293 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Dec 06 05:51:01 crc kubenswrapper[4958]: I1206 05:51:01.668922 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-65dd-account-create-update-qtz4w"] Dec 06 05:51:01 crc kubenswrapper[4958]: I1206 05:51:01.701426 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Dec 06 05:51:01 crc kubenswrapper[4958]: I1206 05:51:01.869086 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-76f9c4c8bc-96l4z"] Dec 06 05:51:01 crc kubenswrapper[4958]: I1206 05:51:01.870488 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-76f9c4c8bc-96l4z" Dec 06 05:51:01 crc kubenswrapper[4958]: I1206 05:51:01.888065 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76f9c4c8bc-96l4z"] Dec 06 05:51:01 crc kubenswrapper[4958]: I1206 05:51:01.958032 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd2c4c39-fb58-4561-9bc6-1a18dfb7af9c-config\") pod \"dnsmasq-dns-76f9c4c8bc-96l4z\" (UID: \"bd2c4c39-fb58-4561-9bc6-1a18dfb7af9c\") " pod="openstack/dnsmasq-dns-76f9c4c8bc-96l4z" Dec 06 05:51:01 crc kubenswrapper[4958]: I1206 05:51:01.958112 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bd2c4c39-fb58-4561-9bc6-1a18dfb7af9c-ovsdbserver-sb\") pod \"dnsmasq-dns-76f9c4c8bc-96l4z\" (UID: \"bd2c4c39-fb58-4561-9bc6-1a18dfb7af9c\") " pod="openstack/dnsmasq-dns-76f9c4c8bc-96l4z" Dec 06 05:51:01 crc kubenswrapper[4958]: I1206 05:51:01.958142 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bd2c4c39-fb58-4561-9bc6-1a18dfb7af9c-ovsdbserver-nb\") pod \"dnsmasq-dns-76f9c4c8bc-96l4z\" (UID: \"bd2c4c39-fb58-4561-9bc6-1a18dfb7af9c\") " pod="openstack/dnsmasq-dns-76f9c4c8bc-96l4z" Dec 06 05:51:01 crc kubenswrapper[4958]: I1206 05:51:01.958166 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dn5jr\" (UniqueName: \"kubernetes.io/projected/bd2c4c39-fb58-4561-9bc6-1a18dfb7af9c-kube-api-access-dn5jr\") pod \"dnsmasq-dns-76f9c4c8bc-96l4z\" (UID: \"bd2c4c39-fb58-4561-9bc6-1a18dfb7af9c\") " pod="openstack/dnsmasq-dns-76f9c4c8bc-96l4z" Dec 06 05:51:01 crc kubenswrapper[4958]: I1206 05:51:01.958329 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd2c4c39-fb58-4561-9bc6-1a18dfb7af9c-dns-svc\") pod \"dnsmasq-dns-76f9c4c8bc-96l4z\" (UID: \"bd2c4c39-fb58-4561-9bc6-1a18dfb7af9c\") " pod="openstack/dnsmasq-dns-76f9c4c8bc-96l4z" Dec 06 05:51:02 crc kubenswrapper[4958]: I1206 05:51:02.054278 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-db-create-nq6qf"] Dec 06 05:51:02 crc kubenswrapper[4958]: I1206 05:51:02.055636 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-create-nq6qf" Dec 06 05:51:02 crc kubenswrapper[4958]: I1206 05:51:02.059941 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd2c4c39-fb58-4561-9bc6-1a18dfb7af9c-dns-svc\") pod \"dnsmasq-dns-76f9c4c8bc-96l4z\" (UID: \"bd2c4c39-fb58-4561-9bc6-1a18dfb7af9c\") " pod="openstack/dnsmasq-dns-76f9c4c8bc-96l4z" Dec 06 05:51:02 crc kubenswrapper[4958]: I1206 05:51:02.060041 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd2c4c39-fb58-4561-9bc6-1a18dfb7af9c-config\") pod \"dnsmasq-dns-76f9c4c8bc-96l4z\" (UID: \"bd2c4c39-fb58-4561-9bc6-1a18dfb7af9c\") " pod="openstack/dnsmasq-dns-76f9c4c8bc-96l4z" Dec 06 05:51:02 crc kubenswrapper[4958]: I1206 05:51:02.060115 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bd2c4c39-fb58-4561-9bc6-1a18dfb7af9c-ovsdbserver-sb\") pod \"dnsmasq-dns-76f9c4c8bc-96l4z\" (UID: \"bd2c4c39-fb58-4561-9bc6-1a18dfb7af9c\") " pod="openstack/dnsmasq-dns-76f9c4c8bc-96l4z" Dec 06 05:51:02 crc kubenswrapper[4958]: I1206 05:51:02.060142 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bd2c4c39-fb58-4561-9bc6-1a18dfb7af9c-ovsdbserver-nb\") pod \"dnsmasq-dns-76f9c4c8bc-96l4z\" (UID: \"bd2c4c39-fb58-4561-9bc6-1a18dfb7af9c\") " pod="openstack/dnsmasq-dns-76f9c4c8bc-96l4z" Dec 06 05:51:02 crc kubenswrapper[4958]: I1206 05:51:02.060180 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dn5jr\" (UniqueName: \"kubernetes.io/projected/bd2c4c39-fb58-4561-9bc6-1a18dfb7af9c-kube-api-access-dn5jr\") pod \"dnsmasq-dns-76f9c4c8bc-96l4z\" (UID: \"bd2c4c39-fb58-4561-9bc6-1a18dfb7af9c\") " pod="openstack/dnsmasq-dns-76f9c4c8bc-96l4z" Dec 06 05:51:02 crc kubenswrapper[4958]: I1206 05:51:02.060217 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smjrc\" (UniqueName: \"kubernetes.io/projected/ef244a26-b26b-4f8d-addc-9772f3134412-kube-api-access-smjrc\") pod \"watcher-db-create-nq6qf\" (UID: \"ef244a26-b26b-4f8d-addc-9772f3134412\") " pod="openstack/watcher-db-create-nq6qf" Dec 06 05:51:02 crc kubenswrapper[4958]: I1206 05:51:02.060256 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ef244a26-b26b-4f8d-addc-9772f3134412-operator-scripts\") pod \"watcher-db-create-nq6qf\" (UID: \"ef244a26-b26b-4f8d-addc-9772f3134412\") " pod="openstack/watcher-db-create-nq6qf" Dec 06 05:51:02 crc kubenswrapper[4958]: I1206 05:51:02.062350 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd2c4c39-fb58-4561-9bc6-1a18dfb7af9c-config\") pod \"dnsmasq-dns-76f9c4c8bc-96l4z\" (UID: \"bd2c4c39-fb58-4561-9bc6-1a18dfb7af9c\") " pod="openstack/dnsmasq-dns-76f9c4c8bc-96l4z" Dec 06 05:51:02 crc kubenswrapper[4958]: I1206 05:51:02.063349 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bd2c4c39-fb58-4561-9bc6-1a18dfb7af9c-ovsdbserver-nb\") pod \"dnsmasq-dns-76f9c4c8bc-96l4z\" (UID: \"bd2c4c39-fb58-4561-9bc6-1a18dfb7af9c\") " pod="openstack/dnsmasq-dns-76f9c4c8bc-96l4z" 
Dec 06 05:51:02 crc kubenswrapper[4958]: I1206 05:51:02.063049 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bd2c4c39-fb58-4561-9bc6-1a18dfb7af9c-ovsdbserver-sb\") pod \"dnsmasq-dns-76f9c4c8bc-96l4z\" (UID: \"bd2c4c39-fb58-4561-9bc6-1a18dfb7af9c\") " pod="openstack/dnsmasq-dns-76f9c4c8bc-96l4z"
Dec 06 05:51:02 crc kubenswrapper[4958]: I1206 05:51:02.068393 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd2c4c39-fb58-4561-9bc6-1a18dfb7af9c-dns-svc\") pod \"dnsmasq-dns-76f9c4c8bc-96l4z\" (UID: \"bd2c4c39-fb58-4561-9bc6-1a18dfb7af9c\") " pod="openstack/dnsmasq-dns-76f9c4c8bc-96l4z"
Dec 06 05:51:02 crc kubenswrapper[4958]: I1206 05:51:02.076737 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-9aa9-account-create-update-pvnth"]
Dec 06 05:51:02 crc kubenswrapper[4958]: I1206 05:51:02.081955 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-9aa9-account-create-update-pvnth"
Dec 06 05:51:02 crc kubenswrapper[4958]: I1206 05:51:02.085993 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-db-secret"
Dec 06 05:51:02 crc kubenswrapper[4958]: I1206 05:51:02.093750 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-create-nq6qf"]
Dec 06 05:51:02 crc kubenswrapper[4958]: I1206 05:51:02.105495 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dn5jr\" (UniqueName: \"kubernetes.io/projected/bd2c4c39-fb58-4561-9bc6-1a18dfb7af9c-kube-api-access-dn5jr\") pod \"dnsmasq-dns-76f9c4c8bc-96l4z\" (UID: \"bd2c4c39-fb58-4561-9bc6-1a18dfb7af9c\") " pod="openstack/dnsmasq-dns-76f9c4c8bc-96l4z"
Dec 06 05:51:02 crc kubenswrapper[4958]: I1206 05:51:02.130925 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-9aa9-account-create-update-pvnth"]
Dec 06 05:51:02 crc kubenswrapper[4958]: I1206 05:51:02.161489 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crtr7\" (UniqueName: \"kubernetes.io/projected/0c3fe7c7-dcce-48c1-b71d-bc59f7fd1b12-kube-api-access-crtr7\") pod \"watcher-9aa9-account-create-update-pvnth\" (UID: \"0c3fe7c7-dcce-48c1-b71d-bc59f7fd1b12\") " pod="openstack/watcher-9aa9-account-create-update-pvnth"
Dec 06 05:51:02 crc kubenswrapper[4958]: I1206 05:51:02.161587 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smjrc\" (UniqueName: \"kubernetes.io/projected/ef244a26-b26b-4f8d-addc-9772f3134412-kube-api-access-smjrc\") pod \"watcher-db-create-nq6qf\" (UID: \"ef244a26-b26b-4f8d-addc-9772f3134412\") " pod="openstack/watcher-db-create-nq6qf"
Dec 06 05:51:02 crc kubenswrapper[4958]: I1206 05:51:02.161619 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ef244a26-b26b-4f8d-addc-9772f3134412-operator-scripts\") pod \"watcher-db-create-nq6qf\" (UID: \"ef244a26-b26b-4f8d-addc-9772f3134412\") " pod="openstack/watcher-db-create-nq6qf"
Dec 06 05:51:02 crc kubenswrapper[4958]: I1206 05:51:02.161675 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c3fe7c7-dcce-48c1-b71d-bc59f7fd1b12-operator-scripts\") pod \"watcher-9aa9-account-create-update-pvnth\" (UID: \"0c3fe7c7-dcce-48c1-b71d-bc59f7fd1b12\") " pod="openstack/watcher-9aa9-account-create-update-pvnth"
Dec 06 05:51:02 crc kubenswrapper[4958]: I1206 05:51:02.162700 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ef244a26-b26b-4f8d-addc-9772f3134412-operator-scripts\") pod \"watcher-db-create-nq6qf\" (UID: \"ef244a26-b26b-4f8d-addc-9772f3134412\") " pod="openstack/watcher-db-create-nq6qf"
Dec 06 05:51:02 crc kubenswrapper[4958]: I1206 05:51:02.172664 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Dec 06 05:51:02 crc kubenswrapper[4958]: I1206 05:51:02.192542 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smjrc\" (UniqueName: \"kubernetes.io/projected/ef244a26-b26b-4f8d-addc-9772f3134412-kube-api-access-smjrc\") pod \"watcher-db-create-nq6qf\" (UID: \"ef244a26-b26b-4f8d-addc-9772f3134412\") " pod="openstack/watcher-db-create-nq6qf"
Dec 06 05:51:02 crc kubenswrapper[4958]: I1206 05:51:02.262885 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76f9c4c8bc-96l4z"
Dec 06 05:51:02 crc kubenswrapper[4958]: I1206 05:51:02.264550 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c3fe7c7-dcce-48c1-b71d-bc59f7fd1b12-operator-scripts\") pod \"watcher-9aa9-account-create-update-pvnth\" (UID: \"0c3fe7c7-dcce-48c1-b71d-bc59f7fd1b12\") " pod="openstack/watcher-9aa9-account-create-update-pvnth"
Dec 06 05:51:02 crc kubenswrapper[4958]: I1206 05:51:02.264608 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crtr7\" (UniqueName: \"kubernetes.io/projected/0c3fe7c7-dcce-48c1-b71d-bc59f7fd1b12-kube-api-access-crtr7\") pod \"watcher-9aa9-account-create-update-pvnth\" (UID: \"0c3fe7c7-dcce-48c1-b71d-bc59f7fd1b12\") " pod="openstack/watcher-9aa9-account-create-update-pvnth"
Dec 06 05:51:02 crc kubenswrapper[4958]: I1206 05:51:02.265451 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c3fe7c7-dcce-48c1-b71d-bc59f7fd1b12-operator-scripts\") pod \"watcher-9aa9-account-create-update-pvnth\" (UID: \"0c3fe7c7-dcce-48c1-b71d-bc59f7fd1b12\") " pod="openstack/watcher-9aa9-account-create-update-pvnth"
Dec 06 05:51:02 crc kubenswrapper[4958]: I1206 05:51:02.284024 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crtr7\" (UniqueName: \"kubernetes.io/projected/0c3fe7c7-dcce-48c1-b71d-bc59f7fd1b12-kube-api-access-crtr7\") pod \"watcher-9aa9-account-create-update-pvnth\" (UID: \"0c3fe7c7-dcce-48c1-b71d-bc59f7fd1b12\") " pod="openstack/watcher-9aa9-account-create-update-pvnth"
Dec 06 05:51:02 crc kubenswrapper[4958]: I1206 05:51:02.377552 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-nq6qf"
Dec 06 05:51:02 crc kubenswrapper[4958]: I1206 05:51:02.397077 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"16a5cde7-0ad2-4f04-9643-d6ceca21fe3c","Type":"ContainerStarted","Data":"bc0d5a63838be5147d6eff76badf874df75eb4e9cdd64750e00253d7b5b4c7f7"}
Dec 06 05:51:02 crc kubenswrapper[4958]: I1206 05:51:02.398760 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-mdznl" event={"ID":"88549045-eefd-497a-b779-8689cec8daa9","Type":"ContainerStarted","Data":"0678de909dc697a20d7e69f09b7fef1504f954b76aebc6c10c43742398ebc73a"}
Dec 06 05:51:02 crc kubenswrapper[4958]: I1206 05:51:02.404916 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-65dd-account-create-update-qtz4w" event={"ID":"674bc2ce-08e0-49b2-850c-2a00e8e38faa","Type":"ContainerStarted","Data":"a0b8e68164c7816d71c8be39fa06fd2d54aedd9e5737a67a3391e2bff8f8263a"}
Dec 06 05:51:02 crc kubenswrapper[4958]: I1206 05:51:02.406523 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-gl9kl" event={"ID":"a573deaf-5c61-4ca3-97a6-c29d9ea40c29","Type":"ContainerStarted","Data":"2aa8da0efd9878b360d7c27cb0672b97b5289dd2b067720887bc7520eb4ca589"}
Dec 06 05:51:02 crc kubenswrapper[4958]: I1206 05:51:02.498522 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-9aa9-account-create-update-pvnth"
Dec 06 05:51:02 crc kubenswrapper[4958]: I1206 05:51:02.986117 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"]
Dec 06 05:51:02 crc kubenswrapper[4958]: I1206 05:51:02.991302 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Dec 06 05:51:02 crc kubenswrapper[4958]: I1206 05:51:02.994066 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data"
Dec 06 05:51:02 crc kubenswrapper[4958]: I1206 05:51:02.994420 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files"
Dec 06 05:51:02 crc kubenswrapper[4958]: I1206 05:51:02.996422 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-2d7tv"
Dec 06 05:51:02 crc kubenswrapper[4958]: I1206 05:51:02.997565 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf"
Dec 06 05:51:03 crc kubenswrapper[4958]: I1206 05:51:03.029228 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"]
Dec 06 05:51:03 crc kubenswrapper[4958]: I1206 05:51:03.066368 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-9g29t"]
Dec 06 05:51:03 crc kubenswrapper[4958]: I1206 05:51:03.067460 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-9g29t"
Dec 06 05:51:03 crc kubenswrapper[4958]: I1206 05:51:03.069311 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Dec 06 05:51:03 crc kubenswrapper[4958]: I1206 05:51:03.070144 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data"
Dec 06 05:51:03 crc kubenswrapper[4958]: I1206 05:51:03.070675 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts"
Dec 06 05:51:03 crc kubenswrapper[4958]: I1206 05:51:03.095508 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-9g29t"]
Dec 06 05:51:03 crc kubenswrapper[4958]: I1206 05:51:03.184674 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/31905f86-2a88-4c2e-bf22-a629710e3f6b-etc-swift\") pod \"swift-ring-rebalance-9g29t\" (UID: \"31905f86-2a88-4c2e-bf22-a629710e3f6b\") " pod="openstack/swift-ring-rebalance-9g29t"
Dec 06 05:51:03 crc kubenswrapper[4958]: I1206 05:51:03.184734 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/31905f86-2a88-4c2e-bf22-a629710e3f6b-scripts\") pod \"swift-ring-rebalance-9g29t\" (UID: \"31905f86-2a88-4c2e-bf22-a629710e3f6b\") " pod="openstack/swift-ring-rebalance-9g29t"
Dec 06 05:51:03 crc kubenswrapper[4958]: I1206 05:51:03.184785 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/31905f86-2a88-4c2e-bf22-a629710e3f6b-swiftconf\") pod \"swift-ring-rebalance-9g29t\" (UID: \"31905f86-2a88-4c2e-bf22-a629710e3f6b\") " pod="openstack/swift-ring-rebalance-9g29t"
Dec 06 05:51:03 crc kubenswrapper[4958]: I1206 05:51:03.184831 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/d8c3d892-a529-436f-b8f1-3bb2a4ffbed2-cache\") pod \"swift-storage-0\" (UID: \"d8c3d892-a529-436f-b8f1-3bb2a4ffbed2\") " pod="openstack/swift-storage-0"
Dec 06 05:51:03 crc kubenswrapper[4958]: I1206 05:51:03.184863 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzq2z\" (UniqueName: \"kubernetes.io/projected/d8c3d892-a529-436f-b8f1-3bb2a4ffbed2-kube-api-access-bzq2z\") pod \"swift-storage-0\" (UID: \"d8c3d892-a529-436f-b8f1-3bb2a4ffbed2\") " pod="openstack/swift-storage-0"
Dec 06 05:51:03 crc kubenswrapper[4958]: I1206 05:51:03.184924 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/d8c3d892-a529-436f-b8f1-3bb2a4ffbed2-lock\") pod \"swift-storage-0\" (UID: \"d8c3d892-a529-436f-b8f1-3bb2a4ffbed2\") " pod="openstack/swift-storage-0"
Dec 06 05:51:03 crc kubenswrapper[4958]: I1206 05:51:03.184957 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"swift-storage-0\" (UID: \"d8c3d892-a529-436f-b8f1-3bb2a4ffbed2\") " pod="openstack/swift-storage-0"
Dec 06 05:51:03 crc kubenswrapper[4958]: I1206 05:51:03.185001 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31905f86-2a88-4c2e-bf22-a629710e3f6b-combined-ca-bundle\") pod \"swift-ring-rebalance-9g29t\" (UID: \"31905f86-2a88-4c2e-bf22-a629710e3f6b\") " pod="openstack/swift-ring-rebalance-9g29t"
Dec 06 05:51:03 crc kubenswrapper[4958]: I1206 05:51:03.185045 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/31905f86-2a88-4c2e-bf22-a629710e3f6b-ring-data-devices\") pod \"swift-ring-rebalance-9g29t\" (UID: \"31905f86-2a88-4c2e-bf22-a629710e3f6b\") " pod="openstack/swift-ring-rebalance-9g29t"
Dec 06 05:51:03 crc kubenswrapper[4958]: I1206 05:51:03.185067 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wltsk\" (UniqueName: \"kubernetes.io/projected/31905f86-2a88-4c2e-bf22-a629710e3f6b-kube-api-access-wltsk\") pod \"swift-ring-rebalance-9g29t\" (UID: \"31905f86-2a88-4c2e-bf22-a629710e3f6b\") " pod="openstack/swift-ring-rebalance-9g29t"
Dec 06 05:51:03 crc kubenswrapper[4958]: I1206 05:51:03.185113 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d8c3d892-a529-436f-b8f1-3bb2a4ffbed2-etc-swift\") pod \"swift-storage-0\" (UID: \"d8c3d892-a529-436f-b8f1-3bb2a4ffbed2\") " pod="openstack/swift-storage-0"
Dec 06 05:51:03 crc kubenswrapper[4958]: I1206 05:51:03.187428 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/31905f86-2a88-4c2e-bf22-a629710e3f6b-dispersionconf\") pod \"swift-ring-rebalance-9g29t\" (UID: \"31905f86-2a88-4c2e-bf22-a629710e3f6b\") " pod="openstack/swift-ring-rebalance-9g29t"
Dec 06 05:51:07 crc kubenswrapper[4958]: I1206 05:51:03.288938 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/d8c3d892-a529-436f-b8f1-3bb2a4ffbed2-cache\") pod \"swift-storage-0\" (UID: \"d8c3d892-a529-436f-b8f1-3bb2a4ffbed2\") " pod="openstack/swift-storage-0"
Dec 06 05:51:07 crc kubenswrapper[4958]: I1206 05:51:03.289017 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bzq2z\" (UniqueName: \"kubernetes.io/projected/d8c3d892-a529-436f-b8f1-3bb2a4ffbed2-kube-api-access-bzq2z\") pod \"swift-storage-0\" (UID: \"d8c3d892-a529-436f-b8f1-3bb2a4ffbed2\") " pod="openstack/swift-storage-0"
Dec 06 05:51:07 crc kubenswrapper[4958]: I1206 05:51:03.289076 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/d8c3d892-a529-436f-b8f1-3bb2a4ffbed2-lock\") pod \"swift-storage-0\" (UID: \"d8c3d892-a529-436f-b8f1-3bb2a4ffbed2\") " pod="openstack/swift-storage-0"
Dec 06 05:51:07 crc kubenswrapper[4958]: I1206 05:51:03.289123 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"swift-storage-0\" (UID: \"d8c3d892-a529-436f-b8f1-3bb2a4ffbed2\") " pod="openstack/swift-storage-0"
\"31905f86-2a88-4c2e-bf22-a629710e3f6b\") " pod="openstack/swift-ring-rebalance-9g29t" Dec 06 05:51:07 crc kubenswrapper[4958]: I1206 05:51:03.289221 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/31905f86-2a88-4c2e-bf22-a629710e3f6b-ring-data-devices\") pod \"swift-ring-rebalance-9g29t\" (UID: \"31905f86-2a88-4c2e-bf22-a629710e3f6b\") " pod="openstack/swift-ring-rebalance-9g29t" Dec 06 05:51:07 crc kubenswrapper[4958]: I1206 05:51:03.289241 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wltsk\" (UniqueName: \"kubernetes.io/projected/31905f86-2a88-4c2e-bf22-a629710e3f6b-kube-api-access-wltsk\") pod \"swift-ring-rebalance-9g29t\" (UID: \"31905f86-2a88-4c2e-bf22-a629710e3f6b\") " pod="openstack/swift-ring-rebalance-9g29t" Dec 06 05:51:07 crc kubenswrapper[4958]: I1206 05:51:03.289310 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d8c3d892-a529-436f-b8f1-3bb2a4ffbed2-etc-swift\") pod \"swift-storage-0\" (UID: \"d8c3d892-a529-436f-b8f1-3bb2a4ffbed2\") " pod="openstack/swift-storage-0" Dec 06 05:51:07 crc kubenswrapper[4958]: I1206 05:51:03.289348 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/31905f86-2a88-4c2e-bf22-a629710e3f6b-dispersionconf\") pod \"swift-ring-rebalance-9g29t\" (UID: \"31905f86-2a88-4c2e-bf22-a629710e3f6b\") " pod="openstack/swift-ring-rebalance-9g29t" Dec 06 05:51:07 crc kubenswrapper[4958]: I1206 05:51:03.289382 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/31905f86-2a88-4c2e-bf22-a629710e3f6b-etc-swift\") pod \"swift-ring-rebalance-9g29t\" (UID: \"31905f86-2a88-4c2e-bf22-a629710e3f6b\") " pod="openstack/swift-ring-rebalance-9g29t" Dec 06 05:51:07 crc kubenswrapper[4958]: I1206 05:51:03.289411 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/31905f86-2a88-4c2e-bf22-a629710e3f6b-scripts\") pod \"swift-ring-rebalance-9g29t\" (UID: \"31905f86-2a88-4c2e-bf22-a629710e3f6b\") " pod="openstack/swift-ring-rebalance-9g29t" Dec 06 05:51:07 crc kubenswrapper[4958]: I1206 05:51:03.289448 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/31905f86-2a88-4c2e-bf22-a629710e3f6b-swiftconf\") pod \"swift-ring-rebalance-9g29t\" (UID: \"31905f86-2a88-4c2e-bf22-a629710e3f6b\") " pod="openstack/swift-ring-rebalance-9g29t" Dec 06 05:51:07 crc kubenswrapper[4958]: I1206 05:51:03.289737 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/d8c3d892-a529-436f-b8f1-3bb2a4ffbed2-cache\") pod \"swift-storage-0\" (UID: \"d8c3d892-a529-436f-b8f1-3bb2a4ffbed2\") " pod="openstack/swift-storage-0" Dec 06 05:51:07 crc kubenswrapper[4958]: I1206 05:51:03.290555 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/d8c3d892-a529-436f-b8f1-3bb2a4ffbed2-lock\") pod \"swift-storage-0\" (UID: \"d8c3d892-a529-436f-b8f1-3bb2a4ffbed2\") " pod="openstack/swift-storage-0" Dec 06 05:51:07 crc kubenswrapper[4958]: I1206 05:51:03.290760 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/31905f86-2a88-4c2e-bf22-a629710e3f6b-ring-data-devices\") pod \"swift-ring-rebalance-9g29t\" (UID: \"31905f86-2a88-4c2e-bf22-a629710e3f6b\") " pod="openstack/swift-ring-rebalance-9g29t" Dec 06 05:51:07 crc kubenswrapper[4958]: E1206 05:51:03.290769 4958 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 06 05:51:07 crc kubenswrapper[4958]: E1206 05:51:03.290782 4958 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 06 05:51:07 crc kubenswrapper[4958]: E1206 05:51:03.290824 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d8c3d892-a529-436f-b8f1-3bb2a4ffbed2-etc-swift podName:d8c3d892-a529-436f-b8f1-3bb2a4ffbed2 nodeName:}" failed. No retries permitted until 2025-12-06 05:51:03.790807833 +0000 UTC m=+1374.324578686 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/d8c3d892-a529-436f-b8f1-3bb2a4ffbed2-etc-swift") pod "swift-storage-0" (UID: "d8c3d892-a529-436f-b8f1-3bb2a4ffbed2") : configmap "swift-ring-files" not found Dec 06 05:51:07 crc kubenswrapper[4958]: I1206 05:51:03.290861 4958 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"swift-storage-0\" (UID: \"d8c3d892-a529-436f-b8f1-3bb2a4ffbed2\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/swift-storage-0" Dec 06 05:51:07 crc kubenswrapper[4958]: I1206 05:51:03.291242 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/31905f86-2a88-4c2e-bf22-a629710e3f6b-etc-swift\") pod \"swift-ring-rebalance-9g29t\" (UID: \"31905f86-2a88-4c2e-bf22-a629710e3f6b\") " pod="openstack/swift-ring-rebalance-9g29t" Dec 06 05:51:07 crc kubenswrapper[4958]: I1206 05:51:03.291780 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/31905f86-2a88-4c2e-bf22-a629710e3f6b-scripts\") pod \"swift-ring-rebalance-9g29t\" (UID: \"31905f86-2a88-4c2e-bf22-a629710e3f6b\") " pod="openstack/swift-ring-rebalance-9g29t" Dec 06 05:51:07 crc kubenswrapper[4958]: I1206 05:51:03.298939 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/31905f86-2a88-4c2e-bf22-a629710e3f6b-dispersionconf\") pod \"swift-ring-rebalance-9g29t\" (UID: \"31905f86-2a88-4c2e-bf22-a629710e3f6b\") " pod="openstack/swift-ring-rebalance-9g29t" Dec 06 05:51:07 crc kubenswrapper[4958]: I1206 05:51:03.299240 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31905f86-2a88-4c2e-bf22-a629710e3f6b-combined-ca-bundle\") pod \"swift-ring-rebalance-9g29t\" (UID: \"31905f86-2a88-4c2e-bf22-a629710e3f6b\") " pod="openstack/swift-ring-rebalance-9g29t" Dec 06 05:51:07 crc kubenswrapper[4958]: I1206 05:51:03.309041 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/31905f86-2a88-4c2e-bf22-a629710e3f6b-swiftconf\") pod \"swift-ring-rebalance-9g29t\" (UID: \"31905f86-2a88-4c2e-bf22-a629710e3f6b\") " pod="openstack/swift-ring-rebalance-9g29t" Dec 06 05:51:07 crc kubenswrapper[4958]: I1206 05:51:03.315882 
4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wltsk\" (UniqueName: \"kubernetes.io/projected/31905f86-2a88-4c2e-bf22-a629710e3f6b-kube-api-access-wltsk\") pod \"swift-ring-rebalance-9g29t\" (UID: \"31905f86-2a88-4c2e-bf22-a629710e3f6b\") " pod="openstack/swift-ring-rebalance-9g29t" Dec 06 05:51:07 crc kubenswrapper[4958]: I1206 05:51:03.322627 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"swift-storage-0\" (UID: \"d8c3d892-a529-436f-b8f1-3bb2a4ffbed2\") " pod="openstack/swift-storage-0" Dec 06 05:51:07 crc kubenswrapper[4958]: I1206 05:51:03.335817 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bzq2z\" (UniqueName: \"kubernetes.io/projected/d8c3d892-a529-436f-b8f1-3bb2a4ffbed2-kube-api-access-bzq2z\") pod \"swift-storage-0\" (UID: \"d8c3d892-a529-436f-b8f1-3bb2a4ffbed2\") " pod="openstack/swift-storage-0" Dec 06 05:51:07 crc kubenswrapper[4958]: I1206 05:51:03.385900 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-9g29t" Dec 06 05:51:07 crc kubenswrapper[4958]: I1206 05:51:03.414590 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-gl9kl" event={"ID":"a573deaf-5c61-4ca3-97a6-c29d9ea40c29","Type":"ContainerStarted","Data":"6ebf32502dd4f048793851af86282d35f1487580821032ee42bcdba69669c4f6"} Dec 06 05:51:07 crc kubenswrapper[4958]: I1206 05:51:03.416610 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-4069-account-create-update-b92cg" event={"ID":"5eddd3fa-04bf-46c5-984a-c5227d951195","Type":"ContainerStarted","Data":"44f22053ed1a81671539c116b8b7ae5052de56ce4c6eb3c516c39ed75b88d61c"} Dec 06 05:51:07 crc kubenswrapper[4958]: I1206 05:51:03.798550 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d8c3d892-a529-436f-b8f1-3bb2a4ffbed2-etc-swift\") pod \"swift-storage-0\" (UID: \"d8c3d892-a529-436f-b8f1-3bb2a4ffbed2\") " pod="openstack/swift-storage-0" Dec 06 05:51:07 crc kubenswrapper[4958]: E1206 05:51:03.798779 4958 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 06 05:51:07 crc kubenswrapper[4958]: E1206 05:51:03.799001 4958 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 06 05:51:07 crc kubenswrapper[4958]: E1206 05:51:03.799069 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d8c3d892-a529-436f-b8f1-3bb2a4ffbed2-etc-swift podName:d8c3d892-a529-436f-b8f1-3bb2a4ffbed2 nodeName:}" failed. No retries permitted until 2025-12-06 05:51:04.799049525 +0000 UTC m=+1375.332820298 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/d8c3d892-a529-436f-b8f1-3bb2a4ffbed2-etc-swift") pod "swift-storage-0" (UID: "d8c3d892-a529-436f-b8f1-3bb2a4ffbed2") : configmap "swift-ring-files" not found Dec 06 05:51:07 crc kubenswrapper[4958]: I1206 05:51:04.424625 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-mdznl" event={"ID":"88549045-eefd-497a-b779-8689cec8daa9","Type":"ContainerStarted","Data":"e309a834f9840ea280266c8e444d54ef489385c6d1f30a72f6321342c0c7d2fa"} Dec 06 05:51:07 crc kubenswrapper[4958]: I1206 05:51:04.814652 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d8c3d892-a529-436f-b8f1-3bb2a4ffbed2-etc-swift\") pod \"swift-storage-0\" (UID: \"d8c3d892-a529-436f-b8f1-3bb2a4ffbed2\") " pod="openstack/swift-storage-0" Dec 06 05:51:07 crc kubenswrapper[4958]: E1206 05:51:04.814962 4958 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 06 05:51:07 crc kubenswrapper[4958]: E1206 05:51:04.814978 4958 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 06 05:51:07 crc kubenswrapper[4958]: E1206 05:51:04.815025 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d8c3d892-a529-436f-b8f1-3bb2a4ffbed2-etc-swift podName:d8c3d892-a529-436f-b8f1-3bb2a4ffbed2 nodeName:}" failed. No retries permitted until 2025-12-06 05:51:06.815007925 +0000 UTC m=+1377.348778688 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/d8c3d892-a529-436f-b8f1-3bb2a4ffbed2-etc-swift") pod "swift-storage-0" (UID: "d8c3d892-a529-436f-b8f1-3bb2a4ffbed2") : configmap "swift-ring-files" not found Dec 06 05:51:07 crc kubenswrapper[4958]: I1206 05:51:05.450784 4958 generic.go:334] "Generic (PLEG): container finished" podID="e9711939-7159-4a5a-970f-426286de1f36" containerID="4de2f9851b0cd869a3b1e38b11656b9615a5705b96987fd08d1fb5de2e134abe" exitCode=0 Dec 06 05:51:07 crc kubenswrapper[4958]: I1206 05:51:05.450864 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"e9711939-7159-4a5a-970f-426286de1f36","Type":"ContainerDied","Data":"4de2f9851b0cd869a3b1e38b11656b9615a5705b96987fd08d1fb5de2e134abe"} Dec 06 05:51:07 crc kubenswrapper[4958]: I1206 05:51:05.458084 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-65dd-account-create-update-qtz4w" event={"ID":"674bc2ce-08e0-49b2-850c-2a00e8e38faa","Type":"ContainerStarted","Data":"d5afa864150a18667071d0b2215ff211199f473c07058f36086b431c0ed278db"} Dec 06 05:51:07 crc kubenswrapper[4958]: I1206 05:51:05.531420 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-4069-account-create-update-b92cg" podStartSLOduration=6.5314060739999995 podStartE2EDuration="6.531406074s" podCreationTimestamp="2025-12-06 05:50:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:51:05.506849686 +0000 UTC m=+1376.040620449" watchObservedRunningTime="2025-12-06 05:51:05.531406074 +0000 UTC m=+1376.065176837" Dec 06 05:51:07 crc kubenswrapper[4958]: I1206 05:51:05.547514 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
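[Annotation] The repeating etc-swift failure above is a startup ordering race: swift-storage-0's projected volume references the swift-ring-files configmap, presumably published later by the swift-ring-rebalance-9g29t job, so each MountVolume.SetUp fails and nestedpendingoperations schedules a retry with a doubling delay: durationBeforeRetry 500ms, then 1s, then 2s above (and 4s below). A sketch of that progression; the doubling factor matches the log, while the cap value is an assumption for illustration:

```go
// Reproduce the retry-delay progression visible in the
// nestedpendingoperations.go entries: 500ms -> 1s -> 2s -> 4s.
// Assumes a plain 2x backoff with an arbitrary cap; kubelet's actual
// policy lives in its exponential backoff helper and may differ in detail.
package main

import (
	"fmt"
	"time"
)

func main() {
	delay := 500 * time.Millisecond
	const maxDelay = 2 * time.Minute // illustrative cap, not kubelet's exact value
	for attempt := 1; attempt <= 5; attempt++ {
		fmt.Printf("attempt %d failed: no retries permitted for %v\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}
```

Once the rebalance job publishes the configmap, a later retry succeeds and the pod proceeds; the backoff only bounds how quickly kubelet re-checks.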
pod="openstack/keystone-db-create-mdznl" podStartSLOduration=6.547463876 podStartE2EDuration="6.547463876s" podCreationTimestamp="2025-12-06 05:50:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:51:05.523543774 +0000 UTC m=+1376.057314537" watchObservedRunningTime="2025-12-06 05:51:05.547463876 +0000 UTC m=+1376.081234639" Dec 06 05:51:07 crc kubenswrapper[4958]: I1206 05:51:05.562796 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-gl9kl" podStartSLOduration=6.562779627 podStartE2EDuration="6.562779627s" podCreationTimestamp="2025-12-06 05:50:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:51:05.540295424 +0000 UTC m=+1376.074066187" watchObservedRunningTime="2025-12-06 05:51:05.562779627 +0000 UTC m=+1376.096550390" Dec 06 05:51:07 crc kubenswrapper[4958]: I1206 05:51:05.572997 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-65dd-account-create-update-qtz4w" podStartSLOduration=6.572980021 podStartE2EDuration="6.572980021s" podCreationTimestamp="2025-12-06 05:50:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:51:05.557215848 +0000 UTC m=+1376.090986611" watchObservedRunningTime="2025-12-06 05:51:05.572980021 +0000 UTC m=+1376.106750784" Dec 06 05:51:07 crc kubenswrapper[4958]: I1206 05:51:06.469496 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"e9711939-7159-4a5a-970f-426286de1f36","Type":"ContainerStarted","Data":"14c77bc787711132eb1716198943546be558491097489218c137e60554577819"} Dec 06 05:51:07 crc kubenswrapper[4958]: I1206 05:51:06.851292 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d8c3d892-a529-436f-b8f1-3bb2a4ffbed2-etc-swift\") pod \"swift-storage-0\" (UID: \"d8c3d892-a529-436f-b8f1-3bb2a4ffbed2\") " pod="openstack/swift-storage-0" Dec 06 05:51:07 crc kubenswrapper[4958]: E1206 05:51:06.851592 4958 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 06 05:51:07 crc kubenswrapper[4958]: E1206 05:51:06.851781 4958 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 06 05:51:07 crc kubenswrapper[4958]: E1206 05:51:06.851847 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d8c3d892-a529-436f-b8f1-3bb2a4ffbed2-etc-swift podName:d8c3d892-a529-436f-b8f1-3bb2a4ffbed2 nodeName:}" failed. No retries permitted until 2025-12-06 05:51:10.851822878 +0000 UTC m=+1381.385593641 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/d8c3d892-a529-436f-b8f1-3bb2a4ffbed2-etc-swift") pod "swift-storage-0" (UID: "d8c3d892-a529-436f-b8f1-3bb2a4ffbed2") : configmap "swift-ring-files" not found Dec 06 05:51:07 crc kubenswrapper[4958]: I1206 05:51:07.478718 4958 generic.go:334] "Generic (PLEG): container finished" podID="674bc2ce-08e0-49b2-850c-2a00e8e38faa" containerID="d5afa864150a18667071d0b2215ff211199f473c07058f36086b431c0ed278db" exitCode=0 Dec 06 05:51:07 crc kubenswrapper[4958]: I1206 05:51:07.478792 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-65dd-account-create-update-qtz4w" event={"ID":"674bc2ce-08e0-49b2-850c-2a00e8e38faa","Type":"ContainerDied","Data":"d5afa864150a18667071d0b2215ff211199f473c07058f36086b431c0ed278db"} Dec 06 05:51:07 crc kubenswrapper[4958]: I1206 05:51:07.481688 4958 generic.go:334] "Generic (PLEG): container finished" podID="a573deaf-5c61-4ca3-97a6-c29d9ea40c29" containerID="6ebf32502dd4f048793851af86282d35f1487580821032ee42bcdba69669c4f6" exitCode=0 Dec 06 05:51:07 crc kubenswrapper[4958]: I1206 05:51:07.481742 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-gl9kl" event={"ID":"a573deaf-5c61-4ca3-97a6-c29d9ea40c29","Type":"ContainerDied","Data":"6ebf32502dd4f048793851af86282d35f1487580821032ee42bcdba69669c4f6"} Dec 06 05:51:07 crc kubenswrapper[4958]: I1206 05:51:07.483703 4958 generic.go:334] "Generic (PLEG): container finished" podID="5eddd3fa-04bf-46c5-984a-c5227d951195" containerID="44f22053ed1a81671539c116b8b7ae5052de56ce4c6eb3c516c39ed75b88d61c" exitCode=0 Dec 06 05:51:07 crc kubenswrapper[4958]: I1206 05:51:07.483751 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-4069-account-create-update-b92cg" event={"ID":"5eddd3fa-04bf-46c5-984a-c5227d951195","Type":"ContainerDied","Data":"44f22053ed1a81671539c116b8b7ae5052de56ce4c6eb3c516c39ed75b88d61c"} Dec 06 05:51:07 crc kubenswrapper[4958]: I1206 05:51:07.485175 4958 generic.go:334] "Generic (PLEG): container finished" podID="88549045-eefd-497a-b779-8689cec8daa9" containerID="e309a834f9840ea280266c8e444d54ef489385c6d1f30a72f6321342c0c7d2fa" exitCode=0 Dec 06 05:51:07 crc kubenswrapper[4958]: I1206 05:51:07.485207 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-mdznl" event={"ID":"88549045-eefd-497a-b779-8689cec8daa9","Type":"ContainerDied","Data":"e309a834f9840ea280266c8e444d54ef489385c6d1f30a72f6321342c0c7d2fa"} Dec 06 05:51:07 crc kubenswrapper[4958]: I1206 05:51:07.634090 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-9aa9-account-create-update-pvnth"] Dec 06 05:51:07 crc kubenswrapper[4958]: W1206 05:51:07.636504 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podef244a26_b26b_4f8d_addc_9772f3134412.slice/crio-d90300fd805bb3c52fea69cf37108b1b2fb2aa186efc36da312446ce60530207 WatchSource:0}: Error finding container d90300fd805bb3c52fea69cf37108b1b2fb2aa186efc36da312446ce60530207: Status 404 returned error can't find the container with id d90300fd805bb3c52fea69cf37108b1b2fb2aa186efc36da312446ce60530207 Dec 06 05:51:07 crc kubenswrapper[4958]: W1206 05:51:07.638675 4958 manager.go:1169] Failed to process watch event {EventType:0 
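[Annotation] The pod_startup_latency_tracker.go:104 entries report podStartE2EDuration as watchObservedRunningTime minus podCreationTimestamp; for keystone-db-create-mdznl, 05:51:05.547463876 minus 05:50:59 gives exactly the logged 6.547463876s (the zero 0001-01-01 pull timestamps indicate no image pull was recorded). Checking that arithmetic with stdlib time parsing, timestamps copied from the entry above:

```go
// Verify podStartE2EDuration from the pod_startup_latency_tracker entry:
// watchObservedRunningTime - podCreationTimestamp, for keystone-db-create-mdznl.
package main

import (
	"fmt"
	"time"
)

const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func main() {
	created, err := time.Parse(layout, "2025-12-06 05:50:59 +0000 UTC")
	if err != nil {
		panic(err)
	}
	running, err := time.Parse(layout, "2025-12-06 05:51:05.547463876 +0000 UTC")
	if err != nil {
		panic(err)
	}
	fmt.Println(running.Sub(created)) // 6.547463876s, matching podStartSLOduration
}
```

For ovn-northd-0 further down, podStartSLOduration (2.205419111s) is shorter than podStartE2EDuration (7.597807343s) because the SLO figure excludes the image pull window (firstStartedPulling to lastFinishedPulling, about 5.39s there).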
Dec 06 05:51:07 crc kubenswrapper[4958]: W1206 05:51:07.638675 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0c3fe7c7_dcce_48c1_b71d_bc59f7fd1b12.slice/crio-c72e746d2460626ab95678b803ea757d811aa5a9890d25ec3648ebf63ae54657 WatchSource:0}: Error finding container c72e746d2460626ab95678b803ea757d811aa5a9890d25ec3648ebf63ae54657: Status 404 returned error can't find the container with id c72e746d2460626ab95678b803ea757d811aa5a9890d25ec3648ebf63ae54657
Dec 06 05:51:07 crc kubenswrapper[4958]: I1206 05:51:07.642357 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-create-nq6qf"]
Dec 06 05:51:07 crc kubenswrapper[4958]: W1206 05:51:07.650851 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbd2c4c39_fb58_4561_9bc6_1a18dfb7af9c.slice/crio-e4f39b1e83b4b3bfe95feb5d858a95e3f858f5e13a46f48742ddf413cb5ebc20 WatchSource:0}: Error finding container e4f39b1e83b4b3bfe95feb5d858a95e3f858f5e13a46f48742ddf413cb5ebc20: Status 404 returned error can't find the container with id e4f39b1e83b4b3bfe95feb5d858a95e3f858f5e13a46f48742ddf413cb5ebc20
Dec 06 05:51:07 crc kubenswrapper[4958]: I1206 05:51:07.693545 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-9g29t"]
Dec 06 05:51:07 crc kubenswrapper[4958]: I1206 05:51:07.707815 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76f9c4c8bc-96l4z"]
Dec 06 05:51:08 crc kubenswrapper[4958]: I1206 05:51:08.495036 4958 generic.go:334] "Generic (PLEG): container finished" podID="ef244a26-b26b-4f8d-addc-9772f3134412" containerID="abb43ee5837056814c38308ef686d47eee7dc215b358853ea620c3ccfded466d" exitCode=0
Dec 06 05:51:08 crc kubenswrapper[4958]: I1206 05:51:08.495101 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-nq6qf" event={"ID":"ef244a26-b26b-4f8d-addc-9772f3134412","Type":"ContainerDied","Data":"abb43ee5837056814c38308ef686d47eee7dc215b358853ea620c3ccfded466d"}
Dec 06 05:51:08 crc kubenswrapper[4958]: I1206 05:51:08.495353 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-nq6qf" event={"ID":"ef244a26-b26b-4f8d-addc-9772f3134412","Type":"ContainerStarted","Data":"d90300fd805bb3c52fea69cf37108b1b2fb2aa186efc36da312446ce60530207"}
Dec 06 05:51:08 crc kubenswrapper[4958]: I1206 05:51:08.498156 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-9g29t" event={"ID":"31905f86-2a88-4c2e-bf22-a629710e3f6b","Type":"ContainerStarted","Data":"1d326bcae1cf77512a5aeb37a3c0158e1f8d77485d1b4c0baa5f3ca3375da698"}
Dec 06 05:51:08 crc kubenswrapper[4958]: I1206 05:51:08.499695 4958 generic.go:334] "Generic (PLEG): container finished" podID="0c3fe7c7-dcce-48c1-b71d-bc59f7fd1b12" containerID="720acfc8796ca62ce00f83e6d37f0d00de571dd93c85c06b8a9eadac40dd9efe" exitCode=0
Dec 06 05:51:08 crc kubenswrapper[4958]: I1206 05:51:08.499973 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-9aa9-account-create-update-pvnth" event={"ID":"0c3fe7c7-dcce-48c1-b71d-bc59f7fd1b12","Type":"ContainerDied","Data":"720acfc8796ca62ce00f83e6d37f0d00de571dd93c85c06b8a9eadac40dd9efe"}
Dec 06 05:51:08 crc kubenswrapper[4958]: I1206 05:51:08.500009 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-9aa9-account-create-update-pvnth" event={"ID":"0c3fe7c7-dcce-48c1-b71d-bc59f7fd1b12","Type":"ContainerStarted","Data":"c72e746d2460626ab95678b803ea757d811aa5a9890d25ec3648ebf63ae54657"}
Dec 06 05:51:08 crc kubenswrapper[4958]: I1206 05:51:08.501729 4958 generic.go:334] "Generic (PLEG): container finished" podID="bd2c4c39-fb58-4561-9bc6-1a18dfb7af9c" containerID="91a8c6f2fef855c57e9caac98d95686c7ede579f9977179e48c85d5d4bcaf13f" exitCode=0
Dec 06 05:51:08 crc kubenswrapper[4958]: I1206 05:51:08.501792 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76f9c4c8bc-96l4z" event={"ID":"bd2c4c39-fb58-4561-9bc6-1a18dfb7af9c","Type":"ContainerDied","Data":"91a8c6f2fef855c57e9caac98d95686c7ede579f9977179e48c85d5d4bcaf13f"}
Dec 06 05:51:08 crc kubenswrapper[4958]: I1206 05:51:08.501823 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76f9c4c8bc-96l4z" event={"ID":"bd2c4c39-fb58-4561-9bc6-1a18dfb7af9c","Type":"ContainerStarted","Data":"e4f39b1e83b4b3bfe95feb5d858a95e3f858f5e13a46f48742ddf413cb5ebc20"}
Dec 06 05:51:08 crc kubenswrapper[4958]: I1206 05:51:08.507901 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"16a5cde7-0ad2-4f04-9643-d6ceca21fe3c","Type":"ContainerStarted","Data":"546b4deca5fb90e66ccf464b2028bbc29d60325de31ba1aa443875d6e506557f"}
Dec 06 05:51:08 crc kubenswrapper[4958]: I1206 05:51:08.507943 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"16a5cde7-0ad2-4f04-9643-d6ceca21fe3c","Type":"ContainerStarted","Data":"f0af309982689db23c9d642985fc40aa88a1c3cd68114b4990924e5ae171741d"}
Dec 06 05:51:08 crc kubenswrapper[4958]: I1206 05:51:08.597831 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.205419111 podStartE2EDuration="7.597807343s" podCreationTimestamp="2025-12-06 05:51:01 +0000 UTC" firstStartedPulling="2025-12-06 05:51:02.134330531 +0000 UTC m=+1372.668101284" lastFinishedPulling="2025-12-06 05:51:07.526718753 +0000 UTC m=+1378.060489516" observedRunningTime="2025-12-06 05:51:08.586689855 +0000 UTC m=+1379.120460628" watchObservedRunningTime="2025-12-06 05:51:08.597807343 +0000 UTC m=+1379.131578106"
Dec 06 05:51:08 crc kubenswrapper[4958]: I1206 05:51:08.996288 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-4069-account-create-update-b92cg"
Dec 06 05:51:09 crc kubenswrapper[4958]: I1206 05:51:09.096750 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5eddd3fa-04bf-46c5-984a-c5227d951195-operator-scripts\") pod \"5eddd3fa-04bf-46c5-984a-c5227d951195\" (UID: \"5eddd3fa-04bf-46c5-984a-c5227d951195\") "
Dec 06 05:51:09 crc kubenswrapper[4958]: I1206 05:51:09.096809 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-plk5p\" (UniqueName: \"kubernetes.io/projected/5eddd3fa-04bf-46c5-984a-c5227d951195-kube-api-access-plk5p\") pod \"5eddd3fa-04bf-46c5-984a-c5227d951195\" (UID: \"5eddd3fa-04bf-46c5-984a-c5227d951195\") "
Dec 06 05:51:09 crc kubenswrapper[4958]: I1206 05:51:09.097273 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5eddd3fa-04bf-46c5-984a-c5227d951195-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5eddd3fa-04bf-46c5-984a-c5227d951195" (UID: "5eddd3fa-04bf-46c5-984a-c5227d951195"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 05:51:09 crc kubenswrapper[4958]: I1206 05:51:09.103024 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5eddd3fa-04bf-46c5-984a-c5227d951195-kube-api-access-plk5p" (OuterVolumeSpecName: "kube-api-access-plk5p") pod "5eddd3fa-04bf-46c5-984a-c5227d951195" (UID: "5eddd3fa-04bf-46c5-984a-c5227d951195"). InnerVolumeSpecName "kube-api-access-plk5p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 05:51:09 crc kubenswrapper[4958]: I1206 05:51:09.198679 4958 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5eddd3fa-04bf-46c5-984a-c5227d951195-operator-scripts\") on node \"crc\" DevicePath \"\""
Dec 06 05:51:09 crc kubenswrapper[4958]: I1206 05:51:09.198727 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-plk5p\" (UniqueName: \"kubernetes.io/projected/5eddd3fa-04bf-46c5-984a-c5227d951195-kube-api-access-plk5p\") on node \"crc\" DevicePath \"\""
Dec 06 05:51:09 crc kubenswrapper[4958]: I1206 05:51:09.244786 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-65dd-account-create-update-qtz4w"
Dec 06 05:51:09 crc kubenswrapper[4958]: I1206 05:51:09.253544 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-gl9kl"
Dec 06 05:51:09 crc kubenswrapper[4958]: I1206 05:51:09.313056 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-mdznl"
Dec 06 05:51:09 crc kubenswrapper[4958]: I1206 05:51:09.401763 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88549045-eefd-497a-b779-8689cec8daa9-operator-scripts\") pod \"88549045-eefd-497a-b779-8689cec8daa9\" (UID: \"88549045-eefd-497a-b779-8689cec8daa9\") "
Dec 06 05:51:09 crc kubenswrapper[4958]: I1206 05:51:09.401911 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptdj8\" (UniqueName: \"kubernetes.io/projected/a573deaf-5c61-4ca3-97a6-c29d9ea40c29-kube-api-access-ptdj8\") pod \"a573deaf-5c61-4ca3-97a6-c29d9ea40c29\" (UID: \"a573deaf-5c61-4ca3-97a6-c29d9ea40c29\") "
Dec 06 05:51:09 crc kubenswrapper[4958]: I1206 05:51:09.402784 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/674bc2ce-08e0-49b2-850c-2a00e8e38faa-operator-scripts\") pod \"674bc2ce-08e0-49b2-850c-2a00e8e38faa\" (UID: \"674bc2ce-08e0-49b2-850c-2a00e8e38faa\") "
Dec 06 05:51:09 crc kubenswrapper[4958]: I1206 05:51:09.402823 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tnsvl\" (UniqueName: \"kubernetes.io/projected/88549045-eefd-497a-b779-8689cec8daa9-kube-api-access-tnsvl\") pod \"88549045-eefd-497a-b779-8689cec8daa9\" (UID: \"88549045-eefd-497a-b779-8689cec8daa9\") "
Dec 06 05:51:09 crc kubenswrapper[4958]: I1206 05:51:09.402863 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nv6p2\" (UniqueName: \"kubernetes.io/projected/674bc2ce-08e0-49b2-850c-2a00e8e38faa-kube-api-access-nv6p2\") pod \"674bc2ce-08e0-49b2-850c-2a00e8e38faa\" (UID: \"674bc2ce-08e0-49b2-850c-2a00e8e38faa\") "
Dec 06 05:51:09 crc kubenswrapper[4958]: I1206 05:51:09.402887 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a573deaf-5c61-4ca3-97a6-c29d9ea40c29-operator-scripts\") pod \"a573deaf-5c61-4ca3-97a6-c29d9ea40c29\" (UID: \"a573deaf-5c61-4ca3-97a6-c29d9ea40c29\") "
Dec 06 05:51:09 crc kubenswrapper[4958]: I1206 05:51:09.403322 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88549045-eefd-497a-b779-8689cec8daa9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "88549045-eefd-497a-b779-8689cec8daa9" (UID: "88549045-eefd-497a-b779-8689cec8daa9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 05:51:09 crc kubenswrapper[4958]: I1206 05:51:09.403494 4958 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88549045-eefd-497a-b779-8689cec8daa9-operator-scripts\") on node \"crc\" DevicePath \"\""
Dec 06 05:51:09 crc kubenswrapper[4958]: I1206 05:51:09.403507 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/674bc2ce-08e0-49b2-850c-2a00e8e38faa-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "674bc2ce-08e0-49b2-850c-2a00e8e38faa" (UID: "674bc2ce-08e0-49b2-850c-2a00e8e38faa"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 05:51:09 crc kubenswrapper[4958]: I1206 05:51:09.403753 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a573deaf-5c61-4ca3-97a6-c29d9ea40c29-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a573deaf-5c61-4ca3-97a6-c29d9ea40c29" (UID: "a573deaf-5c61-4ca3-97a6-c29d9ea40c29"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 05:51:09 crc kubenswrapper[4958]: I1206 05:51:09.409520 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/674bc2ce-08e0-49b2-850c-2a00e8e38faa-kube-api-access-nv6p2" (OuterVolumeSpecName: "kube-api-access-nv6p2") pod "674bc2ce-08e0-49b2-850c-2a00e8e38faa" (UID: "674bc2ce-08e0-49b2-850c-2a00e8e38faa"). InnerVolumeSpecName "kube-api-access-nv6p2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 05:51:09 crc kubenswrapper[4958]: I1206 05:51:09.409618 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88549045-eefd-497a-b779-8689cec8daa9-kube-api-access-tnsvl" (OuterVolumeSpecName: "kube-api-access-tnsvl") pod "88549045-eefd-497a-b779-8689cec8daa9" (UID: "88549045-eefd-497a-b779-8689cec8daa9"). InnerVolumeSpecName "kube-api-access-tnsvl". PluginName "kubernetes.io/projected", VolumeGidValue ""
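[Annotation] With the create jobs finished, the same reconciler now runs the mount flow in reverse: reconciler_common.go:159 starts the unmount, operation_generator.go:803 reports TearDown, and reconciler_common.go:293 marks the volume detached. The mirror image of the earlier mount sketch, again with illustrative types only:

```go
// Toy teardown loop mirroring the unmount entries above: anything mounted
// for a pod that is no longer desired is torn down, then reported detached.
// Types and names are illustrative, not kubelet's.
package main

import "fmt"

func main() {
	mounted := []string{"operator-scripts", "kube-api-access-ptdj8"}
	desired := map[string]bool{} // the job pod is gone, so nothing is desired

	for _, v := range mounted {
		if desired[v] {
			continue
		}
		fmt.Printf("UnmountVolume started for volume %q\n", v)
		fmt.Printf("UnmountVolume.TearDown succeeded for volume %q\n", v)
		fmt.Printf("Volume detached for volume %q DevicePath %q\n", v, "")
	}
}
```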
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:51:09 crc kubenswrapper[4958]: I1206 05:51:09.505066 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ptdj8\" (UniqueName: \"kubernetes.io/projected/a573deaf-5c61-4ca3-97a6-c29d9ea40c29-kube-api-access-ptdj8\") on node \"crc\" DevicePath \"\"" Dec 06 05:51:09 crc kubenswrapper[4958]: I1206 05:51:09.505097 4958 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/674bc2ce-08e0-49b2-850c-2a00e8e38faa-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 05:51:09 crc kubenswrapper[4958]: I1206 05:51:09.505109 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tnsvl\" (UniqueName: \"kubernetes.io/projected/88549045-eefd-497a-b779-8689cec8daa9-kube-api-access-tnsvl\") on node \"crc\" DevicePath \"\"" Dec 06 05:51:09 crc kubenswrapper[4958]: I1206 05:51:09.505119 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nv6p2\" (UniqueName: \"kubernetes.io/projected/674bc2ce-08e0-49b2-850c-2a00e8e38faa-kube-api-access-nv6p2\") on node \"crc\" DevicePath \"\"" Dec 06 05:51:09 crc kubenswrapper[4958]: I1206 05:51:09.505128 4958 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a573deaf-5c61-4ca3-97a6-c29d9ea40c29-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 05:51:09 crc kubenswrapper[4958]: I1206 05:51:09.518901 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76f9c4c8bc-96l4z" event={"ID":"bd2c4c39-fb58-4561-9bc6-1a18dfb7af9c","Type":"ContainerStarted","Data":"f9efd5018e955818c5c03e105d4b290a348a0e3d5ff159604435a53cef91aab6"} Dec 06 05:51:09 crc kubenswrapper[4958]: I1206 05:51:09.519054 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-76f9c4c8bc-96l4z" Dec 06 05:51:09 crc kubenswrapper[4958]: I1206 05:51:09.521097 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-mdznl" Dec 06 05:51:09 crc kubenswrapper[4958]: I1206 05:51:09.521085 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-mdznl" event={"ID":"88549045-eefd-497a-b779-8689cec8daa9","Type":"ContainerDied","Data":"0678de909dc697a20d7e69f09b7fef1504f954b76aebc6c10c43742398ebc73a"} Dec 06 05:51:09 crc kubenswrapper[4958]: I1206 05:51:09.521229 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0678de909dc697a20d7e69f09b7fef1504f954b76aebc6c10c43742398ebc73a" Dec 06 05:51:09 crc kubenswrapper[4958]: I1206 05:51:09.526815 4958 generic.go:334] "Generic (PLEG): container finished" podID="74d63159-9580-4b70-ba89-74d4d9eeb7b8" containerID="e293fe4c558ae3d2c625d5386e44fa18ab0d16667732f5493c5d33f29013b20a" exitCode=0 Dec 06 05:51:09 crc kubenswrapper[4958]: I1206 05:51:09.526857 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-notifications-server-0" event={"ID":"74d63159-9580-4b70-ba89-74d4d9eeb7b8","Type":"ContainerDied","Data":"e293fe4c558ae3d2c625d5386e44fa18ab0d16667732f5493c5d33f29013b20a"} Dec 06 05:51:09 crc kubenswrapper[4958]: I1206 05:51:09.530511 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-65dd-account-create-update-qtz4w" event={"ID":"674bc2ce-08e0-49b2-850c-2a00e8e38faa","Type":"ContainerDied","Data":"a0b8e68164c7816d71c8be39fa06fd2d54aedd9e5737a67a3391e2bff8f8263a"} Dec 06 05:51:09 crc kubenswrapper[4958]: I1206 05:51:09.530543 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-65dd-account-create-update-qtz4w" Dec 06 05:51:09 crc kubenswrapper[4958]: I1206 05:51:09.530562 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a0b8e68164c7816d71c8be39fa06fd2d54aedd9e5737a67a3391e2bff8f8263a" Dec 06 05:51:09 crc kubenswrapper[4958]: I1206 05:51:09.532285 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-gl9kl" event={"ID":"a573deaf-5c61-4ca3-97a6-c29d9ea40c29","Type":"ContainerDied","Data":"2aa8da0efd9878b360d7c27cb0672b97b5289dd2b067720887bc7520eb4ca589"} Dec 06 05:51:09 crc kubenswrapper[4958]: I1206 05:51:09.532351 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2aa8da0efd9878b360d7c27cb0672b97b5289dd2b067720887bc7520eb4ca589" Dec 06 05:51:09 crc kubenswrapper[4958]: I1206 05:51:09.532502 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-gl9kl" Dec 06 05:51:09 crc kubenswrapper[4958]: I1206 05:51:09.546492 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-76f9c4c8bc-96l4z" podStartSLOduration=8.546460146 podStartE2EDuration="8.546460146s" podCreationTimestamp="2025-12-06 05:51:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:51:09.542245773 +0000 UTC m=+1380.076016536" watchObservedRunningTime="2025-12-06 05:51:09.546460146 +0000 UTC m=+1380.080230909" Dec 06 05:51:09 crc kubenswrapper[4958]: I1206 05:51:09.554848 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-4069-account-create-update-b92cg" Dec 06 05:51:09 crc kubenswrapper[4958]: I1206 05:51:09.561833 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-4069-account-create-update-b92cg" event={"ID":"5eddd3fa-04bf-46c5-984a-c5227d951195","Type":"ContainerDied","Data":"f4603bbde3431e59684854fc06ebc9268cf9367ab0213e2f595f1165aade452e"} Dec 06 05:51:09 crc kubenswrapper[4958]: I1206 05:51:09.561868 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f4603bbde3431e59684854fc06ebc9268cf9367ab0213e2f595f1165aade452e" Dec 06 05:51:09 crc kubenswrapper[4958]: I1206 05:51:09.561884 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Dec 06 05:51:10 crc kubenswrapper[4958]: I1206 05:51:10.565016 4958 generic.go:334] "Generic (PLEG): container finished" podID="3141e77c-a73b-400b-b607-21be8537cca4" containerID="5447173ef5fee4c34e757e67557ea57f1af67957b37f37d702d14e1373aa852f" exitCode=0 Dec 06 05:51:10 crc kubenswrapper[4958]: I1206 05:51:10.566125 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"3141e77c-a73b-400b-b607-21be8537cca4","Type":"ContainerDied","Data":"5447173ef5fee4c34e757e67557ea57f1af67957b37f37d702d14e1373aa852f"} Dec 06 05:51:10 crc kubenswrapper[4958]: I1206 05:51:10.852598 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d8c3d892-a529-436f-b8f1-3bb2a4ffbed2-etc-swift\") pod \"swift-storage-0\" (UID: \"d8c3d892-a529-436f-b8f1-3bb2a4ffbed2\") " pod="openstack/swift-storage-0" Dec 06 05:51:10 crc kubenswrapper[4958]: E1206 05:51:10.853106 4958 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 06 05:51:10 crc kubenswrapper[4958]: E1206 05:51:10.853241 4958 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 06 05:51:10 crc kubenswrapper[4958]: E1206 05:51:10.853457 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d8c3d892-a529-436f-b8f1-3bb2a4ffbed2-etc-swift podName:d8c3d892-a529-436f-b8f1-3bb2a4ffbed2 nodeName:}" failed. No retries permitted until 2025-12-06 05:51:18.853427309 +0000 UTC m=+1389.387198102 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/d8c3d892-a529-436f-b8f1-3bb2a4ffbed2-etc-swift") pod "swift-storage-0" (UID: "d8c3d892-a529-436f-b8f1-3bb2a4ffbed2") : configmap "swift-ring-files" not found Dec 06 05:51:11 crc kubenswrapper[4958]: I1206 05:51:11.575289 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"e9711939-7159-4a5a-970f-426286de1f36","Type":"ContainerStarted","Data":"4e04d1dda7193b4f1ee2aabc0bf5934ec47da52e69d17c67a8c71a39edf7e20d"} Dec 06 05:51:11 crc kubenswrapper[4958]: I1206 05:51:11.962847 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-create-nq6qf" Dec 06 05:51:11 crc kubenswrapper[4958]: I1206 05:51:11.970129 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-9aa9-account-create-update-pvnth" Dec 06 05:51:12 crc kubenswrapper[4958]: I1206 05:51:12.079960 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-crtr7\" (UniqueName: \"kubernetes.io/projected/0c3fe7c7-dcce-48c1-b71d-bc59f7fd1b12-kube-api-access-crtr7\") pod \"0c3fe7c7-dcce-48c1-b71d-bc59f7fd1b12\" (UID: \"0c3fe7c7-dcce-48c1-b71d-bc59f7fd1b12\") " Dec 06 05:51:12 crc kubenswrapper[4958]: I1206 05:51:12.080371 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c3fe7c7-dcce-48c1-b71d-bc59f7fd1b12-operator-scripts\") pod \"0c3fe7c7-dcce-48c1-b71d-bc59f7fd1b12\" (UID: \"0c3fe7c7-dcce-48c1-b71d-bc59f7fd1b12\") " Dec 06 05:51:12 crc kubenswrapper[4958]: I1206 05:51:12.080443 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ef244a26-b26b-4f8d-addc-9772f3134412-operator-scripts\") pod \"ef244a26-b26b-4f8d-addc-9772f3134412\" (UID: \"ef244a26-b26b-4f8d-addc-9772f3134412\") " Dec 06 05:51:12 crc kubenswrapper[4958]: I1206 05:51:12.080516 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-smjrc\" (UniqueName: \"kubernetes.io/projected/ef244a26-b26b-4f8d-addc-9772f3134412-kube-api-access-smjrc\") pod \"ef244a26-b26b-4f8d-addc-9772f3134412\" (UID: \"ef244a26-b26b-4f8d-addc-9772f3134412\") " Dec 06 05:51:12 crc kubenswrapper[4958]: I1206 05:51:12.081698 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c3fe7c7-dcce-48c1-b71d-bc59f7fd1b12-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0c3fe7c7-dcce-48c1-b71d-bc59f7fd1b12" (UID: "0c3fe7c7-dcce-48c1-b71d-bc59f7fd1b12"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:51:12 crc kubenswrapper[4958]: I1206 05:51:12.081761 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef244a26-b26b-4f8d-addc-9772f3134412-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ef244a26-b26b-4f8d-addc-9772f3134412" (UID: "ef244a26-b26b-4f8d-addc-9772f3134412"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:51:12 crc kubenswrapper[4958]: I1206 05:51:12.085700 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c3fe7c7-dcce-48c1-b71d-bc59f7fd1b12-kube-api-access-crtr7" (OuterVolumeSpecName: "kube-api-access-crtr7") pod "0c3fe7c7-dcce-48c1-b71d-bc59f7fd1b12" (UID: "0c3fe7c7-dcce-48c1-b71d-bc59f7fd1b12"). InnerVolumeSpecName "kube-api-access-crtr7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:51:12 crc kubenswrapper[4958]: I1206 05:51:12.086197 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef244a26-b26b-4f8d-addc-9772f3134412-kube-api-access-smjrc" (OuterVolumeSpecName: "kube-api-access-smjrc") pod "ef244a26-b26b-4f8d-addc-9772f3134412" (UID: "ef244a26-b26b-4f8d-addc-9772f3134412"). InnerVolumeSpecName "kube-api-access-smjrc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:51:12 crc kubenswrapper[4958]: I1206 05:51:12.182081 4958 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ef244a26-b26b-4f8d-addc-9772f3134412-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 05:51:12 crc kubenswrapper[4958]: I1206 05:51:12.182262 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-smjrc\" (UniqueName: \"kubernetes.io/projected/ef244a26-b26b-4f8d-addc-9772f3134412-kube-api-access-smjrc\") on node \"crc\" DevicePath \"\"" Dec 06 05:51:12 crc kubenswrapper[4958]: I1206 05:51:12.182342 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-crtr7\" (UniqueName: \"kubernetes.io/projected/0c3fe7c7-dcce-48c1-b71d-bc59f7fd1b12-kube-api-access-crtr7\") on node \"crc\" DevicePath \"\"" Dec 06 05:51:12 crc kubenswrapper[4958]: I1206 05:51:12.182433 4958 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0c3fe7c7-dcce-48c1-b71d-bc59f7fd1b12-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 05:51:12 crc kubenswrapper[4958]: I1206 05:51:12.584130 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"3141e77c-a73b-400b-b607-21be8537cca4","Type":"ContainerStarted","Data":"eed58feb9444d5d335af8a921fb42eb856339014a384f9d00fe57a71633b9423"} Dec 06 05:51:12 crc kubenswrapper[4958]: I1206 05:51:12.584384 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Dec 06 05:51:12 crc kubenswrapper[4958]: I1206 05:51:12.587064 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-9aa9-account-create-update-pvnth" event={"ID":"0c3fe7c7-dcce-48c1-b71d-bc59f7fd1b12","Type":"ContainerDied","Data":"c72e746d2460626ab95678b803ea757d811aa5a9890d25ec3648ebf63ae54657"} Dec 06 05:51:12 crc kubenswrapper[4958]: I1206 05:51:12.587097 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c72e746d2460626ab95678b803ea757d811aa5a9890d25ec3648ebf63ae54657" Dec 06 05:51:12 crc kubenswrapper[4958]: I1206 05:51:12.587069 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-9aa9-account-create-update-pvnth" Dec 06 05:51:12 crc kubenswrapper[4958]: I1206 05:51:12.588722 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-create-nq6qf" Dec 06 05:51:12 crc kubenswrapper[4958]: I1206 05:51:12.588769 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-create-nq6qf" event={"ID":"ef244a26-b26b-4f8d-addc-9772f3134412","Type":"ContainerDied","Data":"d90300fd805bb3c52fea69cf37108b1b2fb2aa186efc36da312446ce60530207"} Dec 06 05:51:12 crc kubenswrapper[4958]: I1206 05:51:12.588808 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d90300fd805bb3c52fea69cf37108b1b2fb2aa186efc36da312446ce60530207" Dec 06 05:51:12 crc kubenswrapper[4958]: I1206 05:51:12.590781 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-9g29t" event={"ID":"31905f86-2a88-4c2e-bf22-a629710e3f6b","Type":"ContainerStarted","Data":"04373fa5fa1af5200100ae90889afa005d85e798ee0b92c65688aa2496ed8665"} Dec 06 05:51:12 crc kubenswrapper[4958]: I1206 05:51:12.592875 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-notifications-server-0" event={"ID":"74d63159-9580-4b70-ba89-74d4d9eeb7b8","Type":"ContainerStarted","Data":"14e60d75df8a4a4ae96bfac830b86fc44a2f71e6e68c463daf62130f7727565d"} Dec 06 05:51:12 crc kubenswrapper[4958]: I1206 05:51:12.593059 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-notifications-server-0" Dec 06 05:51:12 crc kubenswrapper[4958]: I1206 05:51:12.595759 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"e9711939-7159-4a5a-970f-426286de1f36","Type":"ContainerStarted","Data":"39bacaa0a59e9ab7af6b7d36f6a73a2b4b83a49e2d4d32eea14fbe77753677cd"} Dec 06 05:51:12 crc kubenswrapper[4958]: I1206 05:51:12.629840 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=39.236154077 podStartE2EDuration="1m48.6298196s" podCreationTimestamp="2025-12-06 05:49:24 +0000 UTC" firstStartedPulling="2025-12-06 05:49:26.236854609 +0000 UTC m=+1276.770625372" lastFinishedPulling="2025-12-06 05:50:35.630520132 +0000 UTC m=+1346.164290895" observedRunningTime="2025-12-06 05:51:12.622971356 +0000 UTC m=+1383.156742109" watchObservedRunningTime="2025-12-06 05:51:12.6298196 +0000 UTC m=+1383.163590373" Dec 06 05:51:12 crc kubenswrapper[4958]: I1206 05:51:12.644254 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-9g29t" podStartSLOduration=5.275839951 podStartE2EDuration="9.644237067s" podCreationTimestamp="2025-12-06 05:51:03 +0000 UTC" firstStartedPulling="2025-12-06 05:51:07.654095872 +0000 UTC m=+1378.187866625" lastFinishedPulling="2025-12-06 05:51:12.022492978 +0000 UTC m=+1382.556263741" observedRunningTime="2025-12-06 05:51:12.643433375 +0000 UTC m=+1383.177204148" watchObservedRunningTime="2025-12-06 05:51:12.644237067 +0000 UTC m=+1383.178007830" Dec 06 05:51:12 crc kubenswrapper[4958]: I1206 05:51:12.723907 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-notifications-server-0" podStartSLOduration=39.770735967 podStartE2EDuration="1m48.723883094s" podCreationTimestamp="2025-12-06 05:49:24 +0000 UTC" firstStartedPulling="2025-12-06 05:49:26.68984278 +0000 UTC m=+1277.223613543" lastFinishedPulling="2025-12-06 05:50:35.642989907 +0000 UTC m=+1346.176760670" observedRunningTime="2025-12-06 05:51:12.684571099 +0000 UTC m=+1383.218341852" watchObservedRunningTime="2025-12-06 
05:51:12.723883094 +0000 UTC m=+1383.257653857" Dec 06 05:51:12 crc kubenswrapper[4958]: I1206 05:51:12.724720 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=20.724711927 podStartE2EDuration="20.724711927s" podCreationTimestamp="2025-12-06 05:50:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:51:12.712030317 +0000 UTC m=+1383.245801080" watchObservedRunningTime="2025-12-06 05:51:12.724711927 +0000 UTC m=+1383.258482690" Dec 06 05:51:12 crc kubenswrapper[4958]: I1206 05:51:12.750706 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Dec 06 05:51:16 crc kubenswrapper[4958]: I1206 05:51:16.627170 4958 generic.go:334] "Generic (PLEG): container finished" podID="4bc06a17-7bdb-4ee9-bad3-7996be041e54" containerID="76adae9e6bc33af24601db302a5fbb54e7eca5cc92840c07b0d0761eed07ff77" exitCode=0 Dec 06 05:51:16 crc kubenswrapper[4958]: I1206 05:51:16.627308 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"4bc06a17-7bdb-4ee9-bad3-7996be041e54","Type":"ContainerDied","Data":"76adae9e6bc33af24601db302a5fbb54e7eca5cc92840c07b0d0761eed07ff77"} Dec 06 05:51:16 crc kubenswrapper[4958]: I1206 05:51:16.944228 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-glklh" Dec 06 05:51:16 crc kubenswrapper[4958]: I1206 05:51:16.947046 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-glklh" Dec 06 05:51:17 crc kubenswrapper[4958]: I1206 05:51:17.181441 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-rsngm-config-j662d"] Dec 06 05:51:17 crc kubenswrapper[4958]: E1206 05:51:17.182078 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5eddd3fa-04bf-46c5-984a-c5227d951195" containerName="mariadb-account-create-update" Dec 06 05:51:17 crc kubenswrapper[4958]: I1206 05:51:17.182176 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="5eddd3fa-04bf-46c5-984a-c5227d951195" containerName="mariadb-account-create-update" Dec 06 05:51:17 crc kubenswrapper[4958]: E1206 05:51:17.182258 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c3fe7c7-dcce-48c1-b71d-bc59f7fd1b12" containerName="mariadb-account-create-update" Dec 06 05:51:17 crc kubenswrapper[4958]: I1206 05:51:17.182332 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c3fe7c7-dcce-48c1-b71d-bc59f7fd1b12" containerName="mariadb-account-create-update" Dec 06 05:51:17 crc kubenswrapper[4958]: E1206 05:51:17.182412 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88549045-eefd-497a-b779-8689cec8daa9" containerName="mariadb-database-create" Dec 06 05:51:17 crc kubenswrapper[4958]: I1206 05:51:17.182504 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="88549045-eefd-497a-b779-8689cec8daa9" containerName="mariadb-database-create" Dec 06 05:51:17 crc kubenswrapper[4958]: E1206 05:51:17.182606 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef244a26-b26b-4f8d-addc-9772f3134412" containerName="mariadb-database-create" Dec 06 05:51:17 crc kubenswrapper[4958]: I1206 05:51:17.182672 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef244a26-b26b-4f8d-addc-9772f3134412" containerName="mariadb-database-create" Dec 
06 05:51:17 crc kubenswrapper[4958]: E1206 05:51:17.182764 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="674bc2ce-08e0-49b2-850c-2a00e8e38faa" containerName="mariadb-account-create-update" Dec 06 05:51:17 crc kubenswrapper[4958]: I1206 05:51:17.182823 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="674bc2ce-08e0-49b2-850c-2a00e8e38faa" containerName="mariadb-account-create-update" Dec 06 05:51:17 crc kubenswrapper[4958]: E1206 05:51:17.182897 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a573deaf-5c61-4ca3-97a6-c29d9ea40c29" containerName="mariadb-database-create" Dec 06 05:51:17 crc kubenswrapper[4958]: I1206 05:51:17.182961 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="a573deaf-5c61-4ca3-97a6-c29d9ea40c29" containerName="mariadb-database-create" Dec 06 05:51:17 crc kubenswrapper[4958]: I1206 05:51:17.183301 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="5eddd3fa-04bf-46c5-984a-c5227d951195" containerName="mariadb-account-create-update" Dec 06 05:51:17 crc kubenswrapper[4958]: I1206 05:51:17.183391 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="88549045-eefd-497a-b779-8689cec8daa9" containerName="mariadb-database-create" Dec 06 05:51:17 crc kubenswrapper[4958]: I1206 05:51:17.183487 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef244a26-b26b-4f8d-addc-9772f3134412" containerName="mariadb-database-create" Dec 06 05:51:17 crc kubenswrapper[4958]: I1206 05:51:17.183572 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c3fe7c7-dcce-48c1-b71d-bc59f7fd1b12" containerName="mariadb-account-create-update" Dec 06 05:51:17 crc kubenswrapper[4958]: I1206 05:51:17.183648 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="674bc2ce-08e0-49b2-850c-2a00e8e38faa" containerName="mariadb-account-create-update" Dec 06 05:51:17 crc kubenswrapper[4958]: I1206 05:51:17.183718 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="a573deaf-5c61-4ca3-97a6-c29d9ea40c29" containerName="mariadb-database-create" Dec 06 05:51:17 crc kubenswrapper[4958]: I1206 05:51:17.184595 4958 util.go:30] "No sandbox for pod can be found. 
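The pod_startup_latency_tracker records above can be cross-checked against each other: podStartSLOduration is the end-to-end startup time minus the image-pull window, a relationship inferred here from the numbers themselves rather than taken from kubelet source. A sketch reproducing the rabbitmq-cell1-server-0 figures (timestamps copied from that record):

package main

import (
	"fmt"
	"time"
)

func main() {
	parse := func(s string) time.Time {
		t, err := time.Parse(time.RFC3339Nano, s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2025-12-06T05:49:24Z")               // podCreationTimestamp
	observed := parse("2025-12-06T05:51:12.629819600Z")    // watchObservedRunningTime
	firstPull := parse("2025-12-06T05:49:26.236854609Z")   // firstStartedPulling
	lastPull := parse("2025-12-06T05:50:35.630520132Z")    // lastFinishedPulling

	e2e := observed.Sub(created)         // podStartE2EDuration: 1m48.6298196s
	slo := e2e - lastPull.Sub(firstPull) // podStartSLOduration: 39.236154077s
	fmt.Println(e2e, slo)
}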
Need to start a new one" pod="openstack/ovn-controller-rsngm-config-j662d" Dec 06 05:51:17 crc kubenswrapper[4958]: I1206 05:51:17.187979 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Dec 06 05:51:17 crc kubenswrapper[4958]: I1206 05:51:17.193459 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-rsngm-config-j662d"] Dec 06 05:51:17 crc kubenswrapper[4958]: I1206 05:51:17.263867 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-76f9c4c8bc-96l4z" Dec 06 05:51:17 crc kubenswrapper[4958]: I1206 05:51:17.269451 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/56c9430f-4714-4505-aab1-0b650c1cfe40-var-run-ovn\") pod \"ovn-controller-rsngm-config-j662d\" (UID: \"56c9430f-4714-4505-aab1-0b650c1cfe40\") " pod="openstack/ovn-controller-rsngm-config-j662d" Dec 06 05:51:17 crc kubenswrapper[4958]: I1206 05:51:17.269508 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/56c9430f-4714-4505-aab1-0b650c1cfe40-additional-scripts\") pod \"ovn-controller-rsngm-config-j662d\" (UID: \"56c9430f-4714-4505-aab1-0b650c1cfe40\") " pod="openstack/ovn-controller-rsngm-config-j662d" Dec 06 05:51:17 crc kubenswrapper[4958]: I1206 05:51:17.269661 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/56c9430f-4714-4505-aab1-0b650c1cfe40-var-run\") pod \"ovn-controller-rsngm-config-j662d\" (UID: \"56c9430f-4714-4505-aab1-0b650c1cfe40\") " pod="openstack/ovn-controller-rsngm-config-j662d" Dec 06 05:51:17 crc kubenswrapper[4958]: I1206 05:51:17.269851 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdkz9\" (UniqueName: \"kubernetes.io/projected/56c9430f-4714-4505-aab1-0b650c1cfe40-kube-api-access-bdkz9\") pod \"ovn-controller-rsngm-config-j662d\" (UID: \"56c9430f-4714-4505-aab1-0b650c1cfe40\") " pod="openstack/ovn-controller-rsngm-config-j662d" Dec 06 05:51:17 crc kubenswrapper[4958]: I1206 05:51:17.269904 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/56c9430f-4714-4505-aab1-0b650c1cfe40-scripts\") pod \"ovn-controller-rsngm-config-j662d\" (UID: \"56c9430f-4714-4505-aab1-0b650c1cfe40\") " pod="openstack/ovn-controller-rsngm-config-j662d" Dec 06 05:51:17 crc kubenswrapper[4958]: I1206 05:51:17.270113 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/56c9430f-4714-4505-aab1-0b650c1cfe40-var-log-ovn\") pod \"ovn-controller-rsngm-config-j662d\" (UID: \"56c9430f-4714-4505-aab1-0b650c1cfe40\") " pod="openstack/ovn-controller-rsngm-config-j662d" Dec 06 05:51:17 crc kubenswrapper[4958]: I1206 05:51:17.360780 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6dbf544cc9-q6cs8"] Dec 06 05:51:17 crc kubenswrapper[4958]: I1206 05:51:17.361054 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6dbf544cc9-q6cs8" podUID="50541cab-c7c9-4995-af23-be9f98383190" containerName="dnsmasq-dns" 
containerID="cri-o://8953855222bc97c678673cf2311b4fe6056f31482a095b9c45773cf3d5019b74" gracePeriod=10 Dec 06 05:51:17 crc kubenswrapper[4958]: I1206 05:51:17.373530 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdkz9\" (UniqueName: \"kubernetes.io/projected/56c9430f-4714-4505-aab1-0b650c1cfe40-kube-api-access-bdkz9\") pod \"ovn-controller-rsngm-config-j662d\" (UID: \"56c9430f-4714-4505-aab1-0b650c1cfe40\") " pod="openstack/ovn-controller-rsngm-config-j662d" Dec 06 05:51:17 crc kubenswrapper[4958]: I1206 05:51:17.373603 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/56c9430f-4714-4505-aab1-0b650c1cfe40-scripts\") pod \"ovn-controller-rsngm-config-j662d\" (UID: \"56c9430f-4714-4505-aab1-0b650c1cfe40\") " pod="openstack/ovn-controller-rsngm-config-j662d" Dec 06 05:51:17 crc kubenswrapper[4958]: I1206 05:51:17.373733 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/56c9430f-4714-4505-aab1-0b650c1cfe40-var-log-ovn\") pod \"ovn-controller-rsngm-config-j662d\" (UID: \"56c9430f-4714-4505-aab1-0b650c1cfe40\") " pod="openstack/ovn-controller-rsngm-config-j662d" Dec 06 05:51:17 crc kubenswrapper[4958]: I1206 05:51:17.373831 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/56c9430f-4714-4505-aab1-0b650c1cfe40-var-run-ovn\") pod \"ovn-controller-rsngm-config-j662d\" (UID: \"56c9430f-4714-4505-aab1-0b650c1cfe40\") " pod="openstack/ovn-controller-rsngm-config-j662d" Dec 06 05:51:17 crc kubenswrapper[4958]: I1206 05:51:17.373874 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/56c9430f-4714-4505-aab1-0b650c1cfe40-additional-scripts\") pod \"ovn-controller-rsngm-config-j662d\" (UID: \"56c9430f-4714-4505-aab1-0b650c1cfe40\") " pod="openstack/ovn-controller-rsngm-config-j662d" Dec 06 05:51:17 crc kubenswrapper[4958]: I1206 05:51:17.373900 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/56c9430f-4714-4505-aab1-0b650c1cfe40-var-run\") pod \"ovn-controller-rsngm-config-j662d\" (UID: \"56c9430f-4714-4505-aab1-0b650c1cfe40\") " pod="openstack/ovn-controller-rsngm-config-j662d" Dec 06 05:51:17 crc kubenswrapper[4958]: I1206 05:51:17.374288 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/56c9430f-4714-4505-aab1-0b650c1cfe40-var-run\") pod \"ovn-controller-rsngm-config-j662d\" (UID: \"56c9430f-4714-4505-aab1-0b650c1cfe40\") " pod="openstack/ovn-controller-rsngm-config-j662d" Dec 06 05:51:17 crc kubenswrapper[4958]: I1206 05:51:17.374366 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/56c9430f-4714-4505-aab1-0b650c1cfe40-var-run-ovn\") pod \"ovn-controller-rsngm-config-j662d\" (UID: \"56c9430f-4714-4505-aab1-0b650c1cfe40\") " pod="openstack/ovn-controller-rsngm-config-j662d" Dec 06 05:51:17 crc kubenswrapper[4958]: I1206 05:51:17.374618 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/56c9430f-4714-4505-aab1-0b650c1cfe40-var-log-ovn\") pod \"ovn-controller-rsngm-config-j662d\" (UID: 
\"56c9430f-4714-4505-aab1-0b650c1cfe40\") " pod="openstack/ovn-controller-rsngm-config-j662d" Dec 06 05:51:17 crc kubenswrapper[4958]: I1206 05:51:17.375223 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/56c9430f-4714-4505-aab1-0b650c1cfe40-additional-scripts\") pod \"ovn-controller-rsngm-config-j662d\" (UID: \"56c9430f-4714-4505-aab1-0b650c1cfe40\") " pod="openstack/ovn-controller-rsngm-config-j662d" Dec 06 05:51:17 crc kubenswrapper[4958]: I1206 05:51:17.377288 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/56c9430f-4714-4505-aab1-0b650c1cfe40-scripts\") pod \"ovn-controller-rsngm-config-j662d\" (UID: \"56c9430f-4714-4505-aab1-0b650c1cfe40\") " pod="openstack/ovn-controller-rsngm-config-j662d" Dec 06 05:51:17 crc kubenswrapper[4958]: I1206 05:51:17.397156 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdkz9\" (UniqueName: \"kubernetes.io/projected/56c9430f-4714-4505-aab1-0b650c1cfe40-kube-api-access-bdkz9\") pod \"ovn-controller-rsngm-config-j662d\" (UID: \"56c9430f-4714-4505-aab1-0b650c1cfe40\") " pod="openstack/ovn-controller-rsngm-config-j662d" Dec 06 05:51:17 crc kubenswrapper[4958]: I1206 05:51:17.504922 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-rsngm-config-j662d" Dec 06 05:51:17 crc kubenswrapper[4958]: I1206 05:51:17.651319 4958 generic.go:334] "Generic (PLEG): container finished" podID="50541cab-c7c9-4995-af23-be9f98383190" containerID="8953855222bc97c678673cf2311b4fe6056f31482a095b9c45773cf3d5019b74" exitCode=0 Dec 06 05:51:17 crc kubenswrapper[4958]: I1206 05:51:17.651416 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6dbf544cc9-q6cs8" event={"ID":"50541cab-c7c9-4995-af23-be9f98383190","Type":"ContainerDied","Data":"8953855222bc97c678673cf2311b4fe6056f31482a095b9c45773cf3d5019b74"} Dec 06 05:51:17 crc kubenswrapper[4958]: I1206 05:51:17.654235 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"4bc06a17-7bdb-4ee9-bad3-7996be041e54","Type":"ContainerStarted","Data":"02d3bd14fd454c12548fe7f6dc98e73b5e98f6c9092dc7e0a40fa03541503e39"} Dec 06 05:51:17 crc kubenswrapper[4958]: I1206 05:51:17.654930 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Dec 06 05:51:17 crc kubenswrapper[4958]: I1206 05:51:17.680580 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=-9223371923.174217 podStartE2EDuration="1m53.680558342s" podCreationTimestamp="2025-12-06 05:49:24 +0000 UTC" firstStartedPulling="2025-12-06 05:49:26.966641026 +0000 UTC m=+1277.500411789" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:51:17.678688181 +0000 UTC m=+1388.212458934" watchObservedRunningTime="2025-12-06 05:51:17.680558342 +0000 UTC m=+1388.214329105" Dec 06 05:51:18 crc kubenswrapper[4958]: I1206 05:51:18.054620 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6dbf544cc9-q6cs8" Dec 06 05:51:18 crc kubenswrapper[4958]: I1206 05:51:18.087109 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50541cab-c7c9-4995-af23-be9f98383190-config\") pod \"50541cab-c7c9-4995-af23-be9f98383190\" (UID: \"50541cab-c7c9-4995-af23-be9f98383190\") " Dec 06 05:51:18 crc kubenswrapper[4958]: I1206 05:51:18.087183 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64l8j\" (UniqueName: \"kubernetes.io/projected/50541cab-c7c9-4995-af23-be9f98383190-kube-api-access-64l8j\") pod \"50541cab-c7c9-4995-af23-be9f98383190\" (UID: \"50541cab-c7c9-4995-af23-be9f98383190\") " Dec 06 05:51:18 crc kubenswrapper[4958]: I1206 05:51:18.087211 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/50541cab-c7c9-4995-af23-be9f98383190-ovsdbserver-sb\") pod \"50541cab-c7c9-4995-af23-be9f98383190\" (UID: \"50541cab-c7c9-4995-af23-be9f98383190\") " Dec 06 05:51:18 crc kubenswrapper[4958]: I1206 05:51:18.087280 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/50541cab-c7c9-4995-af23-be9f98383190-dns-svc\") pod \"50541cab-c7c9-4995-af23-be9f98383190\" (UID: \"50541cab-c7c9-4995-af23-be9f98383190\") " Dec 06 05:51:18 crc kubenswrapper[4958]: I1206 05:51:18.087421 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/50541cab-c7c9-4995-af23-be9f98383190-ovsdbserver-nb\") pod \"50541cab-c7c9-4995-af23-be9f98383190\" (UID: \"50541cab-c7c9-4995-af23-be9f98383190\") " Dec 06 05:51:18 crc kubenswrapper[4958]: I1206 05:51:18.096414 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50541cab-c7c9-4995-af23-be9f98383190-kube-api-access-64l8j" (OuterVolumeSpecName: "kube-api-access-64l8j") pod "50541cab-c7c9-4995-af23-be9f98383190" (UID: "50541cab-c7c9-4995-af23-be9f98383190"). InnerVolumeSpecName "kube-api-access-64l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:51:18 crc kubenswrapper[4958]: I1206 05:51:18.151600 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-rsngm-config-j662d"] Dec 06 05:51:18 crc kubenswrapper[4958]: I1206 05:51:18.158687 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50541cab-c7c9-4995-af23-be9f98383190-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "50541cab-c7c9-4995-af23-be9f98383190" (UID: "50541cab-c7c9-4995-af23-be9f98383190"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:51:18 crc kubenswrapper[4958]: I1206 05:51:18.164947 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50541cab-c7c9-4995-af23-be9f98383190-config" (OuterVolumeSpecName: "config") pod "50541cab-c7c9-4995-af23-be9f98383190" (UID: "50541cab-c7c9-4995-af23-be9f98383190"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:51:18 crc kubenswrapper[4958]: I1206 05:51:18.189749 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50541cab-c7c9-4995-af23-be9f98383190-config\") on node \"crc\" DevicePath \"\"" Dec 06 05:51:18 crc kubenswrapper[4958]: I1206 05:51:18.189787 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-64l8j\" (UniqueName: \"kubernetes.io/projected/50541cab-c7c9-4995-af23-be9f98383190-kube-api-access-64l8j\") on node \"crc\" DevicePath \"\"" Dec 06 05:51:18 crc kubenswrapper[4958]: I1206 05:51:18.189800 4958 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/50541cab-c7c9-4995-af23-be9f98383190-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 06 05:51:18 crc kubenswrapper[4958]: I1206 05:51:18.198103 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50541cab-c7c9-4995-af23-be9f98383190-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "50541cab-c7c9-4995-af23-be9f98383190" (UID: "50541cab-c7c9-4995-af23-be9f98383190"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:51:18 crc kubenswrapper[4958]: I1206 05:51:18.213374 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50541cab-c7c9-4995-af23-be9f98383190-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "50541cab-c7c9-4995-af23-be9f98383190" (UID: "50541cab-c7c9-4995-af23-be9f98383190"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:51:18 crc kubenswrapper[4958]: I1206 05:51:18.291041 4958 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/50541cab-c7c9-4995-af23-be9f98383190-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 06 05:51:18 crc kubenswrapper[4958]: I1206 05:51:18.291364 4958 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/50541cab-c7c9-4995-af23-be9f98383190-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 06 05:51:18 crc kubenswrapper[4958]: I1206 05:51:18.663625 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-rsngm-config-j662d" event={"ID":"56c9430f-4714-4505-aab1-0b650c1cfe40","Type":"ContainerStarted","Data":"f3f38f3120442d10c02b6ab44ee91bceac0d9d018ef6677cf0840a7ec37954dd"} Dec 06 05:51:18 crc kubenswrapper[4958]: I1206 05:51:18.663676 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-rsngm-config-j662d" event={"ID":"56c9430f-4714-4505-aab1-0b650c1cfe40","Type":"ContainerStarted","Data":"e0b77cbfede8fa73cc67b8d2835cf06961c04d3e0af2c237150ba0f8e980a0fc"} Dec 06 05:51:18 crc kubenswrapper[4958]: I1206 05:51:18.668617 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6dbf544cc9-q6cs8" Dec 06 05:51:18 crc kubenswrapper[4958]: I1206 05:51:18.668707 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6dbf544cc9-q6cs8" event={"ID":"50541cab-c7c9-4995-af23-be9f98383190","Type":"ContainerDied","Data":"96cf3b9c9edfa9682c265d79c169772561fe997419cca3e5e44098ce134756c3"} Dec 06 05:51:18 crc kubenswrapper[4958]: I1206 05:51:18.668785 4958 scope.go:117] "RemoveContainer" containerID="8953855222bc97c678673cf2311b4fe6056f31482a095b9c45773cf3d5019b74" Dec 06 05:51:18 crc kubenswrapper[4958]: I1206 05:51:18.693840 4958 scope.go:117] "RemoveContainer" containerID="b6d8411f568e9eb9be7d38836a6e46a8c3da66c43bab607ecc91490a2fdd5730" Dec 06 05:51:18 crc kubenswrapper[4958]: I1206 05:51:18.695179 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-rsngm-config-j662d" podStartSLOduration=1.695162865 podStartE2EDuration="1.695162865s" podCreationTimestamp="2025-12-06 05:51:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:51:18.690774237 +0000 UTC m=+1389.224545000" watchObservedRunningTime="2025-12-06 05:51:18.695162865 +0000 UTC m=+1389.228933638" Dec 06 05:51:18 crc kubenswrapper[4958]: I1206 05:51:18.736955 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6dbf544cc9-q6cs8"] Dec 06 05:51:18 crc kubenswrapper[4958]: I1206 05:51:18.752713 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6dbf544cc9-q6cs8"] Dec 06 05:51:18 crc kubenswrapper[4958]: I1206 05:51:18.901280 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d8c3d892-a529-436f-b8f1-3bb2a4ffbed2-etc-swift\") pod \"swift-storage-0\" (UID: \"d8c3d892-a529-436f-b8f1-3bb2a4ffbed2\") " pod="openstack/swift-storage-0" Dec 06 05:51:18 crc kubenswrapper[4958]: E1206 05:51:18.901442 4958 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Dec 06 05:51:18 crc kubenswrapper[4958]: E1206 05:51:18.901456 4958 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Dec 06 05:51:18 crc kubenswrapper[4958]: E1206 05:51:18.901512 4958 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d8c3d892-a529-436f-b8f1-3bb2a4ffbed2-etc-swift podName:d8c3d892-a529-436f-b8f1-3bb2a4ffbed2 nodeName:}" failed. No retries permitted until 2025-12-06 05:51:34.901497184 +0000 UTC m=+1405.435267947 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/d8c3d892-a529-436f-b8f1-3bb2a4ffbed2-etc-swift") pod "swift-storage-0" (UID: "d8c3d892-a529-436f-b8f1-3bb2a4ffbed2") : configmap "swift-ring-files" not found Dec 06 05:51:19 crc kubenswrapper[4958]: I1206 05:51:19.677186 4958 generic.go:334] "Generic (PLEG): container finished" podID="56c9430f-4714-4505-aab1-0b650c1cfe40" containerID="f3f38f3120442d10c02b6ab44ee91bceac0d9d018ef6677cf0840a7ec37954dd" exitCode=0 Dec 06 05:51:19 crc kubenswrapper[4958]: I1206 05:51:19.677595 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-rsngm-config-j662d" event={"ID":"56c9430f-4714-4505-aab1-0b650c1cfe40","Type":"ContainerDied","Data":"f3f38f3120442d10c02b6ab44ee91bceac0d9d018ef6677cf0840a7ec37954dd"} Dec 06 05:51:19 crc kubenswrapper[4958]: I1206 05:51:19.772382 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50541cab-c7c9-4995-af23-be9f98383190" path="/var/lib/kubelet/pods/50541cab-c7c9-4995-af23-be9f98383190/volumes" Dec 06 05:51:21 crc kubenswrapper[4958]: I1206 05:51:21.020891 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-rsngm-config-j662d" Dec 06 05:51:21 crc kubenswrapper[4958]: I1206 05:51:21.035790 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/56c9430f-4714-4505-aab1-0b650c1cfe40-var-log-ovn\") pod \"56c9430f-4714-4505-aab1-0b650c1cfe40\" (UID: \"56c9430f-4714-4505-aab1-0b650c1cfe40\") " Dec 06 05:51:21 crc kubenswrapper[4958]: I1206 05:51:21.035881 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/56c9430f-4714-4505-aab1-0b650c1cfe40-additional-scripts\") pod \"56c9430f-4714-4505-aab1-0b650c1cfe40\" (UID: \"56c9430f-4714-4505-aab1-0b650c1cfe40\") " Dec 06 05:51:21 crc kubenswrapper[4958]: I1206 05:51:21.035896 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56c9430f-4714-4505-aab1-0b650c1cfe40-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "56c9430f-4714-4505-aab1-0b650c1cfe40" (UID: "56c9430f-4714-4505-aab1-0b650c1cfe40"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 05:51:21 crc kubenswrapper[4958]: I1206 05:51:21.035982 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/56c9430f-4714-4505-aab1-0b650c1cfe40-scripts\") pod \"56c9430f-4714-4505-aab1-0b650c1cfe40\" (UID: \"56c9430f-4714-4505-aab1-0b650c1cfe40\") " Dec 06 05:51:21 crc kubenswrapper[4958]: I1206 05:51:21.036123 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bdkz9\" (UniqueName: \"kubernetes.io/projected/56c9430f-4714-4505-aab1-0b650c1cfe40-kube-api-access-bdkz9\") pod \"56c9430f-4714-4505-aab1-0b650c1cfe40\" (UID: \"56c9430f-4714-4505-aab1-0b650c1cfe40\") " Dec 06 05:51:21 crc kubenswrapper[4958]: I1206 05:51:21.036158 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/56c9430f-4714-4505-aab1-0b650c1cfe40-var-run\") pod \"56c9430f-4714-4505-aab1-0b650c1cfe40\" (UID: \"56c9430f-4714-4505-aab1-0b650c1cfe40\") " Dec 06 05:51:21 crc kubenswrapper[4958]: I1206 05:51:21.036201 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/56c9430f-4714-4505-aab1-0b650c1cfe40-var-run-ovn\") pod \"56c9430f-4714-4505-aab1-0b650c1cfe40\" (UID: \"56c9430f-4714-4505-aab1-0b650c1cfe40\") " Dec 06 05:51:21 crc kubenswrapper[4958]: I1206 05:51:21.036658 4958 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/56c9430f-4714-4505-aab1-0b650c1cfe40-var-log-ovn\") on node \"crc\" DevicePath \"\"" Dec 06 05:51:21 crc kubenswrapper[4958]: I1206 05:51:21.036696 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56c9430f-4714-4505-aab1-0b650c1cfe40-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "56c9430f-4714-4505-aab1-0b650c1cfe40" (UID: "56c9430f-4714-4505-aab1-0b650c1cfe40"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 05:51:21 crc kubenswrapper[4958]: I1206 05:51:21.036792 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56c9430f-4714-4505-aab1-0b650c1cfe40-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "56c9430f-4714-4505-aab1-0b650c1cfe40" (UID: "56c9430f-4714-4505-aab1-0b650c1cfe40"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:51:21 crc kubenswrapper[4958]: I1206 05:51:21.037558 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56c9430f-4714-4505-aab1-0b650c1cfe40-scripts" (OuterVolumeSpecName: "scripts") pod "56c9430f-4714-4505-aab1-0b650c1cfe40" (UID: "56c9430f-4714-4505-aab1-0b650c1cfe40"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:51:21 crc kubenswrapper[4958]: I1206 05:51:21.037701 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56c9430f-4714-4505-aab1-0b650c1cfe40-var-run" (OuterVolumeSpecName: "var-run") pod "56c9430f-4714-4505-aab1-0b650c1cfe40" (UID: "56c9430f-4714-4505-aab1-0b650c1cfe40"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 05:51:21 crc kubenswrapper[4958]: I1206 05:51:21.042215 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56c9430f-4714-4505-aab1-0b650c1cfe40-kube-api-access-bdkz9" (OuterVolumeSpecName: "kube-api-access-bdkz9") pod "56c9430f-4714-4505-aab1-0b650c1cfe40" (UID: "56c9430f-4714-4505-aab1-0b650c1cfe40"). InnerVolumeSpecName "kube-api-access-bdkz9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:51:21 crc kubenswrapper[4958]: I1206 05:51:21.138297 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bdkz9\" (UniqueName: \"kubernetes.io/projected/56c9430f-4714-4505-aab1-0b650c1cfe40-kube-api-access-bdkz9\") on node \"crc\" DevicePath \"\"" Dec 06 05:51:21 crc kubenswrapper[4958]: I1206 05:51:21.138334 4958 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/56c9430f-4714-4505-aab1-0b650c1cfe40-var-run\") on node \"crc\" DevicePath \"\"" Dec 06 05:51:21 crc kubenswrapper[4958]: I1206 05:51:21.138346 4958 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/56c9430f-4714-4505-aab1-0b650c1cfe40-var-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 06 05:51:21 crc kubenswrapper[4958]: I1206 05:51:21.138358 4958 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/56c9430f-4714-4505-aab1-0b650c1cfe40-additional-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 05:51:21 crc kubenswrapper[4958]: I1206 05:51:21.138372 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/56c9430f-4714-4505-aab1-0b650c1cfe40-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 05:51:21 crc kubenswrapper[4958]: I1206 05:51:21.570536 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Dec 06 05:51:21 crc kubenswrapper[4958]: I1206 05:51:21.699297 4958 generic.go:334] "Generic (PLEG): container finished" podID="31905f86-2a88-4c2e-bf22-a629710e3f6b" containerID="04373fa5fa1af5200100ae90889afa005d85e798ee0b92c65688aa2496ed8665" exitCode=0 Dec 06 05:51:21 crc kubenswrapper[4958]: I1206 05:51:21.699386 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-9g29t" event={"ID":"31905f86-2a88-4c2e-bf22-a629710e3f6b","Type":"ContainerDied","Data":"04373fa5fa1af5200100ae90889afa005d85e798ee0b92c65688aa2496ed8665"} Dec 06 05:51:21 crc kubenswrapper[4958]: I1206 05:51:21.705921 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-rsngm-config-j662d" event={"ID":"56c9430f-4714-4505-aab1-0b650c1cfe40","Type":"ContainerDied","Data":"e0b77cbfede8fa73cc67b8d2835cf06961c04d3e0af2c237150ba0f8e980a0fc"} Dec 06 05:51:21 crc kubenswrapper[4958]: I1206 05:51:21.705962 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0b77cbfede8fa73cc67b8d2835cf06961c04d3e0af2c237150ba0f8e980a0fc" Dec 06 05:51:21 crc kubenswrapper[4958]: I1206 05:51:21.705989 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-rsngm-config-j662d" Dec 06 05:51:21 crc kubenswrapper[4958]: I1206 05:51:21.821039 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-rsngm-config-j662d"] Dec 06 05:51:21 crc kubenswrapper[4958]: I1206 05:51:21.830266 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-rsngm-config-j662d"] Dec 06 05:51:21 crc kubenswrapper[4958]: I1206 05:51:21.882443 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-rsngm" Dec 06 05:51:22 crc kubenswrapper[4958]: I1206 05:51:22.750029 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Dec 06 05:51:22 crc kubenswrapper[4958]: I1206 05:51:22.756762 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Dec 06 05:51:23 crc kubenswrapper[4958]: I1206 05:51:23.090525 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-9g29t" Dec 06 05:51:23 crc kubenswrapper[4958]: I1206 05:51:23.271558 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31905f86-2a88-4c2e-bf22-a629710e3f6b-combined-ca-bundle\") pod \"31905f86-2a88-4c2e-bf22-a629710e3f6b\" (UID: \"31905f86-2a88-4c2e-bf22-a629710e3f6b\") " Dec 06 05:51:23 crc kubenswrapper[4958]: I1206 05:51:23.271651 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/31905f86-2a88-4c2e-bf22-a629710e3f6b-scripts\") pod \"31905f86-2a88-4c2e-bf22-a629710e3f6b\" (UID: \"31905f86-2a88-4c2e-bf22-a629710e3f6b\") " Dec 06 05:51:23 crc kubenswrapper[4958]: I1206 05:51:23.271719 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/31905f86-2a88-4c2e-bf22-a629710e3f6b-ring-data-devices\") pod \"31905f86-2a88-4c2e-bf22-a629710e3f6b\" (UID: \"31905f86-2a88-4c2e-bf22-a629710e3f6b\") " Dec 06 05:51:23 crc kubenswrapper[4958]: I1206 05:51:23.271752 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wltsk\" (UniqueName: \"kubernetes.io/projected/31905f86-2a88-4c2e-bf22-a629710e3f6b-kube-api-access-wltsk\") pod \"31905f86-2a88-4c2e-bf22-a629710e3f6b\" (UID: \"31905f86-2a88-4c2e-bf22-a629710e3f6b\") " Dec 06 05:51:23 crc kubenswrapper[4958]: I1206 05:51:23.271802 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/31905f86-2a88-4c2e-bf22-a629710e3f6b-etc-swift\") pod \"31905f86-2a88-4c2e-bf22-a629710e3f6b\" (UID: \"31905f86-2a88-4c2e-bf22-a629710e3f6b\") " Dec 06 05:51:23 crc kubenswrapper[4958]: I1206 05:51:23.271826 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/31905f86-2a88-4c2e-bf22-a629710e3f6b-swiftconf\") pod \"31905f86-2a88-4c2e-bf22-a629710e3f6b\" (UID: \"31905f86-2a88-4c2e-bf22-a629710e3f6b\") " Dec 06 05:51:23 crc kubenswrapper[4958]: I1206 05:51:23.271845 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/31905f86-2a88-4c2e-bf22-a629710e3f6b-dispersionconf\") pod \"31905f86-2a88-4c2e-bf22-a629710e3f6b\" 
(UID: \"31905f86-2a88-4c2e-bf22-a629710e3f6b\") " Dec 06 05:51:23 crc kubenswrapper[4958]: I1206 05:51:23.272310 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31905f86-2a88-4c2e-bf22-a629710e3f6b-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "31905f86-2a88-4c2e-bf22-a629710e3f6b" (UID: "31905f86-2a88-4c2e-bf22-a629710e3f6b"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:51:23 crc kubenswrapper[4958]: I1206 05:51:23.272761 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31905f86-2a88-4c2e-bf22-a629710e3f6b-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "31905f86-2a88-4c2e-bf22-a629710e3f6b" (UID: "31905f86-2a88-4c2e-bf22-a629710e3f6b"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 05:51:23 crc kubenswrapper[4958]: I1206 05:51:23.277512 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31905f86-2a88-4c2e-bf22-a629710e3f6b-kube-api-access-wltsk" (OuterVolumeSpecName: "kube-api-access-wltsk") pod "31905f86-2a88-4c2e-bf22-a629710e3f6b" (UID: "31905f86-2a88-4c2e-bf22-a629710e3f6b"). InnerVolumeSpecName "kube-api-access-wltsk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:51:23 crc kubenswrapper[4958]: I1206 05:51:23.295817 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31905f86-2a88-4c2e-bf22-a629710e3f6b-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "31905f86-2a88-4c2e-bf22-a629710e3f6b" (UID: "31905f86-2a88-4c2e-bf22-a629710e3f6b"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:51:23 crc kubenswrapper[4958]: I1206 05:51:23.296657 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31905f86-2a88-4c2e-bf22-a629710e3f6b-scripts" (OuterVolumeSpecName: "scripts") pod "31905f86-2a88-4c2e-bf22-a629710e3f6b" (UID: "31905f86-2a88-4c2e-bf22-a629710e3f6b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:51:23 crc kubenswrapper[4958]: I1206 05:51:23.299281 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31905f86-2a88-4c2e-bf22-a629710e3f6b-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "31905f86-2a88-4c2e-bf22-a629710e3f6b" (UID: "31905f86-2a88-4c2e-bf22-a629710e3f6b"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:51:23 crc kubenswrapper[4958]: I1206 05:51:23.301616 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31905f86-2a88-4c2e-bf22-a629710e3f6b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "31905f86-2a88-4c2e-bf22-a629710e3f6b" (UID: "31905f86-2a88-4c2e-bf22-a629710e3f6b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:51:23 crc kubenswrapper[4958]: I1206 05:51:23.374069 4958 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/31905f86-2a88-4c2e-bf22-a629710e3f6b-swiftconf\") on node \"crc\" DevicePath \"\"" Dec 06 05:51:23 crc kubenswrapper[4958]: I1206 05:51:23.374113 4958 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/31905f86-2a88-4c2e-bf22-a629710e3f6b-dispersionconf\") on node \"crc\" DevicePath \"\"" Dec 06 05:51:23 crc kubenswrapper[4958]: I1206 05:51:23.374124 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/31905f86-2a88-4c2e-bf22-a629710e3f6b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 05:51:23 crc kubenswrapper[4958]: I1206 05:51:23.374134 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/31905f86-2a88-4c2e-bf22-a629710e3f6b-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 05:51:23 crc kubenswrapper[4958]: I1206 05:51:23.374142 4958 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/31905f86-2a88-4c2e-bf22-a629710e3f6b-ring-data-devices\") on node \"crc\" DevicePath \"\"" Dec 06 05:51:23 crc kubenswrapper[4958]: I1206 05:51:23.374151 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wltsk\" (UniqueName: \"kubernetes.io/projected/31905f86-2a88-4c2e-bf22-a629710e3f6b-kube-api-access-wltsk\") on node \"crc\" DevicePath \"\"" Dec 06 05:51:23 crc kubenswrapper[4958]: I1206 05:51:23.374160 4958 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/31905f86-2a88-4c2e-bf22-a629710e3f6b-etc-swift\") on node \"crc\" DevicePath \"\"" Dec 06 05:51:23 crc kubenswrapper[4958]: I1206 05:51:23.724707 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-9g29t" Dec 06 05:51:23 crc kubenswrapper[4958]: I1206 05:51:23.725105 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-9g29t" event={"ID":"31905f86-2a88-4c2e-bf22-a629710e3f6b","Type":"ContainerDied","Data":"1d326bcae1cf77512a5aeb37a3c0158e1f8d77485d1b4c0baa5f3ca3375da698"} Dec 06 05:51:23 crc kubenswrapper[4958]: I1206 05:51:23.725164 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d326bcae1cf77512a5aeb37a3c0158e1f8d77485d1b4c0baa5f3ca3375da698" Dec 06 05:51:23 crc kubenswrapper[4958]: I1206 05:51:23.730388 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Dec 06 05:51:23 crc kubenswrapper[4958]: I1206 05:51:23.770246 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56c9430f-4714-4505-aab1-0b650c1cfe40" path="/var/lib/kubelet/pods/56c9430f-4714-4505-aab1-0b650c1cfe40/volumes" Dec 06 05:51:25 crc kubenswrapper[4958]: I1206 05:51:25.753656 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="3141e77c-a73b-400b-b607-21be8537cca4" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.106:5671: connect: connection refused" Dec 06 05:51:26 crc kubenswrapper[4958]: I1206 05:51:26.062315 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-notifications-server-0" podUID="74d63159-9580-4b70-ba89-74d4d9eeb7b8" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.107:5671: connect: connection refused" Dec 06 05:51:26 crc kubenswrapper[4958]: I1206 05:51:26.336754 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="4bc06a17-7bdb-4ee9-bad3-7996be041e54" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.108:5671: connect: connection refused" Dec 06 05:51:34 crc kubenswrapper[4958]: I1206 05:51:34.966360 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d8c3d892-a529-436f-b8f1-3bb2a4ffbed2-etc-swift\") pod \"swift-storage-0\" (UID: \"d8c3d892-a529-436f-b8f1-3bb2a4ffbed2\") " pod="openstack/swift-storage-0" Dec 06 05:51:34 crc kubenswrapper[4958]: I1206 05:51:34.981483 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/d8c3d892-a529-436f-b8f1-3bb2a4ffbed2-etc-swift\") pod \"swift-storage-0\" (UID: \"d8c3d892-a529-436f-b8f1-3bb2a4ffbed2\") " pod="openstack/swift-storage-0" Dec 06 05:51:35 crc kubenswrapper[4958]: I1206 05:51:35.132774 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Dec 06 05:51:35 crc kubenswrapper[4958]: I1206 05:51:35.754818 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Dec 06 05:51:35 crc kubenswrapper[4958]: I1206 05:51:35.949685 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Dec 06 05:51:35 crc kubenswrapper[4958]: W1206 05:51:35.952359 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd8c3d892_a529_436f_b8f1_3bb2a4ffbed2.slice/crio-a68ff6fc7e1919303c872a63f3c6f540bb72901682ab2d92c2e1a5fef02a41c6 WatchSource:0}: Error finding container a68ff6fc7e1919303c872a63f3c6f540bb72901682ab2d92c2e1a5fef02a41c6: Status 404 returned error can't find the container with id a68ff6fc7e1919303c872a63f3c6f540bb72901682ab2d92c2e1a5fef02a41c6 Dec 06 05:51:36 crc kubenswrapper[4958]: I1206 05:51:36.062649 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-notifications-server-0" Dec 06 05:51:36 crc kubenswrapper[4958]: I1206 05:51:36.335651 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Dec 06 05:51:36 crc kubenswrapper[4958]: I1206 05:51:36.840268 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d8c3d892-a529-436f-b8f1-3bb2a4ffbed2","Type":"ContainerStarted","Data":"a68ff6fc7e1919303c872a63f3c6f540bb72901682ab2d92c2e1a5fef02a41c6"} Dec 06 05:51:37 crc kubenswrapper[4958]: I1206 05:51:37.192130 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-jdshg"] Dec 06 05:51:37 crc kubenswrapper[4958]: E1206 05:51:37.192446 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50541cab-c7c9-4995-af23-be9f98383190" containerName="init" Dec 06 05:51:37 crc kubenswrapper[4958]: I1206 05:51:37.192458 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="50541cab-c7c9-4995-af23-be9f98383190" containerName="init" Dec 06 05:51:37 crc kubenswrapper[4958]: E1206 05:51:37.192492 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56c9430f-4714-4505-aab1-0b650c1cfe40" containerName="ovn-config" Dec 06 05:51:37 crc kubenswrapper[4958]: I1206 05:51:37.192499 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="56c9430f-4714-4505-aab1-0b650c1cfe40" containerName="ovn-config" Dec 06 05:51:37 crc kubenswrapper[4958]: E1206 05:51:37.192517 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31905f86-2a88-4c2e-bf22-a629710e3f6b" containerName="swift-ring-rebalance" Dec 06 05:51:37 crc kubenswrapper[4958]: I1206 05:51:37.192522 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="31905f86-2a88-4c2e-bf22-a629710e3f6b" containerName="swift-ring-rebalance" Dec 06 05:51:37 crc kubenswrapper[4958]: E1206 05:51:37.192541 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50541cab-c7c9-4995-af23-be9f98383190" containerName="dnsmasq-dns" Dec 06 05:51:37 crc kubenswrapper[4958]: I1206 05:51:37.192547 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="50541cab-c7c9-4995-af23-be9f98383190" containerName="dnsmasq-dns" Dec 06 05:51:37 crc kubenswrapper[4958]: I1206 05:51:37.192704 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="56c9430f-4714-4505-aab1-0b650c1cfe40" containerName="ovn-config" Dec 06 05:51:37 crc kubenswrapper[4958]: I1206 
05:51:37.192723 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="31905f86-2a88-4c2e-bf22-a629710e3f6b" containerName="swift-ring-rebalance" Dec 06 05:51:37 crc kubenswrapper[4958]: I1206 05:51:37.192730 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="50541cab-c7c9-4995-af23-be9f98383190" containerName="dnsmasq-dns" Dec 06 05:51:37 crc kubenswrapper[4958]: I1206 05:51:37.193237 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-jdshg" Dec 06 05:51:37 crc kubenswrapper[4958]: I1206 05:51:37.210914 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-jdshg"] Dec 06 05:51:37 crc kubenswrapper[4958]: I1206 05:51:37.293901 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-b0af-account-create-update-2sf4j"] Dec 06 05:51:37 crc kubenswrapper[4958]: I1206 05:51:37.295238 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-b0af-account-create-update-2sf4j" Dec 06 05:51:37 crc kubenswrapper[4958]: I1206 05:51:37.296906 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Dec 06 05:51:37 crc kubenswrapper[4958]: I1206 05:51:37.304133 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lwdb\" (UniqueName: \"kubernetes.io/projected/2f779858-5c94-4cbe-a940-d81de4d26b69-kube-api-access-8lwdb\") pod \"glance-db-create-jdshg\" (UID: \"2f779858-5c94-4cbe-a940-d81de4d26b69\") " pod="openstack/glance-db-create-jdshg" Dec 06 05:51:37 crc kubenswrapper[4958]: I1206 05:51:37.304408 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f779858-5c94-4cbe-a940-d81de4d26b69-operator-scripts\") pod \"glance-db-create-jdshg\" (UID: \"2f779858-5c94-4cbe-a940-d81de4d26b69\") " pod="openstack/glance-db-create-jdshg" Dec 06 05:51:37 crc kubenswrapper[4958]: I1206 05:51:37.304521 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-b0af-account-create-update-2sf4j"] Dec 06 05:51:37 crc kubenswrapper[4958]: I1206 05:51:37.405709 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c98d83aa-c74d-4bc3-bbbd-4fcb700f964d-operator-scripts\") pod \"glance-b0af-account-create-update-2sf4j\" (UID: \"c98d83aa-c74d-4bc3-bbbd-4fcb700f964d\") " pod="openstack/glance-b0af-account-create-update-2sf4j" Dec 06 05:51:37 crc kubenswrapper[4958]: I1206 05:51:37.405764 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f779858-5c94-4cbe-a940-d81de4d26b69-operator-scripts\") pod \"glance-db-create-jdshg\" (UID: \"2f779858-5c94-4cbe-a940-d81de4d26b69\") " pod="openstack/glance-db-create-jdshg" Dec 06 05:51:37 crc kubenswrapper[4958]: I1206 05:51:37.405831 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8lwdb\" (UniqueName: \"kubernetes.io/projected/2f779858-5c94-4cbe-a940-d81de4d26b69-kube-api-access-8lwdb\") pod \"glance-db-create-jdshg\" (UID: \"2f779858-5c94-4cbe-a940-d81de4d26b69\") " pod="openstack/glance-db-create-jdshg" Dec 06 05:51:37 crc kubenswrapper[4958]: I1206 05:51:37.405877 4958 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnmt8\" (UniqueName: \"kubernetes.io/projected/c98d83aa-c74d-4bc3-bbbd-4fcb700f964d-kube-api-access-hnmt8\") pod \"glance-b0af-account-create-update-2sf4j\" (UID: \"c98d83aa-c74d-4bc3-bbbd-4fcb700f964d\") " pod="openstack/glance-b0af-account-create-update-2sf4j" Dec 06 05:51:37 crc kubenswrapper[4958]: I1206 05:51:37.410705 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f779858-5c94-4cbe-a940-d81de4d26b69-operator-scripts\") pod \"glance-db-create-jdshg\" (UID: \"2f779858-5c94-4cbe-a940-d81de4d26b69\") " pod="openstack/glance-db-create-jdshg" Dec 06 05:51:37 crc kubenswrapper[4958]: I1206 05:51:37.431373 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8lwdb\" (UniqueName: \"kubernetes.io/projected/2f779858-5c94-4cbe-a940-d81de4d26b69-kube-api-access-8lwdb\") pod \"glance-db-create-jdshg\" (UID: \"2f779858-5c94-4cbe-a940-d81de4d26b69\") " pod="openstack/glance-db-create-jdshg" Dec 06 05:51:37 crc kubenswrapper[4958]: I1206 05:51:37.512510 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnmt8\" (UniqueName: \"kubernetes.io/projected/c98d83aa-c74d-4bc3-bbbd-4fcb700f964d-kube-api-access-hnmt8\") pod \"glance-b0af-account-create-update-2sf4j\" (UID: \"c98d83aa-c74d-4bc3-bbbd-4fcb700f964d\") " pod="openstack/glance-b0af-account-create-update-2sf4j" Dec 06 05:51:37 crc kubenswrapper[4958]: I1206 05:51:37.512618 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c98d83aa-c74d-4bc3-bbbd-4fcb700f964d-operator-scripts\") pod \"glance-b0af-account-create-update-2sf4j\" (UID: \"c98d83aa-c74d-4bc3-bbbd-4fcb700f964d\") " pod="openstack/glance-b0af-account-create-update-2sf4j" Dec 06 05:51:37 crc kubenswrapper[4958]: I1206 05:51:37.513500 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c98d83aa-c74d-4bc3-bbbd-4fcb700f964d-operator-scripts\") pod \"glance-b0af-account-create-update-2sf4j\" (UID: \"c98d83aa-c74d-4bc3-bbbd-4fcb700f964d\") " pod="openstack/glance-b0af-account-create-update-2sf4j" Dec 06 05:51:37 crc kubenswrapper[4958]: I1206 05:51:37.514014 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-jdshg" Dec 06 05:51:37 crc kubenswrapper[4958]: I1206 05:51:37.545806 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnmt8\" (UniqueName: \"kubernetes.io/projected/c98d83aa-c74d-4bc3-bbbd-4fcb700f964d-kube-api-access-hnmt8\") pod \"glance-b0af-account-create-update-2sf4j\" (UID: \"c98d83aa-c74d-4bc3-bbbd-4fcb700f964d\") " pod="openstack/glance-b0af-account-create-update-2sf4j" Dec 06 05:51:37 crc kubenswrapper[4958]: I1206 05:51:37.614328 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-b0af-account-create-update-2sf4j" Dec 06 05:51:37 crc kubenswrapper[4958]: I1206 05:51:37.870028 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d8c3d892-a529-436f-b8f1-3bb2a4ffbed2","Type":"ContainerStarted","Data":"cae5137e8e47a4b8ae826cc3449e96e2ad371d58c74280b644d63feab274e438"} Dec 06 05:51:37 crc kubenswrapper[4958]: I1206 05:51:37.870356 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d8c3d892-a529-436f-b8f1-3bb2a4ffbed2","Type":"ContainerStarted","Data":"fae4cfc924d7b9a8f358d562d3790472600879a69ebe20dea015010965e885e8"} Dec 06 05:51:37 crc kubenswrapper[4958]: I1206 05:51:37.870368 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d8c3d892-a529-436f-b8f1-3bb2a4ffbed2","Type":"ContainerStarted","Data":"a99703df1b80366b956c3b8df64427de496beecda3fbf4f190c0c6d2246a4198"} Dec 06 05:51:37 crc kubenswrapper[4958]: I1206 05:51:37.870376 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d8c3d892-a529-436f-b8f1-3bb2a4ffbed2","Type":"ContainerStarted","Data":"bb7bc3f52484e26d029708f7d283440f538e413cbdb2be32717f9b3eda73ed68"} Dec 06 05:51:38 crc kubenswrapper[4958]: I1206 05:51:38.108422 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-b0af-account-create-update-2sf4j"] Dec 06 05:51:38 crc kubenswrapper[4958]: I1206 05:51:38.135884 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-jdshg"] Dec 06 05:51:38 crc kubenswrapper[4958]: I1206 05:51:38.882223 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-jdshg" event={"ID":"2f779858-5c94-4cbe-a940-d81de4d26b69","Type":"ContainerStarted","Data":"649c0312e39db5ce1296558b3f583c1b63a9cf92dce3148cbcb0c7c186b9dbfc"} Dec 06 05:51:38 crc kubenswrapper[4958]: I1206 05:51:38.882266 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-jdshg" event={"ID":"2f779858-5c94-4cbe-a940-d81de4d26b69","Type":"ContainerStarted","Data":"63819ba5b93f258c932398af6a950b549f885b4cb5dae7d36c064012d334864d"} Dec 06 05:51:38 crc kubenswrapper[4958]: I1206 05:51:38.889202 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b0af-account-create-update-2sf4j" event={"ID":"c98d83aa-c74d-4bc3-bbbd-4fcb700f964d","Type":"ContainerStarted","Data":"74bfa5e542e1f9319b59381ca932162675ff4c6feb60218cb029601cc8061269"} Dec 06 05:51:38 crc kubenswrapper[4958]: I1206 05:51:38.889251 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b0af-account-create-update-2sf4j" event={"ID":"c98d83aa-c74d-4bc3-bbbd-4fcb700f964d","Type":"ContainerStarted","Data":"f1acdf05fec2260f30db15dd6216bd1429ce8d5556d4144e68fb7f462a882e0c"} Dec 06 05:51:38 crc kubenswrapper[4958]: I1206 05:51:38.910095 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-jdshg" podStartSLOduration=1.910075685 podStartE2EDuration="1.910075685s" podCreationTimestamp="2025-12-06 05:51:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:51:38.905827921 +0000 UTC m=+1409.439598684" watchObservedRunningTime="2025-12-06 05:51:38.910075685 +0000 UTC m=+1409.443846448" Dec 06 05:51:38 crc kubenswrapper[4958]: I1206 05:51:38.929557 4958 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-b0af-account-create-update-2sf4j" podStartSLOduration=1.929534827 podStartE2EDuration="1.929534827s" podCreationTimestamp="2025-12-06 05:51:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:51:38.922744775 +0000 UTC m=+1409.456515538" watchObservedRunningTime="2025-12-06 05:51:38.929534827 +0000 UTC m=+1409.463305590" Dec 06 05:51:39 crc kubenswrapper[4958]: I1206 05:51:39.062368 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-52pk5"] Dec 06 05:51:39 crc kubenswrapper[4958]: I1206 05:51:39.063441 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-52pk5" Dec 06 05:51:39 crc kubenswrapper[4958]: I1206 05:51:39.075889 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-52pk5"] Dec 06 05:51:39 crc kubenswrapper[4958]: I1206 05:51:39.143212 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94f50bb5-d668-42ce-b9da-9364fcf27a33-operator-scripts\") pod \"barbican-db-create-52pk5\" (UID: \"94f50bb5-d668-42ce-b9da-9364fcf27a33\") " pod="openstack/barbican-db-create-52pk5" Dec 06 05:51:39 crc kubenswrapper[4958]: I1206 05:51:39.143290 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c94rm\" (UniqueName: \"kubernetes.io/projected/94f50bb5-d668-42ce-b9da-9364fcf27a33-kube-api-access-c94rm\") pod \"barbican-db-create-52pk5\" (UID: \"94f50bb5-d668-42ce-b9da-9364fcf27a33\") " pod="openstack/barbican-db-create-52pk5" Dec 06 05:51:39 crc kubenswrapper[4958]: I1206 05:51:39.168848 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-qzlg2"] Dec 06 05:51:39 crc kubenswrapper[4958]: I1206 05:51:39.179730 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-qzlg2" Dec 06 05:51:39 crc kubenswrapper[4958]: I1206 05:51:39.183040 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-97fb-account-create-update-htb6t"] Dec 06 05:51:39 crc kubenswrapper[4958]: I1206 05:51:39.184406 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-97fb-account-create-update-htb6t" Dec 06 05:51:39 crc kubenswrapper[4958]: I1206 05:51:39.187159 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Dec 06 05:51:39 crc kubenswrapper[4958]: I1206 05:51:39.200400 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-qzlg2"] Dec 06 05:51:39 crc kubenswrapper[4958]: I1206 05:51:39.238069 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-97fb-account-create-update-htb6t"] Dec 06 05:51:39 crc kubenswrapper[4958]: I1206 05:51:39.244739 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c94rm\" (UniqueName: \"kubernetes.io/projected/94f50bb5-d668-42ce-b9da-9364fcf27a33-kube-api-access-c94rm\") pod \"barbican-db-create-52pk5\" (UID: \"94f50bb5-d668-42ce-b9da-9364fcf27a33\") " pod="openstack/barbican-db-create-52pk5" Dec 06 05:51:39 crc kubenswrapper[4958]: I1206 05:51:39.244823 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/437479ce-fa34-40d2-af1c-a611eaaecc20-operator-scripts\") pod \"barbican-97fb-account-create-update-htb6t\" (UID: \"437479ce-fa34-40d2-af1c-a611eaaecc20\") " pod="openstack/barbican-97fb-account-create-update-htb6t" Dec 06 05:51:39 crc kubenswrapper[4958]: I1206 05:51:39.244866 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqhlm\" (UniqueName: \"kubernetes.io/projected/437479ce-fa34-40d2-af1c-a611eaaecc20-kube-api-access-lqhlm\") pod \"barbican-97fb-account-create-update-htb6t\" (UID: \"437479ce-fa34-40d2-af1c-a611eaaecc20\") " pod="openstack/barbican-97fb-account-create-update-htb6t" Dec 06 05:51:39 crc kubenswrapper[4958]: I1206 05:51:39.244909 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/261f5bc1-8806-430c-bbca-2142d542071d-operator-scripts\") pod \"cinder-db-create-qzlg2\" (UID: \"261f5bc1-8806-430c-bbca-2142d542071d\") " pod="openstack/cinder-db-create-qzlg2" Dec 06 05:51:39 crc kubenswrapper[4958]: I1206 05:51:39.244939 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xs8md\" (UniqueName: \"kubernetes.io/projected/261f5bc1-8806-430c-bbca-2142d542071d-kube-api-access-xs8md\") pod \"cinder-db-create-qzlg2\" (UID: \"261f5bc1-8806-430c-bbca-2142d542071d\") " pod="openstack/cinder-db-create-qzlg2" Dec 06 05:51:39 crc kubenswrapper[4958]: I1206 05:51:39.245004 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94f50bb5-d668-42ce-b9da-9364fcf27a33-operator-scripts\") pod \"barbican-db-create-52pk5\" (UID: \"94f50bb5-d668-42ce-b9da-9364fcf27a33\") " pod="openstack/barbican-db-create-52pk5" Dec 06 05:51:39 crc kubenswrapper[4958]: I1206 05:51:39.245736 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94f50bb5-d668-42ce-b9da-9364fcf27a33-operator-scripts\") pod \"barbican-db-create-52pk5\" (UID: \"94f50bb5-d668-42ce-b9da-9364fcf27a33\") " pod="openstack/barbican-db-create-52pk5" Dec 06 05:51:39 crc kubenswrapper[4958]: I1206 05:51:39.310441 4958 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-c94rm\" (UniqueName: \"kubernetes.io/projected/94f50bb5-d668-42ce-b9da-9364fcf27a33-kube-api-access-c94rm\") pod \"barbican-db-create-52pk5\" (UID: \"94f50bb5-d668-42ce-b9da-9364fcf27a33\") " pod="openstack/barbican-db-create-52pk5" Dec 06 05:51:39 crc kubenswrapper[4958]: I1206 05:51:39.329542 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-7910-account-create-update-ng4vj"] Dec 06 05:51:39 crc kubenswrapper[4958]: I1206 05:51:39.331074 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-7910-account-create-update-ng4vj" Dec 06 05:51:39 crc kubenswrapper[4958]: I1206 05:51:39.332600 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-7910-account-create-update-ng4vj"] Dec 06 05:51:39 crc kubenswrapper[4958]: I1206 05:51:39.336165 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Dec 06 05:51:39 crc kubenswrapper[4958]: I1206 05:51:39.347067 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/437479ce-fa34-40d2-af1c-a611eaaecc20-operator-scripts\") pod \"barbican-97fb-account-create-update-htb6t\" (UID: \"437479ce-fa34-40d2-af1c-a611eaaecc20\") " pod="openstack/barbican-97fb-account-create-update-htb6t" Dec 06 05:51:39 crc kubenswrapper[4958]: I1206 05:51:39.347124 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqhlm\" (UniqueName: \"kubernetes.io/projected/437479ce-fa34-40d2-af1c-a611eaaecc20-kube-api-access-lqhlm\") pod \"barbican-97fb-account-create-update-htb6t\" (UID: \"437479ce-fa34-40d2-af1c-a611eaaecc20\") " pod="openstack/barbican-97fb-account-create-update-htb6t" Dec 06 05:51:39 crc kubenswrapper[4958]: I1206 05:51:39.347152 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/261f5bc1-8806-430c-bbca-2142d542071d-operator-scripts\") pod \"cinder-db-create-qzlg2\" (UID: \"261f5bc1-8806-430c-bbca-2142d542071d\") " pod="openstack/cinder-db-create-qzlg2" Dec 06 05:51:39 crc kubenswrapper[4958]: I1206 05:51:39.347175 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xs8md\" (UniqueName: \"kubernetes.io/projected/261f5bc1-8806-430c-bbca-2142d542071d-kube-api-access-xs8md\") pod \"cinder-db-create-qzlg2\" (UID: \"261f5bc1-8806-430c-bbca-2142d542071d\") " pod="openstack/cinder-db-create-qzlg2" Dec 06 05:51:39 crc kubenswrapper[4958]: I1206 05:51:39.348065 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/437479ce-fa34-40d2-af1c-a611eaaecc20-operator-scripts\") pod \"barbican-97fb-account-create-update-htb6t\" (UID: \"437479ce-fa34-40d2-af1c-a611eaaecc20\") " pod="openstack/barbican-97fb-account-create-update-htb6t" Dec 06 05:51:39 crc kubenswrapper[4958]: I1206 05:51:39.348701 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/261f5bc1-8806-430c-bbca-2142d542071d-operator-scripts\") pod \"cinder-db-create-qzlg2\" (UID: \"261f5bc1-8806-430c-bbca-2142d542071d\") " pod="openstack/cinder-db-create-qzlg2" Dec 06 05:51:39 crc kubenswrapper[4958]: I1206 05:51:39.355208 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-db-sync-7mfjj"] Dec 06 05:51:39 crc 
kubenswrapper[4958]: I1206 05:51:39.356293 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-7mfjj" Dec 06 05:51:39 crc kubenswrapper[4958]: I1206 05:51:39.364376 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-config-data" Dec 06 05:51:39 crc kubenswrapper[4958]: I1206 05:51:39.364589 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-watcher-dockercfg-2dxx6" Dec 06 05:51:39 crc kubenswrapper[4958]: I1206 05:51:39.366598 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-sync-7mfjj"] Dec 06 05:51:39 crc kubenswrapper[4958]: I1206 05:51:39.409093 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqhlm\" (UniqueName: \"kubernetes.io/projected/437479ce-fa34-40d2-af1c-a611eaaecc20-kube-api-access-lqhlm\") pod \"barbican-97fb-account-create-update-htb6t\" (UID: \"437479ce-fa34-40d2-af1c-a611eaaecc20\") " pod="openstack/barbican-97fb-account-create-update-htb6t" Dec 06 05:51:39 crc kubenswrapper[4958]: I1206 05:51:39.409996 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xs8md\" (UniqueName: \"kubernetes.io/projected/261f5bc1-8806-430c-bbca-2142d542071d-kube-api-access-xs8md\") pod \"cinder-db-create-qzlg2\" (UID: \"261f5bc1-8806-430c-bbca-2142d542071d\") " pod="openstack/cinder-db-create-qzlg2" Dec 06 05:51:39 crc kubenswrapper[4958]: I1206 05:51:39.449399 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/35cf87c8-3462-476a-b396-26a24e954229-db-sync-config-data\") pod \"watcher-db-sync-7mfjj\" (UID: \"35cf87c8-3462-476a-b396-26a24e954229\") " pod="openstack/watcher-db-sync-7mfjj" Dec 06 05:51:39 crc kubenswrapper[4958]: I1206 05:51:39.449435 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35cf87c8-3462-476a-b396-26a24e954229-combined-ca-bundle\") pod \"watcher-db-sync-7mfjj\" (UID: \"35cf87c8-3462-476a-b396-26a24e954229\") " pod="openstack/watcher-db-sync-7mfjj" Dec 06 05:51:39 crc kubenswrapper[4958]: I1206 05:51:39.449455 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7x4s\" (UniqueName: \"kubernetes.io/projected/35cf87c8-3462-476a-b396-26a24e954229-kube-api-access-j7x4s\") pod \"watcher-db-sync-7mfjj\" (UID: \"35cf87c8-3462-476a-b396-26a24e954229\") " pod="openstack/watcher-db-sync-7mfjj" Dec 06 05:51:39 crc kubenswrapper[4958]: I1206 05:51:39.449504 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3954a2c3-13eb-442c-bce2-bcbb8e3c6eb1-operator-scripts\") pod \"cinder-7910-account-create-update-ng4vj\" (UID: \"3954a2c3-13eb-442c-bce2-bcbb8e3c6eb1\") " pod="openstack/cinder-7910-account-create-update-ng4vj" Dec 06 05:51:39 crc kubenswrapper[4958]: I1206 05:51:39.449535 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqk4w\" (UniqueName: \"kubernetes.io/projected/3954a2c3-13eb-442c-bce2-bcbb8e3c6eb1-kube-api-access-kqk4w\") pod \"cinder-7910-account-create-update-ng4vj\" (UID: \"3954a2c3-13eb-442c-bce2-bcbb8e3c6eb1\") " pod="openstack/cinder-7910-account-create-update-ng4vj" Dec 06 
05:51:39 crc kubenswrapper[4958]: I1206 05:51:39.449614 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35cf87c8-3462-476a-b396-26a24e954229-config-data\") pod \"watcher-db-sync-7mfjj\" (UID: \"35cf87c8-3462-476a-b396-26a24e954229\") " pod="openstack/watcher-db-sync-7mfjj" Dec 06 05:51:39 crc kubenswrapper[4958]: I1206 05:51:39.478765 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-52pk5" Dec 06 05:51:39 crc kubenswrapper[4958]: I1206 05:51:39.482904 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-l4znm"] Dec 06 05:51:39 crc kubenswrapper[4958]: I1206 05:51:39.483983 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-l4znm" Dec 06 05:51:39 crc kubenswrapper[4958]: I1206 05:51:39.493669 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-mbqwc" Dec 06 05:51:39 crc kubenswrapper[4958]: I1206 05:51:39.493995 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 06 05:51:39 crc kubenswrapper[4958]: I1206 05:51:39.495570 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 06 05:51:39 crc kubenswrapper[4958]: I1206 05:51:39.502161 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 06 05:51:39 crc kubenswrapper[4958]: I1206 05:51:39.503048 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-qzlg2" Dec 06 05:51:39 crc kubenswrapper[4958]: I1206 05:51:39.503317 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-l4znm"] Dec 06 05:51:39 crc kubenswrapper[4958]: I1206 05:51:39.528019 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-97fb-account-create-update-htb6t" Dec 06 05:51:39 crc kubenswrapper[4958]: I1206 05:51:39.559670 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgdgw\" (UniqueName: \"kubernetes.io/projected/7b244aca-d463-42f1-b8f9-d96dca44f635-kube-api-access-hgdgw\") pod \"keystone-db-sync-l4znm\" (UID: \"7b244aca-d463-42f1-b8f9-d96dca44f635\") " pod="openstack/keystone-db-sync-l4znm" Dec 06 05:51:39 crc kubenswrapper[4958]: I1206 05:51:39.559764 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kqk4w\" (UniqueName: \"kubernetes.io/projected/3954a2c3-13eb-442c-bce2-bcbb8e3c6eb1-kube-api-access-kqk4w\") pod \"cinder-7910-account-create-update-ng4vj\" (UID: \"3954a2c3-13eb-442c-bce2-bcbb8e3c6eb1\") " pod="openstack/cinder-7910-account-create-update-ng4vj" Dec 06 05:51:39 crc kubenswrapper[4958]: I1206 05:51:39.559915 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b244aca-d463-42f1-b8f9-d96dca44f635-config-data\") pod \"keystone-db-sync-l4znm\" (UID: \"7b244aca-d463-42f1-b8f9-d96dca44f635\") " pod="openstack/keystone-db-sync-l4znm" Dec 06 05:51:39 crc kubenswrapper[4958]: I1206 05:51:39.559987 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35cf87c8-3462-476a-b396-26a24e954229-config-data\") pod \"watcher-db-sync-7mfjj\" (UID: \"35cf87c8-3462-476a-b396-26a24e954229\") " pod="openstack/watcher-db-sync-7mfjj" Dec 06 05:51:39 crc kubenswrapper[4958]: I1206 05:51:39.560012 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b244aca-d463-42f1-b8f9-d96dca44f635-combined-ca-bundle\") pod \"keystone-db-sync-l4znm\" (UID: \"7b244aca-d463-42f1-b8f9-d96dca44f635\") " pod="openstack/keystone-db-sync-l4znm" Dec 06 05:51:39 crc kubenswrapper[4958]: I1206 05:51:39.560100 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/35cf87c8-3462-476a-b396-26a24e954229-db-sync-config-data\") pod \"watcher-db-sync-7mfjj\" (UID: \"35cf87c8-3462-476a-b396-26a24e954229\") " pod="openstack/watcher-db-sync-7mfjj" Dec 06 05:51:39 crc kubenswrapper[4958]: I1206 05:51:39.560127 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35cf87c8-3462-476a-b396-26a24e954229-combined-ca-bundle\") pod \"watcher-db-sync-7mfjj\" (UID: \"35cf87c8-3462-476a-b396-26a24e954229\") " pod="openstack/watcher-db-sync-7mfjj" Dec 06 05:51:39 crc kubenswrapper[4958]: I1206 05:51:39.565318 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7x4s\" (UniqueName: \"kubernetes.io/projected/35cf87c8-3462-476a-b396-26a24e954229-kube-api-access-j7x4s\") pod \"watcher-db-sync-7mfjj\" (UID: \"35cf87c8-3462-476a-b396-26a24e954229\") " pod="openstack/watcher-db-sync-7mfjj" Dec 06 05:51:39 crc kubenswrapper[4958]: I1206 05:51:39.565436 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3954a2c3-13eb-442c-bce2-bcbb8e3c6eb1-operator-scripts\") pod \"cinder-7910-account-create-update-ng4vj\" (UID: 
\"3954a2c3-13eb-442c-bce2-bcbb8e3c6eb1\") " pod="openstack/cinder-7910-account-create-update-ng4vj" Dec 06 05:51:39 crc kubenswrapper[4958]: I1206 05:51:39.573128 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3954a2c3-13eb-442c-bce2-bcbb8e3c6eb1-operator-scripts\") pod \"cinder-7910-account-create-update-ng4vj\" (UID: \"3954a2c3-13eb-442c-bce2-bcbb8e3c6eb1\") " pod="openstack/cinder-7910-account-create-update-ng4vj" Dec 06 05:51:39 crc kubenswrapper[4958]: I1206 05:51:39.579397 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35cf87c8-3462-476a-b396-26a24e954229-config-data\") pod \"watcher-db-sync-7mfjj\" (UID: \"35cf87c8-3462-476a-b396-26a24e954229\") " pod="openstack/watcher-db-sync-7mfjj" Dec 06 05:51:39 crc kubenswrapper[4958]: I1206 05:51:39.583004 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35cf87c8-3462-476a-b396-26a24e954229-combined-ca-bundle\") pod \"watcher-db-sync-7mfjj\" (UID: \"35cf87c8-3462-476a-b396-26a24e954229\") " pod="openstack/watcher-db-sync-7mfjj" Dec 06 05:51:39 crc kubenswrapper[4958]: I1206 05:51:39.587661 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/35cf87c8-3462-476a-b396-26a24e954229-db-sync-config-data\") pod \"watcher-db-sync-7mfjj\" (UID: \"35cf87c8-3462-476a-b396-26a24e954229\") " pod="openstack/watcher-db-sync-7mfjj" Dec 06 05:51:39 crc kubenswrapper[4958]: I1206 05:51:39.603492 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqk4w\" (UniqueName: \"kubernetes.io/projected/3954a2c3-13eb-442c-bce2-bcbb8e3c6eb1-kube-api-access-kqk4w\") pod \"cinder-7910-account-create-update-ng4vj\" (UID: \"3954a2c3-13eb-442c-bce2-bcbb8e3c6eb1\") " pod="openstack/cinder-7910-account-create-update-ng4vj" Dec 06 05:51:39 crc kubenswrapper[4958]: I1206 05:51:39.604197 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7x4s\" (UniqueName: \"kubernetes.io/projected/35cf87c8-3462-476a-b396-26a24e954229-kube-api-access-j7x4s\") pod \"watcher-db-sync-7mfjj\" (UID: \"35cf87c8-3462-476a-b396-26a24e954229\") " pod="openstack/watcher-db-sync-7mfjj" Dec 06 05:51:39 crc kubenswrapper[4958]: I1206 05:51:39.656721 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-g92gz"] Dec 06 05:51:39 crc kubenswrapper[4958]: I1206 05:51:39.657980 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-g92gz" Dec 06 05:51:39 crc kubenswrapper[4958]: I1206 05:51:39.671532 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b244aca-d463-42f1-b8f9-d96dca44f635-config-data\") pod \"keystone-db-sync-l4znm\" (UID: \"7b244aca-d463-42f1-b8f9-d96dca44f635\") " pod="openstack/keystone-db-sync-l4znm" Dec 06 05:51:39 crc kubenswrapper[4958]: I1206 05:51:39.671596 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b244aca-d463-42f1-b8f9-d96dca44f635-combined-ca-bundle\") pod \"keystone-db-sync-l4znm\" (UID: \"7b244aca-d463-42f1-b8f9-d96dca44f635\") " pod="openstack/keystone-db-sync-l4znm" Dec 06 05:51:39 crc kubenswrapper[4958]: I1206 05:51:39.671667 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgdgw\" (UniqueName: \"kubernetes.io/projected/7b244aca-d463-42f1-b8f9-d96dca44f635-kube-api-access-hgdgw\") pod \"keystone-db-sync-l4znm\" (UID: \"7b244aca-d463-42f1-b8f9-d96dca44f635\") " pod="openstack/keystone-db-sync-l4znm" Dec 06 05:51:39 crc kubenswrapper[4958]: I1206 05:51:39.678090 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b244aca-d463-42f1-b8f9-d96dca44f635-combined-ca-bundle\") pod \"keystone-db-sync-l4znm\" (UID: \"7b244aca-d463-42f1-b8f9-d96dca44f635\") " pod="openstack/keystone-db-sync-l4znm" Dec 06 05:51:39 crc kubenswrapper[4958]: I1206 05:51:39.679264 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b244aca-d463-42f1-b8f9-d96dca44f635-config-data\") pod \"keystone-db-sync-l4znm\" (UID: \"7b244aca-d463-42f1-b8f9-d96dca44f635\") " pod="openstack/keystone-db-sync-l4znm" Dec 06 05:51:39 crc kubenswrapper[4958]: I1206 05:51:39.682841 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-g92gz"] Dec 06 05:51:39 crc kubenswrapper[4958]: I1206 05:51:39.706008 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-7910-account-create-update-ng4vj" Dec 06 05:51:39 crc kubenswrapper[4958]: I1206 05:51:39.715085 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgdgw\" (UniqueName: \"kubernetes.io/projected/7b244aca-d463-42f1-b8f9-d96dca44f635-kube-api-access-hgdgw\") pod \"keystone-db-sync-l4znm\" (UID: \"7b244aca-d463-42f1-b8f9-d96dca44f635\") " pod="openstack/keystone-db-sync-l4znm" Dec 06 05:51:39 crc kubenswrapper[4958]: I1206 05:51:39.725290 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-3343-account-create-update-kng7l"] Dec 06 05:51:39 crc kubenswrapper[4958]: I1206 05:51:39.726852 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-3343-account-create-update-kng7l" Dec 06 05:51:39 crc kubenswrapper[4958]: I1206 05:51:39.730343 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Dec 06 05:51:39 crc kubenswrapper[4958]: I1206 05:51:39.774484 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jdnf\" (UniqueName: \"kubernetes.io/projected/48ea1e14-d33a-45cd-bc32-655d29f95017-kube-api-access-7jdnf\") pod \"neutron-db-create-g92gz\" (UID: \"48ea1e14-d33a-45cd-bc32-655d29f95017\") " pod="openstack/neutron-db-create-g92gz" Dec 06 05:51:39 crc kubenswrapper[4958]: I1206 05:51:39.774558 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48ea1e14-d33a-45cd-bc32-655d29f95017-operator-scripts\") pod \"neutron-db-create-g92gz\" (UID: \"48ea1e14-d33a-45cd-bc32-655d29f95017\") " pod="openstack/neutron-db-create-g92gz" Dec 06 05:51:39 crc kubenswrapper[4958]: I1206 05:51:39.774608 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c9f41cd-4696-4b14-a48d-b202f0d6796b-operator-scripts\") pod \"neutron-3343-account-create-update-kng7l\" (UID: \"5c9f41cd-4696-4b14-a48d-b202f0d6796b\") " pod="openstack/neutron-3343-account-create-update-kng7l" Dec 06 05:51:39 crc kubenswrapper[4958]: I1206 05:51:39.774699 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8h2q\" (UniqueName: \"kubernetes.io/projected/5c9f41cd-4696-4b14-a48d-b202f0d6796b-kube-api-access-s8h2q\") pod \"neutron-3343-account-create-update-kng7l\" (UID: \"5c9f41cd-4696-4b14-a48d-b202f0d6796b\") " pod="openstack/neutron-3343-account-create-update-kng7l" Dec 06 05:51:39 crc kubenswrapper[4958]: I1206 05:51:39.785050 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-3343-account-create-update-kng7l"] Dec 06 05:51:39 crc kubenswrapper[4958]: I1206 05:51:39.804621 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-7mfjj" Dec 06 05:51:39 crc kubenswrapper[4958]: I1206 05:51:39.879639 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-l4znm" Dec 06 05:51:39 crc kubenswrapper[4958]: I1206 05:51:39.880835 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jdnf\" (UniqueName: \"kubernetes.io/projected/48ea1e14-d33a-45cd-bc32-655d29f95017-kube-api-access-7jdnf\") pod \"neutron-db-create-g92gz\" (UID: \"48ea1e14-d33a-45cd-bc32-655d29f95017\") " pod="openstack/neutron-db-create-g92gz" Dec 06 05:51:39 crc kubenswrapper[4958]: I1206 05:51:39.881073 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48ea1e14-d33a-45cd-bc32-655d29f95017-operator-scripts\") pod \"neutron-db-create-g92gz\" (UID: \"48ea1e14-d33a-45cd-bc32-655d29f95017\") " pod="openstack/neutron-db-create-g92gz" Dec 06 05:51:39 crc kubenswrapper[4958]: I1206 05:51:39.881211 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c9f41cd-4696-4b14-a48d-b202f0d6796b-operator-scripts\") pod \"neutron-3343-account-create-update-kng7l\" (UID: \"5c9f41cd-4696-4b14-a48d-b202f0d6796b\") " pod="openstack/neutron-3343-account-create-update-kng7l" Dec 06 05:51:39 crc kubenswrapper[4958]: I1206 05:51:39.881396 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8h2q\" (UniqueName: \"kubernetes.io/projected/5c9f41cd-4696-4b14-a48d-b202f0d6796b-kube-api-access-s8h2q\") pod \"neutron-3343-account-create-update-kng7l\" (UID: \"5c9f41cd-4696-4b14-a48d-b202f0d6796b\") " pod="openstack/neutron-3343-account-create-update-kng7l" Dec 06 05:51:39 crc kubenswrapper[4958]: I1206 05:51:39.882672 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c9f41cd-4696-4b14-a48d-b202f0d6796b-operator-scripts\") pod \"neutron-3343-account-create-update-kng7l\" (UID: \"5c9f41cd-4696-4b14-a48d-b202f0d6796b\") " pod="openstack/neutron-3343-account-create-update-kng7l" Dec 06 05:51:39 crc kubenswrapper[4958]: I1206 05:51:39.882729 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48ea1e14-d33a-45cd-bc32-655d29f95017-operator-scripts\") pod \"neutron-db-create-g92gz\" (UID: \"48ea1e14-d33a-45cd-bc32-655d29f95017\") " pod="openstack/neutron-db-create-g92gz" Dec 06 05:51:39 crc kubenswrapper[4958]: I1206 05:51:39.911491 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8h2q\" (UniqueName: \"kubernetes.io/projected/5c9f41cd-4696-4b14-a48d-b202f0d6796b-kube-api-access-s8h2q\") pod \"neutron-3343-account-create-update-kng7l\" (UID: \"5c9f41cd-4696-4b14-a48d-b202f0d6796b\") " pod="openstack/neutron-3343-account-create-update-kng7l" Dec 06 05:51:39 crc kubenswrapper[4958]: I1206 05:51:39.914315 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jdnf\" (UniqueName: \"kubernetes.io/projected/48ea1e14-d33a-45cd-bc32-655d29f95017-kube-api-access-7jdnf\") pod \"neutron-db-create-g92gz\" (UID: \"48ea1e14-d33a-45cd-bc32-655d29f95017\") " pod="openstack/neutron-db-create-g92gz" Dec 06 05:51:39 crc kubenswrapper[4958]: I1206 05:51:39.955080 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"d8c3d892-a529-436f-b8f1-3bb2a4ffbed2","Type":"ContainerStarted","Data":"116d46592fd939ec0c167bc3697ff4ed74547f3e0c4ee8d6e2408f1a843d55a5"} Dec 06 05:51:39 crc kubenswrapper[4958]: I1206 05:51:39.971583 4958 generic.go:334] "Generic (PLEG): container finished" podID="2f779858-5c94-4cbe-a940-d81de4d26b69" containerID="649c0312e39db5ce1296558b3f583c1b63a9cf92dce3148cbcb0c7c186b9dbfc" exitCode=0 Dec 06 05:51:39 crc kubenswrapper[4958]: I1206 05:51:39.972086 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-jdshg" event={"ID":"2f779858-5c94-4cbe-a940-d81de4d26b69","Type":"ContainerDied","Data":"649c0312e39db5ce1296558b3f583c1b63a9cf92dce3148cbcb0c7c186b9dbfc"} Dec 06 05:51:40 crc kubenswrapper[4958]: I1206 05:51:40.009819 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-g92gz" Dec 06 05:51:40 crc kubenswrapper[4958]: I1206 05:51:40.028536 4958 generic.go:334] "Generic (PLEG): container finished" podID="c98d83aa-c74d-4bc3-bbbd-4fcb700f964d" containerID="74bfa5e542e1f9319b59381ca932162675ff4c6feb60218cb029601cc8061269" exitCode=0 Dec 06 05:51:40 crc kubenswrapper[4958]: I1206 05:51:40.028598 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b0af-account-create-update-2sf4j" event={"ID":"c98d83aa-c74d-4bc3-bbbd-4fcb700f964d","Type":"ContainerDied","Data":"74bfa5e542e1f9319b59381ca932162675ff4c6feb60218cb029601cc8061269"} Dec 06 05:51:40 crc kubenswrapper[4958]: I1206 05:51:40.050685 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-3343-account-create-update-kng7l" Dec 06 05:51:40 crc kubenswrapper[4958]: I1206 05:51:40.116523 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-qzlg2"] Dec 06 05:51:40 crc kubenswrapper[4958]: I1206 05:51:40.172037 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-52pk5"] Dec 06 05:51:40 crc kubenswrapper[4958]: I1206 05:51:40.245852 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-97fb-account-create-update-htb6t"] Dec 06 05:51:40 crc kubenswrapper[4958]: I1206 05:51:40.407510 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-7910-account-create-update-ng4vj"] Dec 06 05:51:40 crc kubenswrapper[4958]: I1206 05:51:40.639146 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-db-sync-7mfjj"] Dec 06 05:51:40 crc kubenswrapper[4958]: W1206 05:51:40.740210 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod35cf87c8_3462_476a_b396_26a24e954229.slice/crio-f7e7932b15fc1e3167322e483fcfffb32b17558a6d3db25bf53a24c252c562d0 WatchSource:0}: Error finding container f7e7932b15fc1e3167322e483fcfffb32b17558a6d3db25bf53a24c252c562d0: Status 404 returned error can't find the container with id f7e7932b15fc1e3167322e483fcfffb32b17558a6d3db25bf53a24c252c562d0 Dec 06 05:51:40 crc kubenswrapper[4958]: I1206 05:51:40.872425 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-l4znm"] Dec 06 05:51:40 crc kubenswrapper[4958]: I1206 05:51:40.996141 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-3343-account-create-update-kng7l"] Dec 06 05:51:41 crc kubenswrapper[4958]: W1206 05:51:41.000002 4958 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5c9f41cd_4696_4b14_a48d_b202f0d6796b.slice/crio-65b64c6ccecc0d8e41610b9005f32367cf5ede9f82dbacbecce2a0296f7f5f88 WatchSource:0}: Error finding container 65b64c6ccecc0d8e41610b9005f32367cf5ede9f82dbacbecce2a0296f7f5f88: Status 404 returned error can't find the container with id 65b64c6ccecc0d8e41610b9005f32367cf5ede9f82dbacbecce2a0296f7f5f88 Dec 06 05:51:41 crc kubenswrapper[4958]: I1206 05:51:41.037144 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-7mfjj" event={"ID":"35cf87c8-3462-476a-b396-26a24e954229","Type":"ContainerStarted","Data":"f7e7932b15fc1e3167322e483fcfffb32b17558a6d3db25bf53a24c252c562d0"} Dec 06 05:51:41 crc kubenswrapper[4958]: I1206 05:51:41.038230 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-l4znm" event={"ID":"7b244aca-d463-42f1-b8f9-d96dca44f635","Type":"ContainerStarted","Data":"79947882f96590b3a4ccccac8710ee3370f67445afc0fe9db3b0602967d9263b"} Dec 06 05:51:41 crc kubenswrapper[4958]: I1206 05:51:41.039399 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-52pk5" event={"ID":"94f50bb5-d668-42ce-b9da-9364fcf27a33","Type":"ContainerStarted","Data":"af697e57da38481791a50b81aa8008397b844a39b6b0210318ab895ede3b05fb"} Dec 06 05:51:41 crc kubenswrapper[4958]: I1206 05:51:41.040556 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-97fb-account-create-update-htb6t" event={"ID":"437479ce-fa34-40d2-af1c-a611eaaecc20","Type":"ContainerStarted","Data":"16ae239883505be404348484231ee36354184c79047f4b9e85ad892765e1fbe2"} Dec 06 05:51:41 crc kubenswrapper[4958]: I1206 05:51:41.041598 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-7910-account-create-update-ng4vj" event={"ID":"3954a2c3-13eb-442c-bce2-bcbb8e3c6eb1","Type":"ContainerStarted","Data":"64cef408f03fb2dec890096feb221cb20068ac1d8d579a4c8b3ea7f85a14b3f9"} Dec 06 05:51:41 crc kubenswrapper[4958]: I1206 05:51:41.042807 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-qzlg2" event={"ID":"261f5bc1-8806-430c-bbca-2142d542071d","Type":"ContainerStarted","Data":"367496ca718cfd816390886ffbb61892cfe933eb9d8fd4a0b77447990dd1acc5"} Dec 06 05:51:41 crc kubenswrapper[4958]: I1206 05:51:41.044617 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-3343-account-create-update-kng7l" event={"ID":"5c9f41cd-4696-4b14-a48d-b202f0d6796b","Type":"ContainerStarted","Data":"65b64c6ccecc0d8e41610b9005f32367cf5ede9f82dbacbecce2a0296f7f5f88"} Dec 06 05:51:41 crc kubenswrapper[4958]: I1206 05:51:41.110795 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-g92gz"] Dec 06 05:51:41 crc kubenswrapper[4958]: W1206 05:51:41.145090 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod48ea1e14_d33a_45cd_bc32_655d29f95017.slice/crio-63e6acf1c74e17da135ade6e4d845f84762faabc055d51fbdf7d61254e7bb7be WatchSource:0}: Error finding container 63e6acf1c74e17da135ade6e4d845f84762faabc055d51fbdf7d61254e7bb7be: Status 404 returned error can't find the container with id 63e6acf1c74e17da135ade6e4d845f84762faabc055d51fbdf7d61254e7bb7be Dec 06 05:51:41 crc kubenswrapper[4958]: I1206 05:51:41.499357 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-jdshg" Dec 06 05:51:41 crc kubenswrapper[4958]: I1206 05:51:41.520066 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-b0af-account-create-update-2sf4j" Dec 06 05:51:41 crc kubenswrapper[4958]: I1206 05:51:41.558275 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f779858-5c94-4cbe-a940-d81de4d26b69-operator-scripts\") pod \"2f779858-5c94-4cbe-a940-d81de4d26b69\" (UID: \"2f779858-5c94-4cbe-a940-d81de4d26b69\") " Dec 06 05:51:41 crc kubenswrapper[4958]: I1206 05:51:41.558341 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hnmt8\" (UniqueName: \"kubernetes.io/projected/c98d83aa-c74d-4bc3-bbbd-4fcb700f964d-kube-api-access-hnmt8\") pod \"c98d83aa-c74d-4bc3-bbbd-4fcb700f964d\" (UID: \"c98d83aa-c74d-4bc3-bbbd-4fcb700f964d\") " Dec 06 05:51:41 crc kubenswrapper[4958]: I1206 05:51:41.558392 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8lwdb\" (UniqueName: \"kubernetes.io/projected/2f779858-5c94-4cbe-a940-d81de4d26b69-kube-api-access-8lwdb\") pod \"2f779858-5c94-4cbe-a940-d81de4d26b69\" (UID: \"2f779858-5c94-4cbe-a940-d81de4d26b69\") " Dec 06 05:51:41 crc kubenswrapper[4958]: I1206 05:51:41.558410 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c98d83aa-c74d-4bc3-bbbd-4fcb700f964d-operator-scripts\") pod \"c98d83aa-c74d-4bc3-bbbd-4fcb700f964d\" (UID: \"c98d83aa-c74d-4bc3-bbbd-4fcb700f964d\") " Dec 06 05:51:41 crc kubenswrapper[4958]: I1206 05:51:41.559584 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c98d83aa-c74d-4bc3-bbbd-4fcb700f964d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c98d83aa-c74d-4bc3-bbbd-4fcb700f964d" (UID: "c98d83aa-c74d-4bc3-bbbd-4fcb700f964d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:51:41 crc kubenswrapper[4958]: I1206 05:51:41.565730 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c98d83aa-c74d-4bc3-bbbd-4fcb700f964d-kube-api-access-hnmt8" (OuterVolumeSpecName: "kube-api-access-hnmt8") pod "c98d83aa-c74d-4bc3-bbbd-4fcb700f964d" (UID: "c98d83aa-c74d-4bc3-bbbd-4fcb700f964d"). InnerVolumeSpecName "kube-api-access-hnmt8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:51:41 crc kubenswrapper[4958]: I1206 05:51:41.566184 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2f779858-5c94-4cbe-a940-d81de4d26b69-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2f779858-5c94-4cbe-a940-d81de4d26b69" (UID: "2f779858-5c94-4cbe-a940-d81de4d26b69"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:51:41 crc kubenswrapper[4958]: I1206 05:51:41.573154 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f779858-5c94-4cbe-a940-d81de4d26b69-kube-api-access-8lwdb" (OuterVolumeSpecName: "kube-api-access-8lwdb") pod "2f779858-5c94-4cbe-a940-d81de4d26b69" (UID: "2f779858-5c94-4cbe-a940-d81de4d26b69"). InnerVolumeSpecName "kube-api-access-8lwdb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:51:41 crc kubenswrapper[4958]: I1206 05:51:41.666576 4958 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2f779858-5c94-4cbe-a940-d81de4d26b69-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 05:51:41 crc kubenswrapper[4958]: I1206 05:51:41.666605 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hnmt8\" (UniqueName: \"kubernetes.io/projected/c98d83aa-c74d-4bc3-bbbd-4fcb700f964d-kube-api-access-hnmt8\") on node \"crc\" DevicePath \"\"" Dec 06 05:51:41 crc kubenswrapper[4958]: I1206 05:51:41.666615 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8lwdb\" (UniqueName: \"kubernetes.io/projected/2f779858-5c94-4cbe-a940-d81de4d26b69-kube-api-access-8lwdb\") on node \"crc\" DevicePath \"\"" Dec 06 05:51:41 crc kubenswrapper[4958]: I1206 05:51:41.666624 4958 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c98d83aa-c74d-4bc3-bbbd-4fcb700f964d-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 05:51:42 crc kubenswrapper[4958]: I1206 05:51:42.054290 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-7910-account-create-update-ng4vj" event={"ID":"3954a2c3-13eb-442c-bce2-bcbb8e3c6eb1","Type":"ContainerStarted","Data":"013af68c6551086becee1b006abef6d349e5b62191a009e81d29bd4e3334b796"} Dec 06 05:51:42 crc kubenswrapper[4958]: I1206 05:51:42.056302 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-qzlg2" event={"ID":"261f5bc1-8806-430c-bbca-2142d542071d","Type":"ContainerStarted","Data":"35c3a25e2bf0e6f0d21e00c4854df1ca26f01197af270b66aa3fd10f1e0c8e76"} Dec 06 05:51:42 crc kubenswrapper[4958]: I1206 05:51:42.060017 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-g92gz" event={"ID":"48ea1e14-d33a-45cd-bc32-655d29f95017","Type":"ContainerStarted","Data":"8cadc66ee34e77dd52f36424defacf2c15137c82c2739b6501f014b8bf5edba8"} Dec 06 05:51:42 crc kubenswrapper[4958]: I1206 05:51:42.060061 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-g92gz" event={"ID":"48ea1e14-d33a-45cd-bc32-655d29f95017","Type":"ContainerStarted","Data":"63e6acf1c74e17da135ade6e4d845f84762faabc055d51fbdf7d61254e7bb7be"} Dec 06 05:51:42 crc kubenswrapper[4958]: I1206 05:51:42.061724 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-3343-account-create-update-kng7l" event={"ID":"5c9f41cd-4696-4b14-a48d-b202f0d6796b","Type":"ContainerStarted","Data":"f310b8d3468992f4cd4c60571fca9d866ff7bd03f0c133d7d908bf398ab47d30"} Dec 06 05:51:42 crc kubenswrapper[4958]: I1206 05:51:42.065303 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-97fb-account-create-update-htb6t" event={"ID":"437479ce-fa34-40d2-af1c-a611eaaecc20","Type":"ContainerStarted","Data":"4f03ec094cabc830abc0a6f6b61d702f067f9049b81fee717461cc82a365d74f"} Dec 06 05:51:42 crc kubenswrapper[4958]: I1206 05:51:42.076893 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-52pk5" event={"ID":"94f50bb5-d668-42ce-b9da-9364fcf27a33","Type":"ContainerStarted","Data":"e3ab2abb39669337388050cf9866bbd388d923c83c3e31f0c0be4632e8513b7d"} Dec 06 05:51:42 crc kubenswrapper[4958]: I1206 05:51:42.079817 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-jdshg" Dec 06 05:51:42 crc kubenswrapper[4958]: I1206 05:51:42.079902 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-jdshg" event={"ID":"2f779858-5c94-4cbe-a940-d81de4d26b69","Type":"ContainerDied","Data":"63819ba5b93f258c932398af6a950b549f885b4cb5dae7d36c064012d334864d"} Dec 06 05:51:42 crc kubenswrapper[4958]: I1206 05:51:42.079930 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63819ba5b93f258c932398af6a950b549f885b4cb5dae7d36c064012d334864d" Dec 06 05:51:42 crc kubenswrapper[4958]: I1206 05:51:42.085226 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-b0af-account-create-update-2sf4j" event={"ID":"c98d83aa-c74d-4bc3-bbbd-4fcb700f964d","Type":"ContainerDied","Data":"f1acdf05fec2260f30db15dd6216bd1429ce8d5556d4144e68fb7f462a882e0c"} Dec 06 05:51:42 crc kubenswrapper[4958]: I1206 05:51:42.085254 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1acdf05fec2260f30db15dd6216bd1429ce8d5556d4144e68fb7f462a882e0c" Dec 06 05:51:42 crc kubenswrapper[4958]: I1206 05:51:42.085305 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-b0af-account-create-update-2sf4j" Dec 06 05:51:42 crc kubenswrapper[4958]: I1206 05:51:42.088141 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-7910-account-create-update-ng4vj" podStartSLOduration=3.087931325 podStartE2EDuration="3.087931325s" podCreationTimestamp="2025-12-06 05:51:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:51:42.076154368 +0000 UTC m=+1412.609925131" watchObservedRunningTime="2025-12-06 05:51:42.087931325 +0000 UTC m=+1412.621702088" Dec 06 05:51:42 crc kubenswrapper[4958]: I1206 05:51:42.093590 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-qzlg2" podStartSLOduration=3.093576646 podStartE2EDuration="3.093576646s" podCreationTimestamp="2025-12-06 05:51:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:51:42.090318838 +0000 UTC m=+1412.624089621" watchObservedRunningTime="2025-12-06 05:51:42.093576646 +0000 UTC m=+1412.627347409" Dec 06 05:51:42 crc kubenswrapper[4958]: I1206 05:51:42.095857 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d8c3d892-a529-436f-b8f1-3bb2a4ffbed2","Type":"ContainerStarted","Data":"28e4d8051962f084ad34adf0e4accf24f2f0b807f4cf486a0ef55cc317a14981"} Dec 06 05:51:42 crc kubenswrapper[4958]: I1206 05:51:42.095895 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d8c3d892-a529-436f-b8f1-3bb2a4ffbed2","Type":"ContainerStarted","Data":"305de19333697edf3cbd5089537e533c1df0304cbf48e3dcd48b132c83622e2e"} Dec 06 05:51:42 crc kubenswrapper[4958]: I1206 05:51:42.108884 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-97fb-account-create-update-htb6t" podStartSLOduration=3.108867967 podStartE2EDuration="3.108867967s" podCreationTimestamp="2025-12-06 05:51:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:51:42.106659667 +0000 
UTC m=+1412.640430430" watchObservedRunningTime="2025-12-06 05:51:42.108867967 +0000 UTC m=+1412.642638730" Dec 06 05:51:42 crc kubenswrapper[4958]: I1206 05:51:42.122238 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-g92gz" podStartSLOduration=3.122224205 podStartE2EDuration="3.122224205s" podCreationTimestamp="2025-12-06 05:51:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:51:42.11944792 +0000 UTC m=+1412.653218683" watchObservedRunningTime="2025-12-06 05:51:42.122224205 +0000 UTC m=+1412.655994968" Dec 06 05:51:42 crc kubenswrapper[4958]: I1206 05:51:42.138951 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-3343-account-create-update-kng7l" podStartSLOduration=3.138934544 podStartE2EDuration="3.138934544s" podCreationTimestamp="2025-12-06 05:51:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:51:42.137648469 +0000 UTC m=+1412.671419232" watchObservedRunningTime="2025-12-06 05:51:42.138934544 +0000 UTC m=+1412.672705307" Dec 06 05:51:42 crc kubenswrapper[4958]: I1206 05:51:42.182380 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-52pk5" podStartSLOduration=3.182358449 podStartE2EDuration="3.182358449s" podCreationTimestamp="2025-12-06 05:51:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:51:42.152731774 +0000 UTC m=+1412.686502537" watchObservedRunningTime="2025-12-06 05:51:42.182358449 +0000 UTC m=+1412.716129212" Dec 06 05:51:42 crc kubenswrapper[4958]: I1206 05:51:42.470325 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-jdfnk"] Dec 06 05:51:42 crc kubenswrapper[4958]: E1206 05:51:42.470745 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f779858-5c94-4cbe-a940-d81de4d26b69" containerName="mariadb-database-create" Dec 06 05:51:42 crc kubenswrapper[4958]: I1206 05:51:42.470766 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f779858-5c94-4cbe-a940-d81de4d26b69" containerName="mariadb-database-create" Dec 06 05:51:42 crc kubenswrapper[4958]: E1206 05:51:42.470776 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c98d83aa-c74d-4bc3-bbbd-4fcb700f964d" containerName="mariadb-account-create-update" Dec 06 05:51:42 crc kubenswrapper[4958]: I1206 05:51:42.470783 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="c98d83aa-c74d-4bc3-bbbd-4fcb700f964d" containerName="mariadb-account-create-update" Dec 06 05:51:42 crc kubenswrapper[4958]: I1206 05:51:42.471173 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="c98d83aa-c74d-4bc3-bbbd-4fcb700f964d" containerName="mariadb-account-create-update" Dec 06 05:51:42 crc kubenswrapper[4958]: I1206 05:51:42.471199 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f779858-5c94-4cbe-a940-d81de4d26b69" containerName="mariadb-database-create" Dec 06 05:51:42 crc kubenswrapper[4958]: I1206 05:51:42.471950 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-jdfnk" Dec 06 05:51:42 crc kubenswrapper[4958]: I1206 05:51:42.475930 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-wlmlr" Dec 06 05:51:42 crc kubenswrapper[4958]: I1206 05:51:42.476131 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Dec 06 05:51:42 crc kubenswrapper[4958]: I1206 05:51:42.493385 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9d1dc22d-53a9-4aee-989b-fc253cd276cd-db-sync-config-data\") pod \"glance-db-sync-jdfnk\" (UID: \"9d1dc22d-53a9-4aee-989b-fc253cd276cd\") " pod="openstack/glance-db-sync-jdfnk" Dec 06 05:51:42 crc kubenswrapper[4958]: I1206 05:51:42.493502 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d1dc22d-53a9-4aee-989b-fc253cd276cd-config-data\") pod \"glance-db-sync-jdfnk\" (UID: \"9d1dc22d-53a9-4aee-989b-fc253cd276cd\") " pod="openstack/glance-db-sync-jdfnk" Dec 06 05:51:42 crc kubenswrapper[4958]: I1206 05:51:42.493555 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d1dc22d-53a9-4aee-989b-fc253cd276cd-combined-ca-bundle\") pod \"glance-db-sync-jdfnk\" (UID: \"9d1dc22d-53a9-4aee-989b-fc253cd276cd\") " pod="openstack/glance-db-sync-jdfnk" Dec 06 05:51:42 crc kubenswrapper[4958]: I1206 05:51:42.493629 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bptxz\" (UniqueName: \"kubernetes.io/projected/9d1dc22d-53a9-4aee-989b-fc253cd276cd-kube-api-access-bptxz\") pod \"glance-db-sync-jdfnk\" (UID: \"9d1dc22d-53a9-4aee-989b-fc253cd276cd\") " pod="openstack/glance-db-sync-jdfnk" Dec 06 05:51:42 crc kubenswrapper[4958]: I1206 05:51:42.495659 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-jdfnk"] Dec 06 05:51:42 crc kubenswrapper[4958]: I1206 05:51:42.594720 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bptxz\" (UniqueName: \"kubernetes.io/projected/9d1dc22d-53a9-4aee-989b-fc253cd276cd-kube-api-access-bptxz\") pod \"glance-db-sync-jdfnk\" (UID: \"9d1dc22d-53a9-4aee-989b-fc253cd276cd\") " pod="openstack/glance-db-sync-jdfnk" Dec 06 05:51:42 crc kubenswrapper[4958]: I1206 05:51:42.595180 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9d1dc22d-53a9-4aee-989b-fc253cd276cd-db-sync-config-data\") pod \"glance-db-sync-jdfnk\" (UID: \"9d1dc22d-53a9-4aee-989b-fc253cd276cd\") " pod="openstack/glance-db-sync-jdfnk" Dec 06 05:51:42 crc kubenswrapper[4958]: I1206 05:51:42.595234 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d1dc22d-53a9-4aee-989b-fc253cd276cd-config-data\") pod \"glance-db-sync-jdfnk\" (UID: \"9d1dc22d-53a9-4aee-989b-fc253cd276cd\") " pod="openstack/glance-db-sync-jdfnk" Dec 06 05:51:42 crc kubenswrapper[4958]: I1206 05:51:42.595275 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d1dc22d-53a9-4aee-989b-fc253cd276cd-combined-ca-bundle\") pod 
\"glance-db-sync-jdfnk\" (UID: \"9d1dc22d-53a9-4aee-989b-fc253cd276cd\") " pod="openstack/glance-db-sync-jdfnk" Dec 06 05:51:42 crc kubenswrapper[4958]: I1206 05:51:42.610095 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d1dc22d-53a9-4aee-989b-fc253cd276cd-config-data\") pod \"glance-db-sync-jdfnk\" (UID: \"9d1dc22d-53a9-4aee-989b-fc253cd276cd\") " pod="openstack/glance-db-sync-jdfnk" Dec 06 05:51:42 crc kubenswrapper[4958]: I1206 05:51:42.610138 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d1dc22d-53a9-4aee-989b-fc253cd276cd-combined-ca-bundle\") pod \"glance-db-sync-jdfnk\" (UID: \"9d1dc22d-53a9-4aee-989b-fc253cd276cd\") " pod="openstack/glance-db-sync-jdfnk" Dec 06 05:51:42 crc kubenswrapper[4958]: I1206 05:51:42.610711 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9d1dc22d-53a9-4aee-989b-fc253cd276cd-db-sync-config-data\") pod \"glance-db-sync-jdfnk\" (UID: \"9d1dc22d-53a9-4aee-989b-fc253cd276cd\") " pod="openstack/glance-db-sync-jdfnk" Dec 06 05:51:42 crc kubenswrapper[4958]: I1206 05:51:42.614863 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bptxz\" (UniqueName: \"kubernetes.io/projected/9d1dc22d-53a9-4aee-989b-fc253cd276cd-kube-api-access-bptxz\") pod \"glance-db-sync-jdfnk\" (UID: \"9d1dc22d-53a9-4aee-989b-fc253cd276cd\") " pod="openstack/glance-db-sync-jdfnk" Dec 06 05:51:42 crc kubenswrapper[4958]: I1206 05:51:42.788750 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-jdfnk" Dec 06 05:51:43 crc kubenswrapper[4958]: I1206 05:51:43.116484 4958 generic.go:334] "Generic (PLEG): container finished" podID="94f50bb5-d668-42ce-b9da-9364fcf27a33" containerID="e3ab2abb39669337388050cf9866bbd388d923c83c3e31f0c0be4632e8513b7d" exitCode=0 Dec 06 05:51:43 crc kubenswrapper[4958]: I1206 05:51:43.116538 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-52pk5" event={"ID":"94f50bb5-d668-42ce-b9da-9364fcf27a33","Type":"ContainerDied","Data":"e3ab2abb39669337388050cf9866bbd388d923c83c3e31f0c0be4632e8513b7d"} Dec 06 05:51:43 crc kubenswrapper[4958]: I1206 05:51:43.119227 4958 generic.go:334] "Generic (PLEG): container finished" podID="437479ce-fa34-40d2-af1c-a611eaaecc20" containerID="4f03ec094cabc830abc0a6f6b61d702f067f9049b81fee717461cc82a365d74f" exitCode=0 Dec 06 05:51:43 crc kubenswrapper[4958]: I1206 05:51:43.119335 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-97fb-account-create-update-htb6t" event={"ID":"437479ce-fa34-40d2-af1c-a611eaaecc20","Type":"ContainerDied","Data":"4f03ec094cabc830abc0a6f6b61d702f067f9049b81fee717461cc82a365d74f"} Dec 06 05:51:43 crc kubenswrapper[4958]: I1206 05:51:43.123986 4958 generic.go:334] "Generic (PLEG): container finished" podID="3954a2c3-13eb-442c-bce2-bcbb8e3c6eb1" containerID="013af68c6551086becee1b006abef6d349e5b62191a009e81d29bd4e3334b796" exitCode=0 Dec 06 05:51:43 crc kubenswrapper[4958]: I1206 05:51:43.124042 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-7910-account-create-update-ng4vj" event={"ID":"3954a2c3-13eb-442c-bce2-bcbb8e3c6eb1","Type":"ContainerDied","Data":"013af68c6551086becee1b006abef6d349e5b62191a009e81d29bd4e3334b796"} Dec 06 05:51:43 crc kubenswrapper[4958]: I1206 05:51:43.125448 
4958 generic.go:334] "Generic (PLEG): container finished" podID="261f5bc1-8806-430c-bbca-2142d542071d" containerID="35c3a25e2bf0e6f0d21e00c4854df1ca26f01197af270b66aa3fd10f1e0c8e76" exitCode=0 Dec 06 05:51:43 crc kubenswrapper[4958]: I1206 05:51:43.125507 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-qzlg2" event={"ID":"261f5bc1-8806-430c-bbca-2142d542071d","Type":"ContainerDied","Data":"35c3a25e2bf0e6f0d21e00c4854df1ca26f01197af270b66aa3fd10f1e0c8e76"} Dec 06 05:51:43 crc kubenswrapper[4958]: I1206 05:51:43.134065 4958 generic.go:334] "Generic (PLEG): container finished" podID="48ea1e14-d33a-45cd-bc32-655d29f95017" containerID="8cadc66ee34e77dd52f36424defacf2c15137c82c2739b6501f014b8bf5edba8" exitCode=0 Dec 06 05:51:43 crc kubenswrapper[4958]: I1206 05:51:43.134135 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-g92gz" event={"ID":"48ea1e14-d33a-45cd-bc32-655d29f95017","Type":"ContainerDied","Data":"8cadc66ee34e77dd52f36424defacf2c15137c82c2739b6501f014b8bf5edba8"} Dec 06 05:51:43 crc kubenswrapper[4958]: I1206 05:51:43.148797 4958 generic.go:334] "Generic (PLEG): container finished" podID="5c9f41cd-4696-4b14-a48d-b202f0d6796b" containerID="f310b8d3468992f4cd4c60571fca9d866ff7bd03f0c133d7d908bf398ab47d30" exitCode=0 Dec 06 05:51:43 crc kubenswrapper[4958]: I1206 05:51:43.148873 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-3343-account-create-update-kng7l" event={"ID":"5c9f41cd-4696-4b14-a48d-b202f0d6796b","Type":"ContainerDied","Data":"f310b8d3468992f4cd4c60571fca9d866ff7bd03f0c133d7d908bf398ab47d30"} Dec 06 05:51:43 crc kubenswrapper[4958]: I1206 05:51:43.158187 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d8c3d892-a529-436f-b8f1-3bb2a4ffbed2","Type":"ContainerStarted","Data":"aa664105a44caf55c8295324aa9e046e82d4489951d35bb0dc8ebe95444dbfa1"} Dec 06 05:51:45 crc kubenswrapper[4958]: I1206 05:51:45.104274 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-jdfnk"] Dec 06 05:51:47 crc kubenswrapper[4958]: I1206 05:51:47.211571 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-3343-account-create-update-kng7l" Dec 06 05:51:47 crc kubenswrapper[4958]: I1206 05:51:47.219949 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-7910-account-create-update-ng4vj" Dec 06 05:51:47 crc kubenswrapper[4958]: I1206 05:51:47.228844 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-7910-account-create-update-ng4vj" event={"ID":"3954a2c3-13eb-442c-bce2-bcbb8e3c6eb1","Type":"ContainerDied","Data":"64cef408f03fb2dec890096feb221cb20068ac1d8d579a4c8b3ea7f85a14b3f9"} Dec 06 05:51:47 crc kubenswrapper[4958]: I1206 05:51:47.228886 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64cef408f03fb2dec890096feb221cb20068ac1d8d579a4c8b3ea7f85a14b3f9" Dec 06 05:51:47 crc kubenswrapper[4958]: I1206 05:51:47.228955 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-7910-account-create-update-ng4vj" Dec 06 05:51:47 crc kubenswrapper[4958]: I1206 05:51:47.238223 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-3343-account-create-update-kng7l" event={"ID":"5c9f41cd-4696-4b14-a48d-b202f0d6796b","Type":"ContainerDied","Data":"65b64c6ccecc0d8e41610b9005f32367cf5ede9f82dbacbecce2a0296f7f5f88"} Dec 06 05:51:47 crc kubenswrapper[4958]: I1206 05:51:47.238272 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="65b64c6ccecc0d8e41610b9005f32367cf5ede9f82dbacbecce2a0296f7f5f88" Dec 06 05:51:47 crc kubenswrapper[4958]: I1206 05:51:47.238661 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-3343-account-create-update-kng7l" Dec 06 05:51:47 crc kubenswrapper[4958]: I1206 05:51:47.379816 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kqk4w\" (UniqueName: \"kubernetes.io/projected/3954a2c3-13eb-442c-bce2-bcbb8e3c6eb1-kube-api-access-kqk4w\") pod \"3954a2c3-13eb-442c-bce2-bcbb8e3c6eb1\" (UID: \"3954a2c3-13eb-442c-bce2-bcbb8e3c6eb1\") " Dec 06 05:51:47 crc kubenswrapper[4958]: I1206 05:51:47.379904 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s8h2q\" (UniqueName: \"kubernetes.io/projected/5c9f41cd-4696-4b14-a48d-b202f0d6796b-kube-api-access-s8h2q\") pod \"5c9f41cd-4696-4b14-a48d-b202f0d6796b\" (UID: \"5c9f41cd-4696-4b14-a48d-b202f0d6796b\") " Dec 06 05:51:47 crc kubenswrapper[4958]: I1206 05:51:47.379986 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3954a2c3-13eb-442c-bce2-bcbb8e3c6eb1-operator-scripts\") pod \"3954a2c3-13eb-442c-bce2-bcbb8e3c6eb1\" (UID: \"3954a2c3-13eb-442c-bce2-bcbb8e3c6eb1\") " Dec 06 05:51:47 crc kubenswrapper[4958]: I1206 05:51:47.380024 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c9f41cd-4696-4b14-a48d-b202f0d6796b-operator-scripts\") pod \"5c9f41cd-4696-4b14-a48d-b202f0d6796b\" (UID: \"5c9f41cd-4696-4b14-a48d-b202f0d6796b\") " Dec 06 05:51:47 crc kubenswrapper[4958]: I1206 05:51:47.381270 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3954a2c3-13eb-442c-bce2-bcbb8e3c6eb1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3954a2c3-13eb-442c-bce2-bcbb8e3c6eb1" (UID: "3954a2c3-13eb-442c-bce2-bcbb8e3c6eb1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:51:47 crc kubenswrapper[4958]: I1206 05:51:47.381277 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5c9f41cd-4696-4b14-a48d-b202f0d6796b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5c9f41cd-4696-4b14-a48d-b202f0d6796b" (UID: "5c9f41cd-4696-4b14-a48d-b202f0d6796b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:51:47 crc kubenswrapper[4958]: I1206 05:51:47.387706 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c9f41cd-4696-4b14-a48d-b202f0d6796b-kube-api-access-s8h2q" (OuterVolumeSpecName: "kube-api-access-s8h2q") pod "5c9f41cd-4696-4b14-a48d-b202f0d6796b" (UID: "5c9f41cd-4696-4b14-a48d-b202f0d6796b"). 
InnerVolumeSpecName "kube-api-access-s8h2q". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:51:47 crc kubenswrapper[4958]: I1206 05:51:47.387784 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3954a2c3-13eb-442c-bce2-bcbb8e3c6eb1-kube-api-access-kqk4w" (OuterVolumeSpecName: "kube-api-access-kqk4w") pod "3954a2c3-13eb-442c-bce2-bcbb8e3c6eb1" (UID: "3954a2c3-13eb-442c-bce2-bcbb8e3c6eb1"). InnerVolumeSpecName "kube-api-access-kqk4w". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:51:47 crc kubenswrapper[4958]: I1206 05:51:47.482533 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kqk4w\" (UniqueName: \"kubernetes.io/projected/3954a2c3-13eb-442c-bce2-bcbb8e3c6eb1-kube-api-access-kqk4w\") on node \"crc\" DevicePath \"\"" Dec 06 05:51:47 crc kubenswrapper[4958]: I1206 05:51:47.482589 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s8h2q\" (UniqueName: \"kubernetes.io/projected/5c9f41cd-4696-4b14-a48d-b202f0d6796b-kube-api-access-s8h2q\") on node \"crc\" DevicePath \"\"" Dec 06 05:51:47 crc kubenswrapper[4958]: I1206 05:51:47.482604 4958 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3954a2c3-13eb-442c-bce2-bcbb8e3c6eb1-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 05:51:47 crc kubenswrapper[4958]: I1206 05:51:47.482616 4958 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5c9f41cd-4696-4b14-a48d-b202f0d6796b-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 05:51:51 crc kubenswrapper[4958]: I1206 05:51:51.684686 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-52pk5" Dec 06 05:51:51 crc kubenswrapper[4958]: I1206 05:51:51.718365 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-97fb-account-create-update-htb6t" Dec 06 05:51:51 crc kubenswrapper[4958]: I1206 05:51:51.722744 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-g92gz" Dec 06 05:51:51 crc kubenswrapper[4958]: I1206 05:51:51.738069 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-qzlg2" Dec 06 05:51:51 crc kubenswrapper[4958]: I1206 05:51:51.781141 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94f50bb5-d668-42ce-b9da-9364fcf27a33-operator-scripts\") pod \"94f50bb5-d668-42ce-b9da-9364fcf27a33\" (UID: \"94f50bb5-d668-42ce-b9da-9364fcf27a33\") " Dec 06 05:51:51 crc kubenswrapper[4958]: I1206 05:51:51.781258 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c94rm\" (UniqueName: \"kubernetes.io/projected/94f50bb5-d668-42ce-b9da-9364fcf27a33-kube-api-access-c94rm\") pod \"94f50bb5-d668-42ce-b9da-9364fcf27a33\" (UID: \"94f50bb5-d668-42ce-b9da-9364fcf27a33\") " Dec 06 05:51:51 crc kubenswrapper[4958]: I1206 05:51:51.782228 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/94f50bb5-d668-42ce-b9da-9364fcf27a33-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "94f50bb5-d668-42ce-b9da-9364fcf27a33" (UID: "94f50bb5-d668-42ce-b9da-9364fcf27a33"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:51:51 crc kubenswrapper[4958]: I1206 05:51:51.785504 4958 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/94f50bb5-d668-42ce-b9da-9364fcf27a33-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 05:51:51 crc kubenswrapper[4958]: I1206 05:51:51.793085 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94f50bb5-d668-42ce-b9da-9364fcf27a33-kube-api-access-c94rm" (OuterVolumeSpecName: "kube-api-access-c94rm") pod "94f50bb5-d668-42ce-b9da-9364fcf27a33" (UID: "94f50bb5-d668-42ce-b9da-9364fcf27a33"). InnerVolumeSpecName "kube-api-access-c94rm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:51:51 crc kubenswrapper[4958]: I1206 05:51:51.886364 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jdnf\" (UniqueName: \"kubernetes.io/projected/48ea1e14-d33a-45cd-bc32-655d29f95017-kube-api-access-7jdnf\") pod \"48ea1e14-d33a-45cd-bc32-655d29f95017\" (UID: \"48ea1e14-d33a-45cd-bc32-655d29f95017\") " Dec 06 05:51:51 crc kubenswrapper[4958]: I1206 05:51:51.886401 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/261f5bc1-8806-430c-bbca-2142d542071d-operator-scripts\") pod \"261f5bc1-8806-430c-bbca-2142d542071d\" (UID: \"261f5bc1-8806-430c-bbca-2142d542071d\") " Dec 06 05:51:51 crc kubenswrapper[4958]: I1206 05:51:51.886433 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lqhlm\" (UniqueName: \"kubernetes.io/projected/437479ce-fa34-40d2-af1c-a611eaaecc20-kube-api-access-lqhlm\") pod \"437479ce-fa34-40d2-af1c-a611eaaecc20\" (UID: \"437479ce-fa34-40d2-af1c-a611eaaecc20\") " Dec 06 05:51:51 crc kubenswrapper[4958]: I1206 05:51:51.886562 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/437479ce-fa34-40d2-af1c-a611eaaecc20-operator-scripts\") pod \"437479ce-fa34-40d2-af1c-a611eaaecc20\" (UID: \"437479ce-fa34-40d2-af1c-a611eaaecc20\") " Dec 06 05:51:51 crc kubenswrapper[4958]: I1206 05:51:51.886671 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xs8md\" (UniqueName: \"kubernetes.io/projected/261f5bc1-8806-430c-bbca-2142d542071d-kube-api-access-xs8md\") pod \"261f5bc1-8806-430c-bbca-2142d542071d\" (UID: \"261f5bc1-8806-430c-bbca-2142d542071d\") " Dec 06 05:51:51 crc kubenswrapper[4958]: I1206 05:51:51.886781 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48ea1e14-d33a-45cd-bc32-655d29f95017-operator-scripts\") pod \"48ea1e14-d33a-45cd-bc32-655d29f95017\" (UID: \"48ea1e14-d33a-45cd-bc32-655d29f95017\") " Dec 06 05:51:51 crc kubenswrapper[4958]: I1206 05:51:51.887140 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c94rm\" (UniqueName: \"kubernetes.io/projected/94f50bb5-d668-42ce-b9da-9364fcf27a33-kube-api-access-c94rm\") on node \"crc\" DevicePath \"\"" Dec 06 05:51:51 crc kubenswrapper[4958]: I1206 05:51:51.887812 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48ea1e14-d33a-45cd-bc32-655d29f95017-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod 
"48ea1e14-d33a-45cd-bc32-655d29f95017" (UID: "48ea1e14-d33a-45cd-bc32-655d29f95017"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:51:51 crc kubenswrapper[4958]: I1206 05:51:51.888050 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/437479ce-fa34-40d2-af1c-a611eaaecc20-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "437479ce-fa34-40d2-af1c-a611eaaecc20" (UID: "437479ce-fa34-40d2-af1c-a611eaaecc20"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:51:51 crc kubenswrapper[4958]: I1206 05:51:51.889106 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/261f5bc1-8806-430c-bbca-2142d542071d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "261f5bc1-8806-430c-bbca-2142d542071d" (UID: "261f5bc1-8806-430c-bbca-2142d542071d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:51:51 crc kubenswrapper[4958]: I1206 05:51:51.891198 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48ea1e14-d33a-45cd-bc32-655d29f95017-kube-api-access-7jdnf" (OuterVolumeSpecName: "kube-api-access-7jdnf") pod "48ea1e14-d33a-45cd-bc32-655d29f95017" (UID: "48ea1e14-d33a-45cd-bc32-655d29f95017"). InnerVolumeSpecName "kube-api-access-7jdnf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:51:51 crc kubenswrapper[4958]: I1206 05:51:51.891675 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/261f5bc1-8806-430c-bbca-2142d542071d-kube-api-access-xs8md" (OuterVolumeSpecName: "kube-api-access-xs8md") pod "261f5bc1-8806-430c-bbca-2142d542071d" (UID: "261f5bc1-8806-430c-bbca-2142d542071d"). InnerVolumeSpecName "kube-api-access-xs8md". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:51:51 crc kubenswrapper[4958]: I1206 05:51:51.891762 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/437479ce-fa34-40d2-af1c-a611eaaecc20-kube-api-access-lqhlm" (OuterVolumeSpecName: "kube-api-access-lqhlm") pod "437479ce-fa34-40d2-af1c-a611eaaecc20" (UID: "437479ce-fa34-40d2-af1c-a611eaaecc20"). InnerVolumeSpecName "kube-api-access-lqhlm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:51:51 crc kubenswrapper[4958]: I1206 05:51:51.989316 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7jdnf\" (UniqueName: \"kubernetes.io/projected/48ea1e14-d33a-45cd-bc32-655d29f95017-kube-api-access-7jdnf\") on node \"crc\" DevicePath \"\"" Dec 06 05:51:51 crc kubenswrapper[4958]: I1206 05:51:51.989348 4958 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/261f5bc1-8806-430c-bbca-2142d542071d-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 05:51:51 crc kubenswrapper[4958]: I1206 05:51:51.989358 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lqhlm\" (UniqueName: \"kubernetes.io/projected/437479ce-fa34-40d2-af1c-a611eaaecc20-kube-api-access-lqhlm\") on node \"crc\" DevicePath \"\"" Dec 06 05:51:51 crc kubenswrapper[4958]: I1206 05:51:51.989367 4958 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/437479ce-fa34-40d2-af1c-a611eaaecc20-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 05:51:51 crc kubenswrapper[4958]: I1206 05:51:51.989374 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xs8md\" (UniqueName: \"kubernetes.io/projected/261f5bc1-8806-430c-bbca-2142d542071d-kube-api-access-xs8md\") on node \"crc\" DevicePath \"\"" Dec 06 05:51:51 crc kubenswrapper[4958]: I1206 05:51:51.989383 4958 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48ea1e14-d33a-45cd-bc32-655d29f95017-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 05:51:52 crc kubenswrapper[4958]: I1206 05:51:52.300419 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d8c3d892-a529-436f-b8f1-3bb2a4ffbed2","Type":"ContainerStarted","Data":"9dfd4b8874bc4b356fc13494920546a99e7ed72004c519c12695740a6c263586"} Dec 06 05:51:52 crc kubenswrapper[4958]: I1206 05:51:52.301831 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-52pk5" event={"ID":"94f50bb5-d668-42ce-b9da-9364fcf27a33","Type":"ContainerDied","Data":"af697e57da38481791a50b81aa8008397b844a39b6b0210318ab895ede3b05fb"} Dec 06 05:51:52 crc kubenswrapper[4958]: I1206 05:51:52.301866 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af697e57da38481791a50b81aa8008397b844a39b6b0210318ab895ede3b05fb" Dec 06 05:51:52 crc kubenswrapper[4958]: I1206 05:51:52.301876 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-52pk5" Dec 06 05:51:52 crc kubenswrapper[4958]: I1206 05:51:52.303562 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-97fb-account-create-update-htb6t" Dec 06 05:51:52 crc kubenswrapper[4958]: I1206 05:51:52.303577 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-97fb-account-create-update-htb6t" event={"ID":"437479ce-fa34-40d2-af1c-a611eaaecc20","Type":"ContainerDied","Data":"16ae239883505be404348484231ee36354184c79047f4b9e85ad892765e1fbe2"} Dec 06 05:51:52 crc kubenswrapper[4958]: I1206 05:51:52.303617 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16ae239883505be404348484231ee36354184c79047f4b9e85ad892765e1fbe2" Dec 06 05:51:52 crc kubenswrapper[4958]: I1206 05:51:52.304876 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-qzlg2" Dec 06 05:51:52 crc kubenswrapper[4958]: I1206 05:51:52.304867 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-qzlg2" event={"ID":"261f5bc1-8806-430c-bbca-2142d542071d","Type":"ContainerDied","Data":"367496ca718cfd816390886ffbb61892cfe933eb9d8fd4a0b77447990dd1acc5"} Dec 06 05:51:52 crc kubenswrapper[4958]: I1206 05:51:52.304983 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="367496ca718cfd816390886ffbb61892cfe933eb9d8fd4a0b77447990dd1acc5" Dec 06 05:51:52 crc kubenswrapper[4958]: I1206 05:51:52.314762 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-jdfnk" event={"ID":"9d1dc22d-53a9-4aee-989b-fc253cd276cd","Type":"ContainerStarted","Data":"b041d8d1f976e0e77673295062b199a4d5d240851a73287b6a987e4c451ebdd8"} Dec 06 05:51:52 crc kubenswrapper[4958]: I1206 05:51:52.316489 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-g92gz" event={"ID":"48ea1e14-d33a-45cd-bc32-655d29f95017","Type":"ContainerDied","Data":"63e6acf1c74e17da135ade6e4d845f84762faabc055d51fbdf7d61254e7bb7be"} Dec 06 05:51:52 crc kubenswrapper[4958]: I1206 05:51:52.316510 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63e6acf1c74e17da135ade6e4d845f84762faabc055d51fbdf7d61254e7bb7be" Dec 06 05:51:52 crc kubenswrapper[4958]: I1206 05:51:52.316594 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-g92gz" Dec 06 05:51:53 crc kubenswrapper[4958]: I1206 05:51:53.331985 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-7mfjj" event={"ID":"35cf87c8-3462-476a-b396-26a24e954229","Type":"ContainerStarted","Data":"7c392effbb0545f503158b746e3db190f825eb641c79c693a1a7ec363541aaf3"} Dec 06 05:51:53 crc kubenswrapper[4958]: I1206 05:51:53.334367 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-l4znm" event={"ID":"7b244aca-d463-42f1-b8f9-d96dca44f635","Type":"ContainerStarted","Data":"19985583f93e44de7c0b244fde720dd3c31cec7252770a6aba20b5f0c149d448"} Dec 06 05:51:53 crc kubenswrapper[4958]: I1206 05:51:53.348884 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d8c3d892-a529-436f-b8f1-3bb2a4ffbed2","Type":"ContainerStarted","Data":"4a73afe45625dcc346960af4119e50556610bd4be051cc996ecfb931dc67fe4f"} Dec 06 05:51:53 crc kubenswrapper[4958]: I1206 05:51:53.348925 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d8c3d892-a529-436f-b8f1-3bb2a4ffbed2","Type":"ContainerStarted","Data":"222ae01c65b8099ac6076b0d386c91efce164c097c6a4dd99684506006383d5f"} Dec 06 05:51:53 crc kubenswrapper[4958]: I1206 05:51:53.385318 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-db-sync-7mfjj" podStartSLOduration=2.650200674 podStartE2EDuration="14.385299897s" podCreationTimestamp="2025-12-06 05:51:39 +0000 UTC" firstStartedPulling="2025-12-06 05:51:40.743392544 +0000 UTC m=+1411.277163307" lastFinishedPulling="2025-12-06 05:51:52.478491767 +0000 UTC m=+1423.012262530" observedRunningTime="2025-12-06 05:51:53.380779726 +0000 UTC m=+1423.914550489" watchObservedRunningTime="2025-12-06 05:51:53.385299897 +0000 UTC m=+1423.919070660" Dec 06 05:51:53 crc kubenswrapper[4958]: I1206 05:51:53.402668 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-l4znm" podStartSLOduration=2.846046592 podStartE2EDuration="14.402651264s" podCreationTimestamp="2025-12-06 05:51:39 +0000 UTC" firstStartedPulling="2025-12-06 05:51:40.903186213 +0000 UTC m=+1411.436956976" lastFinishedPulling="2025-12-06 05:51:52.459790885 +0000 UTC m=+1422.993561648" observedRunningTime="2025-12-06 05:51:53.401625516 +0000 UTC m=+1423.935396279" watchObservedRunningTime="2025-12-06 05:51:53.402651264 +0000 UTC m=+1423.936422027" Dec 06 05:51:54 crc kubenswrapper[4958]: I1206 05:51:54.371186 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d8c3d892-a529-436f-b8f1-3bb2a4ffbed2","Type":"ContainerStarted","Data":"9f9a9770393e3216fd9dddcf047956015bf31656c26f7550057b0d7105fc0dc1"} Dec 06 05:51:59 crc kubenswrapper[4958]: I1206 05:51:59.415987 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d8c3d892-a529-436f-b8f1-3bb2a4ffbed2","Type":"ContainerStarted","Data":"921fec4b86b9d9d708bb5ffce5646df25c7daf922b7da0211ee30a767012851b"} Dec 06 05:52:00 crc kubenswrapper[4958]: I1206 05:52:00.432419 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"d8c3d892-a529-436f-b8f1-3bb2a4ffbed2","Type":"ContainerStarted","Data":"f1cc153bf434041613c76eee9be93b93f33da95a7456dc8adaf4eac23ba0b52d"} Dec 06 05:52:00 crc kubenswrapper[4958]: I1206 05:52:00.432963 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/swift-storage-0" event={"ID":"d8c3d892-a529-436f-b8f1-3bb2a4ffbed2","Type":"ContainerStarted","Data":"03a07f66a1ac26d75ebe5621d6dd08717ae4b62fb27e6b5e8d90fb5367e8fa28"} Dec 06 05:52:00 crc kubenswrapper[4958]: I1206 05:52:00.476732 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=51.384140294 podStartE2EDuration="59.476712224s" podCreationTimestamp="2025-12-06 05:51:01 +0000 UTC" firstStartedPulling="2025-12-06 05:51:35.954208084 +0000 UTC m=+1406.487978847" lastFinishedPulling="2025-12-06 05:51:44.046780014 +0000 UTC m=+1414.580550777" observedRunningTime="2025-12-06 05:52:00.462160724 +0000 UTC m=+1430.995931487" watchObservedRunningTime="2025-12-06 05:52:00.476712224 +0000 UTC m=+1431.010482987" Dec 06 05:52:00 crc kubenswrapper[4958]: I1206 05:52:00.753627 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55b99bf79c-kcrvt"] Dec 06 05:52:00 crc kubenswrapper[4958]: E1206 05:52:00.754103 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48ea1e14-d33a-45cd-bc32-655d29f95017" containerName="mariadb-database-create" Dec 06 05:52:00 crc kubenswrapper[4958]: I1206 05:52:00.754125 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="48ea1e14-d33a-45cd-bc32-655d29f95017" containerName="mariadb-database-create" Dec 06 05:52:00 crc kubenswrapper[4958]: E1206 05:52:00.754148 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="437479ce-fa34-40d2-af1c-a611eaaecc20" containerName="mariadb-account-create-update" Dec 06 05:52:00 crc kubenswrapper[4958]: I1206 05:52:00.754158 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="437479ce-fa34-40d2-af1c-a611eaaecc20" containerName="mariadb-account-create-update" Dec 06 05:52:00 crc kubenswrapper[4958]: E1206 05:52:00.754178 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3954a2c3-13eb-442c-bce2-bcbb8e3c6eb1" containerName="mariadb-account-create-update" Dec 06 05:52:00 crc kubenswrapper[4958]: I1206 05:52:00.754189 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="3954a2c3-13eb-442c-bce2-bcbb8e3c6eb1" containerName="mariadb-account-create-update" Dec 06 05:52:00 crc kubenswrapper[4958]: E1206 05:52:00.754217 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="261f5bc1-8806-430c-bbca-2142d542071d" containerName="mariadb-database-create" Dec 06 05:52:00 crc kubenswrapper[4958]: I1206 05:52:00.754224 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="261f5bc1-8806-430c-bbca-2142d542071d" containerName="mariadb-database-create" Dec 06 05:52:00 crc kubenswrapper[4958]: E1206 05:52:00.754233 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94f50bb5-d668-42ce-b9da-9364fcf27a33" containerName="mariadb-database-create" Dec 06 05:52:00 crc kubenswrapper[4958]: I1206 05:52:00.754239 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="94f50bb5-d668-42ce-b9da-9364fcf27a33" containerName="mariadb-database-create" Dec 06 05:52:00 crc kubenswrapper[4958]: E1206 05:52:00.754252 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c9f41cd-4696-4b14-a48d-b202f0d6796b" containerName="mariadb-account-create-update" Dec 06 05:52:00 crc kubenswrapper[4958]: I1206 05:52:00.754259 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c9f41cd-4696-4b14-a48d-b202f0d6796b" containerName="mariadb-account-create-update" Dec 06 05:52:00 crc kubenswrapper[4958]: I1206 05:52:00.754506 4958 
memory_manager.go:354] "RemoveStaleState removing state" podUID="5c9f41cd-4696-4b14-a48d-b202f0d6796b" containerName="mariadb-account-create-update" Dec 06 05:52:00 crc kubenswrapper[4958]: I1206 05:52:00.754524 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="48ea1e14-d33a-45cd-bc32-655d29f95017" containerName="mariadb-database-create" Dec 06 05:52:00 crc kubenswrapper[4958]: I1206 05:52:00.754537 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="261f5bc1-8806-430c-bbca-2142d542071d" containerName="mariadb-database-create" Dec 06 05:52:00 crc kubenswrapper[4958]: I1206 05:52:00.754557 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="94f50bb5-d668-42ce-b9da-9364fcf27a33" containerName="mariadb-database-create" Dec 06 05:52:00 crc kubenswrapper[4958]: I1206 05:52:00.754569 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="3954a2c3-13eb-442c-bce2-bcbb8e3c6eb1" containerName="mariadb-account-create-update" Dec 06 05:52:00 crc kubenswrapper[4958]: I1206 05:52:00.754590 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="437479ce-fa34-40d2-af1c-a611eaaecc20" containerName="mariadb-account-create-update" Dec 06 05:52:00 crc kubenswrapper[4958]: I1206 05:52:00.755768 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55b99bf79c-kcrvt" Dec 06 05:52:00 crc kubenswrapper[4958]: I1206 05:52:00.757503 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Dec 06 05:52:00 crc kubenswrapper[4958]: I1206 05:52:00.774903 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55b99bf79c-kcrvt"] Dec 06 05:52:00 crc kubenswrapper[4958]: I1206 05:52:00.852209 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qtdt\" (UniqueName: \"kubernetes.io/projected/d2b3678d-be78-4e2d-930a-866c6d404166-kube-api-access-7qtdt\") pod \"dnsmasq-dns-55b99bf79c-kcrvt\" (UID: \"d2b3678d-be78-4e2d-930a-866c6d404166\") " pod="openstack/dnsmasq-dns-55b99bf79c-kcrvt" Dec 06 05:52:00 crc kubenswrapper[4958]: I1206 05:52:00.852260 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2b3678d-be78-4e2d-930a-866c6d404166-config\") pod \"dnsmasq-dns-55b99bf79c-kcrvt\" (UID: \"d2b3678d-be78-4e2d-930a-866c6d404166\") " pod="openstack/dnsmasq-dns-55b99bf79c-kcrvt" Dec 06 05:52:00 crc kubenswrapper[4958]: I1206 05:52:00.852318 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d2b3678d-be78-4e2d-930a-866c6d404166-dns-swift-storage-0\") pod \"dnsmasq-dns-55b99bf79c-kcrvt\" (UID: \"d2b3678d-be78-4e2d-930a-866c6d404166\") " pod="openstack/dnsmasq-dns-55b99bf79c-kcrvt" Dec 06 05:52:00 crc kubenswrapper[4958]: I1206 05:52:00.852405 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d2b3678d-be78-4e2d-930a-866c6d404166-ovsdbserver-nb\") pod \"dnsmasq-dns-55b99bf79c-kcrvt\" (UID: \"d2b3678d-be78-4e2d-930a-866c6d404166\") " pod="openstack/dnsmasq-dns-55b99bf79c-kcrvt" Dec 06 05:52:00 crc kubenswrapper[4958]: I1206 05:52:00.852444 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" 
(UniqueName: \"kubernetes.io/configmap/d2b3678d-be78-4e2d-930a-866c6d404166-ovsdbserver-sb\") pod \"dnsmasq-dns-55b99bf79c-kcrvt\" (UID: \"d2b3678d-be78-4e2d-930a-866c6d404166\") " pod="openstack/dnsmasq-dns-55b99bf79c-kcrvt" Dec 06 05:52:00 crc kubenswrapper[4958]: I1206 05:52:00.852491 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2b3678d-be78-4e2d-930a-866c6d404166-dns-svc\") pod \"dnsmasq-dns-55b99bf79c-kcrvt\" (UID: \"d2b3678d-be78-4e2d-930a-866c6d404166\") " pod="openstack/dnsmasq-dns-55b99bf79c-kcrvt" Dec 06 05:52:00 crc kubenswrapper[4958]: I1206 05:52:00.953689 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2b3678d-be78-4e2d-930a-866c6d404166-dns-svc\") pod \"dnsmasq-dns-55b99bf79c-kcrvt\" (UID: \"d2b3678d-be78-4e2d-930a-866c6d404166\") " pod="openstack/dnsmasq-dns-55b99bf79c-kcrvt" Dec 06 05:52:00 crc kubenswrapper[4958]: I1206 05:52:00.953757 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7qtdt\" (UniqueName: \"kubernetes.io/projected/d2b3678d-be78-4e2d-930a-866c6d404166-kube-api-access-7qtdt\") pod \"dnsmasq-dns-55b99bf79c-kcrvt\" (UID: \"d2b3678d-be78-4e2d-930a-866c6d404166\") " pod="openstack/dnsmasq-dns-55b99bf79c-kcrvt" Dec 06 05:52:00 crc kubenswrapper[4958]: I1206 05:52:00.953790 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2b3678d-be78-4e2d-930a-866c6d404166-config\") pod \"dnsmasq-dns-55b99bf79c-kcrvt\" (UID: \"d2b3678d-be78-4e2d-930a-866c6d404166\") " pod="openstack/dnsmasq-dns-55b99bf79c-kcrvt" Dec 06 05:52:00 crc kubenswrapper[4958]: I1206 05:52:00.953830 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d2b3678d-be78-4e2d-930a-866c6d404166-dns-swift-storage-0\") pod \"dnsmasq-dns-55b99bf79c-kcrvt\" (UID: \"d2b3678d-be78-4e2d-930a-866c6d404166\") " pod="openstack/dnsmasq-dns-55b99bf79c-kcrvt" Dec 06 05:52:00 crc kubenswrapper[4958]: I1206 05:52:00.953896 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d2b3678d-be78-4e2d-930a-866c6d404166-ovsdbserver-nb\") pod \"dnsmasq-dns-55b99bf79c-kcrvt\" (UID: \"d2b3678d-be78-4e2d-930a-866c6d404166\") " pod="openstack/dnsmasq-dns-55b99bf79c-kcrvt" Dec 06 05:52:00 crc kubenswrapper[4958]: I1206 05:52:00.953922 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d2b3678d-be78-4e2d-930a-866c6d404166-ovsdbserver-sb\") pod \"dnsmasq-dns-55b99bf79c-kcrvt\" (UID: \"d2b3678d-be78-4e2d-930a-866c6d404166\") " pod="openstack/dnsmasq-dns-55b99bf79c-kcrvt" Dec 06 05:52:00 crc kubenswrapper[4958]: I1206 05:52:00.954876 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d2b3678d-be78-4e2d-930a-866c6d404166-ovsdbserver-sb\") pod \"dnsmasq-dns-55b99bf79c-kcrvt\" (UID: \"d2b3678d-be78-4e2d-930a-866c6d404166\") " pod="openstack/dnsmasq-dns-55b99bf79c-kcrvt" Dec 06 05:52:00 crc kubenswrapper[4958]: I1206 05:52:00.955512 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/d2b3678d-be78-4e2d-930a-866c6d404166-config\") pod \"dnsmasq-dns-55b99bf79c-kcrvt\" (UID: \"d2b3678d-be78-4e2d-930a-866c6d404166\") " pod="openstack/dnsmasq-dns-55b99bf79c-kcrvt" Dec 06 05:52:00 crc kubenswrapper[4958]: I1206 05:52:00.955603 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d2b3678d-be78-4e2d-930a-866c6d404166-dns-swift-storage-0\") pod \"dnsmasq-dns-55b99bf79c-kcrvt\" (UID: \"d2b3678d-be78-4e2d-930a-866c6d404166\") " pod="openstack/dnsmasq-dns-55b99bf79c-kcrvt" Dec 06 05:52:00 crc kubenswrapper[4958]: I1206 05:52:00.956283 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2b3678d-be78-4e2d-930a-866c6d404166-dns-svc\") pod \"dnsmasq-dns-55b99bf79c-kcrvt\" (UID: \"d2b3678d-be78-4e2d-930a-866c6d404166\") " pod="openstack/dnsmasq-dns-55b99bf79c-kcrvt" Dec 06 05:52:00 crc kubenswrapper[4958]: I1206 05:52:00.956450 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d2b3678d-be78-4e2d-930a-866c6d404166-ovsdbserver-nb\") pod \"dnsmasq-dns-55b99bf79c-kcrvt\" (UID: \"d2b3678d-be78-4e2d-930a-866c6d404166\") " pod="openstack/dnsmasq-dns-55b99bf79c-kcrvt" Dec 06 05:52:00 crc kubenswrapper[4958]: I1206 05:52:00.979837 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7qtdt\" (UniqueName: \"kubernetes.io/projected/d2b3678d-be78-4e2d-930a-866c6d404166-kube-api-access-7qtdt\") pod \"dnsmasq-dns-55b99bf79c-kcrvt\" (UID: \"d2b3678d-be78-4e2d-930a-866c6d404166\") " pod="openstack/dnsmasq-dns-55b99bf79c-kcrvt" Dec 06 05:52:01 crc kubenswrapper[4958]: I1206 05:52:01.085384 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55b99bf79c-kcrvt" Dec 06 05:52:01 crc kubenswrapper[4958]: I1206 05:52:01.612012 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55b99bf79c-kcrvt"] Dec 06 05:52:02 crc kubenswrapper[4958]: I1206 05:52:02.450461 4958 generic.go:334] "Generic (PLEG): container finished" podID="d2b3678d-be78-4e2d-930a-866c6d404166" containerID="0fc04b32a3f19c540ccab940359d82f28d1357937cc29f79c5973795b691607f" exitCode=0 Dec 06 05:52:02 crc kubenswrapper[4958]: I1206 05:52:02.450529 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55b99bf79c-kcrvt" event={"ID":"d2b3678d-be78-4e2d-930a-866c6d404166","Type":"ContainerDied","Data":"0fc04b32a3f19c540ccab940359d82f28d1357937cc29f79c5973795b691607f"} Dec 06 05:52:02 crc kubenswrapper[4958]: I1206 05:52:02.451010 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55b99bf79c-kcrvt" event={"ID":"d2b3678d-be78-4e2d-930a-866c6d404166","Type":"ContainerStarted","Data":"0fd4f3bed645166ebaf5417e14ae1f62dadbffa0e4d05f2195645b7567bc7b1e"} Dec 06 05:52:06 crc kubenswrapper[4958]: I1206 05:52:06.506757 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55b99bf79c-kcrvt" event={"ID":"d2b3678d-be78-4e2d-930a-866c6d404166","Type":"ContainerStarted","Data":"c42009c782b2b103b1086d456f9a5f9bb0d1ca441436d8928baca766619079b1"} Dec 06 05:52:09 crc kubenswrapper[4958]: I1206 05:52:09.534108 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55b99bf79c-kcrvt" Dec 06 05:52:09 crc kubenswrapper[4958]: I1206 05:52:09.558939 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55b99bf79c-kcrvt" podStartSLOduration=9.558921125 podStartE2EDuration="9.558921125s" podCreationTimestamp="2025-12-06 05:52:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:52:09.554544488 +0000 UTC m=+1440.088315251" watchObservedRunningTime="2025-12-06 05:52:09.558921125 +0000 UTC m=+1440.092691888" Dec 06 05:52:09 crc kubenswrapper[4958]: I1206 05:52:09.866298 4958 patch_prober.go:28] interesting pod/machine-config-daemon-5ktnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 05:52:09 crc kubenswrapper[4958]: I1206 05:52:09.866356 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 05:52:10 crc kubenswrapper[4958]: I1206 05:52:10.542616 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55b99bf79c-kcrvt" Dec 06 05:52:10 crc kubenswrapper[4958]: I1206 05:52:10.615015 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-76f9c4c8bc-96l4z"] Dec 06 05:52:10 crc kubenswrapper[4958]: I1206 05:52:10.615313 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-76f9c4c8bc-96l4z" podUID="bd2c4c39-fb58-4561-9bc6-1a18dfb7af9c" containerName="dnsmasq-dns" 
containerID="cri-o://f9efd5018e955818c5c03e105d4b290a348a0e3d5ff159604435a53cef91aab6" gracePeriod=10 Dec 06 05:52:12 crc kubenswrapper[4958]: I1206 05:52:12.263931 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-76f9c4c8bc-96l4z" podUID="bd2c4c39-fb58-4561-9bc6-1a18dfb7af9c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.127:5353: connect: connection refused" Dec 06 05:52:17 crc kubenswrapper[4958]: I1206 05:52:17.278390 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-76f9c4c8bc-96l4z" podUID="bd2c4c39-fb58-4561-9bc6-1a18dfb7af9c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.127:5353: connect: connection refused" Dec 06 05:52:17 crc kubenswrapper[4958]: E1206 05:52:17.372861 4958 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-glance-api:current" Dec 06 05:52:17 crc kubenswrapper[4958]: E1206 05:52:17.372918 4958 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-glance-api:current" Dec 06 05:52:17 crc kubenswrapper[4958]: E1206 05:52:17.373059 4958 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:glance-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-glance-api:current,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/glance/glance.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bptxz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42415,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42415,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-db-sync-jdfnk_openstack(9d1dc22d-53a9-4aee-989b-fc253cd276cd): ErrImagePull: rpc error: code = Canceled 
desc = copying config: context canceled" logger="UnhandledError" Dec 06 05:52:17 crc kubenswrapper[4958]: E1206 05:52:17.376225 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/glance-db-sync-jdfnk" podUID="9d1dc22d-53a9-4aee-989b-fc253cd276cd" Dec 06 05:52:17 crc kubenswrapper[4958]: I1206 05:52:17.573486 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76f9c4c8bc-96l4z" Dec 06 05:52:17 crc kubenswrapper[4958]: I1206 05:52:17.613037 4958 generic.go:334] "Generic (PLEG): container finished" podID="bd2c4c39-fb58-4561-9bc6-1a18dfb7af9c" containerID="f9efd5018e955818c5c03e105d4b290a348a0e3d5ff159604435a53cef91aab6" exitCode=0 Dec 06 05:52:17 crc kubenswrapper[4958]: I1206 05:52:17.613722 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76f9c4c8bc-96l4z" Dec 06 05:52:17 crc kubenswrapper[4958]: I1206 05:52:17.613871 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76f9c4c8bc-96l4z" event={"ID":"bd2c4c39-fb58-4561-9bc6-1a18dfb7af9c","Type":"ContainerDied","Data":"f9efd5018e955818c5c03e105d4b290a348a0e3d5ff159604435a53cef91aab6"} Dec 06 05:52:17 crc kubenswrapper[4958]: I1206 05:52:17.613896 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76f9c4c8bc-96l4z" event={"ID":"bd2c4c39-fb58-4561-9bc6-1a18dfb7af9c","Type":"ContainerDied","Data":"e4f39b1e83b4b3bfe95feb5d858a95e3f858f5e13a46f48742ddf413cb5ebc20"} Dec 06 05:52:17 crc kubenswrapper[4958]: I1206 05:52:17.613913 4958 scope.go:117] "RemoveContainer" containerID="f9efd5018e955818c5c03e105d4b290a348a0e3d5ff159604435a53cef91aab6" Dec 06 05:52:17 crc kubenswrapper[4958]: E1206 05:52:17.614705 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"glance-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-glance-api:current\\\"\"" pod="openstack/glance-db-sync-jdfnk" podUID="9d1dc22d-53a9-4aee-989b-fc253cd276cd" Dec 06 05:52:17 crc kubenswrapper[4958]: I1206 05:52:17.642532 4958 scope.go:117] "RemoveContainer" containerID="91a8c6f2fef855c57e9caac98d95686c7ede579f9977179e48c85d5d4bcaf13f" Dec 06 05:52:17 crc kubenswrapper[4958]: I1206 05:52:17.661199 4958 scope.go:117] "RemoveContainer" containerID="f9efd5018e955818c5c03e105d4b290a348a0e3d5ff159604435a53cef91aab6" Dec 06 05:52:17 crc kubenswrapper[4958]: E1206 05:52:17.663301 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9efd5018e955818c5c03e105d4b290a348a0e3d5ff159604435a53cef91aab6\": container with ID starting with f9efd5018e955818c5c03e105d4b290a348a0e3d5ff159604435a53cef91aab6 not found: ID does not exist" containerID="f9efd5018e955818c5c03e105d4b290a348a0e3d5ff159604435a53cef91aab6" Dec 06 05:52:17 crc kubenswrapper[4958]: I1206 05:52:17.663340 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9efd5018e955818c5c03e105d4b290a348a0e3d5ff159604435a53cef91aab6"} err="failed to get container status \"f9efd5018e955818c5c03e105d4b290a348a0e3d5ff159604435a53cef91aab6\": rpc error: code = NotFound desc = could not find container \"f9efd5018e955818c5c03e105d4b290a348a0e3d5ff159604435a53cef91aab6\": container with ID 
starting with f9efd5018e955818c5c03e105d4b290a348a0e3d5ff159604435a53cef91aab6 not found: ID does not exist" Dec 06 05:52:17 crc kubenswrapper[4958]: I1206 05:52:17.663384 4958 scope.go:117] "RemoveContainer" containerID="91a8c6f2fef855c57e9caac98d95686c7ede579f9977179e48c85d5d4bcaf13f" Dec 06 05:52:17 crc kubenswrapper[4958]: E1206 05:52:17.663768 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91a8c6f2fef855c57e9caac98d95686c7ede579f9977179e48c85d5d4bcaf13f\": container with ID starting with 91a8c6f2fef855c57e9caac98d95686c7ede579f9977179e48c85d5d4bcaf13f not found: ID does not exist" containerID="91a8c6f2fef855c57e9caac98d95686c7ede579f9977179e48c85d5d4bcaf13f" Dec 06 05:52:17 crc kubenswrapper[4958]: I1206 05:52:17.663806 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91a8c6f2fef855c57e9caac98d95686c7ede579f9977179e48c85d5d4bcaf13f"} err="failed to get container status \"91a8c6f2fef855c57e9caac98d95686c7ede579f9977179e48c85d5d4bcaf13f\": rpc error: code = NotFound desc = could not find container \"91a8c6f2fef855c57e9caac98d95686c7ede579f9977179e48c85d5d4bcaf13f\": container with ID starting with 91a8c6f2fef855c57e9caac98d95686c7ede579f9977179e48c85d5d4bcaf13f not found: ID does not exist" Dec 06 05:52:17 crc kubenswrapper[4958]: I1206 05:52:17.696283 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bd2c4c39-fb58-4561-9bc6-1a18dfb7af9c-ovsdbserver-nb\") pod \"bd2c4c39-fb58-4561-9bc6-1a18dfb7af9c\" (UID: \"bd2c4c39-fb58-4561-9bc6-1a18dfb7af9c\") " Dec 06 05:52:17 crc kubenswrapper[4958]: I1206 05:52:17.696383 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd2c4c39-fb58-4561-9bc6-1a18dfb7af9c-config\") pod \"bd2c4c39-fb58-4561-9bc6-1a18dfb7af9c\" (UID: \"bd2c4c39-fb58-4561-9bc6-1a18dfb7af9c\") " Dec 06 05:52:17 crc kubenswrapper[4958]: I1206 05:52:17.696425 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bd2c4c39-fb58-4561-9bc6-1a18dfb7af9c-ovsdbserver-sb\") pod \"bd2c4c39-fb58-4561-9bc6-1a18dfb7af9c\" (UID: \"bd2c4c39-fb58-4561-9bc6-1a18dfb7af9c\") " Dec 06 05:52:17 crc kubenswrapper[4958]: I1206 05:52:17.696631 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dn5jr\" (UniqueName: \"kubernetes.io/projected/bd2c4c39-fb58-4561-9bc6-1a18dfb7af9c-kube-api-access-dn5jr\") pod \"bd2c4c39-fb58-4561-9bc6-1a18dfb7af9c\" (UID: \"bd2c4c39-fb58-4561-9bc6-1a18dfb7af9c\") " Dec 06 05:52:17 crc kubenswrapper[4958]: I1206 05:52:17.697009 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd2c4c39-fb58-4561-9bc6-1a18dfb7af9c-dns-svc\") pod \"bd2c4c39-fb58-4561-9bc6-1a18dfb7af9c\" (UID: \"bd2c4c39-fb58-4561-9bc6-1a18dfb7af9c\") " Dec 06 05:52:17 crc kubenswrapper[4958]: I1206 05:52:17.703608 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd2c4c39-fb58-4561-9bc6-1a18dfb7af9c-kube-api-access-dn5jr" (OuterVolumeSpecName: "kube-api-access-dn5jr") pod "bd2c4c39-fb58-4561-9bc6-1a18dfb7af9c" (UID: "bd2c4c39-fb58-4561-9bc6-1a18dfb7af9c"). InnerVolumeSpecName "kube-api-access-dn5jr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:52:17 crc kubenswrapper[4958]: I1206 05:52:17.750130 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd2c4c39-fb58-4561-9bc6-1a18dfb7af9c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "bd2c4c39-fb58-4561-9bc6-1a18dfb7af9c" (UID: "bd2c4c39-fb58-4561-9bc6-1a18dfb7af9c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:52:17 crc kubenswrapper[4958]: I1206 05:52:17.754558 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd2c4c39-fb58-4561-9bc6-1a18dfb7af9c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bd2c4c39-fb58-4561-9bc6-1a18dfb7af9c" (UID: "bd2c4c39-fb58-4561-9bc6-1a18dfb7af9c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:52:17 crc kubenswrapper[4958]: I1206 05:52:17.760811 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd2c4c39-fb58-4561-9bc6-1a18dfb7af9c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "bd2c4c39-fb58-4561-9bc6-1a18dfb7af9c" (UID: "bd2c4c39-fb58-4561-9bc6-1a18dfb7af9c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:52:17 crc kubenswrapper[4958]: I1206 05:52:17.767797 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd2c4c39-fb58-4561-9bc6-1a18dfb7af9c-config" (OuterVolumeSpecName: "config") pod "bd2c4c39-fb58-4561-9bc6-1a18dfb7af9c" (UID: "bd2c4c39-fb58-4561-9bc6-1a18dfb7af9c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:52:17 crc kubenswrapper[4958]: I1206 05:52:17.799416 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dn5jr\" (UniqueName: \"kubernetes.io/projected/bd2c4c39-fb58-4561-9bc6-1a18dfb7af9c-kube-api-access-dn5jr\") on node \"crc\" DevicePath \"\"" Dec 06 05:52:17 crc kubenswrapper[4958]: I1206 05:52:17.799447 4958 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd2c4c39-fb58-4561-9bc6-1a18dfb7af9c-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 06 05:52:17 crc kubenswrapper[4958]: I1206 05:52:17.799457 4958 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bd2c4c39-fb58-4561-9bc6-1a18dfb7af9c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 06 05:52:17 crc kubenswrapper[4958]: I1206 05:52:17.799484 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd2c4c39-fb58-4561-9bc6-1a18dfb7af9c-config\") on node \"crc\" DevicePath \"\"" Dec 06 05:52:17 crc kubenswrapper[4958]: I1206 05:52:17.799495 4958 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bd2c4c39-fb58-4561-9bc6-1a18dfb7af9c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 06 05:52:17 crc kubenswrapper[4958]: I1206 05:52:17.936491 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-76f9c4c8bc-96l4z"] Dec 06 05:52:17 crc kubenswrapper[4958]: I1206 05:52:17.943957 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-76f9c4c8bc-96l4z"] Dec 06 05:52:19 crc kubenswrapper[4958]: I1206 05:52:19.775379 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="bd2c4c39-fb58-4561-9bc6-1a18dfb7af9c" path="/var/lib/kubelet/pods/bd2c4c39-fb58-4561-9bc6-1a18dfb7af9c/volumes" Dec 06 05:52:20 crc kubenswrapper[4958]: E1206 05:52:20.862187 4958 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod35cf87c8_3462_476a_b396_26a24e954229.slice/crio-conmon-7c392effbb0545f503158b746e3db190f825eb641c79c693a1a7ec363541aaf3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod35cf87c8_3462_476a_b396_26a24e954229.slice/crio-7c392effbb0545f503158b746e3db190f825eb641c79c693a1a7ec363541aaf3.scope\": RecentStats: unable to find data in memory cache]" Dec 06 05:52:21 crc kubenswrapper[4958]: I1206 05:52:21.648860 4958 generic.go:334] "Generic (PLEG): container finished" podID="35cf87c8-3462-476a-b396-26a24e954229" containerID="7c392effbb0545f503158b746e3db190f825eb641c79c693a1a7ec363541aaf3" exitCode=0 Dec 06 05:52:21 crc kubenswrapper[4958]: I1206 05:52:21.648998 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-7mfjj" event={"ID":"35cf87c8-3462-476a-b396-26a24e954229","Type":"ContainerDied","Data":"7c392effbb0545f503158b746e3db190f825eb641c79c693a1a7ec363541aaf3"} Dec 06 05:52:23 crc kubenswrapper[4958]: I1206 05:52:23.036875 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-db-sync-7mfjj" Dec 06 05:52:23 crc kubenswrapper[4958]: I1206 05:52:23.184996 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j7x4s\" (UniqueName: \"kubernetes.io/projected/35cf87c8-3462-476a-b396-26a24e954229-kube-api-access-j7x4s\") pod \"35cf87c8-3462-476a-b396-26a24e954229\" (UID: \"35cf87c8-3462-476a-b396-26a24e954229\") " Dec 06 05:52:23 crc kubenswrapper[4958]: I1206 05:52:23.185056 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35cf87c8-3462-476a-b396-26a24e954229-combined-ca-bundle\") pod \"35cf87c8-3462-476a-b396-26a24e954229\" (UID: \"35cf87c8-3462-476a-b396-26a24e954229\") " Dec 06 05:52:23 crc kubenswrapper[4958]: I1206 05:52:23.185218 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35cf87c8-3462-476a-b396-26a24e954229-config-data\") pod \"35cf87c8-3462-476a-b396-26a24e954229\" (UID: \"35cf87c8-3462-476a-b396-26a24e954229\") " Dec 06 05:52:23 crc kubenswrapper[4958]: I1206 05:52:23.185255 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/35cf87c8-3462-476a-b396-26a24e954229-db-sync-config-data\") pod \"35cf87c8-3462-476a-b396-26a24e954229\" (UID: \"35cf87c8-3462-476a-b396-26a24e954229\") " Dec 06 05:52:23 crc kubenswrapper[4958]: I1206 05:52:23.191226 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35cf87c8-3462-476a-b396-26a24e954229-kube-api-access-j7x4s" (OuterVolumeSpecName: "kube-api-access-j7x4s") pod "35cf87c8-3462-476a-b396-26a24e954229" (UID: "35cf87c8-3462-476a-b396-26a24e954229"). InnerVolumeSpecName "kube-api-access-j7x4s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:52:23 crc kubenswrapper[4958]: I1206 05:52:23.192170 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35cf87c8-3462-476a-b396-26a24e954229-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "35cf87c8-3462-476a-b396-26a24e954229" (UID: "35cf87c8-3462-476a-b396-26a24e954229"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:52:23 crc kubenswrapper[4958]: I1206 05:52:23.212896 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35cf87c8-3462-476a-b396-26a24e954229-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "35cf87c8-3462-476a-b396-26a24e954229" (UID: "35cf87c8-3462-476a-b396-26a24e954229"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:52:23 crc kubenswrapper[4958]: I1206 05:52:23.236928 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35cf87c8-3462-476a-b396-26a24e954229-config-data" (OuterVolumeSpecName: "config-data") pod "35cf87c8-3462-476a-b396-26a24e954229" (UID: "35cf87c8-3462-476a-b396-26a24e954229"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:52:23 crc kubenswrapper[4958]: I1206 05:52:23.287557 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35cf87c8-3462-476a-b396-26a24e954229-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 05:52:23 crc kubenswrapper[4958]: I1206 05:52:23.287696 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35cf87c8-3462-476a-b396-26a24e954229-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 05:52:23 crc kubenswrapper[4958]: I1206 05:52:23.287763 4958 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/35cf87c8-3462-476a-b396-26a24e954229-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 05:52:23 crc kubenswrapper[4958]: I1206 05:52:23.287777 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j7x4s\" (UniqueName: \"kubernetes.io/projected/35cf87c8-3462-476a-b396-26a24e954229-kube-api-access-j7x4s\") on node \"crc\" DevicePath \"\"" Dec 06 05:52:23 crc kubenswrapper[4958]: I1206 05:52:23.667572 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-db-sync-7mfjj" event={"ID":"35cf87c8-3462-476a-b396-26a24e954229","Type":"ContainerDied","Data":"f7e7932b15fc1e3167322e483fcfffb32b17558a6d3db25bf53a24c252c562d0"} Dec 06 05:52:23 crc kubenswrapper[4958]: I1206 05:52:23.667618 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f7e7932b15fc1e3167322e483fcfffb32b17558a6d3db25bf53a24c252c562d0" Dec 06 05:52:23 crc kubenswrapper[4958]: I1206 05:52:23.667662 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-db-sync-7mfjj" Dec 06 05:52:30 crc kubenswrapper[4958]: I1206 05:52:30.745781 4958 generic.go:334] "Generic (PLEG): container finished" podID="7b244aca-d463-42f1-b8f9-d96dca44f635" containerID="19985583f93e44de7c0b244fde720dd3c31cec7252770a6aba20b5f0c149d448" exitCode=0 Dec 06 05:52:30 crc kubenswrapper[4958]: I1206 05:52:30.746140 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-l4znm" event={"ID":"7b244aca-d463-42f1-b8f9-d96dca44f635","Type":"ContainerDied","Data":"19985583f93e44de7c0b244fde720dd3c31cec7252770a6aba20b5f0c149d448"} Dec 06 05:52:30 crc kubenswrapper[4958]: I1206 05:52:30.764000 4958 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 06 05:52:32 crc kubenswrapper[4958]: I1206 05:52:32.085471 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-l4znm" Dec 06 05:52:32 crc kubenswrapper[4958]: I1206 05:52:32.231260 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b244aca-d463-42f1-b8f9-d96dca44f635-config-data\") pod \"7b244aca-d463-42f1-b8f9-d96dca44f635\" (UID: \"7b244aca-d463-42f1-b8f9-d96dca44f635\") " Dec 06 05:52:32 crc kubenswrapper[4958]: I1206 05:52:32.231359 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hgdgw\" (UniqueName: \"kubernetes.io/projected/7b244aca-d463-42f1-b8f9-d96dca44f635-kube-api-access-hgdgw\") pod \"7b244aca-d463-42f1-b8f9-d96dca44f635\" (UID: \"7b244aca-d463-42f1-b8f9-d96dca44f635\") " Dec 06 05:52:32 crc kubenswrapper[4958]: I1206 05:52:32.231581 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b244aca-d463-42f1-b8f9-d96dca44f635-combined-ca-bundle\") pod \"7b244aca-d463-42f1-b8f9-d96dca44f635\" (UID: \"7b244aca-d463-42f1-b8f9-d96dca44f635\") " Dec 06 05:52:32 crc kubenswrapper[4958]: I1206 05:52:32.237549 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b244aca-d463-42f1-b8f9-d96dca44f635-kube-api-access-hgdgw" (OuterVolumeSpecName: "kube-api-access-hgdgw") pod "7b244aca-d463-42f1-b8f9-d96dca44f635" (UID: "7b244aca-d463-42f1-b8f9-d96dca44f635"). InnerVolumeSpecName "kube-api-access-hgdgw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:52:32 crc kubenswrapper[4958]: I1206 05:52:32.258305 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b244aca-d463-42f1-b8f9-d96dca44f635-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7b244aca-d463-42f1-b8f9-d96dca44f635" (UID: "7b244aca-d463-42f1-b8f9-d96dca44f635"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:52:32 crc kubenswrapper[4958]: I1206 05:52:32.282642 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b244aca-d463-42f1-b8f9-d96dca44f635-config-data" (OuterVolumeSpecName: "config-data") pod "7b244aca-d463-42f1-b8f9-d96dca44f635" (UID: "7b244aca-d463-42f1-b8f9-d96dca44f635"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:52:32 crc kubenswrapper[4958]: I1206 05:52:32.333261 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b244aca-d463-42f1-b8f9-d96dca44f635-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 05:52:32 crc kubenswrapper[4958]: I1206 05:52:32.333304 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7b244aca-d463-42f1-b8f9-d96dca44f635-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 05:52:32 crc kubenswrapper[4958]: I1206 05:52:32.333315 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hgdgw\" (UniqueName: \"kubernetes.io/projected/7b244aca-d463-42f1-b8f9-d96dca44f635-kube-api-access-hgdgw\") on node \"crc\" DevicePath \"\"" Dec 06 05:52:32 crc kubenswrapper[4958]: I1206 05:52:32.765748 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-jdfnk" event={"ID":"9d1dc22d-53a9-4aee-989b-fc253cd276cd","Type":"ContainerStarted","Data":"4fb815c2a706f214a8537ed307b11c2a0bb67231931011937e5d016c3fe89571"} Dec 06 05:52:32 crc kubenswrapper[4958]: I1206 05:52:32.767674 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-l4znm" event={"ID":"7b244aca-d463-42f1-b8f9-d96dca44f635","Type":"ContainerDied","Data":"79947882f96590b3a4ccccac8710ee3370f67445afc0fe9db3b0602967d9263b"} Dec 06 05:52:32 crc kubenswrapper[4958]: I1206 05:52:32.767718 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-l4znm" Dec 06 05:52:32 crc kubenswrapper[4958]: I1206 05:52:32.767721 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="79947882f96590b3a4ccccac8710ee3370f67445afc0fe9db3b0602967d9263b" Dec 06 05:52:32 crc kubenswrapper[4958]: I1206 05:52:32.801957 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-jdfnk" podStartSLOduration=11.281644239 podStartE2EDuration="50.801937062s" podCreationTimestamp="2025-12-06 05:51:42 +0000 UTC" firstStartedPulling="2025-12-06 05:51:51.631584045 +0000 UTC m=+1422.165354808" lastFinishedPulling="2025-12-06 05:52:31.151876868 +0000 UTC m=+1461.685647631" observedRunningTime="2025-12-06 05:52:32.797850792 +0000 UTC m=+1463.331621565" watchObservedRunningTime="2025-12-06 05:52:32.801937062 +0000 UTC m=+1463.335707825" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.037830 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-58bbf48b7f-w8tj6"] Dec 06 05:52:33 crc kubenswrapper[4958]: E1206 05:52:33.038556 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b244aca-d463-42f1-b8f9-d96dca44f635" containerName="keystone-db-sync" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.038600 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b244aca-d463-42f1-b8f9-d96dca44f635" containerName="keystone-db-sync" Dec 06 05:52:33 crc kubenswrapper[4958]: E1206 05:52:33.038625 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd2c4c39-fb58-4561-9bc6-1a18dfb7af9c" containerName="dnsmasq-dns" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.038632 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd2c4c39-fb58-4561-9bc6-1a18dfb7af9c" containerName="dnsmasq-dns" Dec 06 05:52:33 crc kubenswrapper[4958]: E1206 05:52:33.038645 4958 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="bd2c4c39-fb58-4561-9bc6-1a18dfb7af9c" containerName="init" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.038651 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd2c4c39-fb58-4561-9bc6-1a18dfb7af9c" containerName="init" Dec 06 05:52:33 crc kubenswrapper[4958]: E1206 05:52:33.038660 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35cf87c8-3462-476a-b396-26a24e954229" containerName="watcher-db-sync" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.038667 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="35cf87c8-3462-476a-b396-26a24e954229" containerName="watcher-db-sync" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.038830 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd2c4c39-fb58-4561-9bc6-1a18dfb7af9c" containerName="dnsmasq-dns" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.038851 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b244aca-d463-42f1-b8f9-d96dca44f635" containerName="keystone-db-sync" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.038878 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="35cf87c8-3462-476a-b396-26a24e954229" containerName="watcher-db-sync" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.039895 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58bbf48b7f-w8tj6" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.053981 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58bbf48b7f-w8tj6"] Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.094002 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-pl6mr"] Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.095221 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-pl6mr" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.100385 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-mbqwc" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.100516 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.100625 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.100783 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.100999 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.123826 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-pl6mr"] Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.146795 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d2af10ec-6c9f-4eb8-9665-13449df417ab-ovsdbserver-nb\") pod \"dnsmasq-dns-58bbf48b7f-w8tj6\" (UID: \"d2af10ec-6c9f-4eb8-9665-13449df417ab\") " pod="openstack/dnsmasq-dns-58bbf48b7f-w8tj6" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.146875 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d2af10ec-6c9f-4eb8-9665-13449df417ab-dns-swift-storage-0\") pod \"dnsmasq-dns-58bbf48b7f-w8tj6\" (UID: \"d2af10ec-6c9f-4eb8-9665-13449df417ab\") " pod="openstack/dnsmasq-dns-58bbf48b7f-w8tj6" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.146896 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d2af10ec-6c9f-4eb8-9665-13449df417ab-ovsdbserver-sb\") pod \"dnsmasq-dns-58bbf48b7f-w8tj6\" (UID: \"d2af10ec-6c9f-4eb8-9665-13449df417ab\") " pod="openstack/dnsmasq-dns-58bbf48b7f-w8tj6" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.146917 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2af10ec-6c9f-4eb8-9665-13449df417ab-config\") pod \"dnsmasq-dns-58bbf48b7f-w8tj6\" (UID: \"d2af10ec-6c9f-4eb8-9665-13449df417ab\") " pod="openstack/dnsmasq-dns-58bbf48b7f-w8tj6" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.146935 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2af10ec-6c9f-4eb8-9665-13449df417ab-dns-svc\") pod \"dnsmasq-dns-58bbf48b7f-w8tj6\" (UID: \"d2af10ec-6c9f-4eb8-9665-13449df417ab\") " pod="openstack/dnsmasq-dns-58bbf48b7f-w8tj6" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.146965 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k26lm\" (UniqueName: \"kubernetes.io/projected/d2af10ec-6c9f-4eb8-9665-13449df417ab-kube-api-access-k26lm\") pod \"dnsmasq-dns-58bbf48b7f-w8tj6\" (UID: \"d2af10ec-6c9f-4eb8-9665-13449df417ab\") " pod="openstack/dnsmasq-dns-58bbf48b7f-w8tj6" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.181593 4958 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"] Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.183393 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.185858 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-watcher-dockercfg-2dxx6" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.186049 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.252995 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6m79p\" (UniqueName: \"kubernetes.io/projected/d7e65e35-934e-4123-84a2-bc92b2b1213e-kube-api-access-6m79p\") pod \"watcher-api-0\" (UID: \"d7e65e35-934e-4123-84a2-bc92b2b1213e\") " pod="openstack/watcher-api-0" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.253075 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7x7f\" (UniqueName: \"kubernetes.io/projected/3ef63ff9-e0db-4b73-881b-2ad904bcaac5-kube-api-access-v7x7f\") pod \"keystone-bootstrap-pl6mr\" (UID: \"3ef63ff9-e0db-4b73-881b-2ad904bcaac5\") " pod="openstack/keystone-bootstrap-pl6mr" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.253104 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7e65e35-934e-4123-84a2-bc92b2b1213e-logs\") pod \"watcher-api-0\" (UID: \"d7e65e35-934e-4123-84a2-bc92b2b1213e\") " pod="openstack/watcher-api-0" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.253144 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d2af10ec-6c9f-4eb8-9665-13449df417ab-dns-swift-storage-0\") pod \"dnsmasq-dns-58bbf48b7f-w8tj6\" (UID: \"d2af10ec-6c9f-4eb8-9665-13449df417ab\") " pod="openstack/dnsmasq-dns-58bbf48b7f-w8tj6" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.253183 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d2af10ec-6c9f-4eb8-9665-13449df417ab-ovsdbserver-sb\") pod \"dnsmasq-dns-58bbf48b7f-w8tj6\" (UID: \"d2af10ec-6c9f-4eb8-9665-13449df417ab\") " pod="openstack/dnsmasq-dns-58bbf48b7f-w8tj6" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.253217 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2af10ec-6c9f-4eb8-9665-13449df417ab-config\") pod \"dnsmasq-dns-58bbf48b7f-w8tj6\" (UID: \"d2af10ec-6c9f-4eb8-9665-13449df417ab\") " pod="openstack/dnsmasq-dns-58bbf48b7f-w8tj6" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.253245 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2af10ec-6c9f-4eb8-9665-13449df417ab-dns-svc\") pod \"dnsmasq-dns-58bbf48b7f-w8tj6\" (UID: \"d2af10ec-6c9f-4eb8-9665-13449df417ab\") " pod="openstack/dnsmasq-dns-58bbf48b7f-w8tj6" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.253302 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k26lm\" (UniqueName: 
\"kubernetes.io/projected/d2af10ec-6c9f-4eb8-9665-13449df417ab-kube-api-access-k26lm\") pod \"dnsmasq-dns-58bbf48b7f-w8tj6\" (UID: \"d2af10ec-6c9f-4eb8-9665-13449df417ab\") " pod="openstack/dnsmasq-dns-58bbf48b7f-w8tj6" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.253338 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7e65e35-934e-4123-84a2-bc92b2b1213e-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"d7e65e35-934e-4123-84a2-bc92b2b1213e\") " pod="openstack/watcher-api-0" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.253364 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/d7e65e35-934e-4123-84a2-bc92b2b1213e-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"d7e65e35-934e-4123-84a2-bc92b2b1213e\") " pod="openstack/watcher-api-0" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.253394 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3ef63ff9-e0db-4b73-881b-2ad904bcaac5-credential-keys\") pod \"keystone-bootstrap-pl6mr\" (UID: \"3ef63ff9-e0db-4b73-881b-2ad904bcaac5\") " pod="openstack/keystone-bootstrap-pl6mr" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.253448 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ef63ff9-e0db-4b73-881b-2ad904bcaac5-combined-ca-bundle\") pod \"keystone-bootstrap-pl6mr\" (UID: \"3ef63ff9-e0db-4b73-881b-2ad904bcaac5\") " pod="openstack/keystone-bootstrap-pl6mr" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.253544 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d2af10ec-6c9f-4eb8-9665-13449df417ab-ovsdbserver-nb\") pod \"dnsmasq-dns-58bbf48b7f-w8tj6\" (UID: \"d2af10ec-6c9f-4eb8-9665-13449df417ab\") " pod="openstack/dnsmasq-dns-58bbf48b7f-w8tj6" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.253585 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ef63ff9-e0db-4b73-881b-2ad904bcaac5-config-data\") pod \"keystone-bootstrap-pl6mr\" (UID: \"3ef63ff9-e0db-4b73-881b-2ad904bcaac5\") " pod="openstack/keystone-bootstrap-pl6mr" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.253612 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7e65e35-934e-4123-84a2-bc92b2b1213e-config-data\") pod \"watcher-api-0\" (UID: \"d7e65e35-934e-4123-84a2-bc92b2b1213e\") " pod="openstack/watcher-api-0" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.253642 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3ef63ff9-e0db-4b73-881b-2ad904bcaac5-fernet-keys\") pod \"keystone-bootstrap-pl6mr\" (UID: \"3ef63ff9-e0db-4b73-881b-2ad904bcaac5\") " pod="openstack/keystone-bootstrap-pl6mr" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.253700 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/3ef63ff9-e0db-4b73-881b-2ad904bcaac5-scripts\") pod \"keystone-bootstrap-pl6mr\" (UID: \"3ef63ff9-e0db-4b73-881b-2ad904bcaac5\") " pod="openstack/keystone-bootstrap-pl6mr" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.254952 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2af10ec-6c9f-4eb8-9665-13449df417ab-config\") pod \"dnsmasq-dns-58bbf48b7f-w8tj6\" (UID: \"d2af10ec-6c9f-4eb8-9665-13449df417ab\") " pod="openstack/dnsmasq-dns-58bbf48b7f-w8tj6" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.255286 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d2af10ec-6c9f-4eb8-9665-13449df417ab-dns-swift-storage-0\") pod \"dnsmasq-dns-58bbf48b7f-w8tj6\" (UID: \"d2af10ec-6c9f-4eb8-9665-13449df417ab\") " pod="openstack/dnsmasq-dns-58bbf48b7f-w8tj6" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.255446 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d2af10ec-6c9f-4eb8-9665-13449df417ab-ovsdbserver-sb\") pod \"dnsmasq-dns-58bbf48b7f-w8tj6\" (UID: \"d2af10ec-6c9f-4eb8-9665-13449df417ab\") " pod="openstack/dnsmasq-dns-58bbf48b7f-w8tj6" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.255760 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2af10ec-6c9f-4eb8-9665-13449df417ab-dns-svc\") pod \"dnsmasq-dns-58bbf48b7f-w8tj6\" (UID: \"d2af10ec-6c9f-4eb8-9665-13449df417ab\") " pod="openstack/dnsmasq-dns-58bbf48b7f-w8tj6" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.260149 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d2af10ec-6c9f-4eb8-9665-13449df417ab-ovsdbserver-nb\") pod \"dnsmasq-dns-58bbf48b7f-w8tj6\" (UID: \"d2af10ec-6c9f-4eb8-9665-13449df417ab\") " pod="openstack/dnsmasq-dns-58bbf48b7f-w8tj6" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.296653 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.312417 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k26lm\" (UniqueName: \"kubernetes.io/projected/d2af10ec-6c9f-4eb8-9665-13449df417ab-kube-api-access-k26lm\") pod \"dnsmasq-dns-58bbf48b7f-w8tj6\" (UID: \"d2af10ec-6c9f-4eb8-9665-13449df417ab\") " pod="openstack/dnsmasq-dns-58bbf48b7f-w8tj6" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.351894 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"] Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.353479 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.354825 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6m79p\" (UniqueName: \"kubernetes.io/projected/d7e65e35-934e-4123-84a2-bc92b2b1213e-kube-api-access-6m79p\") pod \"watcher-api-0\" (UID: \"d7e65e35-934e-4123-84a2-bc92b2b1213e\") " pod="openstack/watcher-api-0" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.354902 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7x7f\" (UniqueName: \"kubernetes.io/projected/3ef63ff9-e0db-4b73-881b-2ad904bcaac5-kube-api-access-v7x7f\") pod \"keystone-bootstrap-pl6mr\" (UID: \"3ef63ff9-e0db-4b73-881b-2ad904bcaac5\") " pod="openstack/keystone-bootstrap-pl6mr" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.354921 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7e65e35-934e-4123-84a2-bc92b2b1213e-logs\") pod \"watcher-api-0\" (UID: \"d7e65e35-934e-4123-84a2-bc92b2b1213e\") " pod="openstack/watcher-api-0" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.354974 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7e65e35-934e-4123-84a2-bc92b2b1213e-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"d7e65e35-934e-4123-84a2-bc92b2b1213e\") " pod="openstack/watcher-api-0" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.354992 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/d7e65e35-934e-4123-84a2-bc92b2b1213e-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"d7e65e35-934e-4123-84a2-bc92b2b1213e\") " pod="openstack/watcher-api-0" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.355014 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3ef63ff9-e0db-4b73-881b-2ad904bcaac5-credential-keys\") pod \"keystone-bootstrap-pl6mr\" (UID: \"3ef63ff9-e0db-4b73-881b-2ad904bcaac5\") " pod="openstack/keystone-bootstrap-pl6mr" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.355052 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ef63ff9-e0db-4b73-881b-2ad904bcaac5-combined-ca-bundle\") pod \"keystone-bootstrap-pl6mr\" (UID: \"3ef63ff9-e0db-4b73-881b-2ad904bcaac5\") " pod="openstack/keystone-bootstrap-pl6mr" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.355100 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ef63ff9-e0db-4b73-881b-2ad904bcaac5-config-data\") pod \"keystone-bootstrap-pl6mr\" (UID: \"3ef63ff9-e0db-4b73-881b-2ad904bcaac5\") " pod="openstack/keystone-bootstrap-pl6mr" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.355115 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7e65e35-934e-4123-84a2-bc92b2b1213e-config-data\") pod \"watcher-api-0\" (UID: \"d7e65e35-934e-4123-84a2-bc92b2b1213e\") " pod="openstack/watcher-api-0" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.355131 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" 
(UniqueName: \"kubernetes.io/secret/3ef63ff9-e0db-4b73-881b-2ad904bcaac5-fernet-keys\") pod \"keystone-bootstrap-pl6mr\" (UID: \"3ef63ff9-e0db-4b73-881b-2ad904bcaac5\") " pod="openstack/keystone-bootstrap-pl6mr" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.355152 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ef63ff9-e0db-4b73-881b-2ad904bcaac5-scripts\") pod \"keystone-bootstrap-pl6mr\" (UID: \"3ef63ff9-e0db-4b73-881b-2ad904bcaac5\") " pod="openstack/keystone-bootstrap-pl6mr" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.361157 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7e65e35-934e-4123-84a2-bc92b2b1213e-logs\") pod \"watcher-api-0\" (UID: \"d7e65e35-934e-4123-84a2-bc92b2b1213e\") " pod="openstack/watcher-api-0" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.364163 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.368002 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3ef63ff9-e0db-4b73-881b-2ad904bcaac5-fernet-keys\") pod \"keystone-bootstrap-pl6mr\" (UID: \"3ef63ff9-e0db-4b73-881b-2ad904bcaac5\") " pod="openstack/keystone-bootstrap-pl6mr" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.368234 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ef63ff9-e0db-4b73-881b-2ad904bcaac5-scripts\") pod \"keystone-bootstrap-pl6mr\" (UID: \"3ef63ff9-e0db-4b73-881b-2ad904bcaac5\") " pod="openstack/keystone-bootstrap-pl6mr" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.368319 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ef63ff9-e0db-4b73-881b-2ad904bcaac5-combined-ca-bundle\") pod \"keystone-bootstrap-pl6mr\" (UID: \"3ef63ff9-e0db-4b73-881b-2ad904bcaac5\") " pod="openstack/keystone-bootstrap-pl6mr" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.369559 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ef63ff9-e0db-4b73-881b-2ad904bcaac5-config-data\") pod \"keystone-bootstrap-pl6mr\" (UID: \"3ef63ff9-e0db-4b73-881b-2ad904bcaac5\") " pod="openstack/keystone-bootstrap-pl6mr" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.371535 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7e65e35-934e-4123-84a2-bc92b2b1213e-config-data\") pod \"watcher-api-0\" (UID: \"d7e65e35-934e-4123-84a2-bc92b2b1213e\") " pod="openstack/watcher-api-0" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.371989 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58bbf48b7f-w8tj6" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.372976 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7e65e35-934e-4123-84a2-bc92b2b1213e-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"d7e65e35-934e-4123-84a2-bc92b2b1213e\") " pod="openstack/watcher-api-0" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.374804 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.384382 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-applier-0"] Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.385848 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.391740 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-lmfxp"] Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.393127 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-lmfxp" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.397181 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3ef63ff9-e0db-4b73-881b-2ad904bcaac5-credential-keys\") pod \"keystone-bootstrap-pl6mr\" (UID: \"3ef63ff9-e0db-4b73-881b-2ad904bcaac5\") " pod="openstack/keystone-bootstrap-pl6mr" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.398275 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-applier-config-data" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.398486 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.398763 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.398901 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-j4btr" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.403337 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-6f7b4cd5bf-88npk"] Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.405526 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-6f7b4cd5bf-88npk" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.407510 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7x7f\" (UniqueName: \"kubernetes.io/projected/3ef63ff9-e0db-4b73-881b-2ad904bcaac5-kube-api-access-v7x7f\") pod \"keystone-bootstrap-pl6mr\" (UID: \"3ef63ff9-e0db-4b73-881b-2ad904bcaac5\") " pod="openstack/keystone-bootstrap-pl6mr" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.407780 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/d7e65e35-934e-4123-84a2-bc92b2b1213e-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"d7e65e35-934e-4123-84a2-bc92b2b1213e\") " pod="openstack/watcher-api-0" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.413513 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-ww8hk" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.413669 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.413745 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.417023 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.422387 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.424860 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6m79p\" (UniqueName: \"kubernetes.io/projected/d7e65e35-934e-4123-84a2-bc92b2b1213e-kube-api-access-6m79p\") pod \"watcher-api-0\" (UID: \"d7e65e35-934e-4123-84a2-bc92b2b1213e\") " pod="openstack/watcher-api-0" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.426503 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-pl6mr" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.440793 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-lmfxp"] Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.456449 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fee2c3d7-24fe-4966-878b-90147b8f5cfb-config\") pod \"neutron-db-sync-lmfxp\" (UID: \"fee2c3d7-24fe-4966-878b-90147b8f5cfb\") " pod="openstack/neutron-db-sync-lmfxp" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.456535 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6eb8aab-9b25-4861-972a-b100ba14ab24-config-data\") pod \"watcher-decision-engine-0\" (UID: \"e6eb8aab-9b25-4861-972a-b100ba14ab24\") " pod="openstack/watcher-decision-engine-0" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.456579 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92bfdbb2-cdd9-49b3-80cb-5aa52422d18e-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"92bfdbb2-cdd9-49b3-80cb-5aa52422d18e\") " pod="openstack/watcher-applier-0" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.456601 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62lrs\" (UniqueName: \"kubernetes.io/projected/92bfdbb2-cdd9-49b3-80cb-5aa52422d18e-kube-api-access-62lrs\") pod \"watcher-applier-0\" (UID: \"92bfdbb2-cdd9-49b3-80cb-5aa52422d18e\") " pod="openstack/watcher-applier-0" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.456625 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsh7t\" (UniqueName: \"kubernetes.io/projected/e6eb8aab-9b25-4861-972a-b100ba14ab24-kube-api-access-tsh7t\") pod \"watcher-decision-engine-0\" (UID: \"e6eb8aab-9b25-4861-972a-b100ba14ab24\") " pod="openstack/watcher-decision-engine-0" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.456661 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6eb8aab-9b25-4861-972a-b100ba14ab24-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"e6eb8aab-9b25-4861-972a-b100ba14ab24\") " pod="openstack/watcher-decision-engine-0" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.456718 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/92bfdbb2-cdd9-49b3-80cb-5aa52422d18e-logs\") pod \"watcher-applier-0\" (UID: \"92bfdbb2-cdd9-49b3-80cb-5aa52422d18e\") " pod="openstack/watcher-applier-0" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.456757 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wcwz\" (UniqueName: \"kubernetes.io/projected/fee2c3d7-24fe-4966-878b-90147b8f5cfb-kube-api-access-7wcwz\") pod \"neutron-db-sync-lmfxp\" (UID: \"fee2c3d7-24fe-4966-878b-90147b8f5cfb\") " pod="openstack/neutron-db-sync-lmfxp" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.456806 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/e6eb8aab-9b25-4861-972a-b100ba14ab24-logs\") pod \"watcher-decision-engine-0\" (UID: \"e6eb8aab-9b25-4861-972a-b100ba14ab24\") " pod="openstack/watcher-decision-engine-0" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.456847 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/e6eb8aab-9b25-4861-972a-b100ba14ab24-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"e6eb8aab-9b25-4861-972a-b100ba14ab24\") " pod="openstack/watcher-decision-engine-0" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.456873 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fee2c3d7-24fe-4966-878b-90147b8f5cfb-combined-ca-bundle\") pod \"neutron-db-sync-lmfxp\" (UID: \"fee2c3d7-24fe-4966-878b-90147b8f5cfb\") " pod="openstack/neutron-db-sync-lmfxp" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.456908 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92bfdbb2-cdd9-49b3-80cb-5aa52422d18e-config-data\") pod \"watcher-applier-0\" (UID: \"92bfdbb2-cdd9-49b3-80cb-5aa52422d18e\") " pod="openstack/watcher-applier-0" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.457052 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6f7b4cd5bf-88npk"] Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.485171 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.487255 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.495891 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.496077 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.513838 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.540744 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.558677 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/e6eb8aab-9b25-4861-972a-b100ba14ab24-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"e6eb8aab-9b25-4861-972a-b100ba14ab24\") " pod="openstack/watcher-decision-engine-0" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.558879 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fee2c3d7-24fe-4966-878b-90147b8f5cfb-combined-ca-bundle\") pod \"neutron-db-sync-lmfxp\" (UID: \"fee2c3d7-24fe-4966-878b-90147b8f5cfb\") " pod="openstack/neutron-db-sync-lmfxp" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.558956 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9664ab8f-eb78-4177-8847-54af6ae2fce5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9664ab8f-eb78-4177-8847-54af6ae2fce5\") " pod="openstack/ceilometer-0" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.559026 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/80bf896b-e5d4-4497-b10a-ae05d7be8886-config-data\") pod \"horizon-6f7b4cd5bf-88npk\" (UID: \"80bf896b-e5d4-4497-b10a-ae05d7be8886\") " pod="openstack/horizon-6f7b4cd5bf-88npk" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.559099 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdmxn\" (UniqueName: \"kubernetes.io/projected/80bf896b-e5d4-4497-b10a-ae05d7be8886-kube-api-access-xdmxn\") pod \"horizon-6f7b4cd5bf-88npk\" (UID: \"80bf896b-e5d4-4497-b10a-ae05d7be8886\") " pod="openstack/horizon-6f7b4cd5bf-88npk" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.559212 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92bfdbb2-cdd9-49b3-80cb-5aa52422d18e-config-data\") pod \"watcher-applier-0\" (UID: \"92bfdbb2-cdd9-49b3-80cb-5aa52422d18e\") " pod="openstack/watcher-applier-0" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.559361 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9664ab8f-eb78-4177-8847-54af6ae2fce5-scripts\") pod \"ceilometer-0\" (UID: \"9664ab8f-eb78-4177-8847-54af6ae2fce5\") " pod="openstack/ceilometer-0" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.559424 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fee2c3d7-24fe-4966-878b-90147b8f5cfb-config\") pod \"neutron-db-sync-lmfxp\" (UID: \"fee2c3d7-24fe-4966-878b-90147b8f5cfb\") " pod="openstack/neutron-db-sync-lmfxp" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.559466 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6eb8aab-9b25-4861-972a-b100ba14ab24-config-data\") pod \"watcher-decision-engine-0\" (UID: \"e6eb8aab-9b25-4861-972a-b100ba14ab24\") " pod="openstack/watcher-decision-engine-0" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.559498 4958 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9664ab8f-eb78-4177-8847-54af6ae2fce5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9664ab8f-eb78-4177-8847-54af6ae2fce5\") " pod="openstack/ceilometer-0" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.559539 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/80bf896b-e5d4-4497-b10a-ae05d7be8886-scripts\") pod \"horizon-6f7b4cd5bf-88npk\" (UID: \"80bf896b-e5d4-4497-b10a-ae05d7be8886\") " pod="openstack/horizon-6f7b4cd5bf-88npk" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.559568 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9664ab8f-eb78-4177-8847-54af6ae2fce5-config-data\") pod \"ceilometer-0\" (UID: \"9664ab8f-eb78-4177-8847-54af6ae2fce5\") " pod="openstack/ceilometer-0" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.559591 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92bfdbb2-cdd9-49b3-80cb-5aa52422d18e-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"92bfdbb2-cdd9-49b3-80cb-5aa52422d18e\") " pod="openstack/watcher-applier-0" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.559614 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/80bf896b-e5d4-4497-b10a-ae05d7be8886-horizon-secret-key\") pod \"horizon-6f7b4cd5bf-88npk\" (UID: \"80bf896b-e5d4-4497-b10a-ae05d7be8886\") " pod="openstack/horizon-6f7b4cd5bf-88npk" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.559634 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62lrs\" (UniqueName: \"kubernetes.io/projected/92bfdbb2-cdd9-49b3-80cb-5aa52422d18e-kube-api-access-62lrs\") pod \"watcher-applier-0\" (UID: \"92bfdbb2-cdd9-49b3-80cb-5aa52422d18e\") " pod="openstack/watcher-applier-0" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.559649 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tsh7t\" (UniqueName: \"kubernetes.io/projected/e6eb8aab-9b25-4861-972a-b100ba14ab24-kube-api-access-tsh7t\") pod \"watcher-decision-engine-0\" (UID: \"e6eb8aab-9b25-4861-972a-b100ba14ab24\") " pod="openstack/watcher-decision-engine-0" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.559698 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6eb8aab-9b25-4861-972a-b100ba14ab24-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"e6eb8aab-9b25-4861-972a-b100ba14ab24\") " pod="openstack/watcher-decision-engine-0" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.559730 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5tpw\" (UniqueName: \"kubernetes.io/projected/9664ab8f-eb78-4177-8847-54af6ae2fce5-kube-api-access-r5tpw\") pod \"ceilometer-0\" (UID: \"9664ab8f-eb78-4177-8847-54af6ae2fce5\") " pod="openstack/ceilometer-0" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.559782 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/80bf896b-e5d4-4497-b10a-ae05d7be8886-logs\") pod \"horizon-6f7b4cd5bf-88npk\" (UID: \"80bf896b-e5d4-4497-b10a-ae05d7be8886\") " pod="openstack/horizon-6f7b4cd5bf-88npk" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.559801 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/92bfdbb2-cdd9-49b3-80cb-5aa52422d18e-logs\") pod \"watcher-applier-0\" (UID: \"92bfdbb2-cdd9-49b3-80cb-5aa52422d18e\") " pod="openstack/watcher-applier-0" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.559854 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wcwz\" (UniqueName: \"kubernetes.io/projected/fee2c3d7-24fe-4966-878b-90147b8f5cfb-kube-api-access-7wcwz\") pod \"neutron-db-sync-lmfxp\" (UID: \"fee2c3d7-24fe-4966-878b-90147b8f5cfb\") " pod="openstack/neutron-db-sync-lmfxp" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.559930 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9664ab8f-eb78-4177-8847-54af6ae2fce5-log-httpd\") pod \"ceilometer-0\" (UID: \"9664ab8f-eb78-4177-8847-54af6ae2fce5\") " pod="openstack/ceilometer-0" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.559964 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e6eb8aab-9b25-4861-972a-b100ba14ab24-logs\") pod \"watcher-decision-engine-0\" (UID: \"e6eb8aab-9b25-4861-972a-b100ba14ab24\") " pod="openstack/watcher-decision-engine-0" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.559990 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9664ab8f-eb78-4177-8847-54af6ae2fce5-run-httpd\") pod \"ceilometer-0\" (UID: \"9664ab8f-eb78-4177-8847-54af6ae2fce5\") " pod="openstack/ceilometer-0" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.576037 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-2scr7"] Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.577416 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-2scr7" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.583377 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.583489 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.583661 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-nphrn" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.583660 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/e6eb8aab-9b25-4861-972a-b100ba14ab24-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"e6eb8aab-9b25-4861-972a-b100ba14ab24\") " pod="openstack/watcher-decision-engine-0" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.586432 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e6eb8aab-9b25-4861-972a-b100ba14ab24-logs\") pod \"watcher-decision-engine-0\" (UID: \"e6eb8aab-9b25-4861-972a-b100ba14ab24\") " pod="openstack/watcher-decision-engine-0" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.592419 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/92bfdbb2-cdd9-49b3-80cb-5aa52422d18e-logs\") pod \"watcher-applier-0\" (UID: \"92bfdbb2-cdd9-49b3-80cb-5aa52422d18e\") " pod="openstack/watcher-applier-0" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.593259 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6eb8aab-9b25-4861-972a-b100ba14ab24-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"e6eb8aab-9b25-4861-972a-b100ba14ab24\") " pod="openstack/watcher-decision-engine-0" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.597185 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6eb8aab-9b25-4861-972a-b100ba14ab24-config-data\") pod \"watcher-decision-engine-0\" (UID: \"e6eb8aab-9b25-4861-972a-b100ba14ab24\") " pod="openstack/watcher-decision-engine-0" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.600490 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/fee2c3d7-24fe-4966-878b-90147b8f5cfb-config\") pod \"neutron-db-sync-lmfxp\" (UID: \"fee2c3d7-24fe-4966-878b-90147b8f5cfb\") " pod="openstack/neutron-db-sync-lmfxp" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.601358 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92bfdbb2-cdd9-49b3-80cb-5aa52422d18e-config-data\") pod \"watcher-applier-0\" (UID: \"92bfdbb2-cdd9-49b3-80cb-5aa52422d18e\") " pod="openstack/watcher-applier-0" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.601824 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92bfdbb2-cdd9-49b3-80cb-5aa52422d18e-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"92bfdbb2-cdd9-49b3-80cb-5aa52422d18e\") " pod="openstack/watcher-applier-0" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.613354 4958 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fee2c3d7-24fe-4966-878b-90147b8f5cfb-combined-ca-bundle\") pod \"neutron-db-sync-lmfxp\" (UID: \"fee2c3d7-24fe-4966-878b-90147b8f5cfb\") " pod="openstack/neutron-db-sync-lmfxp" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.614035 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tsh7t\" (UniqueName: \"kubernetes.io/projected/e6eb8aab-9b25-4861-972a-b100ba14ab24-kube-api-access-tsh7t\") pod \"watcher-decision-engine-0\" (UID: \"e6eb8aab-9b25-4861-972a-b100ba14ab24\") " pod="openstack/watcher-decision-engine-0" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.614224 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wcwz\" (UniqueName: \"kubernetes.io/projected/fee2c3d7-24fe-4966-878b-90147b8f5cfb-kube-api-access-7wcwz\") pod \"neutron-db-sync-lmfxp\" (UID: \"fee2c3d7-24fe-4966-878b-90147b8f5cfb\") " pod="openstack/neutron-db-sync-lmfxp" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.618585 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62lrs\" (UniqueName: \"kubernetes.io/projected/92bfdbb2-cdd9-49b3-80cb-5aa52422d18e-kube-api-access-62lrs\") pod \"watcher-applier-0\" (UID: \"92bfdbb2-cdd9-49b3-80cb-5aa52422d18e\") " pod="openstack/watcher-applier-0" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.642208 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-znqzx"] Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.644316 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-znqzx" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.655406 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.655648 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-55pkr" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.659087 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-lmfxp" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.663909 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9664ab8f-eb78-4177-8847-54af6ae2fce5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9664ab8f-eb78-4177-8847-54af6ae2fce5\") " pod="openstack/ceilometer-0" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.664389 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/80bf896b-e5d4-4497-b10a-ae05d7be8886-scripts\") pod \"horizon-6f7b4cd5bf-88npk\" (UID: \"80bf896b-e5d4-4497-b10a-ae05d7be8886\") " pod="openstack/horizon-6f7b4cd5bf-88npk" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.664515 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9664ab8f-eb78-4177-8847-54af6ae2fce5-config-data\") pod \"ceilometer-0\" (UID: \"9664ab8f-eb78-4177-8847-54af6ae2fce5\") " pod="openstack/ceilometer-0" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.664552 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00f464ea-7983-4ab2-b2b1-07bf67c76e31-config-data\") pod \"cinder-db-sync-2scr7\" (UID: \"00f464ea-7983-4ab2-b2b1-07bf67c76e31\") " pod="openstack/cinder-db-sync-2scr7" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.664607 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/80bf896b-e5d4-4497-b10a-ae05d7be8886-horizon-secret-key\") pod \"horizon-6f7b4cd5bf-88npk\" (UID: \"80bf896b-e5d4-4497-b10a-ae05d7be8886\") " pod="openstack/horizon-6f7b4cd5bf-88npk" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.664714 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/00f464ea-7983-4ab2-b2b1-07bf67c76e31-db-sync-config-data\") pod \"cinder-db-sync-2scr7\" (UID: \"00f464ea-7983-4ab2-b2b1-07bf67c76e31\") " pod="openstack/cinder-db-sync-2scr7" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.664758 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5tpw\" (UniqueName: \"kubernetes.io/projected/9664ab8f-eb78-4177-8847-54af6ae2fce5-kube-api-access-r5tpw\") pod \"ceilometer-0\" (UID: \"9664ab8f-eb78-4177-8847-54af6ae2fce5\") " pod="openstack/ceilometer-0" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.664801 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5kss\" (UniqueName: \"kubernetes.io/projected/00f464ea-7983-4ab2-b2b1-07bf67c76e31-kube-api-access-m5kss\") pod \"cinder-db-sync-2scr7\" (UID: \"00f464ea-7983-4ab2-b2b1-07bf67c76e31\") " pod="openstack/cinder-db-sync-2scr7" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.664829 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00f464ea-7983-4ab2-b2b1-07bf67c76e31-combined-ca-bundle\") pod \"cinder-db-sync-2scr7\" (UID: \"00f464ea-7983-4ab2-b2b1-07bf67c76e31\") " pod="openstack/cinder-db-sync-2scr7" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.664862 4958 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00f464ea-7983-4ab2-b2b1-07bf67c76e31-scripts\") pod \"cinder-db-sync-2scr7\" (UID: \"00f464ea-7983-4ab2-b2b1-07bf67c76e31\") " pod="openstack/cinder-db-sync-2scr7" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.664898 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/80bf896b-e5d4-4497-b10a-ae05d7be8886-logs\") pod \"horizon-6f7b4cd5bf-88npk\" (UID: \"80bf896b-e5d4-4497-b10a-ae05d7be8886\") " pod="openstack/horizon-6f7b4cd5bf-88npk" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.665019 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9664ab8f-eb78-4177-8847-54af6ae2fce5-log-httpd\") pod \"ceilometer-0\" (UID: \"9664ab8f-eb78-4177-8847-54af6ae2fce5\") " pod="openstack/ceilometer-0" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.665064 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9664ab8f-eb78-4177-8847-54af6ae2fce5-run-httpd\") pod \"ceilometer-0\" (UID: \"9664ab8f-eb78-4177-8847-54af6ae2fce5\") " pod="openstack/ceilometer-0" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.665206 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9664ab8f-eb78-4177-8847-54af6ae2fce5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9664ab8f-eb78-4177-8847-54af6ae2fce5\") " pod="openstack/ceilometer-0" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.665255 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/80bf896b-e5d4-4497-b10a-ae05d7be8886-config-data\") pod \"horizon-6f7b4cd5bf-88npk\" (UID: \"80bf896b-e5d4-4497-b10a-ae05d7be8886\") " pod="openstack/horizon-6f7b4cd5bf-88npk" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.665294 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdmxn\" (UniqueName: \"kubernetes.io/projected/80bf896b-e5d4-4497-b10a-ae05d7be8886-kube-api-access-xdmxn\") pod \"horizon-6f7b4cd5bf-88npk\" (UID: \"80bf896b-e5d4-4497-b10a-ae05d7be8886\") " pod="openstack/horizon-6f7b4cd5bf-88npk" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.665328 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/00f464ea-7983-4ab2-b2b1-07bf67c76e31-etc-machine-id\") pod \"cinder-db-sync-2scr7\" (UID: \"00f464ea-7983-4ab2-b2b1-07bf67c76e31\") " pod="openstack/cinder-db-sync-2scr7" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.665407 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9664ab8f-eb78-4177-8847-54af6ae2fce5-scripts\") pod \"ceilometer-0\" (UID: \"9664ab8f-eb78-4177-8847-54af6ae2fce5\") " pod="openstack/ceilometer-0" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.665672 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/80bf896b-e5d4-4497-b10a-ae05d7be8886-scripts\") pod \"horizon-6f7b4cd5bf-88npk\" (UID: \"80bf896b-e5d4-4497-b10a-ae05d7be8886\") " 
pod="openstack/horizon-6f7b4cd5bf-88npk" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.675761 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/80bf896b-e5d4-4497-b10a-ae05d7be8886-logs\") pod \"horizon-6f7b4cd5bf-88npk\" (UID: \"80bf896b-e5d4-4497-b10a-ae05d7be8886\") " pod="openstack/horizon-6f7b4cd5bf-88npk" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.675998 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-2scr7"] Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.676779 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9664ab8f-eb78-4177-8847-54af6ae2fce5-run-httpd\") pod \"ceilometer-0\" (UID: \"9664ab8f-eb78-4177-8847-54af6ae2fce5\") " pod="openstack/ceilometer-0" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.677133 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9664ab8f-eb78-4177-8847-54af6ae2fce5-log-httpd\") pod \"ceilometer-0\" (UID: \"9664ab8f-eb78-4177-8847-54af6ae2fce5\") " pod="openstack/ceilometer-0" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.678403 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/80bf896b-e5d4-4497-b10a-ae05d7be8886-config-data\") pod \"horizon-6f7b4cd5bf-88npk\" (UID: \"80bf896b-e5d4-4497-b10a-ae05d7be8886\") " pod="openstack/horizon-6f7b4cd5bf-88npk" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.683665 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/80bf896b-e5d4-4497-b10a-ae05d7be8886-horizon-secret-key\") pod \"horizon-6f7b4cd5bf-88npk\" (UID: \"80bf896b-e5d4-4497-b10a-ae05d7be8886\") " pod="openstack/horizon-6f7b4cd5bf-88npk" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.689340 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9664ab8f-eb78-4177-8847-54af6ae2fce5-scripts\") pod \"ceilometer-0\" (UID: \"9664ab8f-eb78-4177-8847-54af6ae2fce5\") " pod="openstack/ceilometer-0" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.690128 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9664ab8f-eb78-4177-8847-54af6ae2fce5-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9664ab8f-eb78-4177-8847-54af6ae2fce5\") " pod="openstack/ceilometer-0" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.710658 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58bbf48b7f-w8tj6"] Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.712770 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdmxn\" (UniqueName: \"kubernetes.io/projected/80bf896b-e5d4-4497-b10a-ae05d7be8886-kube-api-access-xdmxn\") pod \"horizon-6f7b4cd5bf-88npk\" (UID: \"80bf896b-e5d4-4497-b10a-ae05d7be8886\") " pod="openstack/horizon-6f7b4cd5bf-88npk" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.713049 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9664ab8f-eb78-4177-8847-54af6ae2fce5-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9664ab8f-eb78-4177-8847-54af6ae2fce5\") " 
pod="openstack/ceilometer-0" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.720911 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5tpw\" (UniqueName: \"kubernetes.io/projected/9664ab8f-eb78-4177-8847-54af6ae2fce5-kube-api-access-r5tpw\") pod \"ceilometer-0\" (UID: \"9664ab8f-eb78-4177-8847-54af6ae2fce5\") " pod="openstack/ceilometer-0" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.727416 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9664ab8f-eb78-4177-8847-54af6ae2fce5-config-data\") pod \"ceilometer-0\" (UID: \"9664ab8f-eb78-4177-8847-54af6ae2fce5\") " pod="openstack/ceilometer-0" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.755875 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6f7b4cd5bf-88npk" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.779583 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94a6d712-4bb0-458b-878a-99dd8d47a8f9-combined-ca-bundle\") pod \"barbican-db-sync-znqzx\" (UID: \"94a6d712-4bb0-458b-878a-99dd8d47a8f9\") " pod="openstack/barbican-db-sync-znqzx" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.779636 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/94a6d712-4bb0-458b-878a-99dd8d47a8f9-db-sync-config-data\") pod \"barbican-db-sync-znqzx\" (UID: \"94a6d712-4bb0-458b-878a-99dd8d47a8f9\") " pod="openstack/barbican-db-sync-znqzx" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.779725 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4q92\" (UniqueName: \"kubernetes.io/projected/94a6d712-4bb0-458b-878a-99dd8d47a8f9-kube-api-access-q4q92\") pod \"barbican-db-sync-znqzx\" (UID: \"94a6d712-4bb0-458b-878a-99dd8d47a8f9\") " pod="openstack/barbican-db-sync-znqzx" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.779785 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/00f464ea-7983-4ab2-b2b1-07bf67c76e31-etc-machine-id\") pod \"cinder-db-sync-2scr7\" (UID: \"00f464ea-7983-4ab2-b2b1-07bf67c76e31\") " pod="openstack/cinder-db-sync-2scr7" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.780241 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00f464ea-7983-4ab2-b2b1-07bf67c76e31-config-data\") pod \"cinder-db-sync-2scr7\" (UID: \"00f464ea-7983-4ab2-b2b1-07bf67c76e31\") " pod="openstack/cinder-db-sync-2scr7" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.780322 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/00f464ea-7983-4ab2-b2b1-07bf67c76e31-db-sync-config-data\") pod \"cinder-db-sync-2scr7\" (UID: \"00f464ea-7983-4ab2-b2b1-07bf67c76e31\") " pod="openstack/cinder-db-sync-2scr7" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.780362 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5kss\" (UniqueName: \"kubernetes.io/projected/00f464ea-7983-4ab2-b2b1-07bf67c76e31-kube-api-access-m5kss\") pod \"cinder-db-sync-2scr7\" (UID: 
\"00f464ea-7983-4ab2-b2b1-07bf67c76e31\") " pod="openstack/cinder-db-sync-2scr7" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.780384 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00f464ea-7983-4ab2-b2b1-07bf67c76e31-combined-ca-bundle\") pod \"cinder-db-sync-2scr7\" (UID: \"00f464ea-7983-4ab2-b2b1-07bf67c76e31\") " pod="openstack/cinder-db-sync-2scr7" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.780408 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00f464ea-7983-4ab2-b2b1-07bf67c76e31-scripts\") pod \"cinder-db-sync-2scr7\" (UID: \"00f464ea-7983-4ab2-b2b1-07bf67c76e31\") " pod="openstack/cinder-db-sync-2scr7" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.786361 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.788688 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/00f464ea-7983-4ab2-b2b1-07bf67c76e31-etc-machine-id\") pod \"cinder-db-sync-2scr7\" (UID: \"00f464ea-7983-4ab2-b2b1-07bf67c76e31\") " pod="openstack/cinder-db-sync-2scr7" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.798885 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00f464ea-7983-4ab2-b2b1-07bf67c76e31-config-data\") pod \"cinder-db-sync-2scr7\" (UID: \"00f464ea-7983-4ab2-b2b1-07bf67c76e31\") " pod="openstack/cinder-db-sync-2scr7" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.799438 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00f464ea-7983-4ab2-b2b1-07bf67c76e31-combined-ca-bundle\") pod \"cinder-db-sync-2scr7\" (UID: \"00f464ea-7983-4ab2-b2b1-07bf67c76e31\") " pod="openstack/cinder-db-sync-2scr7" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.813950 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/00f464ea-7983-4ab2-b2b1-07bf67c76e31-db-sync-config-data\") pod \"cinder-db-sync-2scr7\" (UID: \"00f464ea-7983-4ab2-b2b1-07bf67c76e31\") " pod="openstack/cinder-db-sync-2scr7" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.814460 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00f464ea-7983-4ab2-b2b1-07bf67c76e31-scripts\") pod \"cinder-db-sync-2scr7\" (UID: \"00f464ea-7983-4ab2-b2b1-07bf67c76e31\") " pod="openstack/cinder-db-sync-2scr7" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.827230 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5kss\" (UniqueName: \"kubernetes.io/projected/00f464ea-7983-4ab2-b2b1-07bf67c76e31-kube-api-access-m5kss\") pod \"cinder-db-sync-2scr7\" (UID: \"00f464ea-7983-4ab2-b2b1-07bf67c76e31\") " pod="openstack/cinder-db-sync-2scr7" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.845447 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-2scr7" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.846516 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-znqzx"] Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.846731 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-578598f949-w55ph"] Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.848162 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-hn59h"] Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.848854 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-hn59h" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.849233 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-578598f949-w55ph" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.854519 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.855514 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.863305 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-czzwd" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.879066 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-578598f949-w55ph"] Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.881874 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94a6d712-4bb0-458b-878a-99dd8d47a8f9-combined-ca-bundle\") pod \"barbican-db-sync-znqzx\" (UID: \"94a6d712-4bb0-458b-878a-99dd8d47a8f9\") " pod="openstack/barbican-db-sync-znqzx" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.883099 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/94a6d712-4bb0-458b-878a-99dd8d47a8f9-db-sync-config-data\") pod \"barbican-db-sync-znqzx\" (UID: \"94a6d712-4bb0-458b-878a-99dd8d47a8f9\") " pod="openstack/barbican-db-sync-znqzx" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.885309 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4q92\" (UniqueName: \"kubernetes.io/projected/94a6d712-4bb0-458b-878a-99dd8d47a8f9-kube-api-access-q4q92\") pod \"barbican-db-sync-znqzx\" (UID: \"94a6d712-4bb0-458b-878a-99dd8d47a8f9\") " pod="openstack/barbican-db-sync-znqzx" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.889421 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/94a6d712-4bb0-458b-878a-99dd8d47a8f9-db-sync-config-data\") pod \"barbican-db-sync-znqzx\" (UID: \"94a6d712-4bb0-458b-878a-99dd8d47a8f9\") " pod="openstack/barbican-db-sync-znqzx" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.896746 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.897365 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94a6d712-4bb0-458b-878a-99dd8d47a8f9-combined-ca-bundle\") pod \"barbican-db-sync-znqzx\" (UID: \"94a6d712-4bb0-458b-878a-99dd8d47a8f9\") " pod="openstack/barbican-db-sync-znqzx" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.908196 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4q92\" (UniqueName: \"kubernetes.io/projected/94a6d712-4bb0-458b-878a-99dd8d47a8f9-kube-api-access-q4q92\") pod \"barbican-db-sync-znqzx\" (UID: \"94a6d712-4bb0-458b-878a-99dd8d47a8f9\") " pod="openstack/barbican-db-sync-znqzx" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.916998 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.917921 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7b7f9bd7cc-jjrbv"] Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.919635 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7b7f9bd7cc-jjrbv" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.926007 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-hn59h"] Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.988407 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhrvv\" (UniqueName: \"kubernetes.io/projected/5dfff6bd-6266-4c2b-87d4-b800bd9bbc51-kube-api-access-vhrvv\") pod \"horizon-7b7f9bd7cc-jjrbv\" (UID: \"5dfff6bd-6266-4c2b-87d4-b800bd9bbc51\") " pod="openstack/horizon-7b7f9bd7cc-jjrbv" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.988463 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27d884d6-231d-4782-bb16-d2664b44e18f-config\") pod \"dnsmasq-dns-578598f949-w55ph\" (UID: \"27d884d6-231d-4782-bb16-d2664b44e18f\") " pod="openstack/dnsmasq-dns-578598f949-w55ph" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.988489 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5dfff6bd-6266-4c2b-87d4-b800bd9bbc51-horizon-secret-key\") pod \"horizon-7b7f9bd7cc-jjrbv\" (UID: \"5dfff6bd-6266-4c2b-87d4-b800bd9bbc51\") " pod="openstack/horizon-7b7f9bd7cc-jjrbv" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.988521 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t78d6\" (UniqueName: \"kubernetes.io/projected/7f28060d-c759-4b4b-a643-bf8acb76c1b2-kube-api-access-t78d6\") pod \"placement-db-sync-hn59h\" (UID: \"7f28060d-c759-4b4b-a643-bf8acb76c1b2\") " pod="openstack/placement-db-sync-hn59h" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.988599 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f28060d-c759-4b4b-a643-bf8acb76c1b2-logs\") pod \"placement-db-sync-hn59h\" (UID: \"7f28060d-c759-4b4b-a643-bf8acb76c1b2\") " pod="openstack/placement-db-sync-hn59h" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 
05:52:33.988635 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/27d884d6-231d-4782-bb16-d2664b44e18f-ovsdbserver-nb\") pod \"dnsmasq-dns-578598f949-w55ph\" (UID: \"27d884d6-231d-4782-bb16-d2664b44e18f\") " pod="openstack/dnsmasq-dns-578598f949-w55ph" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.988660 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/27d884d6-231d-4782-bb16-d2664b44e18f-dns-svc\") pod \"dnsmasq-dns-578598f949-w55ph\" (UID: \"27d884d6-231d-4782-bb16-d2664b44e18f\") " pod="openstack/dnsmasq-dns-578598f949-w55ph" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.988679 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5dfff6bd-6266-4c2b-87d4-b800bd9bbc51-logs\") pod \"horizon-7b7f9bd7cc-jjrbv\" (UID: \"5dfff6bd-6266-4c2b-87d4-b800bd9bbc51\") " pod="openstack/horizon-7b7f9bd7cc-jjrbv" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.988720 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5dfff6bd-6266-4c2b-87d4-b800bd9bbc51-config-data\") pod \"horizon-7b7f9bd7cc-jjrbv\" (UID: \"5dfff6bd-6266-4c2b-87d4-b800bd9bbc51\") " pod="openstack/horizon-7b7f9bd7cc-jjrbv" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.988753 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/27d884d6-231d-4782-bb16-d2664b44e18f-dns-swift-storage-0\") pod \"dnsmasq-dns-578598f949-w55ph\" (UID: \"27d884d6-231d-4782-bb16-d2664b44e18f\") " pod="openstack/dnsmasq-dns-578598f949-w55ph" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.988791 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5dfff6bd-6266-4c2b-87d4-b800bd9bbc51-scripts\") pod \"horizon-7b7f9bd7cc-jjrbv\" (UID: \"5dfff6bd-6266-4c2b-87d4-b800bd9bbc51\") " pod="openstack/horizon-7b7f9bd7cc-jjrbv" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.988822 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6bqr\" (UniqueName: \"kubernetes.io/projected/27d884d6-231d-4782-bb16-d2664b44e18f-kube-api-access-x6bqr\") pod \"dnsmasq-dns-578598f949-w55ph\" (UID: \"27d884d6-231d-4782-bb16-d2664b44e18f\") " pod="openstack/dnsmasq-dns-578598f949-w55ph" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.988849 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f28060d-c759-4b4b-a643-bf8acb76c1b2-scripts\") pod \"placement-db-sync-hn59h\" (UID: \"7f28060d-c759-4b4b-a643-bf8acb76c1b2\") " pod="openstack/placement-db-sync-hn59h" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.988866 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f28060d-c759-4b4b-a643-bf8acb76c1b2-config-data\") pod \"placement-db-sync-hn59h\" (UID: \"7f28060d-c759-4b4b-a643-bf8acb76c1b2\") " pod="openstack/placement-db-sync-hn59h" Dec 06 05:52:33 crc 
kubenswrapper[4958]: I1206 05:52:33.988885 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f28060d-c759-4b4b-a643-bf8acb76c1b2-combined-ca-bundle\") pod \"placement-db-sync-hn59h\" (UID: \"7f28060d-c759-4b4b-a643-bf8acb76c1b2\") " pod="openstack/placement-db-sync-hn59h" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.988908 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/27d884d6-231d-4782-bb16-d2664b44e18f-ovsdbserver-sb\") pod \"dnsmasq-dns-578598f949-w55ph\" (UID: \"27d884d6-231d-4782-bb16-d2664b44e18f\") " pod="openstack/dnsmasq-dns-578598f949-w55ph" Dec 06 05:52:33 crc kubenswrapper[4958]: I1206 05:52:33.989104 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7b7f9bd7cc-jjrbv"] Dec 06 05:52:34 crc kubenswrapper[4958]: I1206 05:52:34.089951 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vhrvv\" (UniqueName: \"kubernetes.io/projected/5dfff6bd-6266-4c2b-87d4-b800bd9bbc51-kube-api-access-vhrvv\") pod \"horizon-7b7f9bd7cc-jjrbv\" (UID: \"5dfff6bd-6266-4c2b-87d4-b800bd9bbc51\") " pod="openstack/horizon-7b7f9bd7cc-jjrbv" Dec 06 05:52:34 crc kubenswrapper[4958]: I1206 05:52:34.090225 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27d884d6-231d-4782-bb16-d2664b44e18f-config\") pod \"dnsmasq-dns-578598f949-w55ph\" (UID: \"27d884d6-231d-4782-bb16-d2664b44e18f\") " pod="openstack/dnsmasq-dns-578598f949-w55ph" Dec 06 05:52:34 crc kubenswrapper[4958]: I1206 05:52:34.090246 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5dfff6bd-6266-4c2b-87d4-b800bd9bbc51-horizon-secret-key\") pod \"horizon-7b7f9bd7cc-jjrbv\" (UID: \"5dfff6bd-6266-4c2b-87d4-b800bd9bbc51\") " pod="openstack/horizon-7b7f9bd7cc-jjrbv" Dec 06 05:52:34 crc kubenswrapper[4958]: I1206 05:52:34.090262 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t78d6\" (UniqueName: \"kubernetes.io/projected/7f28060d-c759-4b4b-a643-bf8acb76c1b2-kube-api-access-t78d6\") pod \"placement-db-sync-hn59h\" (UID: \"7f28060d-c759-4b4b-a643-bf8acb76c1b2\") " pod="openstack/placement-db-sync-hn59h" Dec 06 05:52:34 crc kubenswrapper[4958]: I1206 05:52:34.090321 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f28060d-c759-4b4b-a643-bf8acb76c1b2-logs\") pod \"placement-db-sync-hn59h\" (UID: \"7f28060d-c759-4b4b-a643-bf8acb76c1b2\") " pod="openstack/placement-db-sync-hn59h" Dec 06 05:52:34 crc kubenswrapper[4958]: I1206 05:52:34.090345 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/27d884d6-231d-4782-bb16-d2664b44e18f-ovsdbserver-nb\") pod \"dnsmasq-dns-578598f949-w55ph\" (UID: \"27d884d6-231d-4782-bb16-d2664b44e18f\") " pod="openstack/dnsmasq-dns-578598f949-w55ph" Dec 06 05:52:34 crc kubenswrapper[4958]: I1206 05:52:34.090363 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/27d884d6-231d-4782-bb16-d2664b44e18f-dns-svc\") pod \"dnsmasq-dns-578598f949-w55ph\" (UID: 
\"27d884d6-231d-4782-bb16-d2664b44e18f\") " pod="openstack/dnsmasq-dns-578598f949-w55ph" Dec 06 05:52:34 crc kubenswrapper[4958]: I1206 05:52:34.090376 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5dfff6bd-6266-4c2b-87d4-b800bd9bbc51-logs\") pod \"horizon-7b7f9bd7cc-jjrbv\" (UID: \"5dfff6bd-6266-4c2b-87d4-b800bd9bbc51\") " pod="openstack/horizon-7b7f9bd7cc-jjrbv" Dec 06 05:52:34 crc kubenswrapper[4958]: I1206 05:52:34.090396 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5dfff6bd-6266-4c2b-87d4-b800bd9bbc51-config-data\") pod \"horizon-7b7f9bd7cc-jjrbv\" (UID: \"5dfff6bd-6266-4c2b-87d4-b800bd9bbc51\") " pod="openstack/horizon-7b7f9bd7cc-jjrbv" Dec 06 05:52:34 crc kubenswrapper[4958]: I1206 05:52:34.090414 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/27d884d6-231d-4782-bb16-d2664b44e18f-dns-swift-storage-0\") pod \"dnsmasq-dns-578598f949-w55ph\" (UID: \"27d884d6-231d-4782-bb16-d2664b44e18f\") " pod="openstack/dnsmasq-dns-578598f949-w55ph" Dec 06 05:52:34 crc kubenswrapper[4958]: I1206 05:52:34.090436 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5dfff6bd-6266-4c2b-87d4-b800bd9bbc51-scripts\") pod \"horizon-7b7f9bd7cc-jjrbv\" (UID: \"5dfff6bd-6266-4c2b-87d4-b800bd9bbc51\") " pod="openstack/horizon-7b7f9bd7cc-jjrbv" Dec 06 05:52:34 crc kubenswrapper[4958]: I1206 05:52:34.090454 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6bqr\" (UniqueName: \"kubernetes.io/projected/27d884d6-231d-4782-bb16-d2664b44e18f-kube-api-access-x6bqr\") pod \"dnsmasq-dns-578598f949-w55ph\" (UID: \"27d884d6-231d-4782-bb16-d2664b44e18f\") " pod="openstack/dnsmasq-dns-578598f949-w55ph" Dec 06 05:52:34 crc kubenswrapper[4958]: I1206 05:52:34.090480 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f28060d-c759-4b4b-a643-bf8acb76c1b2-scripts\") pod \"placement-db-sync-hn59h\" (UID: \"7f28060d-c759-4b4b-a643-bf8acb76c1b2\") " pod="openstack/placement-db-sync-hn59h" Dec 06 05:52:34 crc kubenswrapper[4958]: I1206 05:52:34.090510 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f28060d-c759-4b4b-a643-bf8acb76c1b2-config-data\") pod \"placement-db-sync-hn59h\" (UID: \"7f28060d-c759-4b4b-a643-bf8acb76c1b2\") " pod="openstack/placement-db-sync-hn59h" Dec 06 05:52:34 crc kubenswrapper[4958]: I1206 05:52:34.090527 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f28060d-c759-4b4b-a643-bf8acb76c1b2-combined-ca-bundle\") pod \"placement-db-sync-hn59h\" (UID: \"7f28060d-c759-4b4b-a643-bf8acb76c1b2\") " pod="openstack/placement-db-sync-hn59h" Dec 06 05:52:34 crc kubenswrapper[4958]: I1206 05:52:34.090543 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/27d884d6-231d-4782-bb16-d2664b44e18f-ovsdbserver-sb\") pod \"dnsmasq-dns-578598f949-w55ph\" (UID: \"27d884d6-231d-4782-bb16-d2664b44e18f\") " pod="openstack/dnsmasq-dns-578598f949-w55ph" Dec 06 05:52:34 crc kubenswrapper[4958]: I1206 
05:52:34.091291 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/27d884d6-231d-4782-bb16-d2664b44e18f-ovsdbserver-sb\") pod \"dnsmasq-dns-578598f949-w55ph\" (UID: \"27d884d6-231d-4782-bb16-d2664b44e18f\") " pod="openstack/dnsmasq-dns-578598f949-w55ph" Dec 06 05:52:34 crc kubenswrapper[4958]: I1206 05:52:34.091581 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5dfff6bd-6266-4c2b-87d4-b800bd9bbc51-logs\") pod \"horizon-7b7f9bd7cc-jjrbv\" (UID: \"5dfff6bd-6266-4c2b-87d4-b800bd9bbc51\") " pod="openstack/horizon-7b7f9bd7cc-jjrbv" Dec 06 05:52:34 crc kubenswrapper[4958]: I1206 05:52:34.092276 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5dfff6bd-6266-4c2b-87d4-b800bd9bbc51-config-data\") pod \"horizon-7b7f9bd7cc-jjrbv\" (UID: \"5dfff6bd-6266-4c2b-87d4-b800bd9bbc51\") " pod="openstack/horizon-7b7f9bd7cc-jjrbv" Dec 06 05:52:34 crc kubenswrapper[4958]: I1206 05:52:34.092376 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27d884d6-231d-4782-bb16-d2664b44e18f-config\") pod \"dnsmasq-dns-578598f949-w55ph\" (UID: \"27d884d6-231d-4782-bb16-d2664b44e18f\") " pod="openstack/dnsmasq-dns-578598f949-w55ph" Dec 06 05:52:34 crc kubenswrapper[4958]: I1206 05:52:34.097559 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/27d884d6-231d-4782-bb16-d2664b44e18f-dns-swift-storage-0\") pod \"dnsmasq-dns-578598f949-w55ph\" (UID: \"27d884d6-231d-4782-bb16-d2664b44e18f\") " pod="openstack/dnsmasq-dns-578598f949-w55ph" Dec 06 05:52:34 crc kubenswrapper[4958]: I1206 05:52:34.097828 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f28060d-c759-4b4b-a643-bf8acb76c1b2-logs\") pod \"placement-db-sync-hn59h\" (UID: \"7f28060d-c759-4b4b-a643-bf8acb76c1b2\") " pod="openstack/placement-db-sync-hn59h" Dec 06 05:52:34 crc kubenswrapper[4958]: I1206 05:52:34.098435 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5dfff6bd-6266-4c2b-87d4-b800bd9bbc51-scripts\") pod \"horizon-7b7f9bd7cc-jjrbv\" (UID: \"5dfff6bd-6266-4c2b-87d4-b800bd9bbc51\") " pod="openstack/horizon-7b7f9bd7cc-jjrbv" Dec 06 05:52:34 crc kubenswrapper[4958]: I1206 05:52:34.098697 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/27d884d6-231d-4782-bb16-d2664b44e18f-ovsdbserver-nb\") pod \"dnsmasq-dns-578598f949-w55ph\" (UID: \"27d884d6-231d-4782-bb16-d2664b44e18f\") " pod="openstack/dnsmasq-dns-578598f949-w55ph" Dec 06 05:52:34 crc kubenswrapper[4958]: I1206 05:52:34.104477 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/27d884d6-231d-4782-bb16-d2664b44e18f-dns-svc\") pod \"dnsmasq-dns-578598f949-w55ph\" (UID: \"27d884d6-231d-4782-bb16-d2664b44e18f\") " pod="openstack/dnsmasq-dns-578598f949-w55ph" Dec 06 05:52:34 crc kubenswrapper[4958]: I1206 05:52:34.110082 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5dfff6bd-6266-4c2b-87d4-b800bd9bbc51-horizon-secret-key\") pod \"horizon-7b7f9bd7cc-jjrbv\" 
(UID: \"5dfff6bd-6266-4c2b-87d4-b800bd9bbc51\") " pod="openstack/horizon-7b7f9bd7cc-jjrbv" Dec 06 05:52:34 crc kubenswrapper[4958]: I1206 05:52:34.120585 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6bqr\" (UniqueName: \"kubernetes.io/projected/27d884d6-231d-4782-bb16-d2664b44e18f-kube-api-access-x6bqr\") pod \"dnsmasq-dns-578598f949-w55ph\" (UID: \"27d884d6-231d-4782-bb16-d2664b44e18f\") " pod="openstack/dnsmasq-dns-578598f949-w55ph" Dec 06 05:52:34 crc kubenswrapper[4958]: I1206 05:52:34.134329 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f28060d-c759-4b4b-a643-bf8acb76c1b2-scripts\") pod \"placement-db-sync-hn59h\" (UID: \"7f28060d-c759-4b4b-a643-bf8acb76c1b2\") " pod="openstack/placement-db-sync-hn59h" Dec 06 05:52:34 crc kubenswrapper[4958]: I1206 05:52:34.135208 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t78d6\" (UniqueName: \"kubernetes.io/projected/7f28060d-c759-4b4b-a643-bf8acb76c1b2-kube-api-access-t78d6\") pod \"placement-db-sync-hn59h\" (UID: \"7f28060d-c759-4b4b-a643-bf8acb76c1b2\") " pod="openstack/placement-db-sync-hn59h" Dec 06 05:52:34 crc kubenswrapper[4958]: I1206 05:52:34.136756 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f28060d-c759-4b4b-a643-bf8acb76c1b2-config-data\") pod \"placement-db-sync-hn59h\" (UID: \"7f28060d-c759-4b4b-a643-bf8acb76c1b2\") " pod="openstack/placement-db-sync-hn59h" Dec 06 05:52:34 crc kubenswrapper[4958]: I1206 05:52:34.141264 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vhrvv\" (UniqueName: \"kubernetes.io/projected/5dfff6bd-6266-4c2b-87d4-b800bd9bbc51-kube-api-access-vhrvv\") pod \"horizon-7b7f9bd7cc-jjrbv\" (UID: \"5dfff6bd-6266-4c2b-87d4-b800bd9bbc51\") " pod="openstack/horizon-7b7f9bd7cc-jjrbv" Dec 06 05:52:34 crc kubenswrapper[4958]: I1206 05:52:34.142321 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f28060d-c759-4b4b-a643-bf8acb76c1b2-combined-ca-bundle\") pod \"placement-db-sync-hn59h\" (UID: \"7f28060d-c759-4b4b-a643-bf8acb76c1b2\") " pod="openstack/placement-db-sync-hn59h" Dec 06 05:52:34 crc kubenswrapper[4958]: I1206 05:52:34.167926 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-znqzx" Dec 06 05:52:34 crc kubenswrapper[4958]: I1206 05:52:34.191850 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-hn59h" Dec 06 05:52:34 crc kubenswrapper[4958]: I1206 05:52:34.211936 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-pl6mr"] Dec 06 05:52:34 crc kubenswrapper[4958]: I1206 05:52:34.213390 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-578598f949-w55ph" Dec 06 05:52:34 crc kubenswrapper[4958]: I1206 05:52:34.276964 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58bbf48b7f-w8tj6"] Dec 06 05:52:34 crc kubenswrapper[4958]: I1206 05:52:34.289775 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7b7f9bd7cc-jjrbv" Dec 06 05:52:34 crc kubenswrapper[4958]: W1206 05:52:34.385725 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd2af10ec_6c9f_4eb8_9665_13449df417ab.slice/crio-25f8fc7f4bd9b1509988a76673c8c38f51ce875974450475735ef45ac679b823 WatchSource:0}: Error finding container 25f8fc7f4bd9b1509988a76673c8c38f51ce875974450475735ef45ac679b823: Status 404 returned error can't find the container with id 25f8fc7f4bd9b1509988a76673c8c38f51ce875974450475735ef45ac679b823 Dec 06 05:52:34 crc kubenswrapper[4958]: I1206 05:52:34.497003 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Dec 06 05:52:34 crc kubenswrapper[4958]: I1206 05:52:34.781764 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-6f7b4cd5bf-88npk"] Dec 06 05:52:34 crc kubenswrapper[4958]: W1206 05:52:34.793876 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod80bf896b_e5d4_4497_b10a_ae05d7be8886.slice/crio-ce2c036823a3b6c0fa0f646bc6d9058db1f9745089a9c9b6f4f5611b7c3cccbc WatchSource:0}: Error finding container ce2c036823a3b6c0fa0f646bc6d9058db1f9745089a9c9b6f4f5611b7c3cccbc: Status 404 returned error can't find the container with id ce2c036823a3b6c0fa0f646bc6d9058db1f9745089a9c9b6f4f5611b7c3cccbc Dec 06 05:52:34 crc kubenswrapper[4958]: I1206 05:52:34.863699 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6f7b4cd5bf-88npk" event={"ID":"80bf896b-e5d4-4497-b10a-ae05d7be8886","Type":"ContainerStarted","Data":"ce2c036823a3b6c0fa0f646bc6d9058db1f9745089a9c9b6f4f5611b7c3cccbc"} Dec 06 05:52:34 crc kubenswrapper[4958]: I1206 05:52:34.865672 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"d7e65e35-934e-4123-84a2-bc92b2b1213e","Type":"ContainerStarted","Data":"5732e5cd9fe9e4d91786387662205d3d788590ff147f11ca51bbb584c1f814f3"} Dec 06 05:52:34 crc kubenswrapper[4958]: I1206 05:52:34.866937 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-pl6mr" event={"ID":"3ef63ff9-e0db-4b73-881b-2ad904bcaac5","Type":"ContainerStarted","Data":"630829540b25dc1ec5c38ea255b9f32dff0a8be9703a2293498b0ad375315ab6"} Dec 06 05:52:34 crc kubenswrapper[4958]: I1206 05:52:34.868155 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58bbf48b7f-w8tj6" event={"ID":"d2af10ec-6c9f-4eb8-9665-13449df417ab","Type":"ContainerStarted","Data":"25f8fc7f4bd9b1509988a76673c8c38f51ce875974450475735ef45ac679b823"} Dec 06 05:52:34 crc kubenswrapper[4958]: I1206 05:52:34.906199 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Dec 06 05:52:34 crc kubenswrapper[4958]: W1206 05:52:34.913181 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod92bfdbb2_cdd9_49b3_80cb_5aa52422d18e.slice/crio-12b99619f6a130a37cda13a3b81c297cbf2cb61bccfee475929ae11e3a54aebc WatchSource:0}: Error finding container 12b99619f6a130a37cda13a3b81c297cbf2cb61bccfee475929ae11e3a54aebc: Status 404 returned error can't find the container with id 12b99619f6a130a37cda13a3b81c297cbf2cb61bccfee475929ae11e3a54aebc Dec 06 05:52:34 crc kubenswrapper[4958]: I1206 05:52:34.914156 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 06 05:52:34 
crc kubenswrapper[4958]: I1206 05:52:34.958889 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Dec 06 05:52:34 crc kubenswrapper[4958]: I1206 05:52:34.964339 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-lmfxp"] Dec 06 05:52:35 crc kubenswrapper[4958]: W1206 05:52:35.090702 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod94a6d712_4bb0_458b_878a_99dd8d47a8f9.slice/crio-31ac9df547dac43f420871ee6844f754c7a12632497d4e329d342f1c589ea52f WatchSource:0}: Error finding container 31ac9df547dac43f420871ee6844f754c7a12632497d4e329d342f1c589ea52f: Status 404 returned error can't find the container with id 31ac9df547dac43f420871ee6844f754c7a12632497d4e329d342f1c589ea52f Dec 06 05:52:35 crc kubenswrapper[4958]: I1206 05:52:35.096689 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-znqzx"] Dec 06 05:52:35 crc kubenswrapper[4958]: I1206 05:52:35.106718 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-2scr7"] Dec 06 05:52:35 crc kubenswrapper[4958]: I1206 05:52:35.195282 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-hn59h"] Dec 06 05:52:35 crc kubenswrapper[4958]: W1206 05:52:35.204644 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7f28060d_c759_4b4b_a643_bf8acb76c1b2.slice/crio-f5ae2b7b1a48ef150d317e2a95ed471ea4cf0d282ff7b3b1225d50a1a5e97aba WatchSource:0}: Error finding container f5ae2b7b1a48ef150d317e2a95ed471ea4cf0d282ff7b3b1225d50a1a5e97aba: Status 404 returned error can't find the container with id f5ae2b7b1a48ef150d317e2a95ed471ea4cf0d282ff7b3b1225d50a1a5e97aba Dec 06 05:52:35 crc kubenswrapper[4958]: W1206 05:52:35.206266 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod27d884d6_231d_4782_bb16_d2664b44e18f.slice/crio-fb4108ef28b345a839ced74b87486f897a9051d1512b431604401bf163f4a994 WatchSource:0}: Error finding container fb4108ef28b345a839ced74b87486f897a9051d1512b431604401bf163f4a994: Status 404 returned error can't find the container with id fb4108ef28b345a839ced74b87486f897a9051d1512b431604401bf163f4a994 Dec 06 05:52:35 crc kubenswrapper[4958]: I1206 05:52:35.207130 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-578598f949-w55ph"] Dec 06 05:52:35 crc kubenswrapper[4958]: I1206 05:52:35.221787 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7b7f9bd7cc-jjrbv"] Dec 06 05:52:35 crc kubenswrapper[4958]: I1206 05:52:35.895746 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-znqzx" event={"ID":"94a6d712-4bb0-458b-878a-99dd8d47a8f9","Type":"ContainerStarted","Data":"31ac9df547dac43f420871ee6844f754c7a12632497d4e329d342f1c589ea52f"} Dec 06 05:52:35 crc kubenswrapper[4958]: I1206 05:52:35.897138 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"92bfdbb2-cdd9-49b3-80cb-5aa52422d18e","Type":"ContainerStarted","Data":"12b99619f6a130a37cda13a3b81c297cbf2cb61bccfee475929ae11e3a54aebc"} Dec 06 05:52:35 crc kubenswrapper[4958]: I1206 05:52:35.925155 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-2scr7" 
event={"ID":"00f464ea-7983-4ab2-b2b1-07bf67c76e31","Type":"ContainerStarted","Data":"f65ed53fce799899a0065acb405f0661507c6f062717606d4673d2c739f139c3"} Dec 06 05:52:35 crc kubenswrapper[4958]: I1206 05:52:35.938259 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Dec 06 05:52:35 crc kubenswrapper[4958]: I1206 05:52:35.939811 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"e6eb8aab-9b25-4861-972a-b100ba14ab24","Type":"ContainerStarted","Data":"dc10c820657929c180d82d4fff7abc6475c692868711b8fba830665b18d376a5"} Dec 06 05:52:35 crc kubenswrapper[4958]: I1206 05:52:35.942132 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-hn59h" event={"ID":"7f28060d-c759-4b4b-a643-bf8acb76c1b2","Type":"ContainerStarted","Data":"f5ae2b7b1a48ef150d317e2a95ed471ea4cf0d282ff7b3b1225d50a1a5e97aba"} Dec 06 05:52:35 crc kubenswrapper[4958]: I1206 05:52:35.947034 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"d7e65e35-934e-4123-84a2-bc92b2b1213e","Type":"ContainerStarted","Data":"bbff41250991a8fa0633f1de99c96f3d870ef8a4ed736b3a21987ddb51fb612e"} Dec 06 05:52:35 crc kubenswrapper[4958]: I1206 05:52:35.947058 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"d7e65e35-934e-4123-84a2-bc92b2b1213e","Type":"ContainerStarted","Data":"5c3db8c3bc4daf4f8f41ec473631d85b1183d7c5a90e34780c4c96a535ca7a82"} Dec 06 05:52:35 crc kubenswrapper[4958]: I1206 05:52:35.966700 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-lmfxp" event={"ID":"fee2c3d7-24fe-4966-878b-90147b8f5cfb","Type":"ContainerStarted","Data":"25420bf258570f8bfa0e1683097b68eb745bc3426ee0c7a15ed415e9b11b16a6"} Dec 06 05:52:35 crc kubenswrapper[4958]: I1206 05:52:35.966740 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-lmfxp" event={"ID":"fee2c3d7-24fe-4966-878b-90147b8f5cfb","Type":"ContainerStarted","Data":"789b3f882c05d428259f945f97d432b83c0b8643a3ee6ba9f50a89f6e0f98d76"} Dec 06 05:52:35 crc kubenswrapper[4958]: I1206 05:52:35.978908 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-pl6mr" event={"ID":"3ef63ff9-e0db-4b73-881b-2ad904bcaac5","Type":"ContainerStarted","Data":"9cc46fb4cf6241ce5e4a1aba64180f40f862c74409f3f3db65256519557e593a"} Dec 06 05:52:35 crc kubenswrapper[4958]: I1206 05:52:35.992195 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-lmfxp" podStartSLOduration=2.992178327 podStartE2EDuration="2.992178327s" podCreationTimestamp="2025-12-06 05:52:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:52:35.986940536 +0000 UTC m=+1466.520711299" watchObservedRunningTime="2025-12-06 05:52:35.992178327 +0000 UTC m=+1466.525949090" Dec 06 05:52:35 crc kubenswrapper[4958]: I1206 05:52:35.992574 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7b7f9bd7cc-jjrbv" event={"ID":"5dfff6bd-6266-4c2b-87d4-b800bd9bbc51","Type":"ContainerStarted","Data":"3717b51df2466f483de49daaf9cc2f3a3d7114483e45f0f6178ca875ee96529c"} Dec 06 05:52:36 crc kubenswrapper[4958]: I1206 05:52:36.042953 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578598f949-w55ph" 
event={"ID":"27d884d6-231d-4782-bb16-d2664b44e18f","Type":"ContainerStarted","Data":"836ddca1b55286d5e654dca05ec6d2c671f6fa9665927997f762aeb29812d52a"} Dec 06 05:52:36 crc kubenswrapper[4958]: I1206 05:52:36.043005 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578598f949-w55ph" event={"ID":"27d884d6-231d-4782-bb16-d2664b44e18f","Type":"ContainerStarted","Data":"fb4108ef28b345a839ced74b87486f897a9051d1512b431604401bf163f4a994"} Dec 06 05:52:36 crc kubenswrapper[4958]: I1206 05:52:36.054604 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6f7b4cd5bf-88npk"] Dec 06 05:52:36 crc kubenswrapper[4958]: I1206 05:52:36.055727 4958 generic.go:334] "Generic (PLEG): container finished" podID="d2af10ec-6c9f-4eb8-9665-13449df417ab" containerID="d813f3f0ebc3cf3f6edd322021d74ac1b3c4d4f4892b640e67f5d345c7dec4ce" exitCode=0 Dec 06 05:52:36 crc kubenswrapper[4958]: I1206 05:52:36.055797 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58bbf48b7f-w8tj6" event={"ID":"d2af10ec-6c9f-4eb8-9665-13449df417ab","Type":"ContainerDied","Data":"d813f3f0ebc3cf3f6edd322021d74ac1b3c4d4f4892b640e67f5d345c7dec4ce"} Dec 06 05:52:36 crc kubenswrapper[4958]: I1206 05:52:36.060160 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9664ab8f-eb78-4177-8847-54af6ae2fce5","Type":"ContainerStarted","Data":"1c24d751e159f00588939c1816d45dd8ec03dab3fbf8924f1534cf14f9ec15bf"} Dec 06 05:52:36 crc kubenswrapper[4958]: I1206 05:52:36.065883 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-pl6mr" podStartSLOduration=3.065852588 podStartE2EDuration="3.065852588s" podCreationTimestamp="2025-12-06 05:52:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:52:36.011450135 +0000 UTC m=+1466.545220898" watchObservedRunningTime="2025-12-06 05:52:36.065852588 +0000 UTC m=+1466.599623351" Dec 06 05:52:36 crc kubenswrapper[4958]: I1206 05:52:36.103542 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 06 05:52:36 crc kubenswrapper[4958]: I1206 05:52:36.118412 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-78555484d5-rrzlr"] Dec 06 05:52:36 crc kubenswrapper[4958]: I1206 05:52:36.125225 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-78555484d5-rrzlr" Dec 06 05:52:36 crc kubenswrapper[4958]: I1206 05:52:36.241165 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-78555484d5-rrzlr"] Dec 06 05:52:36 crc kubenswrapper[4958]: I1206 05:52:36.280913 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5a21d87f-33a1-4e86-859f-03a2bace9908-scripts\") pod \"horizon-78555484d5-rrzlr\" (UID: \"5a21d87f-33a1-4e86-859f-03a2bace9908\") " pod="openstack/horizon-78555484d5-rrzlr" Dec 06 05:52:36 crc kubenswrapper[4958]: I1206 05:52:36.280976 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5a21d87f-33a1-4e86-859f-03a2bace9908-logs\") pod \"horizon-78555484d5-rrzlr\" (UID: \"5a21d87f-33a1-4e86-859f-03a2bace9908\") " pod="openstack/horizon-78555484d5-rrzlr" Dec 06 05:52:36 crc kubenswrapper[4958]: I1206 05:52:36.281094 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5a21d87f-33a1-4e86-859f-03a2bace9908-horizon-secret-key\") pod \"horizon-78555484d5-rrzlr\" (UID: \"5a21d87f-33a1-4e86-859f-03a2bace9908\") " pod="openstack/horizon-78555484d5-rrzlr" Dec 06 05:52:36 crc kubenswrapper[4958]: I1206 05:52:36.281157 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7vxz\" (UniqueName: \"kubernetes.io/projected/5a21d87f-33a1-4e86-859f-03a2bace9908-kube-api-access-z7vxz\") pod \"horizon-78555484d5-rrzlr\" (UID: \"5a21d87f-33a1-4e86-859f-03a2bace9908\") " pod="openstack/horizon-78555484d5-rrzlr" Dec 06 05:52:36 crc kubenswrapper[4958]: I1206 05:52:36.281183 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5a21d87f-33a1-4e86-859f-03a2bace9908-config-data\") pod \"horizon-78555484d5-rrzlr\" (UID: \"5a21d87f-33a1-4e86-859f-03a2bace9908\") " pod="openstack/horizon-78555484d5-rrzlr" Dec 06 05:52:36 crc kubenswrapper[4958]: I1206 05:52:36.402285 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5a21d87f-33a1-4e86-859f-03a2bace9908-scripts\") pod \"horizon-78555484d5-rrzlr\" (UID: \"5a21d87f-33a1-4e86-859f-03a2bace9908\") " pod="openstack/horizon-78555484d5-rrzlr" Dec 06 05:52:36 crc kubenswrapper[4958]: I1206 05:52:36.402330 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5a21d87f-33a1-4e86-859f-03a2bace9908-logs\") pod \"horizon-78555484d5-rrzlr\" (UID: \"5a21d87f-33a1-4e86-859f-03a2bace9908\") " pod="openstack/horizon-78555484d5-rrzlr" Dec 06 05:52:36 crc kubenswrapper[4958]: I1206 05:52:36.402437 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5a21d87f-33a1-4e86-859f-03a2bace9908-horizon-secret-key\") pod \"horizon-78555484d5-rrzlr\" (UID: \"5a21d87f-33a1-4e86-859f-03a2bace9908\") " pod="openstack/horizon-78555484d5-rrzlr" Dec 06 05:52:36 crc kubenswrapper[4958]: I1206 05:52:36.402513 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7vxz\" (UniqueName: 
\"kubernetes.io/projected/5a21d87f-33a1-4e86-859f-03a2bace9908-kube-api-access-z7vxz\") pod \"horizon-78555484d5-rrzlr\" (UID: \"5a21d87f-33a1-4e86-859f-03a2bace9908\") " pod="openstack/horizon-78555484d5-rrzlr" Dec 06 05:52:36 crc kubenswrapper[4958]: I1206 05:52:36.402549 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5a21d87f-33a1-4e86-859f-03a2bace9908-config-data\") pod \"horizon-78555484d5-rrzlr\" (UID: \"5a21d87f-33a1-4e86-859f-03a2bace9908\") " pod="openstack/horizon-78555484d5-rrzlr" Dec 06 05:52:36 crc kubenswrapper[4958]: I1206 05:52:36.403310 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5a21d87f-33a1-4e86-859f-03a2bace9908-logs\") pod \"horizon-78555484d5-rrzlr\" (UID: \"5a21d87f-33a1-4e86-859f-03a2bace9908\") " pod="openstack/horizon-78555484d5-rrzlr" Dec 06 05:52:36 crc kubenswrapper[4958]: I1206 05:52:36.404224 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5a21d87f-33a1-4e86-859f-03a2bace9908-scripts\") pod \"horizon-78555484d5-rrzlr\" (UID: \"5a21d87f-33a1-4e86-859f-03a2bace9908\") " pod="openstack/horizon-78555484d5-rrzlr" Dec 06 05:52:36 crc kubenswrapper[4958]: I1206 05:52:36.404344 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5a21d87f-33a1-4e86-859f-03a2bace9908-config-data\") pod \"horizon-78555484d5-rrzlr\" (UID: \"5a21d87f-33a1-4e86-859f-03a2bace9908\") " pod="openstack/horizon-78555484d5-rrzlr" Dec 06 05:52:36 crc kubenswrapper[4958]: I1206 05:52:36.427908 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5a21d87f-33a1-4e86-859f-03a2bace9908-horizon-secret-key\") pod \"horizon-78555484d5-rrzlr\" (UID: \"5a21d87f-33a1-4e86-859f-03a2bace9908\") " pod="openstack/horizon-78555484d5-rrzlr" Dec 06 05:52:36 crc kubenswrapper[4958]: I1206 05:52:36.445088 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7vxz\" (UniqueName: \"kubernetes.io/projected/5a21d87f-33a1-4e86-859f-03a2bace9908-kube-api-access-z7vxz\") pod \"horizon-78555484d5-rrzlr\" (UID: \"5a21d87f-33a1-4e86-859f-03a2bace9908\") " pod="openstack/horizon-78555484d5-rrzlr" Dec 06 05:52:36 crc kubenswrapper[4958]: I1206 05:52:36.562927 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-78555484d5-rrzlr" Dec 06 05:52:36 crc kubenswrapper[4958]: I1206 05:52:36.779117 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58bbf48b7f-w8tj6" Dec 06 05:52:36 crc kubenswrapper[4958]: I1206 05:52:36.920612 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d2af10ec-6c9f-4eb8-9665-13449df417ab-ovsdbserver-nb\") pod \"d2af10ec-6c9f-4eb8-9665-13449df417ab\" (UID: \"d2af10ec-6c9f-4eb8-9665-13449df417ab\") " Dec 06 05:52:36 crc kubenswrapper[4958]: I1206 05:52:36.920726 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d2af10ec-6c9f-4eb8-9665-13449df417ab-dns-swift-storage-0\") pod \"d2af10ec-6c9f-4eb8-9665-13449df417ab\" (UID: \"d2af10ec-6c9f-4eb8-9665-13449df417ab\") " Dec 06 05:52:36 crc kubenswrapper[4958]: I1206 05:52:36.920884 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d2af10ec-6c9f-4eb8-9665-13449df417ab-ovsdbserver-sb\") pod \"d2af10ec-6c9f-4eb8-9665-13449df417ab\" (UID: \"d2af10ec-6c9f-4eb8-9665-13449df417ab\") " Dec 06 05:52:36 crc kubenswrapper[4958]: I1206 05:52:36.920927 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k26lm\" (UniqueName: \"kubernetes.io/projected/d2af10ec-6c9f-4eb8-9665-13449df417ab-kube-api-access-k26lm\") pod \"d2af10ec-6c9f-4eb8-9665-13449df417ab\" (UID: \"d2af10ec-6c9f-4eb8-9665-13449df417ab\") " Dec 06 05:52:36 crc kubenswrapper[4958]: I1206 05:52:36.920952 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2af10ec-6c9f-4eb8-9665-13449df417ab-config\") pod \"d2af10ec-6c9f-4eb8-9665-13449df417ab\" (UID: \"d2af10ec-6c9f-4eb8-9665-13449df417ab\") " Dec 06 05:52:36 crc kubenswrapper[4958]: I1206 05:52:36.921030 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2af10ec-6c9f-4eb8-9665-13449df417ab-dns-svc\") pod \"d2af10ec-6c9f-4eb8-9665-13449df417ab\" (UID: \"d2af10ec-6c9f-4eb8-9665-13449df417ab\") " Dec 06 05:52:36 crc kubenswrapper[4958]: I1206 05:52:36.930719 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2af10ec-6c9f-4eb8-9665-13449df417ab-kube-api-access-k26lm" (OuterVolumeSpecName: "kube-api-access-k26lm") pod "d2af10ec-6c9f-4eb8-9665-13449df417ab" (UID: "d2af10ec-6c9f-4eb8-9665-13449df417ab"). InnerVolumeSpecName "kube-api-access-k26lm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:52:36 crc kubenswrapper[4958]: I1206 05:52:36.958802 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2af10ec-6c9f-4eb8-9665-13449df417ab-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d2af10ec-6c9f-4eb8-9665-13449df417ab" (UID: "d2af10ec-6c9f-4eb8-9665-13449df417ab"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:52:36 crc kubenswrapper[4958]: I1206 05:52:36.969653 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2af10ec-6c9f-4eb8-9665-13449df417ab-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d2af10ec-6c9f-4eb8-9665-13449df417ab" (UID: "d2af10ec-6c9f-4eb8-9665-13449df417ab"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:52:36 crc kubenswrapper[4958]: I1206 05:52:36.976976 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2af10ec-6c9f-4eb8-9665-13449df417ab-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d2af10ec-6c9f-4eb8-9665-13449df417ab" (UID: "d2af10ec-6c9f-4eb8-9665-13449df417ab"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:52:37 crc kubenswrapper[4958]: I1206 05:52:37.008712 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2af10ec-6c9f-4eb8-9665-13449df417ab-config" (OuterVolumeSpecName: "config") pod "d2af10ec-6c9f-4eb8-9665-13449df417ab" (UID: "d2af10ec-6c9f-4eb8-9665-13449df417ab"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:52:37 crc kubenswrapper[4958]: I1206 05:52:37.023274 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2af10ec-6c9f-4eb8-9665-13449df417ab-config\") on node \"crc\" DevicePath \"\"" Dec 06 05:52:37 crc kubenswrapper[4958]: I1206 05:52:37.023307 4958 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2af10ec-6c9f-4eb8-9665-13449df417ab-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 06 05:52:37 crc kubenswrapper[4958]: I1206 05:52:37.023319 4958 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d2af10ec-6c9f-4eb8-9665-13449df417ab-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 06 05:52:37 crc kubenswrapper[4958]: I1206 05:52:37.023329 4958 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d2af10ec-6c9f-4eb8-9665-13449df417ab-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 06 05:52:37 crc kubenswrapper[4958]: I1206 05:52:37.023340 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k26lm\" (UniqueName: \"kubernetes.io/projected/d2af10ec-6c9f-4eb8-9665-13449df417ab-kube-api-access-k26lm\") on node \"crc\" DevicePath \"\"" Dec 06 05:52:37 crc kubenswrapper[4958]: I1206 05:52:37.024528 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2af10ec-6c9f-4eb8-9665-13449df417ab-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d2af10ec-6c9f-4eb8-9665-13449df417ab" (UID: "d2af10ec-6c9f-4eb8-9665-13449df417ab"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:52:37 crc kubenswrapper[4958]: I1206 05:52:37.077256 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58bbf48b7f-w8tj6" event={"ID":"d2af10ec-6c9f-4eb8-9665-13449df417ab","Type":"ContainerDied","Data":"25f8fc7f4bd9b1509988a76673c8c38f51ce875974450475735ef45ac679b823"} Dec 06 05:52:37 crc kubenswrapper[4958]: I1206 05:52:37.077315 4958 scope.go:117] "RemoveContainer" containerID="d813f3f0ebc3cf3f6edd322021d74ac1b3c4d4f4892b640e67f5d345c7dec4ce" Dec 06 05:52:37 crc kubenswrapper[4958]: I1206 05:52:37.077504 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58bbf48b7f-w8tj6" Dec 06 05:52:37 crc kubenswrapper[4958]: I1206 05:52:37.127206 4958 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d2af10ec-6c9f-4eb8-9665-13449df417ab-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 06 05:52:37 crc kubenswrapper[4958]: I1206 05:52:37.140213 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58bbf48b7f-w8tj6"] Dec 06 05:52:37 crc kubenswrapper[4958]: I1206 05:52:37.159801 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-58bbf48b7f-w8tj6"] Dec 06 05:52:37 crc kubenswrapper[4958]: I1206 05:52:37.186278 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-78555484d5-rrzlr"] Dec 06 05:52:37 crc kubenswrapper[4958]: I1206 05:52:37.777918 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2af10ec-6c9f-4eb8-9665-13449df417ab" path="/var/lib/kubelet/pods/d2af10ec-6c9f-4eb8-9665-13449df417ab/volumes" Dec 06 05:52:38 crc kubenswrapper[4958]: I1206 05:52:38.090372 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-78555484d5-rrzlr" event={"ID":"5a21d87f-33a1-4e86-859f-03a2bace9908","Type":"ContainerStarted","Data":"eb7ad830e4dab564a8663e2cc19ec6e0d46bb2748e961f928d1f7344b5196d2e"} Dec 06 05:52:39 crc kubenswrapper[4958]: I1206 05:52:39.867804 4958 patch_prober.go:28] interesting pod/machine-config-daemon-5ktnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 05:52:39 crc kubenswrapper[4958]: I1206 05:52:39.868147 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 05:52:40 crc kubenswrapper[4958]: I1206 05:52:40.110369 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="d7e65e35-934e-4123-84a2-bc92b2b1213e" containerName="watcher-api-log" containerID="cri-o://5c3db8c3bc4daf4f8f41ec473631d85b1183d7c5a90e34780c4c96a535ca7a82" gracePeriod=30 Dec 06 05:52:40 crc kubenswrapper[4958]: I1206 05:52:40.110501 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="d7e65e35-934e-4123-84a2-bc92b2b1213e" containerName="watcher-api" containerID="cri-o://bbff41250991a8fa0633f1de99c96f3d870ef8a4ed736b3a21987ddb51fb612e" gracePeriod=30 Dec 06 05:52:40 crc kubenswrapper[4958]: I1206 05:52:40.110681 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Dec 06 05:52:40 crc kubenswrapper[4958]: I1206 05:52:40.125176 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="d7e65e35-934e-4123-84a2-bc92b2b1213e" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.147:9322/\": EOF" Dec 06 05:52:40 crc kubenswrapper[4958]: I1206 05:52:40.150356 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=7.150338776 podStartE2EDuration="7.150338776s" podCreationTimestamp="2025-12-06 
05:52:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:52:40.145205058 +0000 UTC m=+1470.678975821" watchObservedRunningTime="2025-12-06 05:52:40.150338776 +0000 UTC m=+1470.684109539" Dec 06 05:52:42 crc kubenswrapper[4958]: I1206 05:52:42.070285 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7b7f9bd7cc-jjrbv"] Dec 06 05:52:42 crc kubenswrapper[4958]: I1206 05:52:42.092644 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-776ddc8896-9vdrs"] Dec 06 05:52:42 crc kubenswrapper[4958]: E1206 05:52:42.093091 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2af10ec-6c9f-4eb8-9665-13449df417ab" containerName="init" Dec 06 05:52:42 crc kubenswrapper[4958]: I1206 05:52:42.093104 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2af10ec-6c9f-4eb8-9665-13449df417ab" containerName="init" Dec 06 05:52:42 crc kubenswrapper[4958]: I1206 05:52:42.093315 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2af10ec-6c9f-4eb8-9665-13449df417ab" containerName="init" Dec 06 05:52:42 crc kubenswrapper[4958]: I1206 05:52:42.094282 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-776ddc8896-9vdrs" Dec 06 05:52:42 crc kubenswrapper[4958]: I1206 05:52:42.098132 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Dec 06 05:52:42 crc kubenswrapper[4958]: I1206 05:52:42.125154 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-776ddc8896-9vdrs"] Dec 06 05:52:42 crc kubenswrapper[4958]: I1206 05:52:42.163890 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-78555484d5-rrzlr"] Dec 06 05:52:42 crc kubenswrapper[4958]: I1206 05:52:42.177088 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-65548cc856-4tstl"] Dec 06 05:52:42 crc kubenswrapper[4958]: I1206 05:52:42.178902 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-65548cc856-4tstl" Dec 06 05:52:42 crc kubenswrapper[4958]: I1206 05:52:42.222672 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-65548cc856-4tstl"] Dec 06 05:52:42 crc kubenswrapper[4958]: I1206 05:52:42.276101 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e3d20216-bf3b-43e0-b212-e05057a211fd-logs\") pod \"horizon-776ddc8896-9vdrs\" (UID: \"e3d20216-bf3b-43e0-b212-e05057a211fd\") " pod="openstack/horizon-776ddc8896-9vdrs" Dec 06 05:52:42 crc kubenswrapper[4958]: I1206 05:52:42.276211 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3d20216-bf3b-43e0-b212-e05057a211fd-combined-ca-bundle\") pod \"horizon-776ddc8896-9vdrs\" (UID: \"e3d20216-bf3b-43e0-b212-e05057a211fd\") " pod="openstack/horizon-776ddc8896-9vdrs" Dec 06 05:52:42 crc kubenswrapper[4958]: I1206 05:52:42.276244 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xgp8\" (UniqueName: \"kubernetes.io/projected/44386224-0241-4c6e-b12f-a1bef3954fe3-kube-api-access-4xgp8\") pod \"horizon-65548cc856-4tstl\" (UID: \"44386224-0241-4c6e-b12f-a1bef3954fe3\") " pod="openstack/horizon-65548cc856-4tstl" Dec 06 05:52:42 crc kubenswrapper[4958]: I1206 05:52:42.276263 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/44386224-0241-4c6e-b12f-a1bef3954fe3-scripts\") pod \"horizon-65548cc856-4tstl\" (UID: \"44386224-0241-4c6e-b12f-a1bef3954fe3\") " pod="openstack/horizon-65548cc856-4tstl" Dec 06 05:52:42 crc kubenswrapper[4958]: I1206 05:52:42.276333 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e3d20216-bf3b-43e0-b212-e05057a211fd-horizon-secret-key\") pod \"horizon-776ddc8896-9vdrs\" (UID: \"e3d20216-bf3b-43e0-b212-e05057a211fd\") " pod="openstack/horizon-776ddc8896-9vdrs" Dec 06 05:52:42 crc kubenswrapper[4958]: I1206 05:52:42.276370 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/44386224-0241-4c6e-b12f-a1bef3954fe3-config-data\") pod \"horizon-65548cc856-4tstl\" (UID: \"44386224-0241-4c6e-b12f-a1bef3954fe3\") " pod="openstack/horizon-65548cc856-4tstl" Dec 06 05:52:42 crc kubenswrapper[4958]: I1206 05:52:42.276396 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44386224-0241-4c6e-b12f-a1bef3954fe3-combined-ca-bundle\") pod \"horizon-65548cc856-4tstl\" (UID: \"44386224-0241-4c6e-b12f-a1bef3954fe3\") " pod="openstack/horizon-65548cc856-4tstl" Dec 06 05:52:42 crc kubenswrapper[4958]: I1206 05:52:42.276429 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e3d20216-bf3b-43e0-b212-e05057a211fd-config-data\") pod \"horizon-776ddc8896-9vdrs\" (UID: \"e3d20216-bf3b-43e0-b212-e05057a211fd\") " pod="openstack/horizon-776ddc8896-9vdrs" Dec 06 05:52:42 crc kubenswrapper[4958]: I1206 05:52:42.276496 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-2w7ln\" (UniqueName: \"kubernetes.io/projected/e3d20216-bf3b-43e0-b212-e05057a211fd-kube-api-access-2w7ln\") pod \"horizon-776ddc8896-9vdrs\" (UID: \"e3d20216-bf3b-43e0-b212-e05057a211fd\") " pod="openstack/horizon-776ddc8896-9vdrs" Dec 06 05:52:42 crc kubenswrapper[4958]: I1206 05:52:42.276515 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e3d20216-bf3b-43e0-b212-e05057a211fd-scripts\") pod \"horizon-776ddc8896-9vdrs\" (UID: \"e3d20216-bf3b-43e0-b212-e05057a211fd\") " pod="openstack/horizon-776ddc8896-9vdrs" Dec 06 05:52:42 crc kubenswrapper[4958]: I1206 05:52:42.276614 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/44386224-0241-4c6e-b12f-a1bef3954fe3-horizon-secret-key\") pod \"horizon-65548cc856-4tstl\" (UID: \"44386224-0241-4c6e-b12f-a1bef3954fe3\") " pod="openstack/horizon-65548cc856-4tstl" Dec 06 05:52:42 crc kubenswrapper[4958]: I1206 05:52:42.276639 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/44386224-0241-4c6e-b12f-a1bef3954fe3-horizon-tls-certs\") pod \"horizon-65548cc856-4tstl\" (UID: \"44386224-0241-4c6e-b12f-a1bef3954fe3\") " pod="openstack/horizon-65548cc856-4tstl" Dec 06 05:52:42 crc kubenswrapper[4958]: I1206 05:52:42.276684 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3d20216-bf3b-43e0-b212-e05057a211fd-horizon-tls-certs\") pod \"horizon-776ddc8896-9vdrs\" (UID: \"e3d20216-bf3b-43e0-b212-e05057a211fd\") " pod="openstack/horizon-776ddc8896-9vdrs" Dec 06 05:52:42 crc kubenswrapper[4958]: I1206 05:52:42.276706 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/44386224-0241-4c6e-b12f-a1bef3954fe3-logs\") pod \"horizon-65548cc856-4tstl\" (UID: \"44386224-0241-4c6e-b12f-a1bef3954fe3\") " pod="openstack/horizon-65548cc856-4tstl" Dec 06 05:52:42 crc kubenswrapper[4958]: I1206 05:52:42.378003 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e3d20216-bf3b-43e0-b212-e05057a211fd-horizon-secret-key\") pod \"horizon-776ddc8896-9vdrs\" (UID: \"e3d20216-bf3b-43e0-b212-e05057a211fd\") " pod="openstack/horizon-776ddc8896-9vdrs" Dec 06 05:52:42 crc kubenswrapper[4958]: I1206 05:52:42.378062 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/44386224-0241-4c6e-b12f-a1bef3954fe3-config-data\") pod \"horizon-65548cc856-4tstl\" (UID: \"44386224-0241-4c6e-b12f-a1bef3954fe3\") " pod="openstack/horizon-65548cc856-4tstl" Dec 06 05:52:42 crc kubenswrapper[4958]: I1206 05:52:42.378088 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44386224-0241-4c6e-b12f-a1bef3954fe3-combined-ca-bundle\") pod \"horizon-65548cc856-4tstl\" (UID: \"44386224-0241-4c6e-b12f-a1bef3954fe3\") " pod="openstack/horizon-65548cc856-4tstl" Dec 06 05:52:42 crc kubenswrapper[4958]: I1206 05:52:42.378114 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/configmap/e3d20216-bf3b-43e0-b212-e05057a211fd-config-data\") pod \"horizon-776ddc8896-9vdrs\" (UID: \"e3d20216-bf3b-43e0-b212-e05057a211fd\") " pod="openstack/horizon-776ddc8896-9vdrs" Dec 06 05:52:42 crc kubenswrapper[4958]: I1206 05:52:42.378143 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2w7ln\" (UniqueName: \"kubernetes.io/projected/e3d20216-bf3b-43e0-b212-e05057a211fd-kube-api-access-2w7ln\") pod \"horizon-776ddc8896-9vdrs\" (UID: \"e3d20216-bf3b-43e0-b212-e05057a211fd\") " pod="openstack/horizon-776ddc8896-9vdrs" Dec 06 05:52:42 crc kubenswrapper[4958]: I1206 05:52:42.378164 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e3d20216-bf3b-43e0-b212-e05057a211fd-scripts\") pod \"horizon-776ddc8896-9vdrs\" (UID: \"e3d20216-bf3b-43e0-b212-e05057a211fd\") " pod="openstack/horizon-776ddc8896-9vdrs" Dec 06 05:52:42 crc kubenswrapper[4958]: I1206 05:52:42.378183 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/44386224-0241-4c6e-b12f-a1bef3954fe3-horizon-secret-key\") pod \"horizon-65548cc856-4tstl\" (UID: \"44386224-0241-4c6e-b12f-a1bef3954fe3\") " pod="openstack/horizon-65548cc856-4tstl" Dec 06 05:52:42 crc kubenswrapper[4958]: I1206 05:52:42.378204 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/44386224-0241-4c6e-b12f-a1bef3954fe3-horizon-tls-certs\") pod \"horizon-65548cc856-4tstl\" (UID: \"44386224-0241-4c6e-b12f-a1bef3954fe3\") " pod="openstack/horizon-65548cc856-4tstl" Dec 06 05:52:42 crc kubenswrapper[4958]: I1206 05:52:42.378220 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3d20216-bf3b-43e0-b212-e05057a211fd-horizon-tls-certs\") pod \"horizon-776ddc8896-9vdrs\" (UID: \"e3d20216-bf3b-43e0-b212-e05057a211fd\") " pod="openstack/horizon-776ddc8896-9vdrs" Dec 06 05:52:42 crc kubenswrapper[4958]: I1206 05:52:42.378239 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/44386224-0241-4c6e-b12f-a1bef3954fe3-logs\") pod \"horizon-65548cc856-4tstl\" (UID: \"44386224-0241-4c6e-b12f-a1bef3954fe3\") " pod="openstack/horizon-65548cc856-4tstl" Dec 06 05:52:42 crc kubenswrapper[4958]: I1206 05:52:42.378259 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e3d20216-bf3b-43e0-b212-e05057a211fd-logs\") pod \"horizon-776ddc8896-9vdrs\" (UID: \"e3d20216-bf3b-43e0-b212-e05057a211fd\") " pod="openstack/horizon-776ddc8896-9vdrs" Dec 06 05:52:42 crc kubenswrapper[4958]: I1206 05:52:42.378290 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3d20216-bf3b-43e0-b212-e05057a211fd-combined-ca-bundle\") pod \"horizon-776ddc8896-9vdrs\" (UID: \"e3d20216-bf3b-43e0-b212-e05057a211fd\") " pod="openstack/horizon-776ddc8896-9vdrs" Dec 06 05:52:42 crc kubenswrapper[4958]: I1206 05:52:42.378313 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4xgp8\" (UniqueName: \"kubernetes.io/projected/44386224-0241-4c6e-b12f-a1bef3954fe3-kube-api-access-4xgp8\") pod \"horizon-65548cc856-4tstl\" (UID: 
\"44386224-0241-4c6e-b12f-a1bef3954fe3\") " pod="openstack/horizon-65548cc856-4tstl" Dec 06 05:52:42 crc kubenswrapper[4958]: I1206 05:52:42.378331 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/44386224-0241-4c6e-b12f-a1bef3954fe3-scripts\") pod \"horizon-65548cc856-4tstl\" (UID: \"44386224-0241-4c6e-b12f-a1bef3954fe3\") " pod="openstack/horizon-65548cc856-4tstl" Dec 06 05:52:42 crc kubenswrapper[4958]: I1206 05:52:42.379355 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/44386224-0241-4c6e-b12f-a1bef3954fe3-scripts\") pod \"horizon-65548cc856-4tstl\" (UID: \"44386224-0241-4c6e-b12f-a1bef3954fe3\") " pod="openstack/horizon-65548cc856-4tstl" Dec 06 05:52:42 crc kubenswrapper[4958]: I1206 05:52:42.384258 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/44386224-0241-4c6e-b12f-a1bef3954fe3-logs\") pod \"horizon-65548cc856-4tstl\" (UID: \"44386224-0241-4c6e-b12f-a1bef3954fe3\") " pod="openstack/horizon-65548cc856-4tstl" Dec 06 05:52:42 crc kubenswrapper[4958]: I1206 05:52:42.384966 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e3d20216-bf3b-43e0-b212-e05057a211fd-horizon-secret-key\") pod \"horizon-776ddc8896-9vdrs\" (UID: \"e3d20216-bf3b-43e0-b212-e05057a211fd\") " pod="openstack/horizon-776ddc8896-9vdrs" Dec 06 05:52:42 crc kubenswrapper[4958]: I1206 05:52:42.385290 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e3d20216-bf3b-43e0-b212-e05057a211fd-logs\") pod \"horizon-776ddc8896-9vdrs\" (UID: \"e3d20216-bf3b-43e0-b212-e05057a211fd\") " pod="openstack/horizon-776ddc8896-9vdrs" Dec 06 05:52:42 crc kubenswrapper[4958]: I1206 05:52:42.386809 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/44386224-0241-4c6e-b12f-a1bef3954fe3-horizon-secret-key\") pod \"horizon-65548cc856-4tstl\" (UID: \"44386224-0241-4c6e-b12f-a1bef3954fe3\") " pod="openstack/horizon-65548cc856-4tstl" Dec 06 05:52:42 crc kubenswrapper[4958]: I1206 05:52:42.387050 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/44386224-0241-4c6e-b12f-a1bef3954fe3-config-data\") pod \"horizon-65548cc856-4tstl\" (UID: \"44386224-0241-4c6e-b12f-a1bef3954fe3\") " pod="openstack/horizon-65548cc856-4tstl" Dec 06 05:52:42 crc kubenswrapper[4958]: I1206 05:52:42.389524 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/44386224-0241-4c6e-b12f-a1bef3954fe3-horizon-tls-certs\") pod \"horizon-65548cc856-4tstl\" (UID: \"44386224-0241-4c6e-b12f-a1bef3954fe3\") " pod="openstack/horizon-65548cc856-4tstl" Dec 06 05:52:42 crc kubenswrapper[4958]: I1206 05:52:42.389736 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e3d20216-bf3b-43e0-b212-e05057a211fd-scripts\") pod \"horizon-776ddc8896-9vdrs\" (UID: \"e3d20216-bf3b-43e0-b212-e05057a211fd\") " pod="openstack/horizon-776ddc8896-9vdrs" Dec 06 05:52:42 crc kubenswrapper[4958]: I1206 05:52:42.390113 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/e3d20216-bf3b-43e0-b212-e05057a211fd-config-data\") pod \"horizon-776ddc8896-9vdrs\" (UID: \"e3d20216-bf3b-43e0-b212-e05057a211fd\") " pod="openstack/horizon-776ddc8896-9vdrs" Dec 06 05:52:42 crc kubenswrapper[4958]: I1206 05:52:42.392017 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/44386224-0241-4c6e-b12f-a1bef3954fe3-combined-ca-bundle\") pod \"horizon-65548cc856-4tstl\" (UID: \"44386224-0241-4c6e-b12f-a1bef3954fe3\") " pod="openstack/horizon-65548cc856-4tstl" Dec 06 05:52:42 crc kubenswrapper[4958]: I1206 05:52:42.398611 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3d20216-bf3b-43e0-b212-e05057a211fd-combined-ca-bundle\") pod \"horizon-776ddc8896-9vdrs\" (UID: \"e3d20216-bf3b-43e0-b212-e05057a211fd\") " pod="openstack/horizon-776ddc8896-9vdrs" Dec 06 05:52:42 crc kubenswrapper[4958]: I1206 05:52:42.398986 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3d20216-bf3b-43e0-b212-e05057a211fd-horizon-tls-certs\") pod \"horizon-776ddc8896-9vdrs\" (UID: \"e3d20216-bf3b-43e0-b212-e05057a211fd\") " pod="openstack/horizon-776ddc8896-9vdrs" Dec 06 05:52:42 crc kubenswrapper[4958]: I1206 05:52:42.402099 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2w7ln\" (UniqueName: \"kubernetes.io/projected/e3d20216-bf3b-43e0-b212-e05057a211fd-kube-api-access-2w7ln\") pod \"horizon-776ddc8896-9vdrs\" (UID: \"e3d20216-bf3b-43e0-b212-e05057a211fd\") " pod="openstack/horizon-776ddc8896-9vdrs" Dec 06 05:52:42 crc kubenswrapper[4958]: I1206 05:52:42.403899 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4xgp8\" (UniqueName: \"kubernetes.io/projected/44386224-0241-4c6e-b12f-a1bef3954fe3-kube-api-access-4xgp8\") pod \"horizon-65548cc856-4tstl\" (UID: \"44386224-0241-4c6e-b12f-a1bef3954fe3\") " pod="openstack/horizon-65548cc856-4tstl" Dec 06 05:52:42 crc kubenswrapper[4958]: I1206 05:52:42.433797 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-776ddc8896-9vdrs" Dec 06 05:52:42 crc kubenswrapper[4958]: I1206 05:52:42.520883 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-65548cc856-4tstl" Dec 06 05:52:43 crc kubenswrapper[4958]: I1206 05:52:43.541316 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Dec 06 05:52:43 crc kubenswrapper[4958]: I1206 05:52:43.859889 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="d7e65e35-934e-4123-84a2-bc92b2b1213e" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.147:9322/\": read tcp 10.217.0.2:34212->10.217.0.147:9322: read: connection reset by peer" Dec 06 05:52:44 crc kubenswrapper[4958]: I1206 05:52:44.193719 4958 generic.go:334] "Generic (PLEG): container finished" podID="d7e65e35-934e-4123-84a2-bc92b2b1213e" containerID="5c3db8c3bc4daf4f8f41ec473631d85b1183d7c5a90e34780c4c96a535ca7a82" exitCode=143 Dec 06 05:52:44 crc kubenswrapper[4958]: I1206 05:52:44.194076 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"d7e65e35-934e-4123-84a2-bc92b2b1213e","Type":"ContainerDied","Data":"5c3db8c3bc4daf4f8f41ec473631d85b1183d7c5a90e34780c4c96a535ca7a82"} Dec 06 05:52:44 crc kubenswrapper[4958]: I1206 05:52:44.196178 4958 generic.go:334] "Generic (PLEG): container finished" podID="27d884d6-231d-4782-bb16-d2664b44e18f" containerID="836ddca1b55286d5e654dca05ec6d2c671f6fa9665927997f762aeb29812d52a" exitCode=0 Dec 06 05:52:44 crc kubenswrapper[4958]: I1206 05:52:44.196414 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578598f949-w55ph" event={"ID":"27d884d6-231d-4782-bb16-d2664b44e18f","Type":"ContainerDied","Data":"836ddca1b55286d5e654dca05ec6d2c671f6fa9665927997f762aeb29812d52a"} Dec 06 05:52:45 crc kubenswrapper[4958]: I1206 05:52:45.210277 4958 generic.go:334] "Generic (PLEG): container finished" podID="d7e65e35-934e-4123-84a2-bc92b2b1213e" containerID="bbff41250991a8fa0633f1de99c96f3d870ef8a4ed736b3a21987ddb51fb612e" exitCode=0 Dec 06 05:52:45 crc kubenswrapper[4958]: I1206 05:52:45.210330 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"d7e65e35-934e-4123-84a2-bc92b2b1213e","Type":"ContainerDied","Data":"bbff41250991a8fa0633f1de99c96f3d870ef8a4ed736b3a21987ddb51fb612e"} Dec 06 05:52:48 crc kubenswrapper[4958]: I1206 05:52:48.241188 4958 generic.go:334] "Generic (PLEG): container finished" podID="3ef63ff9-e0db-4b73-881b-2ad904bcaac5" containerID="9cc46fb4cf6241ce5e4a1aba64180f40f862c74409f3f3db65256519557e593a" exitCode=0 Dec 06 05:52:48 crc kubenswrapper[4958]: I1206 05:52:48.241267 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-pl6mr" event={"ID":"3ef63ff9-e0db-4b73-881b-2ad904bcaac5","Type":"ContainerDied","Data":"9cc46fb4cf6241ce5e4a1aba64180f40f862c74409f3f3db65256519557e593a"} Dec 06 05:52:48 crc kubenswrapper[4958]: I1206 05:52:48.542194 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="d7e65e35-934e-4123-84a2-bc92b2b1213e" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.147:9322/\": dial tcp 10.217.0.147:9322: connect: connection refused" Dec 06 05:52:51 crc kubenswrapper[4958]: E1206 05:52:51.050346 4958 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-horizon:current" Dec 06 05:52:51 crc kubenswrapper[4958]: E1206 05:52:51.051019 4958 kuberuntime_image.go:55] "Failed to 
pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-horizon:current" Dec 06 05:52:51 crc kubenswrapper[4958]: E1206 05:52:51.051145 4958 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.rdoproject.org/podified-master-centos10/openstack-horizon:current,Command:[/bin/bash],Args:[-c tail -n+1 -F /var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n59dh649hf7hb5hf7h54h55ch567h56fh54fh56ch599hd9h649h5ddhffh5fdh89h5fh577h65ch556h5c6h549h554h548h5d6h66ch568hb9h55bh76q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:yes,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xdmxn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-6f7b4cd5bf-88npk_openstack(80bf896b-e5d4-4497-b10a-ae05d7be8886): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 05:52:51 crc kubenswrapper[4958]: E1206 05:52:51.053220 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-horizon:current\\\"\"]" pod="openstack/horizon-6f7b4cd5bf-88npk" podUID="80bf896b-e5d4-4497-b10a-ae05d7be8886" Dec 06 05:52:53 crc kubenswrapper[4958]: I1206 05:52:53.128938 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-pl6mr" Dec 06 05:52:53 crc kubenswrapper[4958]: I1206 05:52:53.247923 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ef63ff9-e0db-4b73-881b-2ad904bcaac5-scripts\") pod \"3ef63ff9-e0db-4b73-881b-2ad904bcaac5\" (UID: \"3ef63ff9-e0db-4b73-881b-2ad904bcaac5\") " Dec 06 05:52:53 crc kubenswrapper[4958]: I1206 05:52:53.247968 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v7x7f\" (UniqueName: \"kubernetes.io/projected/3ef63ff9-e0db-4b73-881b-2ad904bcaac5-kube-api-access-v7x7f\") pod \"3ef63ff9-e0db-4b73-881b-2ad904bcaac5\" (UID: \"3ef63ff9-e0db-4b73-881b-2ad904bcaac5\") " Dec 06 05:52:53 crc kubenswrapper[4958]: I1206 05:52:53.248044 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ef63ff9-e0db-4b73-881b-2ad904bcaac5-config-data\") pod \"3ef63ff9-e0db-4b73-881b-2ad904bcaac5\" (UID: \"3ef63ff9-e0db-4b73-881b-2ad904bcaac5\") " Dec 06 05:52:53 crc kubenswrapper[4958]: I1206 05:52:53.248100 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ef63ff9-e0db-4b73-881b-2ad904bcaac5-combined-ca-bundle\") pod \"3ef63ff9-e0db-4b73-881b-2ad904bcaac5\" (UID: \"3ef63ff9-e0db-4b73-881b-2ad904bcaac5\") " Dec 06 05:52:53 crc kubenswrapper[4958]: I1206 05:52:53.248151 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3ef63ff9-e0db-4b73-881b-2ad904bcaac5-credential-keys\") pod \"3ef63ff9-e0db-4b73-881b-2ad904bcaac5\" (UID: \"3ef63ff9-e0db-4b73-881b-2ad904bcaac5\") " Dec 06 05:52:53 crc kubenswrapper[4958]: I1206 05:52:53.248197 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3ef63ff9-e0db-4b73-881b-2ad904bcaac5-fernet-keys\") pod \"3ef63ff9-e0db-4b73-881b-2ad904bcaac5\" (UID: \"3ef63ff9-e0db-4b73-881b-2ad904bcaac5\") " Dec 06 05:52:53 crc kubenswrapper[4958]: I1206 05:52:53.256857 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ef63ff9-e0db-4b73-881b-2ad904bcaac5-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "3ef63ff9-e0db-4b73-881b-2ad904bcaac5" (UID: "3ef63ff9-e0db-4b73-881b-2ad904bcaac5"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:52:53 crc kubenswrapper[4958]: I1206 05:52:53.259972 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ef63ff9-e0db-4b73-881b-2ad904bcaac5-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "3ef63ff9-e0db-4b73-881b-2ad904bcaac5" (UID: "3ef63ff9-e0db-4b73-881b-2ad904bcaac5"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:52:53 crc kubenswrapper[4958]: I1206 05:52:53.275227 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ef63ff9-e0db-4b73-881b-2ad904bcaac5-kube-api-access-v7x7f" (OuterVolumeSpecName: "kube-api-access-v7x7f") pod "3ef63ff9-e0db-4b73-881b-2ad904bcaac5" (UID: "3ef63ff9-e0db-4b73-881b-2ad904bcaac5"). InnerVolumeSpecName "kube-api-access-v7x7f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:52:53 crc kubenswrapper[4958]: I1206 05:52:53.275667 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ef63ff9-e0db-4b73-881b-2ad904bcaac5-scripts" (OuterVolumeSpecName: "scripts") pod "3ef63ff9-e0db-4b73-881b-2ad904bcaac5" (UID: "3ef63ff9-e0db-4b73-881b-2ad904bcaac5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:52:53 crc kubenswrapper[4958]: I1206 05:52:53.299234 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ef63ff9-e0db-4b73-881b-2ad904bcaac5-config-data" (OuterVolumeSpecName: "config-data") pod "3ef63ff9-e0db-4b73-881b-2ad904bcaac5" (UID: "3ef63ff9-e0db-4b73-881b-2ad904bcaac5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:52:53 crc kubenswrapper[4958]: I1206 05:52:53.314610 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-pl6mr" event={"ID":"3ef63ff9-e0db-4b73-881b-2ad904bcaac5","Type":"ContainerDied","Data":"630829540b25dc1ec5c38ea255b9f32dff0a8be9703a2293498b0ad375315ab6"} Dec 06 05:52:53 crc kubenswrapper[4958]: I1206 05:52:53.314651 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="630829540b25dc1ec5c38ea255b9f32dff0a8be9703a2293498b0ad375315ab6" Dec 06 05:52:53 crc kubenswrapper[4958]: I1206 05:52:53.314704 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-pl6mr" Dec 06 05:52:53 crc kubenswrapper[4958]: I1206 05:52:53.321361 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ef63ff9-e0db-4b73-881b-2ad904bcaac5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3ef63ff9-e0db-4b73-881b-2ad904bcaac5" (UID: "3ef63ff9-e0db-4b73-881b-2ad904bcaac5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:52:53 crc kubenswrapper[4958]: I1206 05:52:53.351699 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ef63ff9-e0db-4b73-881b-2ad904bcaac5-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 05:52:53 crc kubenswrapper[4958]: I1206 05:52:53.351725 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v7x7f\" (UniqueName: \"kubernetes.io/projected/3ef63ff9-e0db-4b73-881b-2ad904bcaac5-kube-api-access-v7x7f\") on node \"crc\" DevicePath \"\"" Dec 06 05:52:53 crc kubenswrapper[4958]: I1206 05:52:53.351738 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ef63ff9-e0db-4b73-881b-2ad904bcaac5-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 05:52:53 crc kubenswrapper[4958]: I1206 05:52:53.351749 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ef63ff9-e0db-4b73-881b-2ad904bcaac5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 05:52:53 crc kubenswrapper[4958]: I1206 05:52:53.351758 4958 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/3ef63ff9-e0db-4b73-881b-2ad904bcaac5-credential-keys\") on node \"crc\" DevicePath \"\"" Dec 06 05:52:53 crc kubenswrapper[4958]: I1206 05:52:53.351766 4958 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3ef63ff9-e0db-4b73-881b-2ad904bcaac5-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 06 05:52:54 crc kubenswrapper[4958]: I1206 05:52:54.230551 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-pl6mr"] Dec 06 05:52:54 crc kubenswrapper[4958]: I1206 05:52:54.240932 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-pl6mr"] Dec 06 05:52:54 crc kubenswrapper[4958]: I1206 05:52:54.315140 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-wf7cs"] Dec 06 05:52:54 crc kubenswrapper[4958]: E1206 05:52:54.315548 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ef63ff9-e0db-4b73-881b-2ad904bcaac5" containerName="keystone-bootstrap" Dec 06 05:52:54 crc kubenswrapper[4958]: I1206 05:52:54.315571 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ef63ff9-e0db-4b73-881b-2ad904bcaac5" containerName="keystone-bootstrap" Dec 06 05:52:54 crc kubenswrapper[4958]: I1206 05:52:54.317590 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ef63ff9-e0db-4b73-881b-2ad904bcaac5" containerName="keystone-bootstrap" Dec 06 05:52:54 crc kubenswrapper[4958]: I1206 05:52:54.318327 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-wf7cs" Dec 06 05:52:54 crc kubenswrapper[4958]: I1206 05:52:54.320015 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 06 05:52:54 crc kubenswrapper[4958]: I1206 05:52:54.320709 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 06 05:52:54 crc kubenswrapper[4958]: I1206 05:52:54.320778 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 06 05:52:54 crc kubenswrapper[4958]: I1206 05:52:54.321882 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-mbqwc" Dec 06 05:52:54 crc kubenswrapper[4958]: I1206 05:52:54.322296 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Dec 06 05:52:54 crc kubenswrapper[4958]: I1206 05:52:54.339959 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-wf7cs"] Dec 06 05:52:54 crc kubenswrapper[4958]: I1206 05:52:54.368694 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/374b8326-0ba7-46d1-b438-85a5e865fdb5-fernet-keys\") pod \"keystone-bootstrap-wf7cs\" (UID: \"374b8326-0ba7-46d1-b438-85a5e865fdb5\") " pod="openstack/keystone-bootstrap-wf7cs" Dec 06 05:52:54 crc kubenswrapper[4958]: I1206 05:52:54.368775 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwsfd\" (UniqueName: \"kubernetes.io/projected/374b8326-0ba7-46d1-b438-85a5e865fdb5-kube-api-access-gwsfd\") pod \"keystone-bootstrap-wf7cs\" (UID: \"374b8326-0ba7-46d1-b438-85a5e865fdb5\") " pod="openstack/keystone-bootstrap-wf7cs" Dec 06 05:52:54 crc kubenswrapper[4958]: I1206 05:52:54.368830 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/374b8326-0ba7-46d1-b438-85a5e865fdb5-credential-keys\") pod \"keystone-bootstrap-wf7cs\" (UID: \"374b8326-0ba7-46d1-b438-85a5e865fdb5\") " pod="openstack/keystone-bootstrap-wf7cs" Dec 06 05:52:54 crc kubenswrapper[4958]: I1206 05:52:54.368896 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/374b8326-0ba7-46d1-b438-85a5e865fdb5-config-data\") pod \"keystone-bootstrap-wf7cs\" (UID: \"374b8326-0ba7-46d1-b438-85a5e865fdb5\") " pod="openstack/keystone-bootstrap-wf7cs" Dec 06 05:52:54 crc kubenswrapper[4958]: I1206 05:52:54.369005 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/374b8326-0ba7-46d1-b438-85a5e865fdb5-combined-ca-bundle\") pod \"keystone-bootstrap-wf7cs\" (UID: \"374b8326-0ba7-46d1-b438-85a5e865fdb5\") " pod="openstack/keystone-bootstrap-wf7cs" Dec 06 05:52:54 crc kubenswrapper[4958]: I1206 05:52:54.369057 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/374b8326-0ba7-46d1-b438-85a5e865fdb5-scripts\") pod \"keystone-bootstrap-wf7cs\" (UID: \"374b8326-0ba7-46d1-b438-85a5e865fdb5\") " pod="openstack/keystone-bootstrap-wf7cs" Dec 06 05:52:54 crc kubenswrapper[4958]: I1206 05:52:54.472325 4958 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/374b8326-0ba7-46d1-b438-85a5e865fdb5-combined-ca-bundle\") pod \"keystone-bootstrap-wf7cs\" (UID: \"374b8326-0ba7-46d1-b438-85a5e865fdb5\") " pod="openstack/keystone-bootstrap-wf7cs" Dec 06 05:52:54 crc kubenswrapper[4958]: I1206 05:52:54.472389 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/374b8326-0ba7-46d1-b438-85a5e865fdb5-scripts\") pod \"keystone-bootstrap-wf7cs\" (UID: \"374b8326-0ba7-46d1-b438-85a5e865fdb5\") " pod="openstack/keystone-bootstrap-wf7cs" Dec 06 05:52:54 crc kubenswrapper[4958]: I1206 05:52:54.472436 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/374b8326-0ba7-46d1-b438-85a5e865fdb5-fernet-keys\") pod \"keystone-bootstrap-wf7cs\" (UID: \"374b8326-0ba7-46d1-b438-85a5e865fdb5\") " pod="openstack/keystone-bootstrap-wf7cs" Dec 06 05:52:54 crc kubenswrapper[4958]: I1206 05:52:54.472504 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwsfd\" (UniqueName: \"kubernetes.io/projected/374b8326-0ba7-46d1-b438-85a5e865fdb5-kube-api-access-gwsfd\") pod \"keystone-bootstrap-wf7cs\" (UID: \"374b8326-0ba7-46d1-b438-85a5e865fdb5\") " pod="openstack/keystone-bootstrap-wf7cs" Dec 06 05:52:54 crc kubenswrapper[4958]: I1206 05:52:54.472525 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/374b8326-0ba7-46d1-b438-85a5e865fdb5-credential-keys\") pod \"keystone-bootstrap-wf7cs\" (UID: \"374b8326-0ba7-46d1-b438-85a5e865fdb5\") " pod="openstack/keystone-bootstrap-wf7cs" Dec 06 05:52:54 crc kubenswrapper[4958]: I1206 05:52:54.472546 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/374b8326-0ba7-46d1-b438-85a5e865fdb5-config-data\") pod \"keystone-bootstrap-wf7cs\" (UID: \"374b8326-0ba7-46d1-b438-85a5e865fdb5\") " pod="openstack/keystone-bootstrap-wf7cs" Dec 06 05:52:54 crc kubenswrapper[4958]: I1206 05:52:54.479782 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/374b8326-0ba7-46d1-b438-85a5e865fdb5-config-data\") pod \"keystone-bootstrap-wf7cs\" (UID: \"374b8326-0ba7-46d1-b438-85a5e865fdb5\") " pod="openstack/keystone-bootstrap-wf7cs" Dec 06 05:52:54 crc kubenswrapper[4958]: I1206 05:52:54.480322 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/374b8326-0ba7-46d1-b438-85a5e865fdb5-fernet-keys\") pod \"keystone-bootstrap-wf7cs\" (UID: \"374b8326-0ba7-46d1-b438-85a5e865fdb5\") " pod="openstack/keystone-bootstrap-wf7cs" Dec 06 05:52:54 crc kubenswrapper[4958]: I1206 05:52:54.480421 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/374b8326-0ba7-46d1-b438-85a5e865fdb5-credential-keys\") pod \"keystone-bootstrap-wf7cs\" (UID: \"374b8326-0ba7-46d1-b438-85a5e865fdb5\") " pod="openstack/keystone-bootstrap-wf7cs" Dec 06 05:52:54 crc kubenswrapper[4958]: I1206 05:52:54.484987 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/374b8326-0ba7-46d1-b438-85a5e865fdb5-combined-ca-bundle\") pod \"keystone-bootstrap-wf7cs\" (UID: 
\"374b8326-0ba7-46d1-b438-85a5e865fdb5\") " pod="openstack/keystone-bootstrap-wf7cs" Dec 06 05:52:54 crc kubenswrapper[4958]: I1206 05:52:54.487825 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/374b8326-0ba7-46d1-b438-85a5e865fdb5-scripts\") pod \"keystone-bootstrap-wf7cs\" (UID: \"374b8326-0ba7-46d1-b438-85a5e865fdb5\") " pod="openstack/keystone-bootstrap-wf7cs" Dec 06 05:52:54 crc kubenswrapper[4958]: I1206 05:52:54.488719 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwsfd\" (UniqueName: \"kubernetes.io/projected/374b8326-0ba7-46d1-b438-85a5e865fdb5-kube-api-access-gwsfd\") pod \"keystone-bootstrap-wf7cs\" (UID: \"374b8326-0ba7-46d1-b438-85a5e865fdb5\") " pod="openstack/keystone-bootstrap-wf7cs" Dec 06 05:52:54 crc kubenswrapper[4958]: I1206 05:52:54.645334 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-wf7cs" Dec 06 05:52:55 crc kubenswrapper[4958]: I1206 05:52:55.772919 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ef63ff9-e0db-4b73-881b-2ad904bcaac5" path="/var/lib/kubelet/pods/3ef63ff9-e0db-4b73-881b-2ad904bcaac5/volumes" Dec 06 05:52:58 crc kubenswrapper[4958]: I1206 05:52:58.542509 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="d7e65e35-934e-4123-84a2-bc92b2b1213e" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.147:9322/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 06 05:53:03 crc kubenswrapper[4958]: I1206 05:53:03.428082 4958 generic.go:334] "Generic (PLEG): container finished" podID="9d1dc22d-53a9-4aee-989b-fc253cd276cd" containerID="4fb815c2a706f214a8537ed307b11c2a0bb67231931011937e5d016c3fe89571" exitCode=0 Dec 06 05:53:03 crc kubenswrapper[4958]: I1206 05:53:03.428565 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-jdfnk" event={"ID":"9d1dc22d-53a9-4aee-989b-fc253cd276cd","Type":"ContainerDied","Data":"4fb815c2a706f214a8537ed307b11c2a0bb67231931011937e5d016c3fe89571"} Dec 06 05:53:03 crc kubenswrapper[4958]: I1206 05:53:03.542989 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="d7e65e35-934e-4123-84a2-bc92b2b1213e" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.147:9322/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 06 05:53:04 crc kubenswrapper[4958]: I1206 05:53:04.122451 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6f7b4cd5bf-88npk" Dec 06 05:53:04 crc kubenswrapper[4958]: I1206 05:53:04.138099 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Dec 06 05:53:04 crc kubenswrapper[4958]: I1206 05:53:04.157204 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/80bf896b-e5d4-4497-b10a-ae05d7be8886-logs\") pod \"80bf896b-e5d4-4497-b10a-ae05d7be8886\" (UID: \"80bf896b-e5d4-4497-b10a-ae05d7be8886\") " Dec 06 05:53:04 crc kubenswrapper[4958]: I1206 05:53:04.157304 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6m79p\" (UniqueName: \"kubernetes.io/projected/d7e65e35-934e-4123-84a2-bc92b2b1213e-kube-api-access-6m79p\") pod \"d7e65e35-934e-4123-84a2-bc92b2b1213e\" (UID: \"d7e65e35-934e-4123-84a2-bc92b2b1213e\") " Dec 06 05:53:04 crc kubenswrapper[4958]: I1206 05:53:04.157379 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xdmxn\" (UniqueName: \"kubernetes.io/projected/80bf896b-e5d4-4497-b10a-ae05d7be8886-kube-api-access-xdmxn\") pod \"80bf896b-e5d4-4497-b10a-ae05d7be8886\" (UID: \"80bf896b-e5d4-4497-b10a-ae05d7be8886\") " Dec 06 05:53:04 crc kubenswrapper[4958]: I1206 05:53:04.157415 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/80bf896b-e5d4-4497-b10a-ae05d7be8886-horizon-secret-key\") pod \"80bf896b-e5d4-4497-b10a-ae05d7be8886\" (UID: \"80bf896b-e5d4-4497-b10a-ae05d7be8886\") " Dec 06 05:53:04 crc kubenswrapper[4958]: I1206 05:53:04.157463 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/80bf896b-e5d4-4497-b10a-ae05d7be8886-scripts\") pod \"80bf896b-e5d4-4497-b10a-ae05d7be8886\" (UID: \"80bf896b-e5d4-4497-b10a-ae05d7be8886\") " Dec 06 05:53:04 crc kubenswrapper[4958]: I1206 05:53:04.157525 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7e65e35-934e-4123-84a2-bc92b2b1213e-logs\") pod \"d7e65e35-934e-4123-84a2-bc92b2b1213e\" (UID: \"d7e65e35-934e-4123-84a2-bc92b2b1213e\") " Dec 06 05:53:04 crc kubenswrapper[4958]: I1206 05:53:04.157603 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7e65e35-934e-4123-84a2-bc92b2b1213e-combined-ca-bundle\") pod \"d7e65e35-934e-4123-84a2-bc92b2b1213e\" (UID: \"d7e65e35-934e-4123-84a2-bc92b2b1213e\") " Dec 06 05:53:04 crc kubenswrapper[4958]: I1206 05:53:04.157716 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/d7e65e35-934e-4123-84a2-bc92b2b1213e-custom-prometheus-ca\") pod \"d7e65e35-934e-4123-84a2-bc92b2b1213e\" (UID: \"d7e65e35-934e-4123-84a2-bc92b2b1213e\") " Dec 06 05:53:04 crc kubenswrapper[4958]: I1206 05:53:04.157796 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7e65e35-934e-4123-84a2-bc92b2b1213e-config-data\") pod \"d7e65e35-934e-4123-84a2-bc92b2b1213e\" (UID: \"d7e65e35-934e-4123-84a2-bc92b2b1213e\") " Dec 06 05:53:04 crc kubenswrapper[4958]: I1206 05:53:04.157959 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/80bf896b-e5d4-4497-b10a-ae05d7be8886-config-data\") pod \"80bf896b-e5d4-4497-b10a-ae05d7be8886\" (UID: 
\"80bf896b-e5d4-4497-b10a-ae05d7be8886\") " Dec 06 05:53:04 crc kubenswrapper[4958]: I1206 05:53:04.158020 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7e65e35-934e-4123-84a2-bc92b2b1213e-logs" (OuterVolumeSpecName: "logs") pod "d7e65e35-934e-4123-84a2-bc92b2b1213e" (UID: "d7e65e35-934e-4123-84a2-bc92b2b1213e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 05:53:04 crc kubenswrapper[4958]: I1206 05:53:04.158388 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80bf896b-e5d4-4497-b10a-ae05d7be8886-logs" (OuterVolumeSpecName: "logs") pod "80bf896b-e5d4-4497-b10a-ae05d7be8886" (UID: "80bf896b-e5d4-4497-b10a-ae05d7be8886"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 05:53:04 crc kubenswrapper[4958]: I1206 05:53:04.158548 4958 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7e65e35-934e-4123-84a2-bc92b2b1213e-logs\") on node \"crc\" DevicePath \"\"" Dec 06 05:53:04 crc kubenswrapper[4958]: I1206 05:53:04.161365 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80bf896b-e5d4-4497-b10a-ae05d7be8886-config-data" (OuterVolumeSpecName: "config-data") pod "80bf896b-e5d4-4497-b10a-ae05d7be8886" (UID: "80bf896b-e5d4-4497-b10a-ae05d7be8886"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:53:04 crc kubenswrapper[4958]: I1206 05:53:04.161556 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80bf896b-e5d4-4497-b10a-ae05d7be8886-scripts" (OuterVolumeSpecName: "scripts") pod "80bf896b-e5d4-4497-b10a-ae05d7be8886" (UID: "80bf896b-e5d4-4497-b10a-ae05d7be8886"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:53:04 crc kubenswrapper[4958]: I1206 05:53:04.167938 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80bf896b-e5d4-4497-b10a-ae05d7be8886-kube-api-access-xdmxn" (OuterVolumeSpecName: "kube-api-access-xdmxn") pod "80bf896b-e5d4-4497-b10a-ae05d7be8886" (UID: "80bf896b-e5d4-4497-b10a-ae05d7be8886"). InnerVolumeSpecName "kube-api-access-xdmxn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:53:04 crc kubenswrapper[4958]: I1206 05:53:04.169774 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7e65e35-934e-4123-84a2-bc92b2b1213e-kube-api-access-6m79p" (OuterVolumeSpecName: "kube-api-access-6m79p") pod "d7e65e35-934e-4123-84a2-bc92b2b1213e" (UID: "d7e65e35-934e-4123-84a2-bc92b2b1213e"). InnerVolumeSpecName "kube-api-access-6m79p". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:53:04 crc kubenswrapper[4958]: I1206 05:53:04.170125 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80bf896b-e5d4-4497-b10a-ae05d7be8886-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "80bf896b-e5d4-4497-b10a-ae05d7be8886" (UID: "80bf896b-e5d4-4497-b10a-ae05d7be8886"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:53:04 crc kubenswrapper[4958]: I1206 05:53:04.188977 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7e65e35-934e-4123-84a2-bc92b2b1213e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d7e65e35-934e-4123-84a2-bc92b2b1213e" (UID: "d7e65e35-934e-4123-84a2-bc92b2b1213e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:53:04 crc kubenswrapper[4958]: I1206 05:53:04.190626 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7e65e35-934e-4123-84a2-bc92b2b1213e-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "d7e65e35-934e-4123-84a2-bc92b2b1213e" (UID: "d7e65e35-934e-4123-84a2-bc92b2b1213e"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:53:04 crc kubenswrapper[4958]: I1206 05:53:04.217588 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7e65e35-934e-4123-84a2-bc92b2b1213e-config-data" (OuterVolumeSpecName: "config-data") pod "d7e65e35-934e-4123-84a2-bc92b2b1213e" (UID: "d7e65e35-934e-4123-84a2-bc92b2b1213e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:53:04 crc kubenswrapper[4958]: I1206 05:53:04.260018 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/80bf896b-e5d4-4497-b10a-ae05d7be8886-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 05:53:04 crc kubenswrapper[4958]: I1206 05:53:04.260055 4958 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/80bf896b-e5d4-4497-b10a-ae05d7be8886-logs\") on node \"crc\" DevicePath \"\"" Dec 06 05:53:04 crc kubenswrapper[4958]: I1206 05:53:04.260067 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6m79p\" (UniqueName: \"kubernetes.io/projected/d7e65e35-934e-4123-84a2-bc92b2b1213e-kube-api-access-6m79p\") on node \"crc\" DevicePath \"\"" Dec 06 05:53:04 crc kubenswrapper[4958]: I1206 05:53:04.260078 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xdmxn\" (UniqueName: \"kubernetes.io/projected/80bf896b-e5d4-4497-b10a-ae05d7be8886-kube-api-access-xdmxn\") on node \"crc\" DevicePath \"\"" Dec 06 05:53:04 crc kubenswrapper[4958]: I1206 05:53:04.260086 4958 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/80bf896b-e5d4-4497-b10a-ae05d7be8886-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Dec 06 05:53:04 crc kubenswrapper[4958]: I1206 05:53:04.260094 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/80bf896b-e5d4-4497-b10a-ae05d7be8886-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 05:53:04 crc kubenswrapper[4958]: I1206 05:53:04.260101 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7e65e35-934e-4123-84a2-bc92b2b1213e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 05:53:04 crc kubenswrapper[4958]: I1206 05:53:04.260108 4958 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/d7e65e35-934e-4123-84a2-bc92b2b1213e-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Dec 06 05:53:04 crc 
kubenswrapper[4958]: I1206 05:53:04.260116 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7e65e35-934e-4123-84a2-bc92b2b1213e-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 05:53:04 crc kubenswrapper[4958]: I1206 05:53:04.440167 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"d7e65e35-934e-4123-84a2-bc92b2b1213e","Type":"ContainerDied","Data":"5732e5cd9fe9e4d91786387662205d3d788590ff147f11ca51bbb584c1f814f3"} Dec 06 05:53:04 crc kubenswrapper[4958]: I1206 05:53:04.440213 4958 scope.go:117] "RemoveContainer" containerID="bbff41250991a8fa0633f1de99c96f3d870ef8a4ed736b3a21987ddb51fb612e" Dec 06 05:53:04 crc kubenswrapper[4958]: I1206 05:53:04.440318 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Dec 06 05:53:04 crc kubenswrapper[4958]: I1206 05:53:04.444708 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-6f7b4cd5bf-88npk" Dec 06 05:53:04 crc kubenswrapper[4958]: I1206 05:53:04.444700 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-6f7b4cd5bf-88npk" event={"ID":"80bf896b-e5d4-4497-b10a-ae05d7be8886","Type":"ContainerDied","Data":"ce2c036823a3b6c0fa0f646bc6d9058db1f9745089a9c9b6f4f5611b7c3cccbc"} Dec 06 05:53:04 crc kubenswrapper[4958]: I1206 05:53:04.478565 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Dec 06 05:53:04 crc kubenswrapper[4958]: I1206 05:53:04.483319 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-api-0"] Dec 06 05:53:04 crc kubenswrapper[4958]: I1206 05:53:04.513572 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"] Dec 06 05:53:04 crc kubenswrapper[4958]: E1206 05:53:04.513992 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7e65e35-934e-4123-84a2-bc92b2b1213e" containerName="watcher-api" Dec 06 05:53:04 crc kubenswrapper[4958]: I1206 05:53:04.514008 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7e65e35-934e-4123-84a2-bc92b2b1213e" containerName="watcher-api" Dec 06 05:53:04 crc kubenswrapper[4958]: E1206 05:53:04.514034 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7e65e35-934e-4123-84a2-bc92b2b1213e" containerName="watcher-api-log" Dec 06 05:53:04 crc kubenswrapper[4958]: I1206 05:53:04.514040 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7e65e35-934e-4123-84a2-bc92b2b1213e" containerName="watcher-api-log" Dec 06 05:53:04 crc kubenswrapper[4958]: I1206 05:53:04.514223 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7e65e35-934e-4123-84a2-bc92b2b1213e" containerName="watcher-api" Dec 06 05:53:04 crc kubenswrapper[4958]: I1206 05:53:04.514244 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7e65e35-934e-4123-84a2-bc92b2b1213e" containerName="watcher-api-log" Dec 06 05:53:04 crc kubenswrapper[4958]: I1206 05:53:04.515465 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Dec 06 05:53:04 crc kubenswrapper[4958]: I1206 05:53:04.519789 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Dec 06 05:53:04 crc kubenswrapper[4958]: I1206 05:53:04.535118 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-6f7b4cd5bf-88npk"] Dec 06 05:53:04 crc kubenswrapper[4958]: I1206 05:53:04.545247 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Dec 06 05:53:04 crc kubenswrapper[4958]: I1206 05:53:04.556143 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-6f7b4cd5bf-88npk"] Dec 06 05:53:04 crc kubenswrapper[4958]: I1206 05:53:04.565144 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5sp2\" (UniqueName: \"kubernetes.io/projected/e1a95265-5489-4db4-a45a-a17761dd8477-kube-api-access-n5sp2\") pod \"watcher-api-0\" (UID: \"e1a95265-5489-4db4-a45a-a17761dd8477\") " pod="openstack/watcher-api-0" Dec 06 05:53:04 crc kubenswrapper[4958]: I1206 05:53:04.565345 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/e1a95265-5489-4db4-a45a-a17761dd8477-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"e1a95265-5489-4db4-a45a-a17761dd8477\") " pod="openstack/watcher-api-0" Dec 06 05:53:04 crc kubenswrapper[4958]: I1206 05:53:04.565527 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1a95265-5489-4db4-a45a-a17761dd8477-logs\") pod \"watcher-api-0\" (UID: \"e1a95265-5489-4db4-a45a-a17761dd8477\") " pod="openstack/watcher-api-0" Dec 06 05:53:04 crc kubenswrapper[4958]: I1206 05:53:04.565623 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1a95265-5489-4db4-a45a-a17761dd8477-config-data\") pod \"watcher-api-0\" (UID: \"e1a95265-5489-4db4-a45a-a17761dd8477\") " pod="openstack/watcher-api-0" Dec 06 05:53:04 crc kubenswrapper[4958]: I1206 05:53:04.565673 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1a95265-5489-4db4-a45a-a17761dd8477-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"e1a95265-5489-4db4-a45a-a17761dd8477\") " pod="openstack/watcher-api-0" Dec 06 05:53:04 crc kubenswrapper[4958]: I1206 05:53:04.667367 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5sp2\" (UniqueName: \"kubernetes.io/projected/e1a95265-5489-4db4-a45a-a17761dd8477-kube-api-access-n5sp2\") pod \"watcher-api-0\" (UID: \"e1a95265-5489-4db4-a45a-a17761dd8477\") " pod="openstack/watcher-api-0" Dec 06 05:53:04 crc kubenswrapper[4958]: I1206 05:53:04.667498 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/e1a95265-5489-4db4-a45a-a17761dd8477-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"e1a95265-5489-4db4-a45a-a17761dd8477\") " pod="openstack/watcher-api-0" Dec 06 05:53:04 crc kubenswrapper[4958]: I1206 05:53:04.667553 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/e1a95265-5489-4db4-a45a-a17761dd8477-logs\") pod \"watcher-api-0\" (UID: \"e1a95265-5489-4db4-a45a-a17761dd8477\") " pod="openstack/watcher-api-0" Dec 06 05:53:04 crc kubenswrapper[4958]: I1206 05:53:04.667599 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1a95265-5489-4db4-a45a-a17761dd8477-config-data\") pod \"watcher-api-0\" (UID: \"e1a95265-5489-4db4-a45a-a17761dd8477\") " pod="openstack/watcher-api-0" Dec 06 05:53:04 crc kubenswrapper[4958]: I1206 05:53:04.667632 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1a95265-5489-4db4-a45a-a17761dd8477-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"e1a95265-5489-4db4-a45a-a17761dd8477\") " pod="openstack/watcher-api-0" Dec 06 05:53:04 crc kubenswrapper[4958]: I1206 05:53:04.668101 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1a95265-5489-4db4-a45a-a17761dd8477-logs\") pod \"watcher-api-0\" (UID: \"e1a95265-5489-4db4-a45a-a17761dd8477\") " pod="openstack/watcher-api-0" Dec 06 05:53:04 crc kubenswrapper[4958]: I1206 05:53:04.671522 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/e1a95265-5489-4db4-a45a-a17761dd8477-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"e1a95265-5489-4db4-a45a-a17761dd8477\") " pod="openstack/watcher-api-0" Dec 06 05:53:04 crc kubenswrapper[4958]: I1206 05:53:04.671624 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1a95265-5489-4db4-a45a-a17761dd8477-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"e1a95265-5489-4db4-a45a-a17761dd8477\") " pod="openstack/watcher-api-0" Dec 06 05:53:04 crc kubenswrapper[4958]: I1206 05:53:04.672921 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1a95265-5489-4db4-a45a-a17761dd8477-config-data\") pod \"watcher-api-0\" (UID: \"e1a95265-5489-4db4-a45a-a17761dd8477\") " pod="openstack/watcher-api-0" Dec 06 05:53:04 crc kubenswrapper[4958]: I1206 05:53:04.683525 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5sp2\" (UniqueName: \"kubernetes.io/projected/e1a95265-5489-4db4-a45a-a17761dd8477-kube-api-access-n5sp2\") pod \"watcher-api-0\" (UID: \"e1a95265-5489-4db4-a45a-a17761dd8477\") " pod="openstack/watcher-api-0" Dec 06 05:53:04 crc kubenswrapper[4958]: I1206 05:53:04.842120 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Dec 06 05:53:05 crc kubenswrapper[4958]: I1206 05:53:05.774221 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80bf896b-e5d4-4497-b10a-ae05d7be8886" path="/var/lib/kubelet/pods/80bf896b-e5d4-4497-b10a-ae05d7be8886/volumes" Dec 06 05:53:05 crc kubenswrapper[4958]: I1206 05:53:05.775266 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7e65e35-934e-4123-84a2-bc92b2b1213e" path="/var/lib/kubelet/pods/d7e65e35-934e-4123-84a2-bc92b2b1213e/volumes" Dec 06 05:53:06 crc kubenswrapper[4958]: E1206 05:53:06.534839 4958 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current" Dec 06 05:53:06 crc kubenswrapper[4958]: E1206 05:53:06.535274 4958 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current" Dec 06 05:53:06 crc kubenswrapper[4958]: E1206 05:53:06.535428 4958 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nchb5h658h547h558h5d9h567h697h568h645h584h567h54fh5d4h5ddhbbh8bh79h58fh5dbh678h565hc5h5f9h647h5b9hdhc4h575h588h58h77q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r5tpw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(9664ab8f-eb78-4177-8847-54af6ae2fce5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 05:53:07 crc kubenswrapper[4958]: E1206 05:53:07.016671 4958 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-barbican-api:current" Dec 06 05:53:07 crc kubenswrapper[4958]: E1206 05:53:07.016722 4958 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-barbican-api:current" Dec 06 05:53:07 crc kubenswrapper[4958]: E1206 05:53:07.016825 4958 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-barbican-api:current,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q4q92,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-znqzx_openstack(94a6d712-4bb0-458b-878a-99dd8d47a8f9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 05:53:07 crc kubenswrapper[4958]: E1206 
05:53:07.018037 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-znqzx" podUID="94a6d712-4bb0-458b-878a-99dd8d47a8f9" Dec 06 05:53:07 crc kubenswrapper[4958]: E1206 05:53:07.472443 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-barbican-api:current\\\"\"" pod="openstack/barbican-db-sync-znqzx" podUID="94a6d712-4bb0-458b-878a-99dd8d47a8f9" Dec 06 05:53:08 crc kubenswrapper[4958]: E1206 05:53:08.033255 4958 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-cinder-api:current" Dec 06 05:53:08 crc kubenswrapper[4958]: E1206 05:53:08.033516 4958 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-cinder-api:current" Dec 06 05:53:08 crc kubenswrapper[4958]: E1206 05:53:08.033678 4958 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cinder-api:current,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m5kss,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:
false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-2scr7_openstack(00f464ea-7983-4ab2-b2b1-07bf67c76e31): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 05:53:08 crc kubenswrapper[4958]: E1206 05:53:08.034942 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-2scr7" podUID="00f464ea-7983-4ab2-b2b1-07bf67c76e31" Dec 06 05:53:08 crc kubenswrapper[4958]: I1206 05:53:08.039386 4958 scope.go:117] "RemoveContainer" containerID="5c3db8c3bc4daf4f8f41ec473631d85b1183d7c5a90e34780c4c96a535ca7a82" Dec 06 05:53:08 crc kubenswrapper[4958]: I1206 05:53:08.393992 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-jdfnk" Dec 06 05:53:08 crc kubenswrapper[4958]: I1206 05:53:08.455592 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bptxz\" (UniqueName: \"kubernetes.io/projected/9d1dc22d-53a9-4aee-989b-fc253cd276cd-kube-api-access-bptxz\") pod \"9d1dc22d-53a9-4aee-989b-fc253cd276cd\" (UID: \"9d1dc22d-53a9-4aee-989b-fc253cd276cd\") " Dec 06 05:53:08 crc kubenswrapper[4958]: I1206 05:53:08.455744 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d1dc22d-53a9-4aee-989b-fc253cd276cd-combined-ca-bundle\") pod \"9d1dc22d-53a9-4aee-989b-fc253cd276cd\" (UID: \"9d1dc22d-53a9-4aee-989b-fc253cd276cd\") " Dec 06 05:53:08 crc kubenswrapper[4958]: I1206 05:53:08.455839 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d1dc22d-53a9-4aee-989b-fc253cd276cd-config-data\") pod \"9d1dc22d-53a9-4aee-989b-fc253cd276cd\" (UID: \"9d1dc22d-53a9-4aee-989b-fc253cd276cd\") " Dec 06 05:53:08 crc kubenswrapper[4958]: I1206 05:53:08.455923 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9d1dc22d-53a9-4aee-989b-fc253cd276cd-db-sync-config-data\") pod \"9d1dc22d-53a9-4aee-989b-fc253cd276cd\" (UID: \"9d1dc22d-53a9-4aee-989b-fc253cd276cd\") " Dec 06 05:53:08 crc kubenswrapper[4958]: I1206 05:53:08.481455 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d1dc22d-53a9-4aee-989b-fc253cd276cd-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "9d1dc22d-53a9-4aee-989b-fc253cd276cd" (UID: "9d1dc22d-53a9-4aee-989b-fc253cd276cd"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:53:08 crc kubenswrapper[4958]: I1206 05:53:08.483054 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-jdfnk" event={"ID":"9d1dc22d-53a9-4aee-989b-fc253cd276cd","Type":"ContainerDied","Data":"b041d8d1f976e0e77673295062b199a4d5d240851a73287b6a987e4c451ebdd8"} Dec 06 05:53:08 crc kubenswrapper[4958]: I1206 05:53:08.483097 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b041d8d1f976e0e77673295062b199a4d5d240851a73287b6a987e4c451ebdd8" Dec 06 05:53:08 crc kubenswrapper[4958]: I1206 05:53:08.483165 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-jdfnk" Dec 06 05:53:08 crc kubenswrapper[4958]: I1206 05:53:08.484704 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-776ddc8896-9vdrs"] Dec 06 05:53:08 crc kubenswrapper[4958]: E1206 05:53:08.486451 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cinder-api:current\\\"\"" pod="openstack/cinder-db-sync-2scr7" podUID="00f464ea-7983-4ab2-b2b1-07bf67c76e31" Dec 06 05:53:08 crc kubenswrapper[4958]: I1206 05:53:08.493285 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d1dc22d-53a9-4aee-989b-fc253cd276cd-kube-api-access-bptxz" (OuterVolumeSpecName: "kube-api-access-bptxz") pod "9d1dc22d-53a9-4aee-989b-fc253cd276cd" (UID: "9d1dc22d-53a9-4aee-989b-fc253cd276cd"). InnerVolumeSpecName "kube-api-access-bptxz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:53:08 crc kubenswrapper[4958]: I1206 05:53:08.543554 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="d7e65e35-934e-4123-84a2-bc92b2b1213e" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.147:9322/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 06 05:53:08 crc kubenswrapper[4958]: I1206 05:53:08.558243 4958 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/9d1dc22d-53a9-4aee-989b-fc253cd276cd-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 05:53:08 crc kubenswrapper[4958]: I1206 05:53:08.558288 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bptxz\" (UniqueName: \"kubernetes.io/projected/9d1dc22d-53a9-4aee-989b-fc253cd276cd-kube-api-access-bptxz\") on node \"crc\" DevicePath \"\"" Dec 06 05:53:08 crc kubenswrapper[4958]: I1206 05:53:08.618008 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-65548cc856-4tstl"] Dec 06 05:53:08 crc kubenswrapper[4958]: I1206 05:53:08.645669 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Dec 06 05:53:08 crc kubenswrapper[4958]: I1206 05:53:08.654728 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d1dc22d-53a9-4aee-989b-fc253cd276cd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9d1dc22d-53a9-4aee-989b-fc253cd276cd" (UID: "9d1dc22d-53a9-4aee-989b-fc253cd276cd"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:53:08 crc kubenswrapper[4958]: I1206 05:53:08.658549 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-wf7cs"] Dec 06 05:53:08 crc kubenswrapper[4958]: I1206 05:53:08.659705 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d1dc22d-53a9-4aee-989b-fc253cd276cd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 05:53:08 crc kubenswrapper[4958]: I1206 05:53:08.690343 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d1dc22d-53a9-4aee-989b-fc253cd276cd-config-data" (OuterVolumeSpecName: "config-data") pod "9d1dc22d-53a9-4aee-989b-fc253cd276cd" (UID: "9d1dc22d-53a9-4aee-989b-fc253cd276cd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:53:08 crc kubenswrapper[4958]: I1206 05:53:08.761372 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d1dc22d-53a9-4aee-989b-fc253cd276cd-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 05:53:09 crc kubenswrapper[4958]: W1206 05:53:09.258301 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode3d20216_bf3b_43e0_b212_e05057a211fd.slice/crio-407fad10dbaf6edd8dd67cd9c7398a121edd097d556954eaf2aa616524dda3eb WatchSource:0}: Error finding container 407fad10dbaf6edd8dd67cd9c7398a121edd097d556954eaf2aa616524dda3eb: Status 404 returned error can't find the container with id 407fad10dbaf6edd8dd67cd9c7398a121edd097d556954eaf2aa616524dda3eb Dec 06 05:53:09 crc kubenswrapper[4958]: W1206 05:53:09.269030 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod374b8326_0ba7_46d1_b438_85a5e865fdb5.slice/crio-f7ff93b5e1a100e7422acaf49e5fb9bc2b48c566ee69222d491ed2536ec7afdb WatchSource:0}: Error finding container f7ff93b5e1a100e7422acaf49e5fb9bc2b48c566ee69222d491ed2536ec7afdb: Status 404 returned error can't find the container with id f7ff93b5e1a100e7422acaf49e5fb9bc2b48c566ee69222d491ed2536ec7afdb Dec 06 05:53:09 crc kubenswrapper[4958]: I1206 05:53:09.514795 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wf7cs" event={"ID":"374b8326-0ba7-46d1-b438-85a5e865fdb5","Type":"ContainerStarted","Data":"f7ff93b5e1a100e7422acaf49e5fb9bc2b48c566ee69222d491ed2536ec7afdb"} Dec 06 05:53:09 crc kubenswrapper[4958]: I1206 05:53:09.517828 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"e1a95265-5489-4db4-a45a-a17761dd8477","Type":"ContainerStarted","Data":"9572a247ea6bac0429807b162b0651c8b121fb33827394f01ef140024ce77230"} Dec 06 05:53:09 crc kubenswrapper[4958]: I1206 05:53:09.519278 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"e6eb8aab-9b25-4861-972a-b100ba14ab24","Type":"ContainerStarted","Data":"75dcd5f6a58bf6cb43a0b75909a5103cff83f1e8f7355740d2c6353495668b30"} Dec 06 05:53:09 crc kubenswrapper[4958]: I1206 05:53:09.531403 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"92bfdbb2-cdd9-49b3-80cb-5aa52422d18e","Type":"ContainerStarted","Data":"dd358b0ecd436d9c9a2637581f113a5e82db356ffb0213e22b2442b68aa080e4"} Dec 06 05:53:09 crc kubenswrapper[4958]: I1206 05:53:09.534247 4958 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-78555484d5-rrzlr" event={"ID":"5a21d87f-33a1-4e86-859f-03a2bace9908","Type":"ContainerStarted","Data":"5328a4a9a68e5afce4ecd4faa286c8d8e048fc93ca84dcc2421e0f162e3f5396"} Dec 06 05:53:09 crc kubenswrapper[4958]: I1206 05:53:09.545507 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=3.520494152 podStartE2EDuration="36.545488743s" podCreationTimestamp="2025-12-06 05:52:33 +0000 UTC" firstStartedPulling="2025-12-06 05:52:34.967211019 +0000 UTC m=+1465.500981782" lastFinishedPulling="2025-12-06 05:53:07.99220561 +0000 UTC m=+1498.525976373" observedRunningTime="2025-12-06 05:53:09.536851301 +0000 UTC m=+1500.070622094" watchObservedRunningTime="2025-12-06 05:53:09.545488743 +0000 UTC m=+1500.079259506" Dec 06 05:53:09 crc kubenswrapper[4958]: I1206 05:53:09.561821 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-applier-0" podStartSLOduration=3.494500022 podStartE2EDuration="36.561799801s" podCreationTimestamp="2025-12-06 05:52:33 +0000 UTC" firstStartedPulling="2025-12-06 05:52:34.915584501 +0000 UTC m=+1465.449355274" lastFinishedPulling="2025-12-06 05:53:07.98288429 +0000 UTC m=+1498.516655053" observedRunningTime="2025-12-06 05:53:09.55581574 +0000 UTC m=+1500.089586513" watchObservedRunningTime="2025-12-06 05:53:09.561799801 +0000 UTC m=+1500.095570574" Dec 06 05:53:09 crc kubenswrapper[4958]: I1206 05:53:09.569795 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-776ddc8896-9vdrs" event={"ID":"e3d20216-bf3b-43e0-b212-e05057a211fd","Type":"ContainerStarted","Data":"407fad10dbaf6edd8dd67cd9c7398a121edd097d556954eaf2aa616524dda3eb"} Dec 06 05:53:09 crc kubenswrapper[4958]: I1206 05:53:09.572794 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578598f949-w55ph" event={"ID":"27d884d6-231d-4782-bb16-d2664b44e18f","Type":"ContainerStarted","Data":"2a4aa79aa981947115a5ee287b840af4315aa2584406634bf87d4f6ae7f4971a"} Dec 06 05:53:09 crc kubenswrapper[4958]: I1206 05:53:09.573665 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-578598f949-w55ph" Dec 06 05:53:09 crc kubenswrapper[4958]: I1206 05:53:09.587799 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-65548cc856-4tstl" event={"ID":"44386224-0241-4c6e-b12f-a1bef3954fe3","Type":"ContainerStarted","Data":"be5bcb1a191fa63e54621276d865d77f3f77845596d605feaa598d43c5ddd9c6"} Dec 06 05:53:09 crc kubenswrapper[4958]: I1206 05:53:09.598134 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-hn59h" event={"ID":"7f28060d-c759-4b4b-a643-bf8acb76c1b2","Type":"ContainerStarted","Data":"7d2b63b1bf05453d97ce26f361e83bb04e992444900d0ab59480d44b3a6e6148"} Dec 06 05:53:09 crc kubenswrapper[4958]: I1206 05:53:09.600149 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7b7f9bd7cc-jjrbv" event={"ID":"5dfff6bd-6266-4c2b-87d4-b800bd9bbc51","Type":"ContainerStarted","Data":"ddb3fb6c1bb5d5ad6bfec1eea50c184f974552ba6daaee55ca00d66a765cec40"} Dec 06 05:53:09 crc kubenswrapper[4958]: I1206 05:53:09.623945 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-hn59h" podStartSLOduration=3.845427598 podStartE2EDuration="36.623915331s" podCreationTimestamp="2025-12-06 05:52:33 +0000 UTC" firstStartedPulling="2025-12-06 
05:52:35.206339048 +0000 UTC m=+1465.740109801" lastFinishedPulling="2025-12-06 05:53:07.984826771 +0000 UTC m=+1498.518597534" observedRunningTime="2025-12-06 05:53:09.6186855 +0000 UTC m=+1500.152456273" watchObservedRunningTime="2025-12-06 05:53:09.623915331 +0000 UTC m=+1500.157686094"
Dec 06 05:53:09 crc kubenswrapper[4958]: I1206 05:53:09.624517 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-578598f949-w55ph" podStartSLOduration=36.624446955 podStartE2EDuration="36.624446955s" podCreationTimestamp="2025-12-06 05:52:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:53:09.597797929 +0000 UTC m=+1500.131568702" watchObservedRunningTime="2025-12-06 05:53:09.624446955 +0000 UTC m=+1500.158217718"
Dec 06 05:53:09 crc kubenswrapper[4958]: I1206 05:53:09.867915 4958 patch_prober.go:28] interesting pod/machine-config-daemon-5ktnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 06 05:53:09 crc kubenswrapper[4958]: I1206 05:53:09.868300 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 06 05:53:09 crc kubenswrapper[4958]: I1206 05:53:09.910452 4958 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh"
Dec 06 05:53:09 crc kubenswrapper[4958]: I1206 05:53:09.910503 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-578598f949-w55ph"]
Dec 06 05:53:09 crc kubenswrapper[4958]: I1206 05:53:09.911115 4958 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5ef9857418407037240c969f5ea76d6cce28ae131bd31b325e367615bc600d5d"} pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 06 05:53:09 crc kubenswrapper[4958]: I1206 05:53:09.911174 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" containerName="machine-config-daemon" containerID="cri-o://5ef9857418407037240c969f5ea76d6cce28ae131bd31b325e367615bc600d5d" gracePeriod=600
Dec 06 05:53:09 crc kubenswrapper[4958]: I1206 05:53:09.968072 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7cf77b4997-jq68d"]
Dec 06 05:53:09 crc kubenswrapper[4958]: E1206 05:53:09.968756 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d1dc22d-53a9-4aee-989b-fc253cd276cd" containerName="glance-db-sync"
Dec 06 05:53:09 crc kubenswrapper[4958]: I1206 05:53:09.968775 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d1dc22d-53a9-4aee-989b-fc253cd276cd" containerName="glance-db-sync"
Dec 06 05:53:09 crc kubenswrapper[4958]: I1206 05:53:09.968960 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d1dc22d-53a9-4aee-989b-fc253cd276cd"
containerName="glance-db-sync" Dec 06 05:53:09 crc kubenswrapper[4958]: I1206 05:53:09.972363 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cf77b4997-jq68d" Dec 06 05:53:09 crc kubenswrapper[4958]: I1206 05:53:09.995263 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7cf77b4997-jq68d"] Dec 06 05:53:10 crc kubenswrapper[4958]: I1206 05:53:10.039991 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/382db5d4-9ab1-4fe8-a35e-1f10c1690f06-dns-swift-storage-0\") pod \"dnsmasq-dns-7cf77b4997-jq68d\" (UID: \"382db5d4-9ab1-4fe8-a35e-1f10c1690f06\") " pod="openstack/dnsmasq-dns-7cf77b4997-jq68d" Dec 06 05:53:10 crc kubenswrapper[4958]: I1206 05:53:10.040173 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/382db5d4-9ab1-4fe8-a35e-1f10c1690f06-dns-svc\") pod \"dnsmasq-dns-7cf77b4997-jq68d\" (UID: \"382db5d4-9ab1-4fe8-a35e-1f10c1690f06\") " pod="openstack/dnsmasq-dns-7cf77b4997-jq68d" Dec 06 05:53:10 crc kubenswrapper[4958]: I1206 05:53:10.040195 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/382db5d4-9ab1-4fe8-a35e-1f10c1690f06-ovsdbserver-nb\") pod \"dnsmasq-dns-7cf77b4997-jq68d\" (UID: \"382db5d4-9ab1-4fe8-a35e-1f10c1690f06\") " pod="openstack/dnsmasq-dns-7cf77b4997-jq68d" Dec 06 05:53:10 crc kubenswrapper[4958]: I1206 05:53:10.040710 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvxmr\" (UniqueName: \"kubernetes.io/projected/382db5d4-9ab1-4fe8-a35e-1f10c1690f06-kube-api-access-zvxmr\") pod \"dnsmasq-dns-7cf77b4997-jq68d\" (UID: \"382db5d4-9ab1-4fe8-a35e-1f10c1690f06\") " pod="openstack/dnsmasq-dns-7cf77b4997-jq68d" Dec 06 05:53:10 crc kubenswrapper[4958]: I1206 05:53:10.040745 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/382db5d4-9ab1-4fe8-a35e-1f10c1690f06-ovsdbserver-sb\") pod \"dnsmasq-dns-7cf77b4997-jq68d\" (UID: \"382db5d4-9ab1-4fe8-a35e-1f10c1690f06\") " pod="openstack/dnsmasq-dns-7cf77b4997-jq68d" Dec 06 05:53:10 crc kubenswrapper[4958]: I1206 05:53:10.040773 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/382db5d4-9ab1-4fe8-a35e-1f10c1690f06-config\") pod \"dnsmasq-dns-7cf77b4997-jq68d\" (UID: \"382db5d4-9ab1-4fe8-a35e-1f10c1690f06\") " pod="openstack/dnsmasq-dns-7cf77b4997-jq68d" Dec 06 05:53:10 crc kubenswrapper[4958]: I1206 05:53:10.143300 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/382db5d4-9ab1-4fe8-a35e-1f10c1690f06-dns-svc\") pod \"dnsmasq-dns-7cf77b4997-jq68d\" (UID: \"382db5d4-9ab1-4fe8-a35e-1f10c1690f06\") " pod="openstack/dnsmasq-dns-7cf77b4997-jq68d" Dec 06 05:53:10 crc kubenswrapper[4958]: I1206 05:53:10.143439 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/382db5d4-9ab1-4fe8-a35e-1f10c1690f06-ovsdbserver-nb\") pod \"dnsmasq-dns-7cf77b4997-jq68d\" (UID: \"382db5d4-9ab1-4fe8-a35e-1f10c1690f06\") " 
pod="openstack/dnsmasq-dns-7cf77b4997-jq68d" Dec 06 05:53:10 crc kubenswrapper[4958]: I1206 05:53:10.145294 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/382db5d4-9ab1-4fe8-a35e-1f10c1690f06-ovsdbserver-nb\") pod \"dnsmasq-dns-7cf77b4997-jq68d\" (UID: \"382db5d4-9ab1-4fe8-a35e-1f10c1690f06\") " pod="openstack/dnsmasq-dns-7cf77b4997-jq68d" Dec 06 05:53:10 crc kubenswrapper[4958]: I1206 05:53:10.145351 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/382db5d4-9ab1-4fe8-a35e-1f10c1690f06-dns-svc\") pod \"dnsmasq-dns-7cf77b4997-jq68d\" (UID: \"382db5d4-9ab1-4fe8-a35e-1f10c1690f06\") " pod="openstack/dnsmasq-dns-7cf77b4997-jq68d" Dec 06 05:53:10 crc kubenswrapper[4958]: I1206 05:53:10.145419 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvxmr\" (UniqueName: \"kubernetes.io/projected/382db5d4-9ab1-4fe8-a35e-1f10c1690f06-kube-api-access-zvxmr\") pod \"dnsmasq-dns-7cf77b4997-jq68d\" (UID: \"382db5d4-9ab1-4fe8-a35e-1f10c1690f06\") " pod="openstack/dnsmasq-dns-7cf77b4997-jq68d" Dec 06 05:53:10 crc kubenswrapper[4958]: I1206 05:53:10.145532 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/382db5d4-9ab1-4fe8-a35e-1f10c1690f06-ovsdbserver-sb\") pod \"dnsmasq-dns-7cf77b4997-jq68d\" (UID: \"382db5d4-9ab1-4fe8-a35e-1f10c1690f06\") " pod="openstack/dnsmasq-dns-7cf77b4997-jq68d" Dec 06 05:53:10 crc kubenswrapper[4958]: I1206 05:53:10.145633 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/382db5d4-9ab1-4fe8-a35e-1f10c1690f06-config\") pod \"dnsmasq-dns-7cf77b4997-jq68d\" (UID: \"382db5d4-9ab1-4fe8-a35e-1f10c1690f06\") " pod="openstack/dnsmasq-dns-7cf77b4997-jq68d" Dec 06 05:53:10 crc kubenswrapper[4958]: I1206 05:53:10.146679 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/382db5d4-9ab1-4fe8-a35e-1f10c1690f06-config\") pod \"dnsmasq-dns-7cf77b4997-jq68d\" (UID: \"382db5d4-9ab1-4fe8-a35e-1f10c1690f06\") " pod="openstack/dnsmasq-dns-7cf77b4997-jq68d" Dec 06 05:53:10 crc kubenswrapper[4958]: I1206 05:53:10.147081 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/382db5d4-9ab1-4fe8-a35e-1f10c1690f06-ovsdbserver-sb\") pod \"dnsmasq-dns-7cf77b4997-jq68d\" (UID: \"382db5d4-9ab1-4fe8-a35e-1f10c1690f06\") " pod="openstack/dnsmasq-dns-7cf77b4997-jq68d" Dec 06 05:53:10 crc kubenswrapper[4958]: I1206 05:53:10.147352 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/382db5d4-9ab1-4fe8-a35e-1f10c1690f06-dns-swift-storage-0\") pod \"dnsmasq-dns-7cf77b4997-jq68d\" (UID: \"382db5d4-9ab1-4fe8-a35e-1f10c1690f06\") " pod="openstack/dnsmasq-dns-7cf77b4997-jq68d" Dec 06 05:53:10 crc kubenswrapper[4958]: I1206 05:53:10.148663 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/382db5d4-9ab1-4fe8-a35e-1f10c1690f06-dns-swift-storage-0\") pod \"dnsmasq-dns-7cf77b4997-jq68d\" (UID: \"382db5d4-9ab1-4fe8-a35e-1f10c1690f06\") " pod="openstack/dnsmasq-dns-7cf77b4997-jq68d" Dec 06 05:53:10 crc kubenswrapper[4958]: I1206 
05:53:10.169597 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvxmr\" (UniqueName: \"kubernetes.io/projected/382db5d4-9ab1-4fe8-a35e-1f10c1690f06-kube-api-access-zvxmr\") pod \"dnsmasq-dns-7cf77b4997-jq68d\" (UID: \"382db5d4-9ab1-4fe8-a35e-1f10c1690f06\") " pod="openstack/dnsmasq-dns-7cf77b4997-jq68d"
Dec 06 05:53:10 crc kubenswrapper[4958]: I1206 05:53:10.256867 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cf77b4997-jq68d"
Dec 06 05:53:10 crc kubenswrapper[4958]: I1206 05:53:10.776828 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Dec 06 05:53:10 crc kubenswrapper[4958]: I1206 05:53:10.778888 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Dec 06 05:53:10 crc kubenswrapper[4958]: I1206 05:53:10.782968 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts"
Dec 06 05:53:10 crc kubenswrapper[4958]: I1206 05:53:10.783538 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Dec 06 05:53:10 crc kubenswrapper[4958]: I1206 05:53:10.783713 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-wlmlr"
Dec 06 05:53:10 crc kubenswrapper[4958]: I1206 05:53:10.786016 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Dec 06 05:53:10 crc kubenswrapper[4958]: I1206 05:53:10.830453 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7cf77b4997-jq68d"]
Dec 06 05:53:10 crc kubenswrapper[4958]: I1206 05:53:10.862742 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9127bcb-1f38-4dcf-9fd2-0f93b5d40064-scripts\") pod \"glance-default-external-api-0\" (UID: \"b9127bcb-1f38-4dcf-9fd2-0f93b5d40064\") " pod="openstack/glance-default-external-api-0"
Dec 06 05:53:10 crc kubenswrapper[4958]: I1206 05:53:10.862915 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29s8g\" (UniqueName: \"kubernetes.io/projected/b9127bcb-1f38-4dcf-9fd2-0f93b5d40064-kube-api-access-29s8g\") pod \"glance-default-external-api-0\" (UID: \"b9127bcb-1f38-4dcf-9fd2-0f93b5d40064\") " pod="openstack/glance-default-external-api-0"
Dec 06 05:53:10 crc kubenswrapper[4958]: I1206 05:53:10.862945 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9127bcb-1f38-4dcf-9fd2-0f93b5d40064-config-data\") pod \"glance-default-external-api-0\" (UID: \"b9127bcb-1f38-4dcf-9fd2-0f93b5d40064\") " pod="openstack/glance-default-external-api-0"
Dec 06 05:53:10 crc kubenswrapper[4958]: I1206 05:53:10.863013 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"b9127bcb-1f38-4dcf-9fd2-0f93b5d40064\") " pod="openstack/glance-default-external-api-0"
Dec 06 05:53:10 crc kubenswrapper[4958]: I1206 05:53:10.863073 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName:
\"kubernetes.io/empty-dir/b9127bcb-1f38-4dcf-9fd2-0f93b5d40064-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b9127bcb-1f38-4dcf-9fd2-0f93b5d40064\") " pod="openstack/glance-default-external-api-0" Dec 06 05:53:10 crc kubenswrapper[4958]: I1206 05:53:10.863128 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9127bcb-1f38-4dcf-9fd2-0f93b5d40064-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b9127bcb-1f38-4dcf-9fd2-0f93b5d40064\") " pod="openstack/glance-default-external-api-0" Dec 06 05:53:10 crc kubenswrapper[4958]: I1206 05:53:10.863198 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b9127bcb-1f38-4dcf-9fd2-0f93b5d40064-logs\") pod \"glance-default-external-api-0\" (UID: \"b9127bcb-1f38-4dcf-9fd2-0f93b5d40064\") " pod="openstack/glance-default-external-api-0" Dec 06 05:53:10 crc kubenswrapper[4958]: I1206 05:53:10.964736 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b9127bcb-1f38-4dcf-9fd2-0f93b5d40064-logs\") pod \"glance-default-external-api-0\" (UID: \"b9127bcb-1f38-4dcf-9fd2-0f93b5d40064\") " pod="openstack/glance-default-external-api-0" Dec 06 05:53:10 crc kubenswrapper[4958]: I1206 05:53:10.965313 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9127bcb-1f38-4dcf-9fd2-0f93b5d40064-scripts\") pod \"glance-default-external-api-0\" (UID: \"b9127bcb-1f38-4dcf-9fd2-0f93b5d40064\") " pod="openstack/glance-default-external-api-0" Dec 06 05:53:10 crc kubenswrapper[4958]: I1206 05:53:10.965439 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29s8g\" (UniqueName: \"kubernetes.io/projected/b9127bcb-1f38-4dcf-9fd2-0f93b5d40064-kube-api-access-29s8g\") pod \"glance-default-external-api-0\" (UID: \"b9127bcb-1f38-4dcf-9fd2-0f93b5d40064\") " pod="openstack/glance-default-external-api-0" Dec 06 05:53:10 crc kubenswrapper[4958]: I1206 05:53:10.965522 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9127bcb-1f38-4dcf-9fd2-0f93b5d40064-config-data\") pod \"glance-default-external-api-0\" (UID: \"b9127bcb-1f38-4dcf-9fd2-0f93b5d40064\") " pod="openstack/glance-default-external-api-0" Dec 06 05:53:10 crc kubenswrapper[4958]: I1206 05:53:10.965585 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b9127bcb-1f38-4dcf-9fd2-0f93b5d40064-logs\") pod \"glance-default-external-api-0\" (UID: \"b9127bcb-1f38-4dcf-9fd2-0f93b5d40064\") " pod="openstack/glance-default-external-api-0" Dec 06 05:53:10 crc kubenswrapper[4958]: I1206 05:53:10.965598 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"b9127bcb-1f38-4dcf-9fd2-0f93b5d40064\") " pod="openstack/glance-default-external-api-0" Dec 06 05:53:10 crc kubenswrapper[4958]: I1206 05:53:10.965738 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b9127bcb-1f38-4dcf-9fd2-0f93b5d40064-httpd-run\") pod 
\"glance-default-external-api-0\" (UID: \"b9127bcb-1f38-4dcf-9fd2-0f93b5d40064\") " pod="openstack/glance-default-external-api-0" Dec 06 05:53:10 crc kubenswrapper[4958]: I1206 05:53:10.965820 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9127bcb-1f38-4dcf-9fd2-0f93b5d40064-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b9127bcb-1f38-4dcf-9fd2-0f93b5d40064\") " pod="openstack/glance-default-external-api-0" Dec 06 05:53:10 crc kubenswrapper[4958]: I1206 05:53:10.967017 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b9127bcb-1f38-4dcf-9fd2-0f93b5d40064-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b9127bcb-1f38-4dcf-9fd2-0f93b5d40064\") " pod="openstack/glance-default-external-api-0" Dec 06 05:53:10 crc kubenswrapper[4958]: I1206 05:53:10.967210 4958 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"b9127bcb-1f38-4dcf-9fd2-0f93b5d40064\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-external-api-0" Dec 06 05:53:10 crc kubenswrapper[4958]: I1206 05:53:10.970841 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9127bcb-1f38-4dcf-9fd2-0f93b5d40064-scripts\") pod \"glance-default-external-api-0\" (UID: \"b9127bcb-1f38-4dcf-9fd2-0f93b5d40064\") " pod="openstack/glance-default-external-api-0" Dec 06 05:53:10 crc kubenswrapper[4958]: I1206 05:53:10.971364 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9127bcb-1f38-4dcf-9fd2-0f93b5d40064-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b9127bcb-1f38-4dcf-9fd2-0f93b5d40064\") " pod="openstack/glance-default-external-api-0" Dec 06 05:53:10 crc kubenswrapper[4958]: I1206 05:53:10.975091 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9127bcb-1f38-4dcf-9fd2-0f93b5d40064-config-data\") pod \"glance-default-external-api-0\" (UID: \"b9127bcb-1f38-4dcf-9fd2-0f93b5d40064\") " pod="openstack/glance-default-external-api-0" Dec 06 05:53:10 crc kubenswrapper[4958]: I1206 05:53:10.983609 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29s8g\" (UniqueName: \"kubernetes.io/projected/b9127bcb-1f38-4dcf-9fd2-0f93b5d40064-kube-api-access-29s8g\") pod \"glance-default-external-api-0\" (UID: \"b9127bcb-1f38-4dcf-9fd2-0f93b5d40064\") " pod="openstack/glance-default-external-api-0" Dec 06 05:53:11 crc kubenswrapper[4958]: I1206 05:53:11.027462 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"b9127bcb-1f38-4dcf-9fd2-0f93b5d40064\") " pod="openstack/glance-default-external-api-0" Dec 06 05:53:11 crc kubenswrapper[4958]: I1206 05:53:11.062903 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 06 05:53:11 crc kubenswrapper[4958]: I1206 05:53:11.064842 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 06 05:53:11 crc kubenswrapper[4958]: I1206 05:53:11.068812 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 06 05:53:11 crc kubenswrapper[4958]: I1206 05:53:11.106467 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 06 05:53:11 crc kubenswrapper[4958]: I1206 05:53:11.155922 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 06 05:53:11 crc kubenswrapper[4958]: I1206 05:53:11.169894 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"c10dac20-d290-48a1-86b2-9d4969ad3bfc\") " pod="openstack/glance-default-internal-api-0" Dec 06 05:53:11 crc kubenswrapper[4958]: I1206 05:53:11.169942 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c10dac20-d290-48a1-86b2-9d4969ad3bfc-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c10dac20-d290-48a1-86b2-9d4969ad3bfc\") " pod="openstack/glance-default-internal-api-0" Dec 06 05:53:11 crc kubenswrapper[4958]: I1206 05:53:11.169979 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c10dac20-d290-48a1-86b2-9d4969ad3bfc-logs\") pod \"glance-default-internal-api-0\" (UID: \"c10dac20-d290-48a1-86b2-9d4969ad3bfc\") " pod="openstack/glance-default-internal-api-0" Dec 06 05:53:11 crc kubenswrapper[4958]: I1206 05:53:11.170002 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-629zc\" (UniqueName: \"kubernetes.io/projected/c10dac20-d290-48a1-86b2-9d4969ad3bfc-kube-api-access-629zc\") pod \"glance-default-internal-api-0\" (UID: \"c10dac20-d290-48a1-86b2-9d4969ad3bfc\") " pod="openstack/glance-default-internal-api-0" Dec 06 05:53:11 crc kubenswrapper[4958]: I1206 05:53:11.170025 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c10dac20-d290-48a1-86b2-9d4969ad3bfc-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c10dac20-d290-48a1-86b2-9d4969ad3bfc\") " pod="openstack/glance-default-internal-api-0" Dec 06 05:53:11 crc kubenswrapper[4958]: I1206 05:53:11.170057 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c10dac20-d290-48a1-86b2-9d4969ad3bfc-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c10dac20-d290-48a1-86b2-9d4969ad3bfc\") " pod="openstack/glance-default-internal-api-0" Dec 06 05:53:11 crc kubenswrapper[4958]: I1206 05:53:11.170073 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c10dac20-d290-48a1-86b2-9d4969ad3bfc-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c10dac20-d290-48a1-86b2-9d4969ad3bfc\") " pod="openstack/glance-default-internal-api-0" Dec 06 05:53:11 crc kubenswrapper[4958]: I1206 05:53:11.271433 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"c10dac20-d290-48a1-86b2-9d4969ad3bfc\") " pod="openstack/glance-default-internal-api-0" Dec 06 05:53:11 crc kubenswrapper[4958]: I1206 05:53:11.271500 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c10dac20-d290-48a1-86b2-9d4969ad3bfc-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c10dac20-d290-48a1-86b2-9d4969ad3bfc\") " pod="openstack/glance-default-internal-api-0" Dec 06 05:53:11 crc kubenswrapper[4958]: I1206 05:53:11.271524 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c10dac20-d290-48a1-86b2-9d4969ad3bfc-logs\") pod \"glance-default-internal-api-0\" (UID: \"c10dac20-d290-48a1-86b2-9d4969ad3bfc\") " pod="openstack/glance-default-internal-api-0" Dec 06 05:53:11 crc kubenswrapper[4958]: I1206 05:53:11.271545 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-629zc\" (UniqueName: \"kubernetes.io/projected/c10dac20-d290-48a1-86b2-9d4969ad3bfc-kube-api-access-629zc\") pod \"glance-default-internal-api-0\" (UID: \"c10dac20-d290-48a1-86b2-9d4969ad3bfc\") " pod="openstack/glance-default-internal-api-0" Dec 06 05:53:11 crc kubenswrapper[4958]: I1206 05:53:11.271572 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c10dac20-d290-48a1-86b2-9d4969ad3bfc-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c10dac20-d290-48a1-86b2-9d4969ad3bfc\") " pod="openstack/glance-default-internal-api-0" Dec 06 05:53:11 crc kubenswrapper[4958]: I1206 05:53:11.271585 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c10dac20-d290-48a1-86b2-9d4969ad3bfc-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c10dac20-d290-48a1-86b2-9d4969ad3bfc\") " pod="openstack/glance-default-internal-api-0" Dec 06 05:53:11 crc kubenswrapper[4958]: I1206 05:53:11.271598 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c10dac20-d290-48a1-86b2-9d4969ad3bfc-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c10dac20-d290-48a1-86b2-9d4969ad3bfc\") " pod="openstack/glance-default-internal-api-0" Dec 06 05:53:11 crc kubenswrapper[4958]: I1206 05:53:11.271715 4958 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"c10dac20-d290-48a1-86b2-9d4969ad3bfc\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-internal-api-0" Dec 06 05:53:11 crc kubenswrapper[4958]: I1206 05:53:11.272173 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c10dac20-d290-48a1-86b2-9d4969ad3bfc-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"c10dac20-d290-48a1-86b2-9d4969ad3bfc\") " pod="openstack/glance-default-internal-api-0" Dec 06 05:53:11 crc kubenswrapper[4958]: I1206 05:53:11.273295 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c10dac20-d290-48a1-86b2-9d4969ad3bfc-logs\") pod 
\"glance-default-internal-api-0\" (UID: \"c10dac20-d290-48a1-86b2-9d4969ad3bfc\") " pod="openstack/glance-default-internal-api-0" Dec 06 05:53:11 crc kubenswrapper[4958]: I1206 05:53:11.282238 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c10dac20-d290-48a1-86b2-9d4969ad3bfc-scripts\") pod \"glance-default-internal-api-0\" (UID: \"c10dac20-d290-48a1-86b2-9d4969ad3bfc\") " pod="openstack/glance-default-internal-api-0" Dec 06 05:53:11 crc kubenswrapper[4958]: I1206 05:53:11.284070 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c10dac20-d290-48a1-86b2-9d4969ad3bfc-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"c10dac20-d290-48a1-86b2-9d4969ad3bfc\") " pod="openstack/glance-default-internal-api-0" Dec 06 05:53:11 crc kubenswrapper[4958]: I1206 05:53:11.286435 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c10dac20-d290-48a1-86b2-9d4969ad3bfc-config-data\") pod \"glance-default-internal-api-0\" (UID: \"c10dac20-d290-48a1-86b2-9d4969ad3bfc\") " pod="openstack/glance-default-internal-api-0" Dec 06 05:53:11 crc kubenswrapper[4958]: I1206 05:53:11.333254 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"c10dac20-d290-48a1-86b2-9d4969ad3bfc\") " pod="openstack/glance-default-internal-api-0" Dec 06 05:53:11 crc kubenswrapper[4958]: I1206 05:53:11.345395 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-629zc\" (UniqueName: \"kubernetes.io/projected/c10dac20-d290-48a1-86b2-9d4969ad3bfc-kube-api-access-629zc\") pod \"glance-default-internal-api-0\" (UID: \"c10dac20-d290-48a1-86b2-9d4969ad3bfc\") " pod="openstack/glance-default-internal-api-0" Dec 06 05:53:11 crc kubenswrapper[4958]: I1206 05:53:11.424747 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 06 05:53:11 crc kubenswrapper[4958]: I1206 05:53:11.832252 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-578598f949-w55ph" podUID="27d884d6-231d-4782-bb16-d2664b44e18f" containerName="dnsmasq-dns" containerID="cri-o://2a4aa79aa981947115a5ee287b840af4315aa2584406634bf87d4f6ae7f4971a" gracePeriod=10 Dec 06 05:53:11 crc kubenswrapper[4958]: I1206 05:53:11.845254 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cf77b4997-jq68d" event={"ID":"382db5d4-9ab1-4fe8-a35e-1f10c1690f06","Type":"ContainerStarted","Data":"b52bdeff0ba5ec6fbff8f9d9b29a6789ec9d525cd0467d4464731411e88c6380"} Dec 06 05:53:12 crc kubenswrapper[4958]: I1206 05:53:12.355041 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 06 05:53:12 crc kubenswrapper[4958]: I1206 05:53:12.459159 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 06 05:53:12 crc kubenswrapper[4958]: W1206 05:53:12.466027 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc10dac20_d290_48a1_86b2_9d4969ad3bfc.slice/crio-c174d082d74c2260c0993403c089ca3f04654c1f97fb3b60d7d86d7a9fce4c4c WatchSource:0}: Error finding container c174d082d74c2260c0993403c089ca3f04654c1f97fb3b60d7d86d7a9fce4c4c: Status 404 returned error can't find the container with id c174d082d74c2260c0993403c089ca3f04654c1f97fb3b60d7d86d7a9fce4c4c Dec 06 05:53:12 crc kubenswrapper[4958]: I1206 05:53:12.846792 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-65548cc856-4tstl" event={"ID":"44386224-0241-4c6e-b12f-a1bef3954fe3","Type":"ContainerStarted","Data":"cfdc6ae5db27d2e4caa6f1df666ef86533eb78c3e93ad0ca97b46d1728a7df95"} Dec 06 05:53:12 crc kubenswrapper[4958]: I1206 05:53:12.850743 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wf7cs" event={"ID":"374b8326-0ba7-46d1-b438-85a5e865fdb5","Type":"ContainerStarted","Data":"7ca0ad2f546948a6763d4373471301556d03fad318057c21901da48975099837"} Dec 06 05:53:12 crc kubenswrapper[4958]: I1206 05:53:12.856141 4958 generic.go:334] "Generic (PLEG): container finished" podID="c13528c0-da5d-4d55-9155-2c29c33edfc4" containerID="5ef9857418407037240c969f5ea76d6cce28ae131bd31b325e367615bc600d5d" exitCode=0 Dec 06 05:53:12 crc kubenswrapper[4958]: I1206 05:53:12.856191 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" event={"ID":"c13528c0-da5d-4d55-9155-2c29c33edfc4","Type":"ContainerDied","Data":"5ef9857418407037240c969f5ea76d6cce28ae131bd31b325e367615bc600d5d"} Dec 06 05:53:12 crc kubenswrapper[4958]: I1206 05:53:12.856231 4958 scope.go:117] "RemoveContainer" containerID="302a14bf1e4711bf21e8ab7165ce5c1b79633fb07014ab098243520b48862bd0" Dec 06 05:53:12 crc kubenswrapper[4958]: I1206 05:53:12.860021 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"e1a95265-5489-4db4-a45a-a17761dd8477","Type":"ContainerStarted","Data":"81d7ed6a059aeb90254de3e4e79011f96c66bea1af0b0b0ba772f1dc58321085"} Dec 06 05:53:12 crc kubenswrapper[4958]: I1206 05:53:12.866289 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7b7f9bd7cc-jjrbv" 
event={"ID":"5dfff6bd-6266-4c2b-87d4-b800bd9bbc51","Type":"ContainerStarted","Data":"405ef670079b4eafe25fe94736464410ed03a4183ece88651aa959e01c5d63ee"} Dec 06 05:53:12 crc kubenswrapper[4958]: I1206 05:53:12.866465 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7b7f9bd7cc-jjrbv" podUID="5dfff6bd-6266-4c2b-87d4-b800bd9bbc51" containerName="horizon-log" containerID="cri-o://ddb3fb6c1bb5d5ad6bfec1eea50c184f974552ba6daaee55ca00d66a765cec40" gracePeriod=30 Dec 06 05:53:12 crc kubenswrapper[4958]: I1206 05:53:12.866764 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7b7f9bd7cc-jjrbv" podUID="5dfff6bd-6266-4c2b-87d4-b800bd9bbc51" containerName="horizon" containerID="cri-o://405ef670079b4eafe25fe94736464410ed03a4183ece88651aa959e01c5d63ee" gracePeriod=30 Dec 06 05:53:12 crc kubenswrapper[4958]: I1206 05:53:12.869107 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-wf7cs" podStartSLOduration=18.869091494 podStartE2EDuration="18.869091494s" podCreationTimestamp="2025-12-06 05:52:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:53:12.867535471 +0000 UTC m=+1503.401306244" watchObservedRunningTime="2025-12-06 05:53:12.869091494 +0000 UTC m=+1503.402862257" Dec 06 05:53:12 crc kubenswrapper[4958]: I1206 05:53:12.873976 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cf77b4997-jq68d" event={"ID":"382db5d4-9ab1-4fe8-a35e-1f10c1690f06","Type":"ContainerStarted","Data":"8fe92a9e6cba02192f864527db0d76fc7168b9fc5fc1d2c627d232a575c8d7b3"} Dec 06 05:53:12 crc kubenswrapper[4958]: I1206 05:53:12.877582 4958 generic.go:334] "Generic (PLEG): container finished" podID="27d884d6-231d-4782-bb16-d2664b44e18f" containerID="2a4aa79aa981947115a5ee287b840af4315aa2584406634bf87d4f6ae7f4971a" exitCode=0 Dec 06 05:53:12 crc kubenswrapper[4958]: I1206 05:53:12.877662 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578598f949-w55ph" event={"ID":"27d884d6-231d-4782-bb16-d2664b44e18f","Type":"ContainerDied","Data":"2a4aa79aa981947115a5ee287b840af4315aa2584406634bf87d4f6ae7f4971a"} Dec 06 05:53:12 crc kubenswrapper[4958]: I1206 05:53:12.879404 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9664ab8f-eb78-4177-8847-54af6ae2fce5","Type":"ContainerStarted","Data":"2dde5f6c9c5df2e54ec2f9bb92e38f692bf32d7fbcc3ff87afd8b2726499226e"} Dec 06 05:53:12 crc kubenswrapper[4958]: I1206 05:53:12.880318 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b9127bcb-1f38-4dcf-9fd2-0f93b5d40064","Type":"ContainerStarted","Data":"e0edff085149d8023b045f089005e1cfdf58737b3ac78591d7dcf01b7416750a"} Dec 06 05:53:12 crc kubenswrapper[4958]: I1206 05:53:12.881942 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-78555484d5-rrzlr" event={"ID":"5a21d87f-33a1-4e86-859f-03a2bace9908","Type":"ContainerStarted","Data":"c43241140ef3a35d7a40d8d751c98ad32c107ba11c8b0fa100ebea9c9919202b"} Dec 06 05:53:12 crc kubenswrapper[4958]: I1206 05:53:12.882080 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-78555484d5-rrzlr" podUID="5a21d87f-33a1-4e86-859f-03a2bace9908" containerName="horizon-log" 
containerID="cri-o://5328a4a9a68e5afce4ecd4faa286c8d8e048fc93ca84dcc2421e0f162e3f5396" gracePeriod=30 Dec 06 05:53:12 crc kubenswrapper[4958]: I1206 05:53:12.882362 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-78555484d5-rrzlr" podUID="5a21d87f-33a1-4e86-859f-03a2bace9908" containerName="horizon" containerID="cri-o://c43241140ef3a35d7a40d8d751c98ad32c107ba11c8b0fa100ebea9c9919202b" gracePeriod=30 Dec 06 05:53:12 crc kubenswrapper[4958]: I1206 05:53:12.890150 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-776ddc8896-9vdrs" event={"ID":"e3d20216-bf3b-43e0-b212-e05057a211fd","Type":"ContainerStarted","Data":"df832c624dd77d9345ae9355d24eaaa0bdfbedb70e95b963907b7ae1722b9b0b"} Dec 06 05:53:12 crc kubenswrapper[4958]: I1206 05:53:12.891405 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c10dac20-d290-48a1-86b2-9d4969ad3bfc","Type":"ContainerStarted","Data":"c174d082d74c2260c0993403c089ca3f04654c1f97fb3b60d7d86d7a9fce4c4c"} Dec 06 05:53:12 crc kubenswrapper[4958]: I1206 05:53:12.909275 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-7b7f9bd7cc-jjrbv" podStartSLOduration=7.170164039 podStartE2EDuration="39.909254033s" podCreationTimestamp="2025-12-06 05:52:33 +0000 UTC" firstStartedPulling="2025-12-06 05:52:35.245039539 +0000 UTC m=+1465.778810292" lastFinishedPulling="2025-12-06 05:53:07.984129523 +0000 UTC m=+1498.517900286" observedRunningTime="2025-12-06 05:53:12.896817248 +0000 UTC m=+1503.430588011" watchObservedRunningTime="2025-12-06 05:53:12.909254033 +0000 UTC m=+1503.443024796" Dec 06 05:53:12 crc kubenswrapper[4958]: I1206 05:53:12.955031 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-78555484d5-rrzlr" podStartSLOduration=6.105529114 podStartE2EDuration="36.955011253s" podCreationTimestamp="2025-12-06 05:52:36 +0000 UTC" firstStartedPulling="2025-12-06 05:52:37.200767192 +0000 UTC m=+1467.734537955" lastFinishedPulling="2025-12-06 05:53:08.050249331 +0000 UTC m=+1498.584020094" observedRunningTime="2025-12-06 05:53:12.941596022 +0000 UTC m=+1503.475366785" watchObservedRunningTime="2025-12-06 05:53:12.955011253 +0000 UTC m=+1503.488782016" Dec 06 05:53:13 crc kubenswrapper[4958]: I1206 05:53:13.514356 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 06 05:53:13 crc kubenswrapper[4958]: I1206 05:53:13.627959 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 06 05:53:13 crc kubenswrapper[4958]: I1206 05:53:13.897914 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Dec 06 05:53:13 crc kubenswrapper[4958]: I1206 05:53:13.914072 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-776ddc8896-9vdrs" event={"ID":"e3d20216-bf3b-43e0-b212-e05057a211fd","Type":"ContainerStarted","Data":"42d22673a243c1cc6a53c4d422e4407a3f07c62add34feecb8b192fbcd4d95cb"} Dec 06 05:53:13 crc kubenswrapper[4958]: I1206 05:53:13.921643 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Dec 06 05:53:13 crc kubenswrapper[4958]: I1206 05:53:13.921713 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0" Dec 06 05:53:13 crc kubenswrapper[4958]: I1206 05:53:13.944894 4958 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c10dac20-d290-48a1-86b2-9d4969ad3bfc","Type":"ContainerStarted","Data":"08e7498b8f37c4b0d8355a05f46f246024c8592c6bb44fd04b33d270c68cf352"} Dec 06 05:53:13 crc kubenswrapper[4958]: I1206 05:53:13.954069 4958 generic.go:334] "Generic (PLEG): container finished" podID="382db5d4-9ab1-4fe8-a35e-1f10c1690f06" containerID="8fe92a9e6cba02192f864527db0d76fc7168b9fc5fc1d2c627d232a575c8d7b3" exitCode=0 Dec 06 05:53:13 crc kubenswrapper[4958]: I1206 05:53:13.954515 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cf77b4997-jq68d" event={"ID":"382db5d4-9ab1-4fe8-a35e-1f10c1690f06","Type":"ContainerDied","Data":"8fe92a9e6cba02192f864527db0d76fc7168b9fc5fc1d2c627d232a575c8d7b3"} Dec 06 05:53:13 crc kubenswrapper[4958]: I1206 05:53:13.963302 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-65548cc856-4tstl" event={"ID":"44386224-0241-4c6e-b12f-a1bef3954fe3","Type":"ContainerStarted","Data":"cb746e1abb06384a0b649bc0c7a82eb67513958ea6ce538a1898199c35e2db26"} Dec 06 05:53:13 crc kubenswrapper[4958]: I1206 05:53:13.977075 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"e1a95265-5489-4db4-a45a-a17761dd8477","Type":"ContainerStarted","Data":"d556580cdd68320cd4547cbaad7a9bce225d4cdc3102ebce0caa40aad1eb39dd"} Dec 06 05:53:13 crc kubenswrapper[4958]: I1206 05:53:13.978814 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b9127bcb-1f38-4dcf-9fd2-0f93b5d40064","Type":"ContainerStarted","Data":"1fec8fd26caf5d8cd3eadc088d908c1d5a504912b4bee79fbe6d57f81360ddde"} Dec 06 05:53:14 crc kubenswrapper[4958]: I1206 05:53:14.005850 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-applier-0" Dec 06 05:53:14 crc kubenswrapper[4958]: I1206 05:53:14.030094 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Dec 06 05:53:14 crc kubenswrapper[4958]: I1206 05:53:14.060660 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-65548cc856-4tstl" podStartSLOduration=32.06063571 podStartE2EDuration="32.06063571s" podCreationTimestamp="2025-12-06 05:52:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:53:14.031767063 +0000 UTC m=+1504.565537816" watchObservedRunningTime="2025-12-06 05:53:14.06063571 +0000 UTC m=+1504.594406473" Dec 06 05:53:14 crc kubenswrapper[4958]: I1206 05:53:14.220791 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-578598f949-w55ph" Dec 06 05:53:14 crc kubenswrapper[4958]: I1206 05:53:14.268868 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27d884d6-231d-4782-bb16-d2664b44e18f-config\") pod \"27d884d6-231d-4782-bb16-d2664b44e18f\" (UID: \"27d884d6-231d-4782-bb16-d2664b44e18f\") " Dec 06 05:53:14 crc kubenswrapper[4958]: I1206 05:53:14.269036 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/27d884d6-231d-4782-bb16-d2664b44e18f-dns-swift-storage-0\") pod \"27d884d6-231d-4782-bb16-d2664b44e18f\" (UID: \"27d884d6-231d-4782-bb16-d2664b44e18f\") " Dec 06 05:53:14 crc kubenswrapper[4958]: I1206 05:53:14.269082 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x6bqr\" (UniqueName: \"kubernetes.io/projected/27d884d6-231d-4782-bb16-d2664b44e18f-kube-api-access-x6bqr\") pod \"27d884d6-231d-4782-bb16-d2664b44e18f\" (UID: \"27d884d6-231d-4782-bb16-d2664b44e18f\") " Dec 06 05:53:14 crc kubenswrapper[4958]: I1206 05:53:14.269121 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/27d884d6-231d-4782-bb16-d2664b44e18f-ovsdbserver-sb\") pod \"27d884d6-231d-4782-bb16-d2664b44e18f\" (UID: \"27d884d6-231d-4782-bb16-d2664b44e18f\") " Dec 06 05:53:14 crc kubenswrapper[4958]: I1206 05:53:14.269155 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/27d884d6-231d-4782-bb16-d2664b44e18f-dns-svc\") pod \"27d884d6-231d-4782-bb16-d2664b44e18f\" (UID: \"27d884d6-231d-4782-bb16-d2664b44e18f\") " Dec 06 05:53:14 crc kubenswrapper[4958]: I1206 05:53:14.269234 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/27d884d6-231d-4782-bb16-d2664b44e18f-ovsdbserver-nb\") pod \"27d884d6-231d-4782-bb16-d2664b44e18f\" (UID: \"27d884d6-231d-4782-bb16-d2664b44e18f\") " Dec 06 05:53:14 crc kubenswrapper[4958]: I1206 05:53:14.277512 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27d884d6-231d-4782-bb16-d2664b44e18f-kube-api-access-x6bqr" (OuterVolumeSpecName: "kube-api-access-x6bqr") pod "27d884d6-231d-4782-bb16-d2664b44e18f" (UID: "27d884d6-231d-4782-bb16-d2664b44e18f"). InnerVolumeSpecName "kube-api-access-x6bqr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:53:14 crc kubenswrapper[4958]: I1206 05:53:14.295731 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7b7f9bd7cc-jjrbv" Dec 06 05:53:14 crc kubenswrapper[4958]: I1206 05:53:14.366016 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27d884d6-231d-4782-bb16-d2664b44e18f-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "27d884d6-231d-4782-bb16-d2664b44e18f" (UID: "27d884d6-231d-4782-bb16-d2664b44e18f"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:53:14 crc kubenswrapper[4958]: I1206 05:53:14.383199 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x6bqr\" (UniqueName: \"kubernetes.io/projected/27d884d6-231d-4782-bb16-d2664b44e18f-kube-api-access-x6bqr\") on node \"crc\" DevicePath \"\"" Dec 06 05:53:14 crc kubenswrapper[4958]: I1206 05:53:14.383236 4958 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/27d884d6-231d-4782-bb16-d2664b44e18f-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 06 05:53:14 crc kubenswrapper[4958]: I1206 05:53:14.386974 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27d884d6-231d-4782-bb16-d2664b44e18f-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "27d884d6-231d-4782-bb16-d2664b44e18f" (UID: "27d884d6-231d-4782-bb16-d2664b44e18f"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:53:14 crc kubenswrapper[4958]: I1206 05:53:14.389840 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27d884d6-231d-4782-bb16-d2664b44e18f-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "27d884d6-231d-4782-bb16-d2664b44e18f" (UID: "27d884d6-231d-4782-bb16-d2664b44e18f"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:53:14 crc kubenswrapper[4958]: I1206 05:53:14.435305 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27d884d6-231d-4782-bb16-d2664b44e18f-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "27d884d6-231d-4782-bb16-d2664b44e18f" (UID: "27d884d6-231d-4782-bb16-d2664b44e18f"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:53:14 crc kubenswrapper[4958]: I1206 05:53:14.443869 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/27d884d6-231d-4782-bb16-d2664b44e18f-config" (OuterVolumeSpecName: "config") pod "27d884d6-231d-4782-bb16-d2664b44e18f" (UID: "27d884d6-231d-4782-bb16-d2664b44e18f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:53:14 crc kubenswrapper[4958]: I1206 05:53:14.486333 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/27d884d6-231d-4782-bb16-d2664b44e18f-config\") on node \"crc\" DevicePath \"\"" Dec 06 05:53:14 crc kubenswrapper[4958]: I1206 05:53:14.486361 4958 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/27d884d6-231d-4782-bb16-d2664b44e18f-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 06 05:53:14 crc kubenswrapper[4958]: I1206 05:53:14.486371 4958 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/27d884d6-231d-4782-bb16-d2664b44e18f-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 06 05:53:14 crc kubenswrapper[4958]: I1206 05:53:14.486380 4958 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/27d884d6-231d-4782-bb16-d2664b44e18f-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 06 05:53:14 crc kubenswrapper[4958]: I1206 05:53:14.989710 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" event={"ID":"c13528c0-da5d-4d55-9155-2c29c33edfc4","Type":"ContainerStarted","Data":"3316d358950549dbdd244c3eebc5b98b6095cbbbc7ab60159a9410ea95b5b950"} Dec 06 05:53:14 crc kubenswrapper[4958]: I1206 05:53:14.992637 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578598f949-w55ph" event={"ID":"27d884d6-231d-4782-bb16-d2664b44e18f","Type":"ContainerDied","Data":"fb4108ef28b345a839ced74b87486f897a9051d1512b431604401bf163f4a994"} Dec 06 05:53:14 crc kubenswrapper[4958]: I1206 05:53:14.992800 4958 scope.go:117] "RemoveContainer" containerID="2a4aa79aa981947115a5ee287b840af4315aa2584406634bf87d4f6ae7f4971a" Dec 06 05:53:14 crc kubenswrapper[4958]: I1206 05:53:14.993236 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-578598f949-w55ph" Dec 06 05:53:14 crc kubenswrapper[4958]: I1206 05:53:14.993751 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Dec 06 05:53:15 crc kubenswrapper[4958]: I1206 05:53:15.026975 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-776ddc8896-9vdrs" podStartSLOduration=33.026944531 podStartE2EDuration="33.026944531s" podCreationTimestamp="2025-12-06 05:52:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:53:15.014553447 +0000 UTC m=+1505.548324210" watchObservedRunningTime="2025-12-06 05:53:15.026944531 +0000 UTC m=+1505.560715304" Dec 06 05:53:15 crc kubenswrapper[4958]: I1206 05:53:15.028815 4958 scope.go:117] "RemoveContainer" containerID="836ddca1b55286d5e654dca05ec6d2c671f6fa9665927997f762aeb29812d52a" Dec 06 05:53:15 crc kubenswrapper[4958]: I1206 05:53:15.036509 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-applier-0" Dec 06 05:53:15 crc kubenswrapper[4958]: I1206 05:53:15.051295 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Dec 06 05:53:15 crc kubenswrapper[4958]: I1206 05:53:15.061977 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=11.061954442 podStartE2EDuration="11.061954442s" podCreationTimestamp="2025-12-06 05:53:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:53:15.041066 +0000 UTC m=+1505.574836773" watchObservedRunningTime="2025-12-06 05:53:15.061954442 +0000 UTC m=+1505.595725205" Dec 06 05:53:15 crc kubenswrapper[4958]: I1206 05:53:15.122569 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-applier-0"] Dec 06 05:53:15 crc kubenswrapper[4958]: I1206 05:53:15.167191 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Dec 06 05:53:15 crc kubenswrapper[4958]: I1206 05:53:15.176650 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-578598f949-w55ph"] Dec 06 05:53:15 crc kubenswrapper[4958]: I1206 05:53:15.184608 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-578598f949-w55ph"] Dec 06 05:53:15 crc kubenswrapper[4958]: I1206 05:53:15.776342 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27d884d6-231d-4782-bb16-d2664b44e18f" path="/var/lib/kubelet/pods/27d884d6-231d-4782-bb16-d2664b44e18f/volumes" Dec 06 05:53:16 crc kubenswrapper[4958]: I1206 05:53:16.004008 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b9127bcb-1f38-4dcf-9fd2-0f93b5d40064","Type":"ContainerStarted","Data":"d5e20771a393221a65f32ee148632e0b94aae3d999178667707009ae14db6c73"} Dec 06 05:53:16 crc kubenswrapper[4958]: I1206 05:53:16.005792 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c10dac20-d290-48a1-86b2-9d4969ad3bfc","Type":"ContainerStarted","Data":"bf04e5020d79064b790d1b2631c5a2f6ce55738a858ddf71a3d041b75b7e9be0"} Dec 06 05:53:16 crc kubenswrapper[4958]: I1206 05:53:16.008098 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-7cf77b4997-jq68d" event={"ID":"382db5d4-9ab1-4fe8-a35e-1f10c1690f06","Type":"ContainerStarted","Data":"ab61d954e4ba23150e0e3e0a36f2406f94134a9e29045175c273bd95cf01c8f3"} Dec 06 05:53:16 crc kubenswrapper[4958]: I1206 05:53:16.563670 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-78555484d5-rrzlr" Dec 06 05:53:17 crc kubenswrapper[4958]: I1206 05:53:17.025126 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="c10dac20-d290-48a1-86b2-9d4969ad3bfc" containerName="glance-log" containerID="cri-o://08e7498b8f37c4b0d8355a05f46f246024c8592c6bb44fd04b33d270c68cf352" gracePeriod=30 Dec 06 05:53:17 crc kubenswrapper[4958]: I1206 05:53:17.026040 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-applier-0" podUID="92bfdbb2-cdd9-49b3-80cb-5aa52422d18e" containerName="watcher-applier" containerID="cri-o://dd358b0ecd436d9c9a2637581f113a5e82db356ffb0213e22b2442b68aa080e4" gracePeriod=30 Dec 06 05:53:17 crc kubenswrapper[4958]: I1206 05:53:17.026282 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="b9127bcb-1f38-4dcf-9fd2-0f93b5d40064" containerName="glance-log" containerID="cri-o://1fec8fd26caf5d8cd3eadc088d908c1d5a504912b4bee79fbe6d57f81360ddde" gracePeriod=30 Dec 06 05:53:17 crc kubenswrapper[4958]: I1206 05:53:17.026390 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="b9127bcb-1f38-4dcf-9fd2-0f93b5d40064" containerName="glance-httpd" containerID="cri-o://d5e20771a393221a65f32ee148632e0b94aae3d999178667707009ae14db6c73" gracePeriod=30 Dec 06 05:53:17 crc kubenswrapper[4958]: I1206 05:53:17.026552 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-decision-engine-0" podUID="e6eb8aab-9b25-4861-972a-b100ba14ab24" containerName="watcher-decision-engine" containerID="cri-o://75dcd5f6a58bf6cb43a0b75909a5103cff83f1e8f7355740d2c6353495668b30" gracePeriod=30 Dec 06 05:53:17 crc kubenswrapper[4958]: I1206 05:53:17.026621 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="c10dac20-d290-48a1-86b2-9d4969ad3bfc" containerName="glance-httpd" containerID="cri-o://bf04e5020d79064b790d1b2631c5a2f6ce55738a858ddf71a3d041b75b7e9be0" gracePeriod=30 Dec 06 05:53:17 crc kubenswrapper[4958]: I1206 05:53:17.026668 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7cf77b4997-jq68d" Dec 06 05:53:17 crc kubenswrapper[4958]: I1206 05:53:17.062968 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=7.062953162 podStartE2EDuration="7.062953162s" podCreationTimestamp="2025-12-06 05:53:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:53:17.059812688 +0000 UTC m=+1507.593583451" watchObservedRunningTime="2025-12-06 05:53:17.062953162 +0000 UTC m=+1507.596723925" Dec 06 05:53:17 crc kubenswrapper[4958]: I1206 05:53:17.102091 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7cf77b4997-jq68d" podStartSLOduration=8.102073314 podStartE2EDuration="8.102073314s" 
podCreationTimestamp="2025-12-06 05:53:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:53:17.0889144 +0000 UTC m=+1507.622685163" watchObservedRunningTime="2025-12-06 05:53:17.102073314 +0000 UTC m=+1507.635844077" Dec 06 05:53:17 crc kubenswrapper[4958]: I1206 05:53:17.124645 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=8.12462712 podStartE2EDuration="8.12462712s" podCreationTimestamp="2025-12-06 05:53:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:53:17.116739408 +0000 UTC m=+1507.650510171" watchObservedRunningTime="2025-12-06 05:53:17.12462712 +0000 UTC m=+1507.658397883" Dec 06 05:53:18 crc kubenswrapper[4958]: I1206 05:53:18.037382 4958 generic.go:334] "Generic (PLEG): container finished" podID="c10dac20-d290-48a1-86b2-9d4969ad3bfc" containerID="08e7498b8f37c4b0d8355a05f46f246024c8592c6bb44fd04b33d270c68cf352" exitCode=143 Dec 06 05:53:18 crc kubenswrapper[4958]: I1206 05:53:18.037706 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c10dac20-d290-48a1-86b2-9d4969ad3bfc","Type":"ContainerDied","Data":"08e7498b8f37c4b0d8355a05f46f246024c8592c6bb44fd04b33d270c68cf352"} Dec 06 05:53:18 crc kubenswrapper[4958]: I1206 05:53:18.039745 4958 generic.go:334] "Generic (PLEG): container finished" podID="b9127bcb-1f38-4dcf-9fd2-0f93b5d40064" containerID="1fec8fd26caf5d8cd3eadc088d908c1d5a504912b4bee79fbe6d57f81360ddde" exitCode=143 Dec 06 05:53:18 crc kubenswrapper[4958]: I1206 05:53:18.039851 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b9127bcb-1f38-4dcf-9fd2-0f93b5d40064","Type":"ContainerDied","Data":"1fec8fd26caf5d8cd3eadc088d908c1d5a504912b4bee79fbe6d57f81360ddde"} Dec 06 05:53:18 crc kubenswrapper[4958]: E1206 05:53:18.920083 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="dd358b0ecd436d9c9a2637581f113a5e82db356ffb0213e22b2442b68aa080e4" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Dec 06 05:53:18 crc kubenswrapper[4958]: E1206 05:53:18.924684 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="dd358b0ecd436d9c9a2637581f113a5e82db356ffb0213e22b2442b68aa080e4" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Dec 06 05:53:18 crc kubenswrapper[4958]: E1206 05:53:18.928025 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="dd358b0ecd436d9c9a2637581f113a5e82db356ffb0213e22b2442b68aa080e4" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Dec 06 05:53:18 crc kubenswrapper[4958]: E1206 05:53:18.928074 4958 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/watcher-applier-0" podUID="92bfdbb2-cdd9-49b3-80cb-5aa52422d18e" 
containerName="watcher-applier" Dec 06 05:53:19 crc kubenswrapper[4958]: I1206 05:53:19.051694 4958 generic.go:334] "Generic (PLEG): container finished" podID="b9127bcb-1f38-4dcf-9fd2-0f93b5d40064" containerID="d5e20771a393221a65f32ee148632e0b94aae3d999178667707009ae14db6c73" exitCode=0 Dec 06 05:53:19 crc kubenswrapper[4958]: I1206 05:53:19.051764 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b9127bcb-1f38-4dcf-9fd2-0f93b5d40064","Type":"ContainerDied","Data":"d5e20771a393221a65f32ee148632e0b94aae3d999178667707009ae14db6c73"} Dec 06 05:53:19 crc kubenswrapper[4958]: I1206 05:53:19.054208 4958 generic.go:334] "Generic (PLEG): container finished" podID="c10dac20-d290-48a1-86b2-9d4969ad3bfc" containerID="bf04e5020d79064b790d1b2631c5a2f6ce55738a858ddf71a3d041b75b7e9be0" exitCode=0 Dec 06 05:53:19 crc kubenswrapper[4958]: I1206 05:53:19.054249 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c10dac20-d290-48a1-86b2-9d4969ad3bfc","Type":"ContainerDied","Data":"bf04e5020d79064b790d1b2631c5a2f6ce55738a858ddf71a3d041b75b7e9be0"} Dec 06 05:53:19 crc kubenswrapper[4958]: I1206 05:53:19.215256 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-578598f949-w55ph" podUID="27d884d6-231d-4782-bb16-d2664b44e18f" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.155:5353: i/o timeout" Dec 06 05:53:19 crc kubenswrapper[4958]: I1206 05:53:19.843436 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Dec 06 05:53:19 crc kubenswrapper[4958]: I1206 05:53:19.843513 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Dec 06 05:53:20 crc kubenswrapper[4958]: I1206 05:53:20.258638 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7cf77b4997-jq68d" Dec 06 05:53:20 crc kubenswrapper[4958]: I1206 05:53:20.345550 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55b99bf79c-kcrvt"] Dec 06 05:53:20 crc kubenswrapper[4958]: I1206 05:53:20.345838 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-55b99bf79c-kcrvt" podUID="d2b3678d-be78-4e2d-930a-866c6d404166" containerName="dnsmasq-dns" containerID="cri-o://c42009c782b2b103b1086d456f9a5f9bb0d1ca441436d8928baca766619079b1" gracePeriod=10 Dec 06 05:53:21 crc kubenswrapper[4958]: I1206 05:53:21.086747 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-55b99bf79c-kcrvt" podUID="d2b3678d-be78-4e2d-930a-866c6d404166" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.144:5353: connect: connection refused" Dec 06 05:53:22 crc kubenswrapper[4958]: I1206 05:53:22.434565 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-776ddc8896-9vdrs" Dec 06 05:53:22 crc kubenswrapper[4958]: I1206 05:53:22.435607 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-776ddc8896-9vdrs" Dec 06 05:53:22 crc kubenswrapper[4958]: I1206 05:53:22.483743 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Dec 06 05:53:22 crc kubenswrapper[4958]: I1206 05:53:22.521965 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-65548cc856-4tstl" Dec 06 05:53:22 crc 
kubenswrapper[4958]: I1206 05:53:22.522598 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-65548cc856-4tstl" Dec 06 05:53:22 crc kubenswrapper[4958]: I1206 05:53:22.944392 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 06 05:53:22 crc kubenswrapper[4958]: I1206 05:53:22.960955 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.086547 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b9127bcb-1f38-4dcf-9fd2-0f93b5d40064","Type":"ContainerDied","Data":"e0edff085149d8023b045f089005e1cfdf58737b3ac78591d7dcf01b7416750a"} Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.086602 4958 scope.go:117] "RemoveContainer" containerID="d5e20771a393221a65f32ee148632e0b94aae3d999178667707009ae14db6c73" Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.086747 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.093641 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.094130 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"c10dac20-d290-48a1-86b2-9d4969ad3bfc","Type":"ContainerDied","Data":"c174d082d74c2260c0993403c089ca3f04654c1f97fb3b60d7d86d7a9fce4c4c"} Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.102162 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-29s8g\" (UniqueName: \"kubernetes.io/projected/b9127bcb-1f38-4dcf-9fd2-0f93b5d40064-kube-api-access-29s8g\") pod \"b9127bcb-1f38-4dcf-9fd2-0f93b5d40064\" (UID: \"b9127bcb-1f38-4dcf-9fd2-0f93b5d40064\") " Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.102224 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"c10dac20-d290-48a1-86b2-9d4969ad3bfc\" (UID: \"c10dac20-d290-48a1-86b2-9d4969ad3bfc\") " Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.102254 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c10dac20-d290-48a1-86b2-9d4969ad3bfc-config-data\") pod \"c10dac20-d290-48a1-86b2-9d4969ad3bfc\" (UID: \"c10dac20-d290-48a1-86b2-9d4969ad3bfc\") " Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.102289 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b9127bcb-1f38-4dcf-9fd2-0f93b5d40064-httpd-run\") pod \"b9127bcb-1f38-4dcf-9fd2-0f93b5d40064\" (UID: \"b9127bcb-1f38-4dcf-9fd2-0f93b5d40064\") " Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.102381 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c10dac20-d290-48a1-86b2-9d4969ad3bfc-logs\") pod \"c10dac20-d290-48a1-86b2-9d4969ad3bfc\" (UID: \"c10dac20-d290-48a1-86b2-9d4969ad3bfc\") " Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.102436 4958 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c10dac20-d290-48a1-86b2-9d4969ad3bfc-httpd-run\") pod \"c10dac20-d290-48a1-86b2-9d4969ad3bfc\" (UID: \"c10dac20-d290-48a1-86b2-9d4969ad3bfc\") " Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.102571 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c10dac20-d290-48a1-86b2-9d4969ad3bfc-scripts\") pod \"c10dac20-d290-48a1-86b2-9d4969ad3bfc\" (UID: \"c10dac20-d290-48a1-86b2-9d4969ad3bfc\") " Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.102606 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-629zc\" (UniqueName: \"kubernetes.io/projected/c10dac20-d290-48a1-86b2-9d4969ad3bfc-kube-api-access-629zc\") pod \"c10dac20-d290-48a1-86b2-9d4969ad3bfc\" (UID: \"c10dac20-d290-48a1-86b2-9d4969ad3bfc\") " Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.102642 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b9127bcb-1f38-4dcf-9fd2-0f93b5d40064-logs\") pod \"b9127bcb-1f38-4dcf-9fd2-0f93b5d40064\" (UID: \"b9127bcb-1f38-4dcf-9fd2-0f93b5d40064\") " Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.102687 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9127bcb-1f38-4dcf-9fd2-0f93b5d40064-config-data\") pod \"b9127bcb-1f38-4dcf-9fd2-0f93b5d40064\" (UID: \"b9127bcb-1f38-4dcf-9fd2-0f93b5d40064\") " Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.102759 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c10dac20-d290-48a1-86b2-9d4969ad3bfc-combined-ca-bundle\") pod \"c10dac20-d290-48a1-86b2-9d4969ad3bfc\" (UID: \"c10dac20-d290-48a1-86b2-9d4969ad3bfc\") " Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.102787 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9127bcb-1f38-4dcf-9fd2-0f93b5d40064-combined-ca-bundle\") pod \"b9127bcb-1f38-4dcf-9fd2-0f93b5d40064\" (UID: \"b9127bcb-1f38-4dcf-9fd2-0f93b5d40064\") " Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.102807 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"b9127bcb-1f38-4dcf-9fd2-0f93b5d40064\" (UID: \"b9127bcb-1f38-4dcf-9fd2-0f93b5d40064\") " Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.102839 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9127bcb-1f38-4dcf-9fd2-0f93b5d40064-scripts\") pod \"b9127bcb-1f38-4dcf-9fd2-0f93b5d40064\" (UID: \"b9127bcb-1f38-4dcf-9fd2-0f93b5d40064\") " Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.104823 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c10dac20-d290-48a1-86b2-9d4969ad3bfc-logs" (OuterVolumeSpecName: "logs") pod "c10dac20-d290-48a1-86b2-9d4969ad3bfc" (UID: "c10dac20-d290-48a1-86b2-9d4969ad3bfc"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.104989 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9127bcb-1f38-4dcf-9fd2-0f93b5d40064-logs" (OuterVolumeSpecName: "logs") pod "b9127bcb-1f38-4dcf-9fd2-0f93b5d40064" (UID: "b9127bcb-1f38-4dcf-9fd2-0f93b5d40064"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.105196 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9127bcb-1f38-4dcf-9fd2-0f93b5d40064-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "b9127bcb-1f38-4dcf-9fd2-0f93b5d40064" (UID: "b9127bcb-1f38-4dcf-9fd2-0f93b5d40064"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.106498 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c10dac20-d290-48a1-86b2-9d4969ad3bfc-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "c10dac20-d290-48a1-86b2-9d4969ad3bfc" (UID: "c10dac20-d290-48a1-86b2-9d4969ad3bfc"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.115791 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance") pod "c10dac20-d290-48a1-86b2-9d4969ad3bfc" (UID: "c10dac20-d290-48a1-86b2-9d4969ad3bfc"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.124651 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9127bcb-1f38-4dcf-9fd2-0f93b5d40064-kube-api-access-29s8g" (OuterVolumeSpecName: "kube-api-access-29s8g") pod "b9127bcb-1f38-4dcf-9fd2-0f93b5d40064" (UID: "b9127bcb-1f38-4dcf-9fd2-0f93b5d40064"). InnerVolumeSpecName "kube-api-access-29s8g". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.124760 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c10dac20-d290-48a1-86b2-9d4969ad3bfc-scripts" (OuterVolumeSpecName: "scripts") pod "c10dac20-d290-48a1-86b2-9d4969ad3bfc" (UID: "c10dac20-d290-48a1-86b2-9d4969ad3bfc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.124884 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9127bcb-1f38-4dcf-9fd2-0f93b5d40064-scripts" (OuterVolumeSpecName: "scripts") pod "b9127bcb-1f38-4dcf-9fd2-0f93b5d40064" (UID: "b9127bcb-1f38-4dcf-9fd2-0f93b5d40064"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.129571 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c10dac20-d290-48a1-86b2-9d4969ad3bfc-kube-api-access-629zc" (OuterVolumeSpecName: "kube-api-access-629zc") pod "c10dac20-d290-48a1-86b2-9d4969ad3bfc" (UID: "c10dac20-d290-48a1-86b2-9d4969ad3bfc"). InnerVolumeSpecName "kube-api-access-629zc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.132672 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "b9127bcb-1f38-4dcf-9fd2-0f93b5d40064" (UID: "b9127bcb-1f38-4dcf-9fd2-0f93b5d40064"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.135905 4958 scope.go:117] "RemoveContainer" containerID="1fec8fd26caf5d8cd3eadc088d908c1d5a504912b4bee79fbe6d57f81360ddde" Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.155714 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c10dac20-d290-48a1-86b2-9d4969ad3bfc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c10dac20-d290-48a1-86b2-9d4969ad3bfc" (UID: "c10dac20-d290-48a1-86b2-9d4969ad3bfc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.156504 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9127bcb-1f38-4dcf-9fd2-0f93b5d40064-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b9127bcb-1f38-4dcf-9fd2-0f93b5d40064" (UID: "b9127bcb-1f38-4dcf-9fd2-0f93b5d40064"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.177698 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c10dac20-d290-48a1-86b2-9d4969ad3bfc-config-data" (OuterVolumeSpecName: "config-data") pod "c10dac20-d290-48a1-86b2-9d4969ad3bfc" (UID: "c10dac20-d290-48a1-86b2-9d4969ad3bfc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.189885 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9127bcb-1f38-4dcf-9fd2-0f93b5d40064-config-data" (OuterVolumeSpecName: "config-data") pod "b9127bcb-1f38-4dcf-9fd2-0f93b5d40064" (UID: "b9127bcb-1f38-4dcf-9fd2-0f93b5d40064"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.205172 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-29s8g\" (UniqueName: \"kubernetes.io/projected/b9127bcb-1f38-4dcf-9fd2-0f93b5d40064-kube-api-access-29s8g\") on node \"crc\" DevicePath \"\"" Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.205214 4958 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.205226 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c10dac20-d290-48a1-86b2-9d4969ad3bfc-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.205238 4958 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b9127bcb-1f38-4dcf-9fd2-0f93b5d40064-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.205248 4958 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c10dac20-d290-48a1-86b2-9d4969ad3bfc-logs\") on node \"crc\" DevicePath \"\"" Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.205258 4958 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c10dac20-d290-48a1-86b2-9d4969ad3bfc-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.205268 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c10dac20-d290-48a1-86b2-9d4969ad3bfc-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.205278 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-629zc\" (UniqueName: \"kubernetes.io/projected/c10dac20-d290-48a1-86b2-9d4969ad3bfc-kube-api-access-629zc\") on node \"crc\" DevicePath \"\"" Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.205289 4958 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b9127bcb-1f38-4dcf-9fd2-0f93b5d40064-logs\") on node \"crc\" DevicePath \"\"" Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.205300 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9127bcb-1f38-4dcf-9fd2-0f93b5d40064-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.205312 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c10dac20-d290-48a1-86b2-9d4969ad3bfc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.205323 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9127bcb-1f38-4dcf-9fd2-0f93b5d40064-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.205341 4958 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.205354 4958 reconciler_common.go:293] "Volume 
detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b9127bcb-1f38-4dcf-9fd2-0f93b5d40064-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.262617 4958 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.274715 4958 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.275357 4958 scope.go:117] "RemoveContainer" containerID="bf04e5020d79064b790d1b2631c5a2f6ce55738a858ddf71a3d041b75b7e9be0" Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.307329 4958 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.307737 4958 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.322660 4958 scope.go:117] "RemoveContainer" containerID="08e7498b8f37c4b0d8355a05f46f246024c8592c6bb44fd04b33d270c68cf352" Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.443391 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.457875 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.467445 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 06 05:53:23 crc kubenswrapper[4958]: E1206 05:53:23.476490 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27d884d6-231d-4782-bb16-d2664b44e18f" containerName="dnsmasq-dns" Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.476524 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="27d884d6-231d-4782-bb16-d2664b44e18f" containerName="dnsmasq-dns" Dec 06 05:53:23 crc kubenswrapper[4958]: E1206 05:53:23.476538 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9127bcb-1f38-4dcf-9fd2-0f93b5d40064" containerName="glance-httpd" Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.476544 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9127bcb-1f38-4dcf-9fd2-0f93b5d40064" containerName="glance-httpd" Dec 06 05:53:23 crc kubenswrapper[4958]: E1206 05:53:23.476565 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9127bcb-1f38-4dcf-9fd2-0f93b5d40064" containerName="glance-log" Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.476571 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9127bcb-1f38-4dcf-9fd2-0f93b5d40064" containerName="glance-log" Dec 06 05:53:23 crc kubenswrapper[4958]: E1206 05:53:23.476581 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c10dac20-d290-48a1-86b2-9d4969ad3bfc" containerName="glance-httpd" Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.476587 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="c10dac20-d290-48a1-86b2-9d4969ad3bfc" containerName="glance-httpd" Dec 06 05:53:23 crc kubenswrapper[4958]: E1206 
05:53:23.476605 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27d884d6-231d-4782-bb16-d2664b44e18f" containerName="init" Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.476611 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="27d884d6-231d-4782-bb16-d2664b44e18f" containerName="init" Dec 06 05:53:23 crc kubenswrapper[4958]: E1206 05:53:23.476620 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c10dac20-d290-48a1-86b2-9d4969ad3bfc" containerName="glance-log" Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.476625 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="c10dac20-d290-48a1-86b2-9d4969ad3bfc" containerName="glance-log" Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.476828 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="c10dac20-d290-48a1-86b2-9d4969ad3bfc" containerName="glance-log" Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.476845 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="c10dac20-d290-48a1-86b2-9d4969ad3bfc" containerName="glance-httpd" Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.476859 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9127bcb-1f38-4dcf-9fd2-0f93b5d40064" containerName="glance-httpd" Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.476871 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9127bcb-1f38-4dcf-9fd2-0f93b5d40064" containerName="glance-log" Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.476886 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="27d884d6-231d-4782-bb16-d2664b44e18f" containerName="dnsmasq-dns" Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.477934 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.478020 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.490102 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.490387 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.548848 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.565119 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.583400 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.592462 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.596286 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.596696 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.616533 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.624393 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b8f111e9-7ea4-4c7a-935a-95c9a90fea92-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"b8f111e9-7ea4-4c7a-935a-95c9a90fea92\") " pod="openstack/glance-default-external-api-0" Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.625001 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8f111e9-7ea4-4c7a-935a-95c9a90fea92-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b8f111e9-7ea4-4c7a-935a-95c9a90fea92\") " pod="openstack/glance-default-external-api-0" Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.625158 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b8f111e9-7ea4-4c7a-935a-95c9a90fea92-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b8f111e9-7ea4-4c7a-935a-95c9a90fea92\") " pod="openstack/glance-default-external-api-0" Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.625313 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5f2v\" (UniqueName: \"kubernetes.io/projected/b8f111e9-7ea4-4c7a-935a-95c9a90fea92-kube-api-access-n5f2v\") pod \"glance-default-external-api-0\" (UID: \"b8f111e9-7ea4-4c7a-935a-95c9a90fea92\") " pod="openstack/glance-default-external-api-0" Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.625439 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8f111e9-7ea4-4c7a-935a-95c9a90fea92-config-data\") pod \"glance-default-external-api-0\" (UID: \"b8f111e9-7ea4-4c7a-935a-95c9a90fea92\") " pod="openstack/glance-default-external-api-0" Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.625547 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b8f111e9-7ea4-4c7a-935a-95c9a90fea92-logs\") pod \"glance-default-external-api-0\" (UID: \"b8f111e9-7ea4-4c7a-935a-95c9a90fea92\") " pod="openstack/glance-default-external-api-0" Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.625849 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"b8f111e9-7ea4-4c7a-935a-95c9a90fea92\") " pod="openstack/glance-default-external-api-0" Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.626230 4958 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8f111e9-7ea4-4c7a-935a-95c9a90fea92-scripts\") pod \"glance-default-external-api-0\" (UID: \"b8f111e9-7ea4-4c7a-935a-95c9a90fea92\") " pod="openstack/glance-default-external-api-0" Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.728328 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b8f111e9-7ea4-4c7a-935a-95c9a90fea92-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"b8f111e9-7ea4-4c7a-935a-95c9a90fea92\") " pod="openstack/glance-default-external-api-0" Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.728392 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6c878aa-473c-4e55-bbdc-29fee05ff3a3-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d6c878aa-473c-4e55-bbdc-29fee05ff3a3\") " pod="openstack/glance-default-internal-api-0" Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.728427 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8f111e9-7ea4-4c7a-935a-95c9a90fea92-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b8f111e9-7ea4-4c7a-935a-95c9a90fea92\") " pod="openstack/glance-default-external-api-0" Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.728457 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b8f111e9-7ea4-4c7a-935a-95c9a90fea92-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b8f111e9-7ea4-4c7a-935a-95c9a90fea92\") " pod="openstack/glance-default-external-api-0" Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.728515 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5f2v\" (UniqueName: \"kubernetes.io/projected/b8f111e9-7ea4-4c7a-935a-95c9a90fea92-kube-api-access-n5f2v\") pod \"glance-default-external-api-0\" (UID: \"b8f111e9-7ea4-4c7a-935a-95c9a90fea92\") " pod="openstack/glance-default-external-api-0" Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.728542 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8f111e9-7ea4-4c7a-935a-95c9a90fea92-config-data\") pod \"glance-default-external-api-0\" (UID: \"b8f111e9-7ea4-4c7a-935a-95c9a90fea92\") " pod="openstack/glance-default-external-api-0" Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.728563 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b8f111e9-7ea4-4c7a-935a-95c9a90fea92-logs\") pod \"glance-default-external-api-0\" (UID: \"b8f111e9-7ea4-4c7a-935a-95c9a90fea92\") " pod="openstack/glance-default-external-api-0" Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.728591 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6c878aa-473c-4e55-bbdc-29fee05ff3a3-logs\") pod \"glance-default-internal-api-0\" (UID: \"d6c878aa-473c-4e55-bbdc-29fee05ff3a3\") " pod="openstack/glance-default-internal-api-0" Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.728625 4958 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6c878aa-473c-4e55-bbdc-29fee05ff3a3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d6c878aa-473c-4e55-bbdc-29fee05ff3a3\") " pod="openstack/glance-default-internal-api-0" Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.728677 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d6c878aa-473c-4e55-bbdc-29fee05ff3a3-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d6c878aa-473c-4e55-bbdc-29fee05ff3a3\") " pod="openstack/glance-default-internal-api-0" Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.728716 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxhb4\" (UniqueName: \"kubernetes.io/projected/d6c878aa-473c-4e55-bbdc-29fee05ff3a3-kube-api-access-kxhb4\") pod \"glance-default-internal-api-0\" (UID: \"d6c878aa-473c-4e55-bbdc-29fee05ff3a3\") " pod="openstack/glance-default-internal-api-0" Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.728756 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6c878aa-473c-4e55-bbdc-29fee05ff3a3-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d6c878aa-473c-4e55-bbdc-29fee05ff3a3\") " pod="openstack/glance-default-internal-api-0" Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.728782 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"b8f111e9-7ea4-4c7a-935a-95c9a90fea92\") " pod="openstack/glance-default-external-api-0" Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.728845 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d6c878aa-473c-4e55-bbdc-29fee05ff3a3-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d6c878aa-473c-4e55-bbdc-29fee05ff3a3\") " pod="openstack/glance-default-internal-api-0" Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.728880 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8f111e9-7ea4-4c7a-935a-95c9a90fea92-scripts\") pod \"glance-default-external-api-0\" (UID: \"b8f111e9-7ea4-4c7a-935a-95c9a90fea92\") " pod="openstack/glance-default-external-api-0" Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.728909 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"d6c878aa-473c-4e55-bbdc-29fee05ff3a3\") " pod="openstack/glance-default-internal-api-0" Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.729830 4958 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"b8f111e9-7ea4-4c7a-935a-95c9a90fea92\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-external-api-0" Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.733934 4958 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b8f111e9-7ea4-4c7a-935a-95c9a90fea92-logs\") pod \"glance-default-external-api-0\" (UID: \"b8f111e9-7ea4-4c7a-935a-95c9a90fea92\") " pod="openstack/glance-default-external-api-0" Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.734052 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b8f111e9-7ea4-4c7a-935a-95c9a90fea92-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"b8f111e9-7ea4-4c7a-935a-95c9a90fea92\") " pod="openstack/glance-default-external-api-0" Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.734329 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b8f111e9-7ea4-4c7a-935a-95c9a90fea92-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b8f111e9-7ea4-4c7a-935a-95c9a90fea92\") " pod="openstack/glance-default-external-api-0" Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.743968 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8f111e9-7ea4-4c7a-935a-95c9a90fea92-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b8f111e9-7ea4-4c7a-935a-95c9a90fea92\") " pod="openstack/glance-default-external-api-0" Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.744412 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8f111e9-7ea4-4c7a-935a-95c9a90fea92-config-data\") pod \"glance-default-external-api-0\" (UID: \"b8f111e9-7ea4-4c7a-935a-95c9a90fea92\") " pod="openstack/glance-default-external-api-0" Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.747283 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8f111e9-7ea4-4c7a-935a-95c9a90fea92-scripts\") pod \"glance-default-external-api-0\" (UID: \"b8f111e9-7ea4-4c7a-935a-95c9a90fea92\") " pod="openstack/glance-default-external-api-0" Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.754157 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5f2v\" (UniqueName: \"kubernetes.io/projected/b8f111e9-7ea4-4c7a-935a-95c9a90fea92-kube-api-access-n5f2v\") pod \"glance-default-external-api-0\" (UID: \"b8f111e9-7ea4-4c7a-935a-95c9a90fea92\") " pod="openstack/glance-default-external-api-0" Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.768871 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"b8f111e9-7ea4-4c7a-935a-95c9a90fea92\") " pod="openstack/glance-default-external-api-0" Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.780399 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9127bcb-1f38-4dcf-9fd2-0f93b5d40064" path="/var/lib/kubelet/pods/b9127bcb-1f38-4dcf-9fd2-0f93b5d40064/volumes" Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.781044 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c10dac20-d290-48a1-86b2-9d4969ad3bfc" path="/var/lib/kubelet/pods/c10dac20-d290-48a1-86b2-9d4969ad3bfc/volumes" Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.811788 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.830796 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d6c878aa-473c-4e55-bbdc-29fee05ff3a3-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d6c878aa-473c-4e55-bbdc-29fee05ff3a3\") " pod="openstack/glance-default-internal-api-0" Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.830857 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"d6c878aa-473c-4e55-bbdc-29fee05ff3a3\") " pod="openstack/glance-default-internal-api-0" Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.830940 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6c878aa-473c-4e55-bbdc-29fee05ff3a3-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d6c878aa-473c-4e55-bbdc-29fee05ff3a3\") " pod="openstack/glance-default-internal-api-0" Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.830993 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6c878aa-473c-4e55-bbdc-29fee05ff3a3-logs\") pod \"glance-default-internal-api-0\" (UID: \"d6c878aa-473c-4e55-bbdc-29fee05ff3a3\") " pod="openstack/glance-default-internal-api-0" Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.831028 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6c878aa-473c-4e55-bbdc-29fee05ff3a3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d6c878aa-473c-4e55-bbdc-29fee05ff3a3\") " pod="openstack/glance-default-internal-api-0" Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.831081 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d6c878aa-473c-4e55-bbdc-29fee05ff3a3-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d6c878aa-473c-4e55-bbdc-29fee05ff3a3\") " pod="openstack/glance-default-internal-api-0" Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.831120 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxhb4\" (UniqueName: \"kubernetes.io/projected/d6c878aa-473c-4e55-bbdc-29fee05ff3a3-kube-api-access-kxhb4\") pod \"glance-default-internal-api-0\" (UID: \"d6c878aa-473c-4e55-bbdc-29fee05ff3a3\") " pod="openstack/glance-default-internal-api-0" Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.831162 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6c878aa-473c-4e55-bbdc-29fee05ff3a3-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d6c878aa-473c-4e55-bbdc-29fee05ff3a3\") " pod="openstack/glance-default-internal-api-0" Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.832112 4958 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"d6c878aa-473c-4e55-bbdc-29fee05ff3a3\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-internal-api-0" Dec 
06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.832380 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6c878aa-473c-4e55-bbdc-29fee05ff3a3-logs\") pod \"glance-default-internal-api-0\" (UID: \"d6c878aa-473c-4e55-bbdc-29fee05ff3a3\") " pod="openstack/glance-default-internal-api-0" Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.833206 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d6c878aa-473c-4e55-bbdc-29fee05ff3a3-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d6c878aa-473c-4e55-bbdc-29fee05ff3a3\") " pod="openstack/glance-default-internal-api-0" Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.839374 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6c878aa-473c-4e55-bbdc-29fee05ff3a3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d6c878aa-473c-4e55-bbdc-29fee05ff3a3\") " pod="openstack/glance-default-internal-api-0" Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.840755 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6c878aa-473c-4e55-bbdc-29fee05ff3a3-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d6c878aa-473c-4e55-bbdc-29fee05ff3a3\") " pod="openstack/glance-default-internal-api-0" Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.842239 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6c878aa-473c-4e55-bbdc-29fee05ff3a3-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d6c878aa-473c-4e55-bbdc-29fee05ff3a3\") " pod="openstack/glance-default-internal-api-0" Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.847055 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d6c878aa-473c-4e55-bbdc-29fee05ff3a3-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d6c878aa-473c-4e55-bbdc-29fee05ff3a3\") " pod="openstack/glance-default-internal-api-0" Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.864537 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxhb4\" (UniqueName: \"kubernetes.io/projected/d6c878aa-473c-4e55-bbdc-29fee05ff3a3-kube-api-access-kxhb4\") pod \"glance-default-internal-api-0\" (UID: \"d6c878aa-473c-4e55-bbdc-29fee05ff3a3\") " pod="openstack/glance-default-internal-api-0" Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.877311 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"d6c878aa-473c-4e55-bbdc-29fee05ff3a3\") " pod="openstack/glance-default-internal-api-0" Dec 06 05:53:23 crc kubenswrapper[4958]: I1206 05:53:23.917720 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 06 05:53:23 crc kubenswrapper[4958]: E1206 05:53:23.924064 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="dd358b0ecd436d9c9a2637581f113a5e82db356ffb0213e22b2442b68aa080e4" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Dec 06 05:53:23 crc kubenswrapper[4958]: E1206 05:53:23.929878 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="dd358b0ecd436d9c9a2637581f113a5e82db356ffb0213e22b2442b68aa080e4" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Dec 06 05:53:23 crc kubenswrapper[4958]: E1206 05:53:23.931020 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="dd358b0ecd436d9c9a2637581f113a5e82db356ffb0213e22b2442b68aa080e4" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Dec 06 05:53:23 crc kubenswrapper[4958]: E1206 05:53:23.931132 4958 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/watcher-applier-0" podUID="92bfdbb2-cdd9-49b3-80cb-5aa52422d18e" containerName="watcher-applier" Dec 06 05:53:24 crc kubenswrapper[4958]: I1206 05:53:24.117310 4958 generic.go:334] "Generic (PLEG): container finished" podID="d2b3678d-be78-4e2d-930a-866c6d404166" containerID="c42009c782b2b103b1086d456f9a5f9bb0d1ca441436d8928baca766619079b1" exitCode=0 Dec 06 05:53:24 crc kubenswrapper[4958]: I1206 05:53:24.117527 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55b99bf79c-kcrvt" event={"ID":"d2b3678d-be78-4e2d-930a-866c6d404166","Type":"ContainerDied","Data":"c42009c782b2b103b1086d456f9a5f9bb0d1ca441436d8928baca766619079b1"} Dec 06 05:53:24 crc kubenswrapper[4958]: W1206 05:53:24.557651 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb8f111e9_7ea4_4c7a_935a_95c9a90fea92.slice/crio-159554576ed7267482742ca19d0d75d66bd9074bf28bad6d2bea25dcee7724ea WatchSource:0}: Error finding container 159554576ed7267482742ca19d0d75d66bd9074bf28bad6d2bea25dcee7724ea: Status 404 returned error can't find the container with id 159554576ed7267482742ca19d0d75d66bd9074bf28bad6d2bea25dcee7724ea Dec 06 05:53:24 crc kubenswrapper[4958]: I1206 05:53:24.559513 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 06 05:53:24 crc kubenswrapper[4958]: I1206 05:53:24.649816 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 06 05:53:24 crc kubenswrapper[4958]: I1206 05:53:24.843433 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-api-0" Dec 06 05:53:24 crc kubenswrapper[4958]: I1206 05:53:24.849284 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-api-0" Dec 06 05:53:25 crc kubenswrapper[4958]: I1206 05:53:25.133130 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/glance-default-internal-api-0" event={"ID":"d6c878aa-473c-4e55-bbdc-29fee05ff3a3","Type":"ContainerStarted","Data":"b1edae5e9378f0ac18317b54923c67715fa6e979502af17034cc57a41e24db75"} Dec 06 05:53:25 crc kubenswrapper[4958]: I1206 05:53:25.134696 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b8f111e9-7ea4-4c7a-935a-95c9a90fea92","Type":"ContainerStarted","Data":"159554576ed7267482742ca19d0d75d66bd9074bf28bad6d2bea25dcee7724ea"} Dec 06 05:53:25 crc kubenswrapper[4958]: I1206 05:53:25.147392 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Dec 06 05:53:26 crc kubenswrapper[4958]: I1206 05:53:26.086175 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-55b99bf79c-kcrvt" podUID="d2b3678d-be78-4e2d-930a-866c6d404166" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.144:5353: connect: connection refused" Dec 06 05:53:28 crc kubenswrapper[4958]: I1206 05:53:28.391968 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Dec 06 05:53:28 crc kubenswrapper[4958]: I1206 05:53:28.392567 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="e1a95265-5489-4db4-a45a-a17761dd8477" containerName="watcher-api-log" containerID="cri-o://81d7ed6a059aeb90254de3e4e79011f96c66bea1af0b0b0ba772f1dc58321085" gracePeriod=30 Dec 06 05:53:28 crc kubenswrapper[4958]: I1206 05:53:28.393153 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-api-0" podUID="e1a95265-5489-4db4-a45a-a17761dd8477" containerName="watcher-api" containerID="cri-o://d556580cdd68320cd4547cbaad7a9bce225d4cdc3102ebce0caa40aad1eb39dd" gracePeriod=30 Dec 06 05:53:28 crc kubenswrapper[4958]: E1206 05:53:28.925965 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="dd358b0ecd436d9c9a2637581f113a5e82db356ffb0213e22b2442b68aa080e4" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Dec 06 05:53:28 crc kubenswrapper[4958]: E1206 05:53:28.928829 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="dd358b0ecd436d9c9a2637581f113a5e82db356ffb0213e22b2442b68aa080e4" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Dec 06 05:53:28 crc kubenswrapper[4958]: E1206 05:53:28.932530 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="dd358b0ecd436d9c9a2637581f113a5e82db356ffb0213e22b2442b68aa080e4" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Dec 06 05:53:28 crc kubenswrapper[4958]: E1206 05:53:28.932757 4958 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/watcher-applier-0" podUID="92bfdbb2-cdd9-49b3-80cb-5aa52422d18e" containerName="watcher-applier" Dec 06 05:53:29 crc kubenswrapper[4958]: I1206 05:53:29.106774 4958 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-operators-btkdc"] Dec 06 05:53:29 crc kubenswrapper[4958]: I1206 05:53:29.108505 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-btkdc" Dec 06 05:53:29 crc kubenswrapper[4958]: I1206 05:53:29.124150 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-btkdc"] Dec 06 05:53:29 crc kubenswrapper[4958]: I1206 05:53:29.244197 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83d76c67-52ad-480c-9f07-9620f6ed6a42-catalog-content\") pod \"redhat-operators-btkdc\" (UID: \"83d76c67-52ad-480c-9f07-9620f6ed6a42\") " pod="openshift-marketplace/redhat-operators-btkdc" Dec 06 05:53:29 crc kubenswrapper[4958]: I1206 05:53:29.244269 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d65rj\" (UniqueName: \"kubernetes.io/projected/83d76c67-52ad-480c-9f07-9620f6ed6a42-kube-api-access-d65rj\") pod \"redhat-operators-btkdc\" (UID: \"83d76c67-52ad-480c-9f07-9620f6ed6a42\") " pod="openshift-marketplace/redhat-operators-btkdc" Dec 06 05:53:29 crc kubenswrapper[4958]: I1206 05:53:29.244415 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83d76c67-52ad-480c-9f07-9620f6ed6a42-utilities\") pod \"redhat-operators-btkdc\" (UID: \"83d76c67-52ad-480c-9f07-9620f6ed6a42\") " pod="openshift-marketplace/redhat-operators-btkdc" Dec 06 05:53:29 crc kubenswrapper[4958]: I1206 05:53:29.346581 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83d76c67-52ad-480c-9f07-9620f6ed6a42-catalog-content\") pod \"redhat-operators-btkdc\" (UID: \"83d76c67-52ad-480c-9f07-9620f6ed6a42\") " pod="openshift-marketplace/redhat-operators-btkdc" Dec 06 05:53:29 crc kubenswrapper[4958]: I1206 05:53:29.346665 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d65rj\" (UniqueName: \"kubernetes.io/projected/83d76c67-52ad-480c-9f07-9620f6ed6a42-kube-api-access-d65rj\") pod \"redhat-operators-btkdc\" (UID: \"83d76c67-52ad-480c-9f07-9620f6ed6a42\") " pod="openshift-marketplace/redhat-operators-btkdc" Dec 06 05:53:29 crc kubenswrapper[4958]: I1206 05:53:29.346712 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83d76c67-52ad-480c-9f07-9620f6ed6a42-utilities\") pod \"redhat-operators-btkdc\" (UID: \"83d76c67-52ad-480c-9f07-9620f6ed6a42\") " pod="openshift-marketplace/redhat-operators-btkdc" Dec 06 05:53:29 crc kubenswrapper[4958]: I1206 05:53:29.347233 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83d76c67-52ad-480c-9f07-9620f6ed6a42-utilities\") pod \"redhat-operators-btkdc\" (UID: \"83d76c67-52ad-480c-9f07-9620f6ed6a42\") " pod="openshift-marketplace/redhat-operators-btkdc" Dec 06 05:53:29 crc kubenswrapper[4958]: I1206 05:53:29.347367 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83d76c67-52ad-480c-9f07-9620f6ed6a42-catalog-content\") pod \"redhat-operators-btkdc\" (UID: \"83d76c67-52ad-480c-9f07-9620f6ed6a42\") " 
pod="openshift-marketplace/redhat-operators-btkdc" Dec 06 05:53:29 crc kubenswrapper[4958]: I1206 05:53:29.368677 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d65rj\" (UniqueName: \"kubernetes.io/projected/83d76c67-52ad-480c-9f07-9620f6ed6a42-kube-api-access-d65rj\") pod \"redhat-operators-btkdc\" (UID: \"83d76c67-52ad-480c-9f07-9620f6ed6a42\") " pod="openshift-marketplace/redhat-operators-btkdc" Dec 06 05:53:29 crc kubenswrapper[4958]: I1206 05:53:29.424593 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-btkdc" Dec 06 05:53:29 crc kubenswrapper[4958]: I1206 05:53:29.843111 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="e1a95265-5489-4db4-a45a-a17761dd8477" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.162:9322/\": dial tcp 10.217.0.162:9322: connect: connection refused" Dec 06 05:53:29 crc kubenswrapper[4958]: I1206 05:53:29.843173 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="e1a95265-5489-4db4-a45a-a17761dd8477" containerName="watcher-api-log" probeResult="failure" output="Get \"http://10.217.0.162:9322/\": dial tcp 10.217.0.162:9322: connect: connection refused" Dec 06 05:53:33 crc kubenswrapper[4958]: E1206 05:53:33.899661 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="75dcd5f6a58bf6cb43a0b75909a5103cff83f1e8f7355740d2c6353495668b30" cmd=["/usr/bin/pgrep","-f","-r","DRST","watcher-decision-engine"] Dec 06 05:53:33 crc kubenswrapper[4958]: E1206 05:53:33.901422 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="75dcd5f6a58bf6cb43a0b75909a5103cff83f1e8f7355740d2c6353495668b30" cmd=["/usr/bin/pgrep","-f","-r","DRST","watcher-decision-engine"] Dec 06 05:53:33 crc kubenswrapper[4958]: E1206 05:53:33.902702 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="75dcd5f6a58bf6cb43a0b75909a5103cff83f1e8f7355740d2c6353495668b30" cmd=["/usr/bin/pgrep","-f","-r","DRST","watcher-decision-engine"] Dec 06 05:53:33 crc kubenswrapper[4958]: E1206 05:53:33.902745 4958 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/watcher-decision-engine-0" podUID="e6eb8aab-9b25-4861-972a-b100ba14ab24" containerName="watcher-decision-engine" Dec 06 05:53:33 crc kubenswrapper[4958]: E1206 05:53:33.920260 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="dd358b0ecd436d9c9a2637581f113a5e82db356ffb0213e22b2442b68aa080e4" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Dec 06 05:53:33 crc kubenswrapper[4958]: E1206 05:53:33.922095 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is 
stopping, stdout: , stderr: , exit code -1" containerID="dd358b0ecd436d9c9a2637581f113a5e82db356ffb0213e22b2442b68aa080e4" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Dec 06 05:53:33 crc kubenswrapper[4958]: E1206 05:53:33.923326 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="dd358b0ecd436d9c9a2637581f113a5e82db356ffb0213e22b2442b68aa080e4" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Dec 06 05:53:33 crc kubenswrapper[4958]: E1206 05:53:33.923376 4958 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/watcher-applier-0" podUID="92bfdbb2-cdd9-49b3-80cb-5aa52422d18e" containerName="watcher-applier" Dec 06 05:53:34 crc kubenswrapper[4958]: I1206 05:53:34.219457 4958 generic.go:334] "Generic (PLEG): container finished" podID="e1a95265-5489-4db4-a45a-a17761dd8477" containerID="81d7ed6a059aeb90254de3e4e79011f96c66bea1af0b0b0ba772f1dc58321085" exitCode=143 Dec 06 05:53:34 crc kubenswrapper[4958]: I1206 05:53:34.219522 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"e1a95265-5489-4db4-a45a-a17761dd8477","Type":"ContainerDied","Data":"81d7ed6a059aeb90254de3e4e79011f96c66bea1af0b0b0ba772f1dc58321085"} Dec 06 05:53:34 crc kubenswrapper[4958]: I1206 05:53:34.291602 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-776ddc8896-9vdrs" Dec 06 05:53:34 crc kubenswrapper[4958]: I1206 05:53:34.403175 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-65548cc856-4tstl" Dec 06 05:53:34 crc kubenswrapper[4958]: I1206 05:53:34.842646 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="e1a95265-5489-4db4-a45a-a17761dd8477" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.162:9322/\": dial tcp 10.217.0.162:9322: connect: connection refused" Dec 06 05:53:34 crc kubenswrapper[4958]: I1206 05:53:34.842763 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="e1a95265-5489-4db4-a45a-a17761dd8477" containerName="watcher-api-log" probeResult="failure" output="Get \"http://10.217.0.162:9322/\": dial tcp 10.217.0.162:9322: connect: connection refused" Dec 06 05:53:35 crc kubenswrapper[4958]: I1206 05:53:35.968963 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-776ddc8896-9vdrs" Dec 06 05:53:36 crc kubenswrapper[4958]: I1206 05:53:36.084713 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-65548cc856-4tstl" Dec 06 05:53:36 crc kubenswrapper[4958]: I1206 05:53:36.086544 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-55b99bf79c-kcrvt" podUID="d2b3678d-be78-4e2d-930a-866c6d404166" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.144:5353: i/o timeout" Dec 06 05:53:36 crc kubenswrapper[4958]: I1206 05:53:36.086987 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55b99bf79c-kcrvt" Dec 06 05:53:36 crc kubenswrapper[4958]: I1206 05:53:36.174419 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-776ddc8896-9vdrs"] Dec 
06 05:53:36 crc kubenswrapper[4958]: I1206 05:53:36.237429 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-776ddc8896-9vdrs" podUID="e3d20216-bf3b-43e0-b212-e05057a211fd" containerName="horizon-log" containerID="cri-o://df832c624dd77d9345ae9355d24eaaa0bdfbedb70e95b963907b7ae1722b9b0b" gracePeriod=30 Dec 06 05:53:36 crc kubenswrapper[4958]: I1206 05:53:36.237617 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-776ddc8896-9vdrs" podUID="e3d20216-bf3b-43e0-b212-e05057a211fd" containerName="horizon" containerID="cri-o://42d22673a243c1cc6a53c4d422e4407a3f07c62add34feecb8b192fbcd4d95cb" gracePeriod=30 Dec 06 05:53:37 crc kubenswrapper[4958]: I1206 05:53:37.247551 4958 generic.go:334] "Generic (PLEG): container finished" podID="e1a95265-5489-4db4-a45a-a17761dd8477" containerID="d556580cdd68320cd4547cbaad7a9bce225d4cdc3102ebce0caa40aad1eb39dd" exitCode=0 Dec 06 05:53:37 crc kubenswrapper[4958]: I1206 05:53:37.247638 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"e1a95265-5489-4db4-a45a-a17761dd8477","Type":"ContainerDied","Data":"d556580cdd68320cd4547cbaad7a9bce225d4cdc3102ebce0caa40aad1eb39dd"} Dec 06 05:53:38 crc kubenswrapper[4958]: E1206 05:53:38.919369 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="dd358b0ecd436d9c9a2637581f113a5e82db356ffb0213e22b2442b68aa080e4" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Dec 06 05:53:38 crc kubenswrapper[4958]: E1206 05:53:38.920852 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="dd358b0ecd436d9c9a2637581f113a5e82db356ffb0213e22b2442b68aa080e4" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Dec 06 05:53:38 crc kubenswrapper[4958]: E1206 05:53:38.922866 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="dd358b0ecd436d9c9a2637581f113a5e82db356ffb0213e22b2442b68aa080e4" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Dec 06 05:53:38 crc kubenswrapper[4958]: E1206 05:53:38.922928 4958 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/watcher-applier-0" podUID="92bfdbb2-cdd9-49b3-80cb-5aa52422d18e" containerName="watcher-applier" Dec 06 05:53:39 crc kubenswrapper[4958]: I1206 05:53:39.843373 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="e1a95265-5489-4db4-a45a-a17761dd8477" containerName="watcher-api-log" probeResult="failure" output="Get \"http://10.217.0.162:9322/\": dial tcp 10.217.0.162:9322: connect: connection refused" Dec 06 05:53:39 crc kubenswrapper[4958]: I1206 05:53:39.843373 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="e1a95265-5489-4db4-a45a-a17761dd8477" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.162:9322/\": dial tcp 10.217.0.162:9322: connect: connection refused" Dec 06 05:53:39 crc kubenswrapper[4958]: I1206 
05:53:39.843896 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Dec 06 05:53:39 crc kubenswrapper[4958]: I1206 05:53:39.843949 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Dec 06 05:53:41 crc kubenswrapper[4958]: I1206 05:53:41.087356 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-55b99bf79c-kcrvt" podUID="d2b3678d-be78-4e2d-930a-866c6d404166" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.144:5353: i/o timeout" Dec 06 05:53:41 crc kubenswrapper[4958]: I1206 05:53:41.286392 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b8f111e9-7ea4-4c7a-935a-95c9a90fea92","Type":"ContainerStarted","Data":"0415d967e450cffefd8ef62695f0fe73c73a3e186913a550f276d172611d8c77"} Dec 06 05:53:41 crc kubenswrapper[4958]: I1206 05:53:41.288792 4958 generic.go:334] "Generic (PLEG): container finished" podID="e3d20216-bf3b-43e0-b212-e05057a211fd" containerID="42d22673a243c1cc6a53c4d422e4407a3f07c62add34feecb8b192fbcd4d95cb" exitCode=0 Dec 06 05:53:41 crc kubenswrapper[4958]: I1206 05:53:41.288826 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-776ddc8896-9vdrs" event={"ID":"e3d20216-bf3b-43e0-b212-e05057a211fd","Type":"ContainerDied","Data":"42d22673a243c1cc6a53c4d422e4407a3f07c62add34feecb8b192fbcd4d95cb"} Dec 06 05:53:42 crc kubenswrapper[4958]: I1206 05:53:42.435592 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-776ddc8896-9vdrs" podUID="e3d20216-bf3b-43e0-b212-e05057a211fd" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.159:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.159:8443: connect: connection refused" Dec 06 05:53:42 crc kubenswrapper[4958]: E1206 05:53:42.927987 4958 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5dfff6bd_6266_4c2b_87d4_b800bd9bbc51.slice/crio-ddb3fb6c1bb5d5ad6bfec1eea50c184f974552ba6daaee55ca00d66a765cec40.scope\": RecentStats: unable to find data in memory cache]" Dec 06 05:53:43 crc kubenswrapper[4958]: E1206 05:53:43.919917 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="dd358b0ecd436d9c9a2637581f113a5e82db356ffb0213e22b2442b68aa080e4" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Dec 06 05:53:43 crc kubenswrapper[4958]: E1206 05:53:43.923805 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="dd358b0ecd436d9c9a2637581f113a5e82db356ffb0213e22b2442b68aa080e4" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Dec 06 05:53:43 crc kubenswrapper[4958]: E1206 05:53:43.926025 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="dd358b0ecd436d9c9a2637581f113a5e82db356ffb0213e22b2442b68aa080e4" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Dec 06 05:53:43 crc kubenswrapper[4958]: E1206 05:53:43.926080 4958 prober.go:104] "Probe errored" 
err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/watcher-applier-0" podUID="92bfdbb2-cdd9-49b3-80cb-5aa52422d18e" containerName="watcher-applier" Dec 06 05:53:44 crc kubenswrapper[4958]: I1206 05:53:44.320329 4958 generic.go:334] "Generic (PLEG): container finished" podID="5dfff6bd-6266-4c2b-87d4-b800bd9bbc51" containerID="405ef670079b4eafe25fe94736464410ed03a4183ece88651aa959e01c5d63ee" exitCode=137 Dec 06 05:53:44 crc kubenswrapper[4958]: I1206 05:53:44.320367 4958 generic.go:334] "Generic (PLEG): container finished" podID="5dfff6bd-6266-4c2b-87d4-b800bd9bbc51" containerID="ddb3fb6c1bb5d5ad6bfec1eea50c184f974552ba6daaee55ca00d66a765cec40" exitCode=137 Dec 06 05:53:44 crc kubenswrapper[4958]: I1206 05:53:44.320435 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7b7f9bd7cc-jjrbv" event={"ID":"5dfff6bd-6266-4c2b-87d4-b800bd9bbc51","Type":"ContainerDied","Data":"405ef670079b4eafe25fe94736464410ed03a4183ece88651aa959e01c5d63ee"} Dec 06 05:53:44 crc kubenswrapper[4958]: I1206 05:53:44.320511 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7b7f9bd7cc-jjrbv" event={"ID":"5dfff6bd-6266-4c2b-87d4-b800bd9bbc51","Type":"ContainerDied","Data":"ddb3fb6c1bb5d5ad6bfec1eea50c184f974552ba6daaee55ca00d66a765cec40"} Dec 06 05:53:44 crc kubenswrapper[4958]: I1206 05:53:44.322831 4958 generic.go:334] "Generic (PLEG): container finished" podID="5a21d87f-33a1-4e86-859f-03a2bace9908" containerID="c43241140ef3a35d7a40d8d751c98ad32c107ba11c8b0fa100ebea9c9919202b" exitCode=137 Dec 06 05:53:44 crc kubenswrapper[4958]: I1206 05:53:44.322850 4958 generic.go:334] "Generic (PLEG): container finished" podID="5a21d87f-33a1-4e86-859f-03a2bace9908" containerID="5328a4a9a68e5afce4ecd4faa286c8d8e048fc93ca84dcc2421e0f162e3f5396" exitCode=137 Dec 06 05:53:44 crc kubenswrapper[4958]: I1206 05:53:44.322897 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-78555484d5-rrzlr" event={"ID":"5a21d87f-33a1-4e86-859f-03a2bace9908","Type":"ContainerDied","Data":"c43241140ef3a35d7a40d8d751c98ad32c107ba11c8b0fa100ebea9c9919202b"} Dec 06 05:53:44 crc kubenswrapper[4958]: I1206 05:53:44.322930 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-78555484d5-rrzlr" event={"ID":"5a21d87f-33a1-4e86-859f-03a2bace9908","Type":"ContainerDied","Data":"5328a4a9a68e5afce4ecd4faa286c8d8e048fc93ca84dcc2421e0f162e3f5396"} Dec 06 05:53:44 crc kubenswrapper[4958]: E1206 05:53:44.634083 4958 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/sg-core@sha256:09b5017c95d7697e66b9c64846bc48ef5826a009cba89b956ec54561e5f4a2d1" Dec 06 05:53:44 crc kubenswrapper[4958]: E1206 05:53:44.634562 4958 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:sg-core,Image:quay.io/openstack-k8s-operators/sg-core@sha256:09b5017c95d7697e66b9c64846bc48ef5826a009cba89b956ec54561e5f4a2d1,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:sg-core-conf-yaml,ReadOnly:false,MountPath:/etc/sg-core.conf.yaml,SubPath:sg-core.conf.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r5tpw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(9664ab8f-eb78-4177-8847-54af6ae2fce5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 05:53:44 crc kubenswrapper[4958]: I1206 05:53:44.843650 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="e1a95265-5489-4db4-a45a-a17761dd8477" containerName="watcher-api" probeResult="failure" output="Get \"http://10.217.0.162:9322/\": dial tcp 10.217.0.162:9322: connect: connection refused" Dec 06 05:53:44 crc kubenswrapper[4958]: I1206 05:53:44.843928 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/watcher-api-0" podUID="e1a95265-5489-4db4-a45a-a17761dd8477" containerName="watcher-api-log" probeResult="failure" output="Get \"http://10.217.0.162:9322/\": dial tcp 10.217.0.162:9322: connect: connection refused" Dec 06 05:53:44 crc kubenswrapper[4958]: I1206 05:53:44.896600 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55b99bf79c-kcrvt" Dec 06 05:53:44 crc kubenswrapper[4958]: I1206 05:53:44.963076 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d2b3678d-be78-4e2d-930a-866c6d404166-dns-swift-storage-0\") pod \"d2b3678d-be78-4e2d-930a-866c6d404166\" (UID: \"d2b3678d-be78-4e2d-930a-866c6d404166\") " Dec 06 05:53:45 crc kubenswrapper[4958]: I1206 05:53:45.056297 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2b3678d-be78-4e2d-930a-866c6d404166-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d2b3678d-be78-4e2d-930a-866c6d404166" (UID: "d2b3678d-be78-4e2d-930a-866c6d404166"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:53:45 crc kubenswrapper[4958]: I1206 05:53:45.066918 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7qtdt\" (UniqueName: \"kubernetes.io/projected/d2b3678d-be78-4e2d-930a-866c6d404166-kube-api-access-7qtdt\") pod \"d2b3678d-be78-4e2d-930a-866c6d404166\" (UID: \"d2b3678d-be78-4e2d-930a-866c6d404166\") " Dec 06 05:53:45 crc kubenswrapper[4958]: I1206 05:53:45.066994 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2b3678d-be78-4e2d-930a-866c6d404166-config\") pod \"d2b3678d-be78-4e2d-930a-866c6d404166\" (UID: \"d2b3678d-be78-4e2d-930a-866c6d404166\") " Dec 06 05:53:45 crc kubenswrapper[4958]: I1206 05:53:45.067097 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d2b3678d-be78-4e2d-930a-866c6d404166-ovsdbserver-sb\") pod \"d2b3678d-be78-4e2d-930a-866c6d404166\" (UID: \"d2b3678d-be78-4e2d-930a-866c6d404166\") " Dec 06 05:53:45 crc kubenswrapper[4958]: I1206 05:53:45.067133 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2b3678d-be78-4e2d-930a-866c6d404166-dns-svc\") pod \"d2b3678d-be78-4e2d-930a-866c6d404166\" (UID: \"d2b3678d-be78-4e2d-930a-866c6d404166\") " Dec 06 05:53:45 crc kubenswrapper[4958]: I1206 05:53:45.067171 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d2b3678d-be78-4e2d-930a-866c6d404166-ovsdbserver-nb\") pod \"d2b3678d-be78-4e2d-930a-866c6d404166\" (UID: \"d2b3678d-be78-4e2d-930a-866c6d404166\") " Dec 06 05:53:45 crc kubenswrapper[4958]: I1206 05:53:45.067656 4958 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d2b3678d-be78-4e2d-930a-866c6d404166-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 06 05:53:45 crc kubenswrapper[4958]: I1206 05:53:45.072976 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2b3678d-be78-4e2d-930a-866c6d404166-kube-api-access-7qtdt" (OuterVolumeSpecName: "kube-api-access-7qtdt") pod "d2b3678d-be78-4e2d-930a-866c6d404166" (UID: "d2b3678d-be78-4e2d-930a-866c6d404166"). InnerVolumeSpecName "kube-api-access-7qtdt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:53:45 crc kubenswrapper[4958]: I1206 05:53:45.160111 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2b3678d-be78-4e2d-930a-866c6d404166-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d2b3678d-be78-4e2d-930a-866c6d404166" (UID: "d2b3678d-be78-4e2d-930a-866c6d404166"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:53:45 crc kubenswrapper[4958]: I1206 05:53:45.169450 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2b3678d-be78-4e2d-930a-866c6d404166-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d2b3678d-be78-4e2d-930a-866c6d404166" (UID: "d2b3678d-be78-4e2d-930a-866c6d404166"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:53:45 crc kubenswrapper[4958]: I1206 05:53:45.170689 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2b3678d-be78-4e2d-930a-866c6d404166-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d2b3678d-be78-4e2d-930a-866c6d404166" (UID: "d2b3678d-be78-4e2d-930a-866c6d404166"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:53:45 crc kubenswrapper[4958]: I1206 05:53:45.170758 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2b3678d-be78-4e2d-930a-866c6d404166-config" (OuterVolumeSpecName: "config") pod "d2b3678d-be78-4e2d-930a-866c6d404166" (UID: "d2b3678d-be78-4e2d-930a-866c6d404166"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:53:45 crc kubenswrapper[4958]: I1206 05:53:45.173856 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7qtdt\" (UniqueName: \"kubernetes.io/projected/d2b3678d-be78-4e2d-930a-866c6d404166-kube-api-access-7qtdt\") on node \"crc\" DevicePath \"\"" Dec 06 05:53:45 crc kubenswrapper[4958]: I1206 05:53:45.176707 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d2b3678d-be78-4e2d-930a-866c6d404166-config\") on node \"crc\" DevicePath \"\"" Dec 06 05:53:45 crc kubenswrapper[4958]: I1206 05:53:45.176867 4958 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d2b3678d-be78-4e2d-930a-866c6d404166-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 06 05:53:45 crc kubenswrapper[4958]: I1206 05:53:45.177012 4958 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d2b3678d-be78-4e2d-930a-866c6d404166-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 06 05:53:45 crc kubenswrapper[4958]: I1206 05:53:45.177098 4958 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d2b3678d-be78-4e2d-930a-866c6d404166-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 06 05:53:45 crc kubenswrapper[4958]: I1206 05:53:45.336092 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-znqzx" event={"ID":"94a6d712-4bb0-458b-878a-99dd8d47a8f9","Type":"ContainerStarted","Data":"cddddc08f5e105cd63270d231de952de1e1f69bce1a7d233bef7a08f7d99bec4"} Dec 06 05:53:45 crc kubenswrapper[4958]: I1206 05:53:45.337876 4958 generic.go:334] "Generic (PLEG): container finished" podID="374b8326-0ba7-46d1-b438-85a5e865fdb5" containerID="7ca0ad2f546948a6763d4373471301556d03fad318057c21901da48975099837" exitCode=0 Dec 06 05:53:45 crc kubenswrapper[4958]: I1206 05:53:45.337930 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wf7cs" event={"ID":"374b8326-0ba7-46d1-b438-85a5e865fdb5","Type":"ContainerDied","Data":"7ca0ad2f546948a6763d4373471301556d03fad318057c21901da48975099837"} Dec 06 05:53:45 crc kubenswrapper[4958]: I1206 05:53:45.343289 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55b99bf79c-kcrvt" event={"ID":"d2b3678d-be78-4e2d-930a-866c6d404166","Type":"ContainerDied","Data":"0fd4f3bed645166ebaf5417e14ae1f62dadbffa0e4d05f2195645b7567bc7b1e"} Dec 06 05:53:45 crc kubenswrapper[4958]: I1206 05:53:45.343338 4958 scope.go:117] "RemoveContainer" 
containerID="c42009c782b2b103b1086d456f9a5f9bb0d1ca441436d8928baca766619079b1" Dec 06 05:53:45 crc kubenswrapper[4958]: I1206 05:53:45.343449 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55b99bf79c-kcrvt" Dec 06 05:53:45 crc kubenswrapper[4958]: I1206 05:53:45.343963 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-78555484d5-rrzlr" Dec 06 05:53:45 crc kubenswrapper[4958]: I1206 05:53:45.363003 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-znqzx" podStartSLOduration=2.459013632 podStartE2EDuration="1m12.362982754s" podCreationTimestamp="2025-12-06 05:52:33 +0000 UTC" firstStartedPulling="2025-12-06 05:52:35.093151386 +0000 UTC m=+1465.626922149" lastFinishedPulling="2025-12-06 05:53:44.997120508 +0000 UTC m=+1535.530891271" observedRunningTime="2025-12-06 05:53:45.356305565 +0000 UTC m=+1535.890076338" watchObservedRunningTime="2025-12-06 05:53:45.362982754 +0000 UTC m=+1535.896753507" Dec 06 05:53:45 crc kubenswrapper[4958]: I1206 05:53:45.441353 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55b99bf79c-kcrvt"] Dec 06 05:53:45 crc kubenswrapper[4958]: I1206 05:53:45.449690 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55b99bf79c-kcrvt"] Dec 06 05:53:45 crc kubenswrapper[4958]: I1206 05:53:45.474144 4958 scope.go:117] "RemoveContainer" containerID="0fc04b32a3f19c540ccab940359d82f28d1357937cc29f79c5973795b691607f" Dec 06 05:53:45 crc kubenswrapper[4958]: I1206 05:53:45.487134 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z7vxz\" (UniqueName: \"kubernetes.io/projected/5a21d87f-33a1-4e86-859f-03a2bace9908-kube-api-access-z7vxz\") pod \"5a21d87f-33a1-4e86-859f-03a2bace9908\" (UID: \"5a21d87f-33a1-4e86-859f-03a2bace9908\") " Dec 06 05:53:45 crc kubenswrapper[4958]: I1206 05:53:45.487167 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5a21d87f-33a1-4e86-859f-03a2bace9908-config-data\") pod \"5a21d87f-33a1-4e86-859f-03a2bace9908\" (UID: \"5a21d87f-33a1-4e86-859f-03a2bace9908\") " Dec 06 05:53:45 crc kubenswrapper[4958]: I1206 05:53:45.487200 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5a21d87f-33a1-4e86-859f-03a2bace9908-horizon-secret-key\") pod \"5a21d87f-33a1-4e86-859f-03a2bace9908\" (UID: \"5a21d87f-33a1-4e86-859f-03a2bace9908\") " Dec 06 05:53:45 crc kubenswrapper[4958]: I1206 05:53:45.487222 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5a21d87f-33a1-4e86-859f-03a2bace9908-logs\") pod \"5a21d87f-33a1-4e86-859f-03a2bace9908\" (UID: \"5a21d87f-33a1-4e86-859f-03a2bace9908\") " Dec 06 05:53:45 crc kubenswrapper[4958]: I1206 05:53:45.487378 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5a21d87f-33a1-4e86-859f-03a2bace9908-scripts\") pod \"5a21d87f-33a1-4e86-859f-03a2bace9908\" (UID: \"5a21d87f-33a1-4e86-859f-03a2bace9908\") " Dec 06 05:53:45 crc kubenswrapper[4958]: I1206 05:53:45.487831 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a21d87f-33a1-4e86-859f-03a2bace9908-logs" 
(OuterVolumeSpecName: "logs") pod "5a21d87f-33a1-4e86-859f-03a2bace9908" (UID: "5a21d87f-33a1-4e86-859f-03a2bace9908"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 05:53:45 crc kubenswrapper[4958]: I1206 05:53:45.488587 4958 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5a21d87f-33a1-4e86-859f-03a2bace9908-logs\") on node \"crc\" DevicePath \"\"" Dec 06 05:53:45 crc kubenswrapper[4958]: I1206 05:53:45.492041 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a21d87f-33a1-4e86-859f-03a2bace9908-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "5a21d87f-33a1-4e86-859f-03a2bace9908" (UID: "5a21d87f-33a1-4e86-859f-03a2bace9908"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:53:45 crc kubenswrapper[4958]: I1206 05:53:45.492719 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a21d87f-33a1-4e86-859f-03a2bace9908-kube-api-access-z7vxz" (OuterVolumeSpecName: "kube-api-access-z7vxz") pod "5a21d87f-33a1-4e86-859f-03a2bace9908" (UID: "5a21d87f-33a1-4e86-859f-03a2bace9908"). InnerVolumeSpecName "kube-api-access-z7vxz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:53:45 crc kubenswrapper[4958]: I1206 05:53:45.510332 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a21d87f-33a1-4e86-859f-03a2bace9908-config-data" (OuterVolumeSpecName: "config-data") pod "5a21d87f-33a1-4e86-859f-03a2bace9908" (UID: "5a21d87f-33a1-4e86-859f-03a2bace9908"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:53:45 crc kubenswrapper[4958]: I1206 05:53:45.518066 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7b7f9bd7cc-jjrbv" Dec 06 05:53:45 crc kubenswrapper[4958]: I1206 05:53:45.521147 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a21d87f-33a1-4e86-859f-03a2bace9908-scripts" (OuterVolumeSpecName: "scripts") pod "5a21d87f-33a1-4e86-859f-03a2bace9908" (UID: "5a21d87f-33a1-4e86-859f-03a2bace9908"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:53:45 crc kubenswrapper[4958]: I1206 05:53:45.560535 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Dec 06 05:53:45 crc kubenswrapper[4958]: I1206 05:53:45.600110 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5a21d87f-33a1-4e86-859f-03a2bace9908-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 05:53:45 crc kubenswrapper[4958]: I1206 05:53:45.600146 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5a21d87f-33a1-4e86-859f-03a2bace9908-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 05:53:45 crc kubenswrapper[4958]: I1206 05:53:45.600160 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z7vxz\" (UniqueName: \"kubernetes.io/projected/5a21d87f-33a1-4e86-859f-03a2bace9908-kube-api-access-z7vxz\") on node \"crc\" DevicePath \"\"" Dec 06 05:53:45 crc kubenswrapper[4958]: I1206 05:53:45.600175 4958 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5a21d87f-33a1-4e86-859f-03a2bace9908-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Dec 06 05:53:45 crc kubenswrapper[4958]: I1206 05:53:45.615923 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-btkdc"] Dec 06 05:53:45 crc kubenswrapper[4958]: W1206 05:53:45.639630 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod83d76c67_52ad_480c_9f07_9620f6ed6a42.slice/crio-4ee00a71eeacd8af9617dc193def6755d2f629da4c983d9f7ebc7e96a96623db WatchSource:0}: Error finding container 4ee00a71eeacd8af9617dc193def6755d2f629da4c983d9f7ebc7e96a96623db: Status 404 returned error can't find the container with id 4ee00a71eeacd8af9617dc193def6755d2f629da4c983d9f7ebc7e96a96623db Dec 06 05:53:45 crc kubenswrapper[4958]: I1206 05:53:45.701357 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n5sp2\" (UniqueName: \"kubernetes.io/projected/e1a95265-5489-4db4-a45a-a17761dd8477-kube-api-access-n5sp2\") pod \"e1a95265-5489-4db4-a45a-a17761dd8477\" (UID: \"e1a95265-5489-4db4-a45a-a17761dd8477\") " Dec 06 05:53:45 crc kubenswrapper[4958]: I1206 05:53:45.701586 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1a95265-5489-4db4-a45a-a17761dd8477-config-data\") pod \"e1a95265-5489-4db4-a45a-a17761dd8477\" (UID: \"e1a95265-5489-4db4-a45a-a17761dd8477\") " Dec 06 05:53:45 crc kubenswrapper[4958]: I1206 05:53:45.701652 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5dfff6bd-6266-4c2b-87d4-b800bd9bbc51-logs\") pod \"5dfff6bd-6266-4c2b-87d4-b800bd9bbc51\" (UID: \"5dfff6bd-6266-4c2b-87d4-b800bd9bbc51\") " Dec 06 05:53:45 crc kubenswrapper[4958]: I1206 05:53:45.701716 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5dfff6bd-6266-4c2b-87d4-b800bd9bbc51-scripts\") pod \"5dfff6bd-6266-4c2b-87d4-b800bd9bbc51\" (UID: \"5dfff6bd-6266-4c2b-87d4-b800bd9bbc51\") " Dec 06 05:53:45 crc kubenswrapper[4958]: I1206 05:53:45.701734 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1a95265-5489-4db4-a45a-a17761dd8477-combined-ca-bundle\") pod \"e1a95265-5489-4db4-a45a-a17761dd8477\" (UID: 
\"e1a95265-5489-4db4-a45a-a17761dd8477\") " Dec 06 05:53:45 crc kubenswrapper[4958]: I1206 05:53:45.701756 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5dfff6bd-6266-4c2b-87d4-b800bd9bbc51-config-data\") pod \"5dfff6bd-6266-4c2b-87d4-b800bd9bbc51\" (UID: \"5dfff6bd-6266-4c2b-87d4-b800bd9bbc51\") " Dec 06 05:53:45 crc kubenswrapper[4958]: I1206 05:53:45.701783 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5dfff6bd-6266-4c2b-87d4-b800bd9bbc51-horizon-secret-key\") pod \"5dfff6bd-6266-4c2b-87d4-b800bd9bbc51\" (UID: \"5dfff6bd-6266-4c2b-87d4-b800bd9bbc51\") " Dec 06 05:53:45 crc kubenswrapper[4958]: I1206 05:53:45.701826 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/e1a95265-5489-4db4-a45a-a17761dd8477-custom-prometheus-ca\") pod \"e1a95265-5489-4db4-a45a-a17761dd8477\" (UID: \"e1a95265-5489-4db4-a45a-a17761dd8477\") " Dec 06 05:53:45 crc kubenswrapper[4958]: I1206 05:53:45.701856 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1a95265-5489-4db4-a45a-a17761dd8477-logs\") pod \"e1a95265-5489-4db4-a45a-a17761dd8477\" (UID: \"e1a95265-5489-4db4-a45a-a17761dd8477\") " Dec 06 05:53:45 crc kubenswrapper[4958]: I1206 05:53:45.701890 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vhrvv\" (UniqueName: \"kubernetes.io/projected/5dfff6bd-6266-4c2b-87d4-b800bd9bbc51-kube-api-access-vhrvv\") pod \"5dfff6bd-6266-4c2b-87d4-b800bd9bbc51\" (UID: \"5dfff6bd-6266-4c2b-87d4-b800bd9bbc51\") " Dec 06 05:53:45 crc kubenswrapper[4958]: I1206 05:53:45.702212 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5dfff6bd-6266-4c2b-87d4-b800bd9bbc51-logs" (OuterVolumeSpecName: "logs") pod "5dfff6bd-6266-4c2b-87d4-b800bd9bbc51" (UID: "5dfff6bd-6266-4c2b-87d4-b800bd9bbc51"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 05:53:45 crc kubenswrapper[4958]: I1206 05:53:45.702612 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1a95265-5489-4db4-a45a-a17761dd8477-logs" (OuterVolumeSpecName: "logs") pod "e1a95265-5489-4db4-a45a-a17761dd8477" (UID: "e1a95265-5489-4db4-a45a-a17761dd8477"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 05:53:45 crc kubenswrapper[4958]: I1206 05:53:45.719211 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5dfff6bd-6266-4c2b-87d4-b800bd9bbc51-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "5dfff6bd-6266-4c2b-87d4-b800bd9bbc51" (UID: "5dfff6bd-6266-4c2b-87d4-b800bd9bbc51"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:53:45 crc kubenswrapper[4958]: I1206 05:53:45.719773 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5dfff6bd-6266-4c2b-87d4-b800bd9bbc51-kube-api-access-vhrvv" (OuterVolumeSpecName: "kube-api-access-vhrvv") pod "5dfff6bd-6266-4c2b-87d4-b800bd9bbc51" (UID: "5dfff6bd-6266-4c2b-87d4-b800bd9bbc51"). InnerVolumeSpecName "kube-api-access-vhrvv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:53:45 crc kubenswrapper[4958]: I1206 05:53:45.719999 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1a95265-5489-4db4-a45a-a17761dd8477-kube-api-access-n5sp2" (OuterVolumeSpecName: "kube-api-access-n5sp2") pod "e1a95265-5489-4db4-a45a-a17761dd8477" (UID: "e1a95265-5489-4db4-a45a-a17761dd8477"). InnerVolumeSpecName "kube-api-access-n5sp2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:53:45 crc kubenswrapper[4958]: I1206 05:53:45.748785 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1a95265-5489-4db4-a45a-a17761dd8477-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e1a95265-5489-4db4-a45a-a17761dd8477" (UID: "e1a95265-5489-4db4-a45a-a17761dd8477"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:53:45 crc kubenswrapper[4958]: I1206 05:53:45.753292 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5dfff6bd-6266-4c2b-87d4-b800bd9bbc51-config-data" (OuterVolumeSpecName: "config-data") pod "5dfff6bd-6266-4c2b-87d4-b800bd9bbc51" (UID: "5dfff6bd-6266-4c2b-87d4-b800bd9bbc51"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:53:45 crc kubenswrapper[4958]: I1206 05:53:45.757432 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5dfff6bd-6266-4c2b-87d4-b800bd9bbc51-scripts" (OuterVolumeSpecName: "scripts") pod "5dfff6bd-6266-4c2b-87d4-b800bd9bbc51" (UID: "5dfff6bd-6266-4c2b-87d4-b800bd9bbc51"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:53:45 crc kubenswrapper[4958]: I1206 05:53:45.763195 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1a95265-5489-4db4-a45a-a17761dd8477-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "e1a95265-5489-4db4-a45a-a17761dd8477" (UID: "e1a95265-5489-4db4-a45a-a17761dd8477"). InnerVolumeSpecName "custom-prometheus-ca". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:53:45 crc kubenswrapper[4958]: I1206 05:53:45.773693 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2b3678d-be78-4e2d-930a-866c6d404166" path="/var/lib/kubelet/pods/d2b3678d-be78-4e2d-930a-866c6d404166/volumes" Dec 06 05:53:45 crc kubenswrapper[4958]: I1206 05:53:45.800393 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1a95265-5489-4db4-a45a-a17761dd8477-config-data" (OuterVolumeSpecName: "config-data") pod "e1a95265-5489-4db4-a45a-a17761dd8477" (UID: "e1a95265-5489-4db4-a45a-a17761dd8477"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:53:45 crc kubenswrapper[4958]: I1206 05:53:45.804451 4958 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5dfff6bd-6266-4c2b-87d4-b800bd9bbc51-logs\") on node \"crc\" DevicePath \"\"" Dec 06 05:53:45 crc kubenswrapper[4958]: I1206 05:53:45.804502 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/5dfff6bd-6266-4c2b-87d4-b800bd9bbc51-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 05:53:45 crc kubenswrapper[4958]: I1206 05:53:45.804515 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e1a95265-5489-4db4-a45a-a17761dd8477-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 05:53:45 crc kubenswrapper[4958]: I1206 05:53:45.804530 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/5dfff6bd-6266-4c2b-87d4-b800bd9bbc51-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 05:53:45 crc kubenswrapper[4958]: I1206 05:53:45.804540 4958 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/5dfff6bd-6266-4c2b-87d4-b800bd9bbc51-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Dec 06 05:53:45 crc kubenswrapper[4958]: I1206 05:53:45.804552 4958 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/e1a95265-5489-4db4-a45a-a17761dd8477-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Dec 06 05:53:45 crc kubenswrapper[4958]: I1206 05:53:45.804564 4958 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e1a95265-5489-4db4-a45a-a17761dd8477-logs\") on node \"crc\" DevicePath \"\"" Dec 06 05:53:45 crc kubenswrapper[4958]: I1206 05:53:45.804576 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vhrvv\" (UniqueName: \"kubernetes.io/projected/5dfff6bd-6266-4c2b-87d4-b800bd9bbc51-kube-api-access-vhrvv\") on node \"crc\" DevicePath \"\"" Dec 06 05:53:45 crc kubenswrapper[4958]: I1206 05:53:45.804589 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n5sp2\" (UniqueName: \"kubernetes.io/projected/e1a95265-5489-4db4-a45a-a17761dd8477-kube-api-access-n5sp2\") on node \"crc\" DevicePath \"\"" Dec 06 05:53:45 crc kubenswrapper[4958]: I1206 05:53:45.804602 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e1a95265-5489-4db4-a45a-a17761dd8477-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 05:53:46 crc kubenswrapper[4958]: I1206 05:53:46.088903 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-55b99bf79c-kcrvt" podUID="d2b3678d-be78-4e2d-930a-866c6d404166" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.144:5353: i/o timeout" Dec 06 05:53:46 crc kubenswrapper[4958]: I1206 05:53:46.362415 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"e1a95265-5489-4db4-a45a-a17761dd8477","Type":"ContainerDied","Data":"9572a247ea6bac0429807b162b0651c8b121fb33827394f01ef140024ce77230"} Dec 06 05:53:46 crc kubenswrapper[4958]: I1206 05:53:46.362433 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-api-0" Dec 06 05:53:46 crc kubenswrapper[4958]: I1206 05:53:46.362489 4958 scope.go:117] "RemoveContainer" containerID="d556580cdd68320cd4547cbaad7a9bce225d4cdc3102ebce0caa40aad1eb39dd" Dec 06 05:53:46 crc kubenswrapper[4958]: I1206 05:53:46.364585 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-78555484d5-rrzlr" event={"ID":"5a21d87f-33a1-4e86-859f-03a2bace9908","Type":"ContainerDied","Data":"eb7ad830e4dab564a8663e2cc19ec6e0d46bb2748e961f928d1f7344b5196d2e"} Dec 06 05:53:46 crc kubenswrapper[4958]: I1206 05:53:46.364634 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-78555484d5-rrzlr" Dec 06 05:53:46 crc kubenswrapper[4958]: I1206 05:53:46.368155 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-btkdc" event={"ID":"83d76c67-52ad-480c-9f07-9620f6ed6a42","Type":"ContainerStarted","Data":"4ee00a71eeacd8af9617dc193def6755d2f629da4c983d9f7ebc7e96a96623db"} Dec 06 05:53:46 crc kubenswrapper[4958]: I1206 05:53:46.373288 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7b7f9bd7cc-jjrbv" Dec 06 05:53:46 crc kubenswrapper[4958]: I1206 05:53:46.373287 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7b7f9bd7cc-jjrbv" event={"ID":"5dfff6bd-6266-4c2b-87d4-b800bd9bbc51","Type":"ContainerDied","Data":"3717b51df2466f483de49daaf9cc2f3a3d7114483e45f0f6178ca875ee96529c"} Dec 06 05:53:46 crc kubenswrapper[4958]: I1206 05:53:46.393676 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-78555484d5-rrzlr"] Dec 06 05:53:46 crc kubenswrapper[4958]: I1206 05:53:46.399412 4958 scope.go:117] "RemoveContainer" containerID="81d7ed6a059aeb90254de3e4e79011f96c66bea1af0b0b0ba772f1dc58321085" Dec 06 05:53:46 crc kubenswrapper[4958]: I1206 05:53:46.403052 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-78555484d5-rrzlr"] Dec 06 05:53:46 crc kubenswrapper[4958]: I1206 05:53:46.416040 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7b7f9bd7cc-jjrbv"] Dec 06 05:53:46 crc kubenswrapper[4958]: I1206 05:53:46.424110 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7b7f9bd7cc-jjrbv"] Dec 06 05:53:46 crc kubenswrapper[4958]: I1206 05:53:46.432458 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-api-0"] Dec 06 05:53:46 crc kubenswrapper[4958]: I1206 05:53:46.441514 4958 scope.go:117] "RemoveContainer" containerID="c43241140ef3a35d7a40d8d751c98ad32c107ba11c8b0fa100ebea9c9919202b" Dec 06 05:53:46 crc kubenswrapper[4958]: I1206 05:53:46.442275 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-api-0"] Dec 06 05:53:46 crc kubenswrapper[4958]: I1206 05:53:46.455668 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-api-0"] Dec 06 05:53:46 crc kubenswrapper[4958]: E1206 05:53:46.456100 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a21d87f-33a1-4e86-859f-03a2bace9908" containerName="horizon" Dec 06 05:53:46 crc kubenswrapper[4958]: I1206 05:53:46.456122 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a21d87f-33a1-4e86-859f-03a2bace9908" containerName="horizon" Dec 06 05:53:46 crc kubenswrapper[4958]: E1206 05:53:46.456139 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2b3678d-be78-4e2d-930a-866c6d404166" 
containerName="dnsmasq-dns" Dec 06 05:53:46 crc kubenswrapper[4958]: I1206 05:53:46.456147 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2b3678d-be78-4e2d-930a-866c6d404166" containerName="dnsmasq-dns" Dec 06 05:53:46 crc kubenswrapper[4958]: E1206 05:53:46.456163 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1a95265-5489-4db4-a45a-a17761dd8477" containerName="watcher-api" Dec 06 05:53:46 crc kubenswrapper[4958]: I1206 05:53:46.456171 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1a95265-5489-4db4-a45a-a17761dd8477" containerName="watcher-api" Dec 06 05:53:46 crc kubenswrapper[4958]: E1206 05:53:46.456194 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5dfff6bd-6266-4c2b-87d4-b800bd9bbc51" containerName="horizon-log" Dec 06 05:53:46 crc kubenswrapper[4958]: I1206 05:53:46.456200 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="5dfff6bd-6266-4c2b-87d4-b800bd9bbc51" containerName="horizon-log" Dec 06 05:53:46 crc kubenswrapper[4958]: E1206 05:53:46.456221 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5dfff6bd-6266-4c2b-87d4-b800bd9bbc51" containerName="horizon" Dec 06 05:53:46 crc kubenswrapper[4958]: I1206 05:53:46.456228 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="5dfff6bd-6266-4c2b-87d4-b800bd9bbc51" containerName="horizon" Dec 06 05:53:46 crc kubenswrapper[4958]: E1206 05:53:46.456246 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a21d87f-33a1-4e86-859f-03a2bace9908" containerName="horizon-log" Dec 06 05:53:46 crc kubenswrapper[4958]: I1206 05:53:46.456254 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a21d87f-33a1-4e86-859f-03a2bace9908" containerName="horizon-log" Dec 06 05:53:46 crc kubenswrapper[4958]: E1206 05:53:46.456272 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2b3678d-be78-4e2d-930a-866c6d404166" containerName="init" Dec 06 05:53:46 crc kubenswrapper[4958]: I1206 05:53:46.456279 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2b3678d-be78-4e2d-930a-866c6d404166" containerName="init" Dec 06 05:53:46 crc kubenswrapper[4958]: E1206 05:53:46.456290 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1a95265-5489-4db4-a45a-a17761dd8477" containerName="watcher-api-log" Dec 06 05:53:46 crc kubenswrapper[4958]: I1206 05:53:46.456299 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1a95265-5489-4db4-a45a-a17761dd8477" containerName="watcher-api-log" Dec 06 05:53:46 crc kubenswrapper[4958]: I1206 05:53:46.456523 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="5dfff6bd-6266-4c2b-87d4-b800bd9bbc51" containerName="horizon" Dec 06 05:53:46 crc kubenswrapper[4958]: I1206 05:53:46.456537 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1a95265-5489-4db4-a45a-a17761dd8477" containerName="watcher-api" Dec 06 05:53:46 crc kubenswrapper[4958]: I1206 05:53:46.456550 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a21d87f-33a1-4e86-859f-03a2bace9908" containerName="horizon-log" Dec 06 05:53:46 crc kubenswrapper[4958]: I1206 05:53:46.456566 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2b3678d-be78-4e2d-930a-866c6d404166" containerName="dnsmasq-dns" Dec 06 05:53:46 crc kubenswrapper[4958]: I1206 05:53:46.456578 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a21d87f-33a1-4e86-859f-03a2bace9908" containerName="horizon" Dec 06 05:53:46 crc 
kubenswrapper[4958]: I1206 05:53:46.456598 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="5dfff6bd-6266-4c2b-87d4-b800bd9bbc51" containerName="horizon-log" Dec 06 05:53:46 crc kubenswrapper[4958]: I1206 05:53:46.456608 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1a95265-5489-4db4-a45a-a17761dd8477" containerName="watcher-api-log" Dec 06 05:53:46 crc kubenswrapper[4958]: I1206 05:53:46.481688 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Dec 06 05:53:46 crc kubenswrapper[4958]: I1206 05:53:46.481811 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Dec 06 05:53:46 crc kubenswrapper[4958]: I1206 05:53:46.494158 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-internal-svc" Dec 06 05:53:46 crc kubenswrapper[4958]: I1206 05:53:46.494418 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-api-config-data" Dec 06 05:53:46 crc kubenswrapper[4958]: I1206 05:53:46.494492 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-watcher-public-svc" Dec 06 05:53:46 crc kubenswrapper[4958]: I1206 05:53:46.623708 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/859ad21b-442c-4e81-991c-fff351e6f635-logs\") pod \"watcher-api-0\" (UID: \"859ad21b-442c-4e81-991c-fff351e6f635\") " pod="openstack/watcher-api-0" Dec 06 05:53:46 crc kubenswrapper[4958]: I1206 05:53:46.623811 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/859ad21b-442c-4e81-991c-fff351e6f635-public-tls-certs\") pod \"watcher-api-0\" (UID: \"859ad21b-442c-4e81-991c-fff351e6f635\") " pod="openstack/watcher-api-0" Dec 06 05:53:46 crc kubenswrapper[4958]: I1206 05:53:46.623881 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/859ad21b-442c-4e81-991c-fff351e6f635-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"859ad21b-442c-4e81-991c-fff351e6f635\") " pod="openstack/watcher-api-0" Dec 06 05:53:46 crc kubenswrapper[4958]: I1206 05:53:46.623936 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/859ad21b-442c-4e81-991c-fff351e6f635-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"859ad21b-442c-4e81-991c-fff351e6f635\") " pod="openstack/watcher-api-0" Dec 06 05:53:46 crc kubenswrapper[4958]: I1206 05:53:46.624094 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/859ad21b-442c-4e81-991c-fff351e6f635-config-data\") pod \"watcher-api-0\" (UID: \"859ad21b-442c-4e81-991c-fff351e6f635\") " pod="openstack/watcher-api-0" Dec 06 05:53:46 crc kubenswrapper[4958]: I1206 05:53:46.624143 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5h2vb\" (UniqueName: \"kubernetes.io/projected/859ad21b-442c-4e81-991c-fff351e6f635-kube-api-access-5h2vb\") pod \"watcher-api-0\" (UID: \"859ad21b-442c-4e81-991c-fff351e6f635\") " pod="openstack/watcher-api-0" Dec 06 05:53:46 crc kubenswrapper[4958]: I1206 05:53:46.624246 4958 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/859ad21b-442c-4e81-991c-fff351e6f635-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"859ad21b-442c-4e81-991c-fff351e6f635\") " pod="openstack/watcher-api-0" Dec 06 05:53:46 crc kubenswrapper[4958]: I1206 05:53:46.687694 4958 scope.go:117] "RemoveContainer" containerID="5328a4a9a68e5afce4ecd4faa286c8d8e048fc93ca84dcc2421e0f162e3f5396" Dec 06 05:53:46 crc kubenswrapper[4958]: I1206 05:53:46.706457 4958 scope.go:117] "RemoveContainer" containerID="405ef670079b4eafe25fe94736464410ed03a4183ece88651aa959e01c5d63ee" Dec 06 05:53:46 crc kubenswrapper[4958]: I1206 05:53:46.726290 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/859ad21b-442c-4e81-991c-fff351e6f635-logs\") pod \"watcher-api-0\" (UID: \"859ad21b-442c-4e81-991c-fff351e6f635\") " pod="openstack/watcher-api-0" Dec 06 05:53:46 crc kubenswrapper[4958]: I1206 05:53:46.726387 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/859ad21b-442c-4e81-991c-fff351e6f635-public-tls-certs\") pod \"watcher-api-0\" (UID: \"859ad21b-442c-4e81-991c-fff351e6f635\") " pod="openstack/watcher-api-0" Dec 06 05:53:46 crc kubenswrapper[4958]: I1206 05:53:46.726417 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/859ad21b-442c-4e81-991c-fff351e6f635-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"859ad21b-442c-4e81-991c-fff351e6f635\") " pod="openstack/watcher-api-0" Dec 06 05:53:46 crc kubenswrapper[4958]: I1206 05:53:46.726445 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/859ad21b-442c-4e81-991c-fff351e6f635-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"859ad21b-442c-4e81-991c-fff351e6f635\") " pod="openstack/watcher-api-0" Dec 06 05:53:46 crc kubenswrapper[4958]: I1206 05:53:46.726494 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/859ad21b-442c-4e81-991c-fff351e6f635-config-data\") pod \"watcher-api-0\" (UID: \"859ad21b-442c-4e81-991c-fff351e6f635\") " pod="openstack/watcher-api-0" Dec 06 05:53:46 crc kubenswrapper[4958]: I1206 05:53:46.726512 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5h2vb\" (UniqueName: \"kubernetes.io/projected/859ad21b-442c-4e81-991c-fff351e6f635-kube-api-access-5h2vb\") pod \"watcher-api-0\" (UID: \"859ad21b-442c-4e81-991c-fff351e6f635\") " pod="openstack/watcher-api-0" Dec 06 05:53:46 crc kubenswrapper[4958]: I1206 05:53:46.726543 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/859ad21b-442c-4e81-991c-fff351e6f635-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"859ad21b-442c-4e81-991c-fff351e6f635\") " pod="openstack/watcher-api-0" Dec 06 05:53:46 crc kubenswrapper[4958]: I1206 05:53:46.726730 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/859ad21b-442c-4e81-991c-fff351e6f635-logs\") pod \"watcher-api-0\" (UID: \"859ad21b-442c-4e81-991c-fff351e6f635\") " pod="openstack/watcher-api-0" Dec 06 05:53:46 crc 
kubenswrapper[4958]: I1206 05:53:46.731697 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/859ad21b-442c-4e81-991c-fff351e6f635-internal-tls-certs\") pod \"watcher-api-0\" (UID: \"859ad21b-442c-4e81-991c-fff351e6f635\") " pod="openstack/watcher-api-0" Dec 06 05:53:46 crc kubenswrapper[4958]: I1206 05:53:46.732520 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/859ad21b-442c-4e81-991c-fff351e6f635-public-tls-certs\") pod \"watcher-api-0\" (UID: \"859ad21b-442c-4e81-991c-fff351e6f635\") " pod="openstack/watcher-api-0" Dec 06 05:53:46 crc kubenswrapper[4958]: I1206 05:53:46.733242 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/859ad21b-442c-4e81-991c-fff351e6f635-custom-prometheus-ca\") pod \"watcher-api-0\" (UID: \"859ad21b-442c-4e81-991c-fff351e6f635\") " pod="openstack/watcher-api-0" Dec 06 05:53:46 crc kubenswrapper[4958]: I1206 05:53:46.733607 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/859ad21b-442c-4e81-991c-fff351e6f635-combined-ca-bundle\") pod \"watcher-api-0\" (UID: \"859ad21b-442c-4e81-991c-fff351e6f635\") " pod="openstack/watcher-api-0" Dec 06 05:53:46 crc kubenswrapper[4958]: I1206 05:53:46.733818 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/859ad21b-442c-4e81-991c-fff351e6f635-config-data\") pod \"watcher-api-0\" (UID: \"859ad21b-442c-4e81-991c-fff351e6f635\") " pod="openstack/watcher-api-0" Dec 06 05:53:46 crc kubenswrapper[4958]: I1206 05:53:46.745154 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5h2vb\" (UniqueName: \"kubernetes.io/projected/859ad21b-442c-4e81-991c-fff351e6f635-kube-api-access-5h2vb\") pod \"watcher-api-0\" (UID: \"859ad21b-442c-4e81-991c-fff351e6f635\") " pod="openstack/watcher-api-0" Dec 06 05:53:46 crc kubenswrapper[4958]: I1206 05:53:46.822017 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-api-0" Dec 06 05:53:46 crc kubenswrapper[4958]: I1206 05:53:46.881427 4958 scope.go:117] "RemoveContainer" containerID="ddb3fb6c1bb5d5ad6bfec1eea50c184f974552ba6daaee55ca00d66a765cec40" Dec 06 05:53:47 crc kubenswrapper[4958]: I1206 05:53:47.024455 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-wf7cs" Dec 06 05:53:47 crc kubenswrapper[4958]: I1206 05:53:47.141527 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/374b8326-0ba7-46d1-b438-85a5e865fdb5-credential-keys\") pod \"374b8326-0ba7-46d1-b438-85a5e865fdb5\" (UID: \"374b8326-0ba7-46d1-b438-85a5e865fdb5\") " Dec 06 05:53:47 crc kubenswrapper[4958]: I1206 05:53:47.141652 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/374b8326-0ba7-46d1-b438-85a5e865fdb5-fernet-keys\") pod \"374b8326-0ba7-46d1-b438-85a5e865fdb5\" (UID: \"374b8326-0ba7-46d1-b438-85a5e865fdb5\") " Dec 06 05:53:47 crc kubenswrapper[4958]: I1206 05:53:47.141693 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gwsfd\" (UniqueName: \"kubernetes.io/projected/374b8326-0ba7-46d1-b438-85a5e865fdb5-kube-api-access-gwsfd\") pod \"374b8326-0ba7-46d1-b438-85a5e865fdb5\" (UID: \"374b8326-0ba7-46d1-b438-85a5e865fdb5\") " Dec 06 05:53:47 crc kubenswrapper[4958]: I1206 05:53:47.141834 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/374b8326-0ba7-46d1-b438-85a5e865fdb5-scripts\") pod \"374b8326-0ba7-46d1-b438-85a5e865fdb5\" (UID: \"374b8326-0ba7-46d1-b438-85a5e865fdb5\") " Dec 06 05:53:47 crc kubenswrapper[4958]: I1206 05:53:47.141903 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/374b8326-0ba7-46d1-b438-85a5e865fdb5-config-data\") pod \"374b8326-0ba7-46d1-b438-85a5e865fdb5\" (UID: \"374b8326-0ba7-46d1-b438-85a5e865fdb5\") " Dec 06 05:53:47 crc kubenswrapper[4958]: I1206 05:53:47.141964 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/374b8326-0ba7-46d1-b438-85a5e865fdb5-combined-ca-bundle\") pod \"374b8326-0ba7-46d1-b438-85a5e865fdb5\" (UID: \"374b8326-0ba7-46d1-b438-85a5e865fdb5\") " Dec 06 05:53:47 crc kubenswrapper[4958]: I1206 05:53:47.150069 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/374b8326-0ba7-46d1-b438-85a5e865fdb5-scripts" (OuterVolumeSpecName: "scripts") pod "374b8326-0ba7-46d1-b438-85a5e865fdb5" (UID: "374b8326-0ba7-46d1-b438-85a5e865fdb5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:53:47 crc kubenswrapper[4958]: I1206 05:53:47.151601 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/374b8326-0ba7-46d1-b438-85a5e865fdb5-kube-api-access-gwsfd" (OuterVolumeSpecName: "kube-api-access-gwsfd") pod "374b8326-0ba7-46d1-b438-85a5e865fdb5" (UID: "374b8326-0ba7-46d1-b438-85a5e865fdb5"). InnerVolumeSpecName "kube-api-access-gwsfd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:53:47 crc kubenswrapper[4958]: I1206 05:53:47.151618 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/374b8326-0ba7-46d1-b438-85a5e865fdb5-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "374b8326-0ba7-46d1-b438-85a5e865fdb5" (UID: "374b8326-0ba7-46d1-b438-85a5e865fdb5"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:53:47 crc kubenswrapper[4958]: I1206 05:53:47.153134 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/374b8326-0ba7-46d1-b438-85a5e865fdb5-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "374b8326-0ba7-46d1-b438-85a5e865fdb5" (UID: "374b8326-0ba7-46d1-b438-85a5e865fdb5"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:53:47 crc kubenswrapper[4958]: I1206 05:53:47.171917 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/374b8326-0ba7-46d1-b438-85a5e865fdb5-config-data" (OuterVolumeSpecName: "config-data") pod "374b8326-0ba7-46d1-b438-85a5e865fdb5" (UID: "374b8326-0ba7-46d1-b438-85a5e865fdb5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:53:47 crc kubenswrapper[4958]: I1206 05:53:47.174590 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/374b8326-0ba7-46d1-b438-85a5e865fdb5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "374b8326-0ba7-46d1-b438-85a5e865fdb5" (UID: "374b8326-0ba7-46d1-b438-85a5e865fdb5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:53:47 crc kubenswrapper[4958]: I1206 05:53:47.246530 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/374b8326-0ba7-46d1-b438-85a5e865fdb5-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 05:53:47 crc kubenswrapper[4958]: I1206 05:53:47.246689 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/374b8326-0ba7-46d1-b438-85a5e865fdb5-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 05:53:47 crc kubenswrapper[4958]: I1206 05:53:47.246707 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/374b8326-0ba7-46d1-b438-85a5e865fdb5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 05:53:47 crc kubenswrapper[4958]: I1206 05:53:47.246722 4958 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/374b8326-0ba7-46d1-b438-85a5e865fdb5-credential-keys\") on node \"crc\" DevicePath \"\"" Dec 06 05:53:47 crc kubenswrapper[4958]: I1206 05:53:47.246733 4958 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/374b8326-0ba7-46d1-b438-85a5e865fdb5-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 06 05:53:47 crc kubenswrapper[4958]: I1206 05:53:47.246769 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gwsfd\" (UniqueName: \"kubernetes.io/projected/374b8326-0ba7-46d1-b438-85a5e865fdb5-kube-api-access-gwsfd\") on node \"crc\" DevicePath \"\"" Dec 06 05:53:47 crc kubenswrapper[4958]: I1206 05:53:47.396543 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-wf7cs" event={"ID":"374b8326-0ba7-46d1-b438-85a5e865fdb5","Type":"ContainerDied","Data":"f7ff93b5e1a100e7422acaf49e5fb9bc2b48c566ee69222d491ed2536ec7afdb"} Dec 06 05:53:47 crc kubenswrapper[4958]: I1206 05:53:47.396605 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f7ff93b5e1a100e7422acaf49e5fb9bc2b48c566ee69222d491ed2536ec7afdb" Dec 06 05:53:47 crc kubenswrapper[4958]: I1206 05:53:47.396679 4958 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-wf7cs" Dec 06 05:53:47 crc kubenswrapper[4958]: I1206 05:53:47.422828 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-api-0"] Dec 06 05:53:47 crc kubenswrapper[4958]: I1206 05:53:47.491555 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cdf79dfd7-5h4gn"] Dec 06 05:53:47 crc kubenswrapper[4958]: E1206 05:53:47.492027 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="374b8326-0ba7-46d1-b438-85a5e865fdb5" containerName="keystone-bootstrap" Dec 06 05:53:47 crc kubenswrapper[4958]: I1206 05:53:47.492048 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="374b8326-0ba7-46d1-b438-85a5e865fdb5" containerName="keystone-bootstrap" Dec 06 05:53:47 crc kubenswrapper[4958]: I1206 05:53:47.492232 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="374b8326-0ba7-46d1-b438-85a5e865fdb5" containerName="keystone-bootstrap" Dec 06 05:53:47 crc kubenswrapper[4958]: I1206 05:53:47.492927 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cdf79dfd7-5h4gn" Dec 06 05:53:47 crc kubenswrapper[4958]: I1206 05:53:47.496596 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 06 05:53:47 crc kubenswrapper[4958]: I1206 05:53:47.496773 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Dec 06 05:53:47 crc kubenswrapper[4958]: I1206 05:53:47.496938 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-mbqwc" Dec 06 05:53:47 crc kubenswrapper[4958]: I1206 05:53:47.497060 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 06 05:53:47 crc kubenswrapper[4958]: I1206 05:53:47.497103 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 06 05:53:47 crc kubenswrapper[4958]: I1206 05:53:47.497152 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Dec 06 05:53:47 crc kubenswrapper[4958]: I1206 05:53:47.499411 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cdf79dfd7-5h4gn"] Dec 06 05:53:47 crc kubenswrapper[4958]: I1206 05:53:47.652129 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/170ad5c2-4dbe-4cec-bd99-b6c3655a5d6a-internal-tls-certs\") pod \"keystone-cdf79dfd7-5h4gn\" (UID: \"170ad5c2-4dbe-4cec-bd99-b6c3655a5d6a\") " pod="openstack/keystone-cdf79dfd7-5h4gn" Dec 06 05:53:47 crc kubenswrapper[4958]: I1206 05:53:47.652556 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8nrq\" (UniqueName: \"kubernetes.io/projected/170ad5c2-4dbe-4cec-bd99-b6c3655a5d6a-kube-api-access-b8nrq\") pod \"keystone-cdf79dfd7-5h4gn\" (UID: \"170ad5c2-4dbe-4cec-bd99-b6c3655a5d6a\") " pod="openstack/keystone-cdf79dfd7-5h4gn" Dec 06 05:53:47 crc kubenswrapper[4958]: I1206 05:53:47.652606 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/170ad5c2-4dbe-4cec-bd99-b6c3655a5d6a-public-tls-certs\") pod \"keystone-cdf79dfd7-5h4gn\" (UID: \"170ad5c2-4dbe-4cec-bd99-b6c3655a5d6a\") " 
pod="openstack/keystone-cdf79dfd7-5h4gn" Dec 06 05:53:47 crc kubenswrapper[4958]: I1206 05:53:47.652635 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/170ad5c2-4dbe-4cec-bd99-b6c3655a5d6a-credential-keys\") pod \"keystone-cdf79dfd7-5h4gn\" (UID: \"170ad5c2-4dbe-4cec-bd99-b6c3655a5d6a\") " pod="openstack/keystone-cdf79dfd7-5h4gn" Dec 06 05:53:47 crc kubenswrapper[4958]: I1206 05:53:47.652664 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/170ad5c2-4dbe-4cec-bd99-b6c3655a5d6a-scripts\") pod \"keystone-cdf79dfd7-5h4gn\" (UID: \"170ad5c2-4dbe-4cec-bd99-b6c3655a5d6a\") " pod="openstack/keystone-cdf79dfd7-5h4gn" Dec 06 05:53:47 crc kubenswrapper[4958]: I1206 05:53:47.652804 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/170ad5c2-4dbe-4cec-bd99-b6c3655a5d6a-combined-ca-bundle\") pod \"keystone-cdf79dfd7-5h4gn\" (UID: \"170ad5c2-4dbe-4cec-bd99-b6c3655a5d6a\") " pod="openstack/keystone-cdf79dfd7-5h4gn" Dec 06 05:53:47 crc kubenswrapper[4958]: I1206 05:53:47.652838 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/170ad5c2-4dbe-4cec-bd99-b6c3655a5d6a-fernet-keys\") pod \"keystone-cdf79dfd7-5h4gn\" (UID: \"170ad5c2-4dbe-4cec-bd99-b6c3655a5d6a\") " pod="openstack/keystone-cdf79dfd7-5h4gn" Dec 06 05:53:47 crc kubenswrapper[4958]: I1206 05:53:47.652885 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/170ad5c2-4dbe-4cec-bd99-b6c3655a5d6a-config-data\") pod \"keystone-cdf79dfd7-5h4gn\" (UID: \"170ad5c2-4dbe-4cec-bd99-b6c3655a5d6a\") " pod="openstack/keystone-cdf79dfd7-5h4gn" Dec 06 05:53:47 crc kubenswrapper[4958]: I1206 05:53:47.755825 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/170ad5c2-4dbe-4cec-bd99-b6c3655a5d6a-combined-ca-bundle\") pod \"keystone-cdf79dfd7-5h4gn\" (UID: \"170ad5c2-4dbe-4cec-bd99-b6c3655a5d6a\") " pod="openstack/keystone-cdf79dfd7-5h4gn" Dec 06 05:53:47 crc kubenswrapper[4958]: I1206 05:53:47.755880 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/170ad5c2-4dbe-4cec-bd99-b6c3655a5d6a-fernet-keys\") pod \"keystone-cdf79dfd7-5h4gn\" (UID: \"170ad5c2-4dbe-4cec-bd99-b6c3655a5d6a\") " pod="openstack/keystone-cdf79dfd7-5h4gn" Dec 06 05:53:47 crc kubenswrapper[4958]: I1206 05:53:47.755943 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/170ad5c2-4dbe-4cec-bd99-b6c3655a5d6a-config-data\") pod \"keystone-cdf79dfd7-5h4gn\" (UID: \"170ad5c2-4dbe-4cec-bd99-b6c3655a5d6a\") " pod="openstack/keystone-cdf79dfd7-5h4gn" Dec 06 05:53:47 crc kubenswrapper[4958]: I1206 05:53:47.756001 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/170ad5c2-4dbe-4cec-bd99-b6c3655a5d6a-internal-tls-certs\") pod \"keystone-cdf79dfd7-5h4gn\" (UID: \"170ad5c2-4dbe-4cec-bd99-b6c3655a5d6a\") " pod="openstack/keystone-cdf79dfd7-5h4gn" Dec 06 05:53:47 crc 
kubenswrapper[4958]: I1206 05:53:47.756054 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8nrq\" (UniqueName: \"kubernetes.io/projected/170ad5c2-4dbe-4cec-bd99-b6c3655a5d6a-kube-api-access-b8nrq\") pod \"keystone-cdf79dfd7-5h4gn\" (UID: \"170ad5c2-4dbe-4cec-bd99-b6c3655a5d6a\") " pod="openstack/keystone-cdf79dfd7-5h4gn" Dec 06 05:53:47 crc kubenswrapper[4958]: I1206 05:53:47.756100 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/170ad5c2-4dbe-4cec-bd99-b6c3655a5d6a-public-tls-certs\") pod \"keystone-cdf79dfd7-5h4gn\" (UID: \"170ad5c2-4dbe-4cec-bd99-b6c3655a5d6a\") " pod="openstack/keystone-cdf79dfd7-5h4gn" Dec 06 05:53:47 crc kubenswrapper[4958]: I1206 05:53:47.756126 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/170ad5c2-4dbe-4cec-bd99-b6c3655a5d6a-credential-keys\") pod \"keystone-cdf79dfd7-5h4gn\" (UID: \"170ad5c2-4dbe-4cec-bd99-b6c3655a5d6a\") " pod="openstack/keystone-cdf79dfd7-5h4gn" Dec 06 05:53:47 crc kubenswrapper[4958]: I1206 05:53:47.756152 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/170ad5c2-4dbe-4cec-bd99-b6c3655a5d6a-scripts\") pod \"keystone-cdf79dfd7-5h4gn\" (UID: \"170ad5c2-4dbe-4cec-bd99-b6c3655a5d6a\") " pod="openstack/keystone-cdf79dfd7-5h4gn" Dec 06 05:53:47 crc kubenswrapper[4958]: I1206 05:53:47.760932 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/170ad5c2-4dbe-4cec-bd99-b6c3655a5d6a-public-tls-certs\") pod \"keystone-cdf79dfd7-5h4gn\" (UID: \"170ad5c2-4dbe-4cec-bd99-b6c3655a5d6a\") " pod="openstack/keystone-cdf79dfd7-5h4gn" Dec 06 05:53:47 crc kubenswrapper[4958]: I1206 05:53:47.761379 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/170ad5c2-4dbe-4cec-bd99-b6c3655a5d6a-config-data\") pod \"keystone-cdf79dfd7-5h4gn\" (UID: \"170ad5c2-4dbe-4cec-bd99-b6c3655a5d6a\") " pod="openstack/keystone-cdf79dfd7-5h4gn" Dec 06 05:53:47 crc kubenswrapper[4958]: I1206 05:53:47.761455 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/170ad5c2-4dbe-4cec-bd99-b6c3655a5d6a-credential-keys\") pod \"keystone-cdf79dfd7-5h4gn\" (UID: \"170ad5c2-4dbe-4cec-bd99-b6c3655a5d6a\") " pod="openstack/keystone-cdf79dfd7-5h4gn" Dec 06 05:53:47 crc kubenswrapper[4958]: I1206 05:53:47.763041 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/170ad5c2-4dbe-4cec-bd99-b6c3655a5d6a-fernet-keys\") pod \"keystone-cdf79dfd7-5h4gn\" (UID: \"170ad5c2-4dbe-4cec-bd99-b6c3655a5d6a\") " pod="openstack/keystone-cdf79dfd7-5h4gn" Dec 06 05:53:47 crc kubenswrapper[4958]: I1206 05:53:47.765651 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/170ad5c2-4dbe-4cec-bd99-b6c3655a5d6a-combined-ca-bundle\") pod \"keystone-cdf79dfd7-5h4gn\" (UID: \"170ad5c2-4dbe-4cec-bd99-b6c3655a5d6a\") " pod="openstack/keystone-cdf79dfd7-5h4gn" Dec 06 05:53:47 crc kubenswrapper[4958]: I1206 05:53:47.776097 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/170ad5c2-4dbe-4cec-bd99-b6c3655a5d6a-scripts\") pod \"keystone-cdf79dfd7-5h4gn\" (UID: \"170ad5c2-4dbe-4cec-bd99-b6c3655a5d6a\") " pod="openstack/keystone-cdf79dfd7-5h4gn" Dec 06 05:53:47 crc kubenswrapper[4958]: I1206 05:53:47.777872 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a21d87f-33a1-4e86-859f-03a2bace9908" path="/var/lib/kubelet/pods/5a21d87f-33a1-4e86-859f-03a2bace9908/volumes" Dec 06 05:53:47 crc kubenswrapper[4958]: I1206 05:53:47.778627 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5dfff6bd-6266-4c2b-87d4-b800bd9bbc51" path="/var/lib/kubelet/pods/5dfff6bd-6266-4c2b-87d4-b800bd9bbc51/volumes" Dec 06 05:53:47 crc kubenswrapper[4958]: I1206 05:53:47.778722 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/170ad5c2-4dbe-4cec-bd99-b6c3655a5d6a-internal-tls-certs\") pod \"keystone-cdf79dfd7-5h4gn\" (UID: \"170ad5c2-4dbe-4cec-bd99-b6c3655a5d6a\") " pod="openstack/keystone-cdf79dfd7-5h4gn" Dec 06 05:53:47 crc kubenswrapper[4958]: I1206 05:53:47.779202 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1a95265-5489-4db4-a45a-a17761dd8477" path="/var/lib/kubelet/pods/e1a95265-5489-4db4-a45a-a17761dd8477/volumes" Dec 06 05:53:47 crc kubenswrapper[4958]: I1206 05:53:47.788116 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8nrq\" (UniqueName: \"kubernetes.io/projected/170ad5c2-4dbe-4cec-bd99-b6c3655a5d6a-kube-api-access-b8nrq\") pod \"keystone-cdf79dfd7-5h4gn\" (UID: \"170ad5c2-4dbe-4cec-bd99-b6c3655a5d6a\") " pod="openstack/keystone-cdf79dfd7-5h4gn" Dec 06 05:53:47 crc kubenswrapper[4958]: I1206 05:53:47.847051 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cdf79dfd7-5h4gn" Dec 06 05:53:48 crc kubenswrapper[4958]: I1206 05:53:48.283142 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cdf79dfd7-5h4gn"] Dec 06 05:53:48 crc kubenswrapper[4958]: W1206 05:53:48.285064 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod170ad5c2_4dbe_4cec_bd99_b6c3655a5d6a.slice/crio-6e03b5bf19aefdd3e2c4add6d6235be451f54d610e87a55dbce49dcac9cd0295 WatchSource:0}: Error finding container 6e03b5bf19aefdd3e2c4add6d6235be451f54d610e87a55dbce49dcac9cd0295: Status 404 returned error can't find the container with id 6e03b5bf19aefdd3e2c4add6d6235be451f54d610e87a55dbce49dcac9cd0295 Dec 06 05:53:48 crc kubenswrapper[4958]: I1206 05:53:48.411045 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d6c878aa-473c-4e55-bbdc-29fee05ff3a3","Type":"ContainerStarted","Data":"3e50a49fc0a13e1f97ab9ad64a5ac0e8effd26e04f5b8b3c08455857413e0efc"} Dec 06 05:53:48 crc kubenswrapper[4958]: I1206 05:53:48.413884 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cdf79dfd7-5h4gn" event={"ID":"170ad5c2-4dbe-4cec-bd99-b6c3655a5d6a","Type":"ContainerStarted","Data":"6e03b5bf19aefdd3e2c4add6d6235be451f54d610e87a55dbce49dcac9cd0295"} Dec 06 05:53:48 crc kubenswrapper[4958]: I1206 05:53:48.415990 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b8f111e9-7ea4-4c7a-935a-95c9a90fea92","Type":"ContainerStarted","Data":"18311ad2f8453ee36fd33fe67edc2805b11fa1621c69a9b3a6a572fa6f26db45"} Dec 06 05:53:48 crc kubenswrapper[4958]: I1206 05:53:48.417617 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-btkdc" event={"ID":"83d76c67-52ad-480c-9f07-9620f6ed6a42","Type":"ContainerStarted","Data":"9c1397d07a3e63cbad57931dc557ad164fe1616cbc9f246ee594865fee46f07f"} Dec 06 05:53:48 crc kubenswrapper[4958]: I1206 05:53:48.419162 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"859ad21b-442c-4e81-991c-fff351e6f635","Type":"ContainerStarted","Data":"60d7fe0c24a654fb1e44cd54fe7d585ab9311cc4086f05bdff7f0306c5fd8fe1"} Dec 06 05:53:48 crc kubenswrapper[4958]: E1206 05:53:48.918027 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of dd358b0ecd436d9c9a2637581f113a5e82db356ffb0213e22b2442b68aa080e4 is running failed: container process not found" containerID="dd358b0ecd436d9c9a2637581f113a5e82db356ffb0213e22b2442b68aa080e4" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Dec 06 05:53:48 crc kubenswrapper[4958]: E1206 05:53:48.918527 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of dd358b0ecd436d9c9a2637581f113a5e82db356ffb0213e22b2442b68aa080e4 is running failed: container process not found" containerID="dd358b0ecd436d9c9a2637581f113a5e82db356ffb0213e22b2442b68aa080e4" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Dec 06 05:53:48 crc kubenswrapper[4958]: E1206 05:53:48.919006 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of dd358b0ecd436d9c9a2637581f113a5e82db356ffb0213e22b2442b68aa080e4 is running 
failed: container process not found" containerID="dd358b0ecd436d9c9a2637581f113a5e82db356ffb0213e22b2442b68aa080e4" cmd=["/usr/bin/pgrep","-r","DRST","watcher-applier"] Dec 06 05:53:48 crc kubenswrapper[4958]: E1206 05:53:48.919042 4958 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of dd358b0ecd436d9c9a2637581f113a5e82db356ffb0213e22b2442b68aa080e4 is running failed: container process not found" probeType="Readiness" pod="openstack/watcher-applier-0" podUID="92bfdbb2-cdd9-49b3-80cb-5aa52422d18e" containerName="watcher-applier" Dec 06 05:53:49 crc kubenswrapper[4958]: I1206 05:53:49.430779 4958 generic.go:334] "Generic (PLEG): container finished" podID="83d76c67-52ad-480c-9f07-9620f6ed6a42" containerID="9c1397d07a3e63cbad57931dc557ad164fe1616cbc9f246ee594865fee46f07f" exitCode=0 Dec 06 05:53:49 crc kubenswrapper[4958]: I1206 05:53:49.430833 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-btkdc" event={"ID":"83d76c67-52ad-480c-9f07-9620f6ed6a42","Type":"ContainerDied","Data":"9c1397d07a3e63cbad57931dc557ad164fe1616cbc9f246ee594865fee46f07f"} Dec 06 05:53:49 crc kubenswrapper[4958]: I1206 05:53:49.434070 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"859ad21b-442c-4e81-991c-fff351e6f635","Type":"ContainerStarted","Data":"67e048154b813fde4e3d3cd6c3c4885380c34762e179b222112bf0a090683ed2"} Dec 06 05:53:49 crc kubenswrapper[4958]: I1206 05:53:49.437663 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-2scr7" event={"ID":"00f464ea-7983-4ab2-b2b1-07bf67c76e31","Type":"ContainerStarted","Data":"0ee33f13d8a587dadbec3566136aeffe22163de9bdcb7be217d5121e270b9643"} Dec 06 05:53:49 crc kubenswrapper[4958]: I1206 05:53:49.440071 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cdf79dfd7-5h4gn" event={"ID":"170ad5c2-4dbe-4cec-bd99-b6c3655a5d6a","Type":"ContainerStarted","Data":"f0da6690b07fad5148b25bf973e0751acadf0eca927189833880b1fe3c526cf4"} Dec 06 05:53:49 crc kubenswrapper[4958]: I1206 05:53:49.446686 4958 generic.go:334] "Generic (PLEG): container finished" podID="e6eb8aab-9b25-4861-972a-b100ba14ab24" containerID="75dcd5f6a58bf6cb43a0b75909a5103cff83f1e8f7355740d2c6353495668b30" exitCode=137 Dec 06 05:53:49 crc kubenswrapper[4958]: I1206 05:53:49.446736 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"e6eb8aab-9b25-4861-972a-b100ba14ab24","Type":"ContainerDied","Data":"75dcd5f6a58bf6cb43a0b75909a5103cff83f1e8f7355740d2c6353495668b30"} Dec 06 05:53:49 crc kubenswrapper[4958]: I1206 05:53:49.448463 4958 generic.go:334] "Generic (PLEG): container finished" podID="92bfdbb2-cdd9-49b3-80cb-5aa52422d18e" containerID="dd358b0ecd436d9c9a2637581f113a5e82db356ffb0213e22b2442b68aa080e4" exitCode=137 Dec 06 05:53:49 crc kubenswrapper[4958]: I1206 05:53:49.450577 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"92bfdbb2-cdd9-49b3-80cb-5aa52422d18e","Type":"ContainerDied","Data":"dd358b0ecd436d9c9a2637581f113a5e82db356ffb0213e22b2442b68aa080e4"} Dec 06 05:53:49 crc kubenswrapper[4958]: I1206 05:53:49.483139 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=26.483117902 podStartE2EDuration="26.483117902s" podCreationTimestamp="2025-12-06 05:53:23 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:53:49.478047305 +0000 UTC m=+1540.011818088" watchObservedRunningTime="2025-12-06 05:53:49.483117902 +0000 UTC m=+1540.016888665" Dec 06 05:53:49 crc kubenswrapper[4958]: I1206 05:53:49.499684 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-2scr7" podStartSLOduration=6.605415676 podStartE2EDuration="1m16.499663867s" podCreationTimestamp="2025-12-06 05:52:33 +0000 UTC" firstStartedPulling="2025-12-06 05:52:35.104912711 +0000 UTC m=+1465.638683464" lastFinishedPulling="2025-12-06 05:53:44.999160892 +0000 UTC m=+1535.532931655" observedRunningTime="2025-12-06 05:53:49.495838063 +0000 UTC m=+1540.029608836" watchObservedRunningTime="2025-12-06 05:53:49.499663867 +0000 UTC m=+1540.033434630" Dec 06 05:53:50 crc kubenswrapper[4958]: I1206 05:53:50.459862 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-cdf79dfd7-5h4gn" Dec 06 05:53:50 crc kubenswrapper[4958]: I1206 05:53:50.488145 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cdf79dfd7-5h4gn" podStartSLOduration=3.488119863 podStartE2EDuration="3.488119863s" podCreationTimestamp="2025-12-06 05:53:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:53:50.480491867 +0000 UTC m=+1541.014262640" watchObservedRunningTime="2025-12-06 05:53:50.488119863 +0000 UTC m=+1541.021890666" Dec 06 05:53:51 crc kubenswrapper[4958]: I1206 05:53:51.505859 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-api-0" event={"ID":"859ad21b-442c-4e81-991c-fff351e6f635","Type":"ContainerStarted","Data":"23ef36b451121fae115e403425567b0cef6a6260d45be496fc52924fead7d340"} Dec 06 05:53:51 crc kubenswrapper[4958]: I1206 05:53:51.507734 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Dec 06 05:53:51 crc kubenswrapper[4958]: I1206 05:53:51.527402 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d6c878aa-473c-4e55-bbdc-29fee05ff3a3","Type":"ContainerStarted","Data":"40b1dc4197111e8b813842bb4fb3027f60c6121238deac95d6403eb96469739c"} Dec 06 05:53:51 crc kubenswrapper[4958]: I1206 05:53:51.543117 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-api-0" podStartSLOduration=5.543094117 podStartE2EDuration="5.543094117s" podCreationTimestamp="2025-12-06 05:53:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:53:51.528592838 +0000 UTC m=+1542.062363601" watchObservedRunningTime="2025-12-06 05:53:51.543094117 +0000 UTC m=+1542.076864880" Dec 06 05:53:51 crc kubenswrapper[4958]: I1206 05:53:51.551750 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=28.551731419 podStartE2EDuration="28.551731419s" podCreationTimestamp="2025-12-06 05:53:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:53:51.548889103 +0000 UTC m=+1542.082659856" watchObservedRunningTime="2025-12-06 05:53:51.551731419 +0000 UTC m=+1542.085502182" Dec 06 05:53:51 crc 
kubenswrapper[4958]: I1206 05:53:51.673086 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Dec 06 05:53:51 crc kubenswrapper[4958]: I1206 05:53:51.680288 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Dec 06 05:53:51 crc kubenswrapper[4958]: I1206 05:53:51.741514 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6eb8aab-9b25-4861-972a-b100ba14ab24-combined-ca-bundle\") pod \"e6eb8aab-9b25-4861-972a-b100ba14ab24\" (UID: \"e6eb8aab-9b25-4861-972a-b100ba14ab24\") " Dec 06 05:53:51 crc kubenswrapper[4958]: I1206 05:53:51.741645 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92bfdbb2-cdd9-49b3-80cb-5aa52422d18e-combined-ca-bundle\") pod \"92bfdbb2-cdd9-49b3-80cb-5aa52422d18e\" (UID: \"92bfdbb2-cdd9-49b3-80cb-5aa52422d18e\") " Dec 06 05:53:51 crc kubenswrapper[4958]: I1206 05:53:51.741679 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92bfdbb2-cdd9-49b3-80cb-5aa52422d18e-config-data\") pod \"92bfdbb2-cdd9-49b3-80cb-5aa52422d18e\" (UID: \"92bfdbb2-cdd9-49b3-80cb-5aa52422d18e\") " Dec 06 05:53:51 crc kubenswrapper[4958]: I1206 05:53:51.741730 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/92bfdbb2-cdd9-49b3-80cb-5aa52422d18e-logs\") pod \"92bfdbb2-cdd9-49b3-80cb-5aa52422d18e\" (UID: \"92bfdbb2-cdd9-49b3-80cb-5aa52422d18e\") " Dec 06 05:53:51 crc kubenswrapper[4958]: I1206 05:53:51.741786 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/e6eb8aab-9b25-4861-972a-b100ba14ab24-custom-prometheus-ca\") pod \"e6eb8aab-9b25-4861-972a-b100ba14ab24\" (UID: \"e6eb8aab-9b25-4861-972a-b100ba14ab24\") " Dec 06 05:53:51 crc kubenswrapper[4958]: I1206 05:53:51.741808 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6eb8aab-9b25-4861-972a-b100ba14ab24-config-data\") pod \"e6eb8aab-9b25-4861-972a-b100ba14ab24\" (UID: \"e6eb8aab-9b25-4861-972a-b100ba14ab24\") " Dec 06 05:53:51 crc kubenswrapper[4958]: I1206 05:53:51.741863 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-62lrs\" (UniqueName: \"kubernetes.io/projected/92bfdbb2-cdd9-49b3-80cb-5aa52422d18e-kube-api-access-62lrs\") pod \"92bfdbb2-cdd9-49b3-80cb-5aa52422d18e\" (UID: \"92bfdbb2-cdd9-49b3-80cb-5aa52422d18e\") " Dec 06 05:53:51 crc kubenswrapper[4958]: I1206 05:53:51.741899 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e6eb8aab-9b25-4861-972a-b100ba14ab24-logs\") pod \"e6eb8aab-9b25-4861-972a-b100ba14ab24\" (UID: \"e6eb8aab-9b25-4861-972a-b100ba14ab24\") " Dec 06 05:53:51 crc kubenswrapper[4958]: I1206 05:53:51.741936 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tsh7t\" (UniqueName: \"kubernetes.io/projected/e6eb8aab-9b25-4861-972a-b100ba14ab24-kube-api-access-tsh7t\") pod \"e6eb8aab-9b25-4861-972a-b100ba14ab24\" (UID: \"e6eb8aab-9b25-4861-972a-b100ba14ab24\") " Dec 06 05:53:51 crc 
kubenswrapper[4958]: I1206 05:53:51.742530 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92bfdbb2-cdd9-49b3-80cb-5aa52422d18e-logs" (OuterVolumeSpecName: "logs") pod "92bfdbb2-cdd9-49b3-80cb-5aa52422d18e" (UID: "92bfdbb2-cdd9-49b3-80cb-5aa52422d18e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 05:53:51 crc kubenswrapper[4958]: I1206 05:53:51.745934 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e6eb8aab-9b25-4861-972a-b100ba14ab24-logs" (OuterVolumeSpecName: "logs") pod "e6eb8aab-9b25-4861-972a-b100ba14ab24" (UID: "e6eb8aab-9b25-4861-972a-b100ba14ab24"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 05:53:51 crc kubenswrapper[4958]: I1206 05:53:51.748799 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6eb8aab-9b25-4861-972a-b100ba14ab24-kube-api-access-tsh7t" (OuterVolumeSpecName: "kube-api-access-tsh7t") pod "e6eb8aab-9b25-4861-972a-b100ba14ab24" (UID: "e6eb8aab-9b25-4861-972a-b100ba14ab24"). InnerVolumeSpecName "kube-api-access-tsh7t". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:53:51 crc kubenswrapper[4958]: I1206 05:53:51.749716 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92bfdbb2-cdd9-49b3-80cb-5aa52422d18e-kube-api-access-62lrs" (OuterVolumeSpecName: "kube-api-access-62lrs") pod "92bfdbb2-cdd9-49b3-80cb-5aa52422d18e" (UID: "92bfdbb2-cdd9-49b3-80cb-5aa52422d18e"). InnerVolumeSpecName "kube-api-access-62lrs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:53:51 crc kubenswrapper[4958]: I1206 05:53:51.785705 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6eb8aab-9b25-4861-972a-b100ba14ab24-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e6eb8aab-9b25-4861-972a-b100ba14ab24" (UID: "e6eb8aab-9b25-4861-972a-b100ba14ab24"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:53:51 crc kubenswrapper[4958]: I1206 05:53:51.797137 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92bfdbb2-cdd9-49b3-80cb-5aa52422d18e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "92bfdbb2-cdd9-49b3-80cb-5aa52422d18e" (UID: "92bfdbb2-cdd9-49b3-80cb-5aa52422d18e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:53:51 crc kubenswrapper[4958]: I1206 05:53:51.821775 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6eb8aab-9b25-4861-972a-b100ba14ab24-config-data" (OuterVolumeSpecName: "config-data") pod "e6eb8aab-9b25-4861-972a-b100ba14ab24" (UID: "e6eb8aab-9b25-4861-972a-b100ba14ab24"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:53:51 crc kubenswrapper[4958]: I1206 05:53:51.836944 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6eb8aab-9b25-4861-972a-b100ba14ab24-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "e6eb8aab-9b25-4861-972a-b100ba14ab24" (UID: "e6eb8aab-9b25-4861-972a-b100ba14ab24"). InnerVolumeSpecName "custom-prometheus-ca". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:53:51 crc kubenswrapper[4958]: I1206 05:53:51.841040 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-api-0" Dec 06 05:53:51 crc kubenswrapper[4958]: I1206 05:53:51.843715 4958 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/e6eb8aab-9b25-4861-972a-b100ba14ab24-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Dec 06 05:53:51 crc kubenswrapper[4958]: I1206 05:53:51.843744 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e6eb8aab-9b25-4861-972a-b100ba14ab24-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 05:53:51 crc kubenswrapper[4958]: I1206 05:53:51.843754 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-62lrs\" (UniqueName: \"kubernetes.io/projected/92bfdbb2-cdd9-49b3-80cb-5aa52422d18e-kube-api-access-62lrs\") on node \"crc\" DevicePath \"\"" Dec 06 05:53:51 crc kubenswrapper[4958]: I1206 05:53:51.843764 4958 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e6eb8aab-9b25-4861-972a-b100ba14ab24-logs\") on node \"crc\" DevicePath \"\"" Dec 06 05:53:51 crc kubenswrapper[4958]: I1206 05:53:51.843772 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tsh7t\" (UniqueName: \"kubernetes.io/projected/e6eb8aab-9b25-4861-972a-b100ba14ab24-kube-api-access-tsh7t\") on node \"crc\" DevicePath \"\"" Dec 06 05:53:51 crc kubenswrapper[4958]: I1206 05:53:51.843782 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6eb8aab-9b25-4861-972a-b100ba14ab24-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 05:53:51 crc kubenswrapper[4958]: I1206 05:53:51.843792 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92bfdbb2-cdd9-49b3-80cb-5aa52422d18e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 05:53:51 crc kubenswrapper[4958]: I1206 05:53:51.843800 4958 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/92bfdbb2-cdd9-49b3-80cb-5aa52422d18e-logs\") on node \"crc\" DevicePath \"\"" Dec 06 05:53:51 crc kubenswrapper[4958]: I1206 05:53:51.865047 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92bfdbb2-cdd9-49b3-80cb-5aa52422d18e-config-data" (OuterVolumeSpecName: "config-data") pod "92bfdbb2-cdd9-49b3-80cb-5aa52422d18e" (UID: "92bfdbb2-cdd9-49b3-80cb-5aa52422d18e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:53:51 crc kubenswrapper[4958]: I1206 05:53:51.945057 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/92bfdbb2-cdd9-49b3-80cb-5aa52422d18e-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 05:53:52 crc kubenswrapper[4958]: I1206 05:53:52.435966 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-776ddc8896-9vdrs" podUID="e3d20216-bf3b-43e0-b212-e05057a211fd" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.159:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.159:8443: connect: connection refused" Dec 06 05:53:52 crc kubenswrapper[4958]: I1206 05:53:52.576398 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"e6eb8aab-9b25-4861-972a-b100ba14ab24","Type":"ContainerDied","Data":"dc10c820657929c180d82d4fff7abc6475c692868711b8fba830665b18d376a5"} Dec 06 05:53:52 crc kubenswrapper[4958]: I1206 05:53:52.576462 4958 scope.go:117] "RemoveContainer" containerID="75dcd5f6a58bf6cb43a0b75909a5103cff83f1e8f7355740d2c6353495668b30" Dec 06 05:53:52 crc kubenswrapper[4958]: I1206 05:53:52.576609 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Dec 06 05:53:52 crc kubenswrapper[4958]: I1206 05:53:52.583263 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"92bfdbb2-cdd9-49b3-80cb-5aa52422d18e","Type":"ContainerDied","Data":"12b99619f6a130a37cda13a3b81c297cbf2cb61bccfee475929ae11e3a54aebc"} Dec 06 05:53:52 crc kubenswrapper[4958]: I1206 05:53:52.583689 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-applier-0" Dec 06 05:53:52 crc kubenswrapper[4958]: I1206 05:53:52.616856 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Dec 06 05:53:52 crc kubenswrapper[4958]: I1206 05:53:52.634807 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-decision-engine-0"] Dec 06 05:53:52 crc kubenswrapper[4958]: I1206 05:53:52.654060 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-applier-0"] Dec 06 05:53:52 crc kubenswrapper[4958]: I1206 05:53:52.672511 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-applier-0"] Dec 06 05:53:52 crc kubenswrapper[4958]: I1206 05:53:52.692527 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"] Dec 06 05:53:52 crc kubenswrapper[4958]: E1206 05:53:52.692950 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92bfdbb2-cdd9-49b3-80cb-5aa52422d18e" containerName="watcher-applier" Dec 06 05:53:52 crc kubenswrapper[4958]: I1206 05:53:52.692966 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="92bfdbb2-cdd9-49b3-80cb-5aa52422d18e" containerName="watcher-applier" Dec 06 05:53:52 crc kubenswrapper[4958]: E1206 05:53:52.692983 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6eb8aab-9b25-4861-972a-b100ba14ab24" containerName="watcher-decision-engine" Dec 06 05:53:52 crc kubenswrapper[4958]: I1206 05:53:52.692989 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6eb8aab-9b25-4861-972a-b100ba14ab24" containerName="watcher-decision-engine" Dec 06 05:53:52 crc kubenswrapper[4958]: I1206 05:53:52.693196 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="92bfdbb2-cdd9-49b3-80cb-5aa52422d18e" containerName="watcher-applier" Dec 06 05:53:52 crc kubenswrapper[4958]: I1206 05:53:52.693216 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6eb8aab-9b25-4861-972a-b100ba14ab24" containerName="watcher-decision-engine" Dec 06 05:53:52 crc kubenswrapper[4958]: I1206 05:53:52.693848 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Dec 06 05:53:52 crc kubenswrapper[4958]: I1206 05:53:52.699650 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data" Dec 06 05:53:52 crc kubenswrapper[4958]: I1206 05:53:52.700768 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-applier-0"] Dec 06 05:53:52 crc kubenswrapper[4958]: I1206 05:53:52.702657 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-applier-0" Dec 06 05:53:52 crc kubenswrapper[4958]: I1206 05:53:52.706164 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-applier-config-data" Dec 06 05:53:52 crc kubenswrapper[4958]: I1206 05:53:52.712966 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Dec 06 05:53:52 crc kubenswrapper[4958]: I1206 05:53:52.726244 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Dec 06 05:53:52 crc kubenswrapper[4958]: I1206 05:53:52.758128 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dcbe2099-3d41-4f69-be20-47d96498cb25-logs\") pod \"watcher-decision-engine-0\" (UID: \"dcbe2099-3d41-4f69-be20-47d96498cb25\") " pod="openstack/watcher-decision-engine-0" Dec 06 05:53:52 crc kubenswrapper[4958]: I1206 05:53:52.758221 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jz2tw\" (UniqueName: \"kubernetes.io/projected/dcbe2099-3d41-4f69-be20-47d96498cb25-kube-api-access-jz2tw\") pod \"watcher-decision-engine-0\" (UID: \"dcbe2099-3d41-4f69-be20-47d96498cb25\") " pod="openstack/watcher-decision-engine-0" Dec 06 05:53:52 crc kubenswrapper[4958]: I1206 05:53:52.758251 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcbe2099-3d41-4f69-be20-47d96498cb25-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"dcbe2099-3d41-4f69-be20-47d96498cb25\") " pod="openstack/watcher-decision-engine-0" Dec 06 05:53:52 crc kubenswrapper[4958]: I1206 05:53:52.758275 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55820dd9-6ca0-448b-8dc4-e92ddce617b7-config-data\") pod \"watcher-applier-0\" (UID: \"55820dd9-6ca0-448b-8dc4-e92ddce617b7\") " pod="openstack/watcher-applier-0" Dec 06 05:53:52 crc kubenswrapper[4958]: I1206 05:53:52.758293 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55820dd9-6ca0-448b-8dc4-e92ddce617b7-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"55820dd9-6ca0-448b-8dc4-e92ddce617b7\") " pod="openstack/watcher-applier-0" Dec 06 05:53:52 crc kubenswrapper[4958]: I1206 05:53:52.758314 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xr4k\" (UniqueName: \"kubernetes.io/projected/55820dd9-6ca0-448b-8dc4-e92ddce617b7-kube-api-access-8xr4k\") pod \"watcher-applier-0\" (UID: \"55820dd9-6ca0-448b-8dc4-e92ddce617b7\") " pod="openstack/watcher-applier-0" Dec 06 05:53:52 crc kubenswrapper[4958]: I1206 05:53:52.758340 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55820dd9-6ca0-448b-8dc4-e92ddce617b7-logs\") pod \"watcher-applier-0\" (UID: \"55820dd9-6ca0-448b-8dc4-e92ddce617b7\") " pod="openstack/watcher-applier-0" Dec 06 05:53:52 crc kubenswrapper[4958]: I1206 05:53:52.758385 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: 
\"kubernetes.io/secret/dcbe2099-3d41-4f69-be20-47d96498cb25-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"dcbe2099-3d41-4f69-be20-47d96498cb25\") " pod="openstack/watcher-decision-engine-0" Dec 06 05:53:52 crc kubenswrapper[4958]: I1206 05:53:52.758411 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcbe2099-3d41-4f69-be20-47d96498cb25-config-data\") pod \"watcher-decision-engine-0\" (UID: \"dcbe2099-3d41-4f69-be20-47d96498cb25\") " pod="openstack/watcher-decision-engine-0" Dec 06 05:53:52 crc kubenswrapper[4958]: I1206 05:53:52.860105 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dcbe2099-3d41-4f69-be20-47d96498cb25-logs\") pod \"watcher-decision-engine-0\" (UID: \"dcbe2099-3d41-4f69-be20-47d96498cb25\") " pod="openstack/watcher-decision-engine-0" Dec 06 05:53:52 crc kubenswrapper[4958]: I1206 05:53:52.860197 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jz2tw\" (UniqueName: \"kubernetes.io/projected/dcbe2099-3d41-4f69-be20-47d96498cb25-kube-api-access-jz2tw\") pod \"watcher-decision-engine-0\" (UID: \"dcbe2099-3d41-4f69-be20-47d96498cb25\") " pod="openstack/watcher-decision-engine-0" Dec 06 05:53:52 crc kubenswrapper[4958]: I1206 05:53:52.860231 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcbe2099-3d41-4f69-be20-47d96498cb25-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"dcbe2099-3d41-4f69-be20-47d96498cb25\") " pod="openstack/watcher-decision-engine-0" Dec 06 05:53:52 crc kubenswrapper[4958]: I1206 05:53:52.860282 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55820dd9-6ca0-448b-8dc4-e92ddce617b7-config-data\") pod \"watcher-applier-0\" (UID: \"55820dd9-6ca0-448b-8dc4-e92ddce617b7\") " pod="openstack/watcher-applier-0" Dec 06 05:53:52 crc kubenswrapper[4958]: I1206 05:53:52.860319 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55820dd9-6ca0-448b-8dc4-e92ddce617b7-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"55820dd9-6ca0-448b-8dc4-e92ddce617b7\") " pod="openstack/watcher-applier-0" Dec 06 05:53:52 crc kubenswrapper[4958]: I1206 05:53:52.860344 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xr4k\" (UniqueName: \"kubernetes.io/projected/55820dd9-6ca0-448b-8dc4-e92ddce617b7-kube-api-access-8xr4k\") pod \"watcher-applier-0\" (UID: \"55820dd9-6ca0-448b-8dc4-e92ddce617b7\") " pod="openstack/watcher-applier-0" Dec 06 05:53:52 crc kubenswrapper[4958]: I1206 05:53:52.860415 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55820dd9-6ca0-448b-8dc4-e92ddce617b7-logs\") pod \"watcher-applier-0\" (UID: \"55820dd9-6ca0-448b-8dc4-e92ddce617b7\") " pod="openstack/watcher-applier-0" Dec 06 05:53:52 crc kubenswrapper[4958]: I1206 05:53:52.860530 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/dcbe2099-3d41-4f69-be20-47d96498cb25-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: 
\"dcbe2099-3d41-4f69-be20-47d96498cb25\") " pod="openstack/watcher-decision-engine-0" Dec 06 05:53:52 crc kubenswrapper[4958]: I1206 05:53:52.860566 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcbe2099-3d41-4f69-be20-47d96498cb25-config-data\") pod \"watcher-decision-engine-0\" (UID: \"dcbe2099-3d41-4f69-be20-47d96498cb25\") " pod="openstack/watcher-decision-engine-0" Dec 06 05:53:52 crc kubenswrapper[4958]: I1206 05:53:52.861263 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55820dd9-6ca0-448b-8dc4-e92ddce617b7-logs\") pod \"watcher-applier-0\" (UID: \"55820dd9-6ca0-448b-8dc4-e92ddce617b7\") " pod="openstack/watcher-applier-0" Dec 06 05:53:52 crc kubenswrapper[4958]: I1206 05:53:52.861864 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dcbe2099-3d41-4f69-be20-47d96498cb25-logs\") pod \"watcher-decision-engine-0\" (UID: \"dcbe2099-3d41-4f69-be20-47d96498cb25\") " pod="openstack/watcher-decision-engine-0" Dec 06 05:53:52 crc kubenswrapper[4958]: I1206 05:53:52.870093 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcbe2099-3d41-4f69-be20-47d96498cb25-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"dcbe2099-3d41-4f69-be20-47d96498cb25\") " pod="openstack/watcher-decision-engine-0" Dec 06 05:53:52 crc kubenswrapper[4958]: I1206 05:53:52.870111 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/dcbe2099-3d41-4f69-be20-47d96498cb25-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"dcbe2099-3d41-4f69-be20-47d96498cb25\") " pod="openstack/watcher-decision-engine-0" Dec 06 05:53:52 crc kubenswrapper[4958]: I1206 05:53:52.870184 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/55820dd9-6ca0-448b-8dc4-e92ddce617b7-config-data\") pod \"watcher-applier-0\" (UID: \"55820dd9-6ca0-448b-8dc4-e92ddce617b7\") " pod="openstack/watcher-applier-0" Dec 06 05:53:52 crc kubenswrapper[4958]: I1206 05:53:52.880448 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jz2tw\" (UniqueName: \"kubernetes.io/projected/dcbe2099-3d41-4f69-be20-47d96498cb25-kube-api-access-jz2tw\") pod \"watcher-decision-engine-0\" (UID: \"dcbe2099-3d41-4f69-be20-47d96498cb25\") " pod="openstack/watcher-decision-engine-0" Dec 06 05:53:52 crc kubenswrapper[4958]: I1206 05:53:52.884163 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/55820dd9-6ca0-448b-8dc4-e92ddce617b7-combined-ca-bundle\") pod \"watcher-applier-0\" (UID: \"55820dd9-6ca0-448b-8dc4-e92ddce617b7\") " pod="openstack/watcher-applier-0" Dec 06 05:53:52 crc kubenswrapper[4958]: I1206 05:53:52.890573 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xr4k\" (UniqueName: \"kubernetes.io/projected/55820dd9-6ca0-448b-8dc4-e92ddce617b7-kube-api-access-8xr4k\") pod \"watcher-applier-0\" (UID: \"55820dd9-6ca0-448b-8dc4-e92ddce617b7\") " pod="openstack/watcher-applier-0" Dec 06 05:53:52 crc kubenswrapper[4958]: I1206 05:53:52.891559 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/dcbe2099-3d41-4f69-be20-47d96498cb25-config-data\") pod \"watcher-decision-engine-0\" (UID: \"dcbe2099-3d41-4f69-be20-47d96498cb25\") " pod="openstack/watcher-decision-engine-0" Dec 06 05:53:52 crc kubenswrapper[4958]: I1206 05:53:52.914607 4958 scope.go:117] "RemoveContainer" containerID="dd358b0ecd436d9c9a2637581f113a5e82db356ffb0213e22b2442b68aa080e4" Dec 06 05:53:53 crc kubenswrapper[4958]: I1206 05:53:53.017504 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Dec 06 05:53:53 crc kubenswrapper[4958]: I1206 05:53:53.028999 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-applier-0" Dec 06 05:53:53 crc kubenswrapper[4958]: I1206 05:53:53.581709 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Dec 06 05:53:53 crc kubenswrapper[4958]: W1206 05:53:53.591731 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddcbe2099_3d41_4f69_be20_47d96498cb25.slice/crio-7dcfd1cc1f7161dda42df8d8cfd542af228bef3e784b6a767cb4cdbb7097976d WatchSource:0}: Error finding container 7dcfd1cc1f7161dda42df8d8cfd542af228bef3e784b6a767cb4cdbb7097976d: Status 404 returned error can't find the container with id 7dcfd1cc1f7161dda42df8d8cfd542af228bef3e784b6a767cb4cdbb7097976d Dec 06 05:53:53 crc kubenswrapper[4958]: I1206 05:53:53.595938 4958 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 06 05:53:53 crc kubenswrapper[4958]: I1206 05:53:53.600552 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-applier-0"] Dec 06 05:53:53 crc kubenswrapper[4958]: W1206 05:53:53.607851 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod55820dd9_6ca0_448b_8dc4_e92ddce617b7.slice/crio-58e06950a985c9366fd8668954f279c74a19da6b76b1bc573a0f3cb0ce054579 WatchSource:0}: Error finding container 58e06950a985c9366fd8668954f279c74a19da6b76b1bc573a0f3cb0ce054579: Status 404 returned error can't find the container with id 58e06950a985c9366fd8668954f279c74a19da6b76b1bc573a0f3cb0ce054579 Dec 06 05:53:53 crc kubenswrapper[4958]: I1206 05:53:53.773981 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92bfdbb2-cdd9-49b3-80cb-5aa52422d18e" path="/var/lib/kubelet/pods/92bfdbb2-cdd9-49b3-80cb-5aa52422d18e/volumes" Dec 06 05:53:54 crc kubenswrapper[4958]: I1206 05:53:54.487922 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6eb8aab-9b25-4861-972a-b100ba14ab24" path="/var/lib/kubelet/pods/e6eb8aab-9b25-4861-972a-b100ba14ab24/volumes" Dec 06 05:53:54 crc kubenswrapper[4958]: I1206 05:53:54.488864 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 06 05:53:54 crc kubenswrapper[4958]: I1206 05:53:54.488900 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 06 05:53:54 crc kubenswrapper[4958]: I1206 05:53:54.488911 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 06 05:53:54 crc kubenswrapper[4958]: I1206 05:53:54.488922 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9s9jv"] Dec 06 05:53:54 crc kubenswrapper[4958]: I1206 
05:53:54.491080 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 06 05:53:54 crc kubenswrapper[4958]: I1206 05:53:54.491134 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9s9jv"] Dec 06 05:53:54 crc kubenswrapper[4958]: I1206 05:53:54.491154 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 06 05:53:54 crc kubenswrapper[4958]: I1206 05:53:54.491168 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 06 05:53:54 crc kubenswrapper[4958]: I1206 05:53:54.491177 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 06 05:53:54 crc kubenswrapper[4958]: I1206 05:53:54.491186 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 06 05:53:54 crc kubenswrapper[4958]: I1206 05:53:54.491196 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 06 05:53:54 crc kubenswrapper[4958]: I1206 05:53:54.491204 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 06 05:53:54 crc kubenswrapper[4958]: I1206 05:53:54.491277 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 06 05:53:54 crc kubenswrapper[4958]: I1206 05:53:54.491300 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Dec 06 05:53:54 crc kubenswrapper[4958]: I1206 05:53:54.491327 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 06 05:53:54 crc kubenswrapper[4958]: I1206 05:53:54.491396 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9s9jv" Dec 06 05:53:54 crc kubenswrapper[4958]: I1206 05:53:54.591524 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1865345c-50a3-47fe-90b6-ee8e165c2391-utilities\") pod \"community-operators-9s9jv\" (UID: \"1865345c-50a3-47fe-90b6-ee8e165c2391\") " pod="openshift-marketplace/community-operators-9s9jv" Dec 06 05:53:54 crc kubenswrapper[4958]: I1206 05:53:54.591686 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1865345c-50a3-47fe-90b6-ee8e165c2391-catalog-content\") pod \"community-operators-9s9jv\" (UID: \"1865345c-50a3-47fe-90b6-ee8e165c2391\") " pod="openshift-marketplace/community-operators-9s9jv" Dec 06 05:53:54 crc kubenswrapper[4958]: I1206 05:53:54.591755 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdq9f\" (UniqueName: \"kubernetes.io/projected/1865345c-50a3-47fe-90b6-ee8e165c2391-kube-api-access-vdq9f\") pod \"community-operators-9s9jv\" (UID: \"1865345c-50a3-47fe-90b6-ee8e165c2391\") " pod="openshift-marketplace/community-operators-9s9jv" Dec 06 05:53:54 crc kubenswrapper[4958]: I1206 05:53:54.607011 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"55820dd9-6ca0-448b-8dc4-e92ddce617b7","Type":"ContainerStarted","Data":"58e06950a985c9366fd8668954f279c74a19da6b76b1bc573a0f3cb0ce054579"} Dec 06 05:53:54 crc kubenswrapper[4958]: I1206 05:53:54.609854 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"dcbe2099-3d41-4f69-be20-47d96498cb25","Type":"ContainerStarted","Data":"7dcfd1cc1f7161dda42df8d8cfd542af228bef3e784b6a767cb4cdbb7097976d"} Dec 06 05:53:54 crc kubenswrapper[4958]: I1206 05:53:54.693538 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1865345c-50a3-47fe-90b6-ee8e165c2391-utilities\") pod \"community-operators-9s9jv\" (UID: \"1865345c-50a3-47fe-90b6-ee8e165c2391\") " pod="openshift-marketplace/community-operators-9s9jv" Dec 06 05:53:54 crc kubenswrapper[4958]: I1206 05:53:54.693674 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1865345c-50a3-47fe-90b6-ee8e165c2391-catalog-content\") pod \"community-operators-9s9jv\" (UID: \"1865345c-50a3-47fe-90b6-ee8e165c2391\") " pod="openshift-marketplace/community-operators-9s9jv" Dec 06 05:53:54 crc kubenswrapper[4958]: I1206 05:53:54.693725 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdq9f\" (UniqueName: \"kubernetes.io/projected/1865345c-50a3-47fe-90b6-ee8e165c2391-kube-api-access-vdq9f\") pod \"community-operators-9s9jv\" (UID: \"1865345c-50a3-47fe-90b6-ee8e165c2391\") " pod="openshift-marketplace/community-operators-9s9jv" Dec 06 05:53:54 crc kubenswrapper[4958]: I1206 05:53:54.694183 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1865345c-50a3-47fe-90b6-ee8e165c2391-utilities\") pod \"community-operators-9s9jv\" (UID: \"1865345c-50a3-47fe-90b6-ee8e165c2391\") " pod="openshift-marketplace/community-operators-9s9jv" Dec 06 05:53:54 crc kubenswrapper[4958]: 
I1206 05:53:54.694504 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1865345c-50a3-47fe-90b6-ee8e165c2391-catalog-content\") pod \"community-operators-9s9jv\" (UID: \"1865345c-50a3-47fe-90b6-ee8e165c2391\") " pod="openshift-marketplace/community-operators-9s9jv" Dec 06 05:53:54 crc kubenswrapper[4958]: I1206 05:53:54.721575 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdq9f\" (UniqueName: \"kubernetes.io/projected/1865345c-50a3-47fe-90b6-ee8e165c2391-kube-api-access-vdq9f\") pod \"community-operators-9s9jv\" (UID: \"1865345c-50a3-47fe-90b6-ee8e165c2391\") " pod="openshift-marketplace/community-operators-9s9jv" Dec 06 05:53:54 crc kubenswrapper[4958]: I1206 05:53:54.811105 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9s9jv" Dec 06 05:53:56 crc kubenswrapper[4958]: I1206 05:53:56.020621 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9s9jv"] Dec 06 05:53:56 crc kubenswrapper[4958]: I1206 05:53:56.644666 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-btkdc" event={"ID":"83d76c67-52ad-480c-9f07-9620f6ed6a42","Type":"ContainerStarted","Data":"1ffef4148dd9bc7bfc6cf96cb27788096f601ce115e56b4020889a6231909df0"} Dec 06 05:53:56 crc kubenswrapper[4958]: I1206 05:53:56.646004 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9s9jv" event={"ID":"1865345c-50a3-47fe-90b6-ee8e165c2391","Type":"ContainerStarted","Data":"83b7b984f3f5164d0ca3c0f023f0a292cadaa88eedf12eaaf9eac76315528726"} Dec 06 05:53:56 crc kubenswrapper[4958]: I1206 05:53:56.822881 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-api-0" Dec 06 05:53:56 crc kubenswrapper[4958]: I1206 05:53:56.833255 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-api-0" Dec 06 05:53:57 crc kubenswrapper[4958]: I1206 05:53:57.670690 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"dcbe2099-3d41-4f69-be20-47d96498cb25","Type":"ContainerStarted","Data":"4616aa5ceafa9d22dad86de24f0a7df5f712f14e053fa799b5b835b94b06efea"} Dec 06 05:53:57 crc kubenswrapper[4958]: I1206 05:53:57.681804 4958 generic.go:334] "Generic (PLEG): container finished" podID="83d76c67-52ad-480c-9f07-9620f6ed6a42" containerID="1ffef4148dd9bc7bfc6cf96cb27788096f601ce115e56b4020889a6231909df0" exitCode=0 Dec 06 05:53:57 crc kubenswrapper[4958]: I1206 05:53:57.681833 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-btkdc" event={"ID":"83d76c67-52ad-480c-9f07-9620f6ed6a42","Type":"ContainerDied","Data":"1ffef4148dd9bc7bfc6cf96cb27788096f601ce115e56b4020889a6231909df0"} Dec 06 05:53:57 crc kubenswrapper[4958]: I1206 05:53:57.699558 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-api-0" Dec 06 05:53:58 crc kubenswrapper[4958]: I1206 05:53:58.914871 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 06 05:53:59 crc kubenswrapper[4958]: I1206 05:53:59.144705 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 06 05:53:59 crc 
kubenswrapper[4958]: I1206 05:53:59.694754 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 06 05:53:59 crc kubenswrapper[4958]: I1206 05:53:59.710572 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-applier-0" event={"ID":"55820dd9-6ca0-448b-8dc4-e92ddce617b7","Type":"ContainerStarted","Data":"eaa28fbb4281edd25a56f2b1d7ffb99335ccdbe2f98b6f2fc30b699e8a84b1c7"} Dec 06 05:53:59 crc kubenswrapper[4958]: I1206 05:53:59.712413 4958 generic.go:334] "Generic (PLEG): container finished" podID="1865345c-50a3-47fe-90b6-ee8e165c2391" containerID="c02fff5e5fd62ea8937a5491ded90a4329dcfc03d6f6aff842cf616992432d9c" exitCode=0 Dec 06 05:53:59 crc kubenswrapper[4958]: I1206 05:53:59.712604 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9s9jv" event={"ID":"1865345c-50a3-47fe-90b6-ee8e165c2391","Type":"ContainerDied","Data":"c02fff5e5fd62ea8937a5491ded90a4329dcfc03d6f6aff842cf616992432d9c"} Dec 06 05:53:59 crc kubenswrapper[4958]: I1206 05:53:59.880835 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-applier-0" podStartSLOduration=7.88080854 podStartE2EDuration="7.88080854s" podCreationTimestamp="2025-12-06 05:53:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:53:59.824354192 +0000 UTC m=+1550.358124955" watchObservedRunningTime="2025-12-06 05:53:59.88080854 +0000 UTC m=+1550.414579303" Dec 06 05:53:59 crc kubenswrapper[4958]: I1206 05:53:59.922523 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=7.922502131 podStartE2EDuration="7.922502131s" podCreationTimestamp="2025-12-06 05:53:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:53:59.889644417 +0000 UTC m=+1550.423415200" watchObservedRunningTime="2025-12-06 05:53:59.922502131 +0000 UTC m=+1550.456272884" Dec 06 05:54:01 crc kubenswrapper[4958]: I1206 05:54:01.171575 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 06 05:54:02 crc kubenswrapper[4958]: I1206 05:54:02.434989 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-776ddc8896-9vdrs" podUID="e3d20216-bf3b-43e0-b212-e05057a211fd" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.159:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.159:8443: connect: connection refused" Dec 06 05:54:02 crc kubenswrapper[4958]: I1206 05:54:02.435099 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-776ddc8896-9vdrs" Dec 06 05:54:03 crc kubenswrapper[4958]: I1206 05:54:03.018880 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Dec 06 05:54:03 crc kubenswrapper[4958]: I1206 05:54:03.029631 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-applier-0" Dec 06 05:54:03 crc kubenswrapper[4958]: I1206 05:54:03.029763 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-applier-0" Dec 06 05:54:03 crc kubenswrapper[4958]: I1206 05:54:03.050159 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openstack/watcher-decision-engine-0" Dec 06 05:54:03 crc kubenswrapper[4958]: I1206 05:54:03.063232 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-applier-0" Dec 06 05:54:03 crc kubenswrapper[4958]: I1206 05:54:03.747441 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Dec 06 05:54:03 crc kubenswrapper[4958]: I1206 05:54:03.779999 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Dec 06 05:54:03 crc kubenswrapper[4958]: I1206 05:54:03.780039 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-applier-0" Dec 06 05:54:07 crc kubenswrapper[4958]: E1206 05:54:07.568214 4958 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/ubi9/httpd-24@sha256:6b929971283d69f485a7d3e449fb5a3dd65d5a4de585c73419e776821d00062c" Dec 06 05:54:07 crc kubenswrapper[4958]: E1206 05:54:07.569136 4958 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24@sha256:6b929971283d69f485a7d3e449fb5a3dd65d5a4de585c73419e776821d00062c,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r5tpw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(9664ab8f-eb78-4177-8847-54af6ae2fce5): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 06 05:54:07 crc kubenswrapper[4958]: E1206 05:54:07.570415 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"sg-core\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"]" pod="openstack/ceilometer-0" podUID="9664ab8f-eb78-4177-8847-54af6ae2fce5" Dec 06 05:54:07 crc kubenswrapper[4958]: I1206 05:54:07.786778 4958 generic.go:334] "Generic (PLEG): container finished" podID="e3d20216-bf3b-43e0-b212-e05057a211fd" containerID="df832c624dd77d9345ae9355d24eaaa0bdfbedb70e95b963907b7ae1722b9b0b" exitCode=137 Dec 06 05:54:07 crc kubenswrapper[4958]: I1206 05:54:07.787408 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9664ab8f-eb78-4177-8847-54af6ae2fce5" containerName="ceilometer-notification-agent" containerID="cri-o://2dde5f6c9c5df2e54ec2f9bb92e38f692bf32d7fbcc3ff87afd8b2726499226e" gracePeriod=30 Dec 06 05:54:07 crc kubenswrapper[4958]: I1206 05:54:07.788120 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-776ddc8896-9vdrs" event={"ID":"e3d20216-bf3b-43e0-b212-e05057a211fd","Type":"ContainerDied","Data":"df832c624dd77d9345ae9355d24eaaa0bdfbedb70e95b963907b7ae1722b9b0b"} Dec 06 05:54:07 crc kubenswrapper[4958]: I1206 05:54:07.910583 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-776ddc8896-9vdrs" Dec 06 05:54:07 crc kubenswrapper[4958]: I1206 05:54:07.990842 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e3d20216-bf3b-43e0-b212-e05057a211fd-logs\") pod \"e3d20216-bf3b-43e0-b212-e05057a211fd\" (UID: \"e3d20216-bf3b-43e0-b212-e05057a211fd\") " Dec 06 05:54:07 crc kubenswrapper[4958]: I1206 05:54:07.991184 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e3d20216-bf3b-43e0-b212-e05057a211fd-scripts\") pod \"e3d20216-bf3b-43e0-b212-e05057a211fd\" (UID: \"e3d20216-bf3b-43e0-b212-e05057a211fd\") " Dec 06 05:54:07 crc kubenswrapper[4958]: I1206 05:54:07.991303 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e3d20216-bf3b-43e0-b212-e05057a211fd-config-data\") pod \"e3d20216-bf3b-43e0-b212-e05057a211fd\" (UID: \"e3d20216-bf3b-43e0-b212-e05057a211fd\") " Dec 06 05:54:07 crc kubenswrapper[4958]: I1206 05:54:07.991377 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e3d20216-bf3b-43e0-b212-e05057a211fd-logs" (OuterVolumeSpecName: "logs") pod "e3d20216-bf3b-43e0-b212-e05057a211fd" (UID: "e3d20216-bf3b-43e0-b212-e05057a211fd"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 05:54:07 crc kubenswrapper[4958]: I1206 05:54:07.991501 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e3d20216-bf3b-43e0-b212-e05057a211fd-horizon-secret-key\") pod \"e3d20216-bf3b-43e0-b212-e05057a211fd\" (UID: \"e3d20216-bf3b-43e0-b212-e05057a211fd\") " Dec 06 05:54:07 crc kubenswrapper[4958]: I1206 05:54:07.991572 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w7ln\" (UniqueName: \"kubernetes.io/projected/e3d20216-bf3b-43e0-b212-e05057a211fd-kube-api-access-2w7ln\") pod \"e3d20216-bf3b-43e0-b212-e05057a211fd\" (UID: \"e3d20216-bf3b-43e0-b212-e05057a211fd\") " Dec 06 05:54:07 crc kubenswrapper[4958]: I1206 05:54:07.991638 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3d20216-bf3b-43e0-b212-e05057a211fd-horizon-tls-certs\") pod \"e3d20216-bf3b-43e0-b212-e05057a211fd\" (UID: \"e3d20216-bf3b-43e0-b212-e05057a211fd\") " Dec 06 05:54:07 crc kubenswrapper[4958]: I1206 05:54:07.991815 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3d20216-bf3b-43e0-b212-e05057a211fd-combined-ca-bundle\") pod \"e3d20216-bf3b-43e0-b212-e05057a211fd\" (UID: \"e3d20216-bf3b-43e0-b212-e05057a211fd\") " Dec 06 05:54:07 crc kubenswrapper[4958]: I1206 05:54:07.992287 4958 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e3d20216-bf3b-43e0-b212-e05057a211fd-logs\") on node \"crc\" DevicePath \"\"" Dec 06 05:54:08 crc kubenswrapper[4958]: I1206 05:54:08.013101 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3d20216-bf3b-43e0-b212-e05057a211fd-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "e3d20216-bf3b-43e0-b212-e05057a211fd" (UID: "e3d20216-bf3b-43e0-b212-e05057a211fd"). 
InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:54:08 crc kubenswrapper[4958]: I1206 05:54:08.014732 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3d20216-bf3b-43e0-b212-e05057a211fd-kube-api-access-2w7ln" (OuterVolumeSpecName: "kube-api-access-2w7ln") pod "e3d20216-bf3b-43e0-b212-e05057a211fd" (UID: "e3d20216-bf3b-43e0-b212-e05057a211fd"). InnerVolumeSpecName "kube-api-access-2w7ln". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:54:08 crc kubenswrapper[4958]: I1206 05:54:08.018788 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3d20216-bf3b-43e0-b212-e05057a211fd-scripts" (OuterVolumeSpecName: "scripts") pod "e3d20216-bf3b-43e0-b212-e05057a211fd" (UID: "e3d20216-bf3b-43e0-b212-e05057a211fd"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:54:08 crc kubenswrapper[4958]: I1206 05:54:08.020649 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3d20216-bf3b-43e0-b212-e05057a211fd-config-data" (OuterVolumeSpecName: "config-data") pod "e3d20216-bf3b-43e0-b212-e05057a211fd" (UID: "e3d20216-bf3b-43e0-b212-e05057a211fd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:54:08 crc kubenswrapper[4958]: I1206 05:54:08.026630 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3d20216-bf3b-43e0-b212-e05057a211fd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e3d20216-bf3b-43e0-b212-e05057a211fd" (UID: "e3d20216-bf3b-43e0-b212-e05057a211fd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:54:08 crc kubenswrapper[4958]: I1206 05:54:08.070781 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3d20216-bf3b-43e0-b212-e05057a211fd-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "e3d20216-bf3b-43e0-b212-e05057a211fd" (UID: "e3d20216-bf3b-43e0-b212-e05057a211fd"). InnerVolumeSpecName "horizon-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:54:08 crc kubenswrapper[4958]: I1206 05:54:08.095207 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e3d20216-bf3b-43e0-b212-e05057a211fd-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 05:54:08 crc kubenswrapper[4958]: I1206 05:54:08.095251 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e3d20216-bf3b-43e0-b212-e05057a211fd-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 05:54:08 crc kubenswrapper[4958]: I1206 05:54:08.095267 4958 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/e3d20216-bf3b-43e0-b212-e05057a211fd-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Dec 06 05:54:08 crc kubenswrapper[4958]: I1206 05:54:08.095283 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w7ln\" (UniqueName: \"kubernetes.io/projected/e3d20216-bf3b-43e0-b212-e05057a211fd-kube-api-access-2w7ln\") on node \"crc\" DevicePath \"\"" Dec 06 05:54:08 crc kubenswrapper[4958]: I1206 05:54:08.095295 4958 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/e3d20216-bf3b-43e0-b212-e05057a211fd-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 06 05:54:08 crc kubenswrapper[4958]: I1206 05:54:08.095306 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e3d20216-bf3b-43e0-b212-e05057a211fd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 05:54:08 crc kubenswrapper[4958]: I1206 05:54:08.797900 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-776ddc8896-9vdrs" event={"ID":"e3d20216-bf3b-43e0-b212-e05057a211fd","Type":"ContainerDied","Data":"407fad10dbaf6edd8dd67cd9c7398a121edd097d556954eaf2aa616524dda3eb"} Dec 06 05:54:08 crc kubenswrapper[4958]: I1206 05:54:08.800359 4958 scope.go:117] "RemoveContainer" containerID="42d22673a243c1cc6a53c4d422e4407a3f07c62add34feecb8b192fbcd4d95cb" Dec 06 05:54:08 crc kubenswrapper[4958]: I1206 05:54:08.798158 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-776ddc8896-9vdrs" Dec 06 05:54:08 crc kubenswrapper[4958]: I1206 05:54:08.814260 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-btkdc" event={"ID":"83d76c67-52ad-480c-9f07-9620f6ed6a42","Type":"ContainerStarted","Data":"810f8021bd3954d5718f6edc72365abd45900c1feec1bb146ac02670440c2609"} Dec 06 05:54:08 crc kubenswrapper[4958]: I1206 05:54:08.838253 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-btkdc" podStartSLOduration=21.517259795 podStartE2EDuration="39.838230104s" podCreationTimestamp="2025-12-06 05:53:29 +0000 UTC" firstStartedPulling="2025-12-06 05:53:49.432514251 +0000 UTC m=+1539.966285004" lastFinishedPulling="2025-12-06 05:54:07.75348455 +0000 UTC m=+1558.287255313" observedRunningTime="2025-12-06 05:54:08.833048295 +0000 UTC m=+1559.366819058" watchObservedRunningTime="2025-12-06 05:54:08.838230104 +0000 UTC m=+1559.372000867" Dec 06 05:54:08 crc kubenswrapper[4958]: I1206 05:54:08.860379 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-776ddc8896-9vdrs"] Dec 06 05:54:08 crc kubenswrapper[4958]: I1206 05:54:08.871762 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-776ddc8896-9vdrs"] Dec 06 05:54:08 crc kubenswrapper[4958]: I1206 05:54:08.987125 4958 scope.go:117] "RemoveContainer" containerID="df832c624dd77d9345ae9355d24eaaa0bdfbedb70e95b963907b7ae1722b9b0b" Dec 06 05:54:09 crc kubenswrapper[4958]: I1206 05:54:09.425083 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-btkdc" Dec 06 05:54:09 crc kubenswrapper[4958]: I1206 05:54:09.425144 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-btkdc" Dec 06 05:54:09 crc kubenswrapper[4958]: I1206 05:54:09.783678 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3d20216-bf3b-43e0-b212-e05057a211fd" path="/var/lib/kubelet/pods/e3d20216-bf3b-43e0-b212-e05057a211fd/volumes" Dec 06 05:54:09 crc kubenswrapper[4958]: I1206 05:54:09.830609 4958 generic.go:334] "Generic (PLEG): container finished" podID="9664ab8f-eb78-4177-8847-54af6ae2fce5" containerID="2dde5f6c9c5df2e54ec2f9bb92e38f692bf32d7fbcc3ff87afd8b2726499226e" exitCode=0 Dec 06 05:54:09 crc kubenswrapper[4958]: I1206 05:54:09.831583 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9664ab8f-eb78-4177-8847-54af6ae2fce5","Type":"ContainerDied","Data":"2dde5f6c9c5df2e54ec2f9bb92e38f692bf32d7fbcc3ff87afd8b2726499226e"} Dec 06 05:54:10 crc kubenswrapper[4958]: I1206 05:54:10.006464 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 06 05:54:10 crc kubenswrapper[4958]: I1206 05:54:10.039853 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9664ab8f-eb78-4177-8847-54af6ae2fce5-sg-core-conf-yaml\") pod \"9664ab8f-eb78-4177-8847-54af6ae2fce5\" (UID: \"9664ab8f-eb78-4177-8847-54af6ae2fce5\") " Dec 06 05:54:10 crc kubenswrapper[4958]: I1206 05:54:10.039933 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9664ab8f-eb78-4177-8847-54af6ae2fce5-log-httpd\") pod \"9664ab8f-eb78-4177-8847-54af6ae2fce5\" (UID: \"9664ab8f-eb78-4177-8847-54af6ae2fce5\") " Dec 06 05:54:10 crc kubenswrapper[4958]: I1206 05:54:10.039969 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9664ab8f-eb78-4177-8847-54af6ae2fce5-combined-ca-bundle\") pod \"9664ab8f-eb78-4177-8847-54af6ae2fce5\" (UID: \"9664ab8f-eb78-4177-8847-54af6ae2fce5\") " Dec 06 05:54:10 crc kubenswrapper[4958]: I1206 05:54:10.040010 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9664ab8f-eb78-4177-8847-54af6ae2fce5-run-httpd\") pod \"9664ab8f-eb78-4177-8847-54af6ae2fce5\" (UID: \"9664ab8f-eb78-4177-8847-54af6ae2fce5\") " Dec 06 05:54:10 crc kubenswrapper[4958]: I1206 05:54:10.040056 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9664ab8f-eb78-4177-8847-54af6ae2fce5-scripts\") pod \"9664ab8f-eb78-4177-8847-54af6ae2fce5\" (UID: \"9664ab8f-eb78-4177-8847-54af6ae2fce5\") " Dec 06 05:54:10 crc kubenswrapper[4958]: I1206 05:54:10.040080 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r5tpw\" (UniqueName: \"kubernetes.io/projected/9664ab8f-eb78-4177-8847-54af6ae2fce5-kube-api-access-r5tpw\") pod \"9664ab8f-eb78-4177-8847-54af6ae2fce5\" (UID: \"9664ab8f-eb78-4177-8847-54af6ae2fce5\") " Dec 06 05:54:10 crc kubenswrapper[4958]: I1206 05:54:10.040108 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9664ab8f-eb78-4177-8847-54af6ae2fce5-config-data\") pod \"9664ab8f-eb78-4177-8847-54af6ae2fce5\" (UID: \"9664ab8f-eb78-4177-8847-54af6ae2fce5\") " Dec 06 05:54:10 crc kubenswrapper[4958]: I1206 05:54:10.047590 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9664ab8f-eb78-4177-8847-54af6ae2fce5-kube-api-access-r5tpw" (OuterVolumeSpecName: "kube-api-access-r5tpw") pod "9664ab8f-eb78-4177-8847-54af6ae2fce5" (UID: "9664ab8f-eb78-4177-8847-54af6ae2fce5"). InnerVolumeSpecName "kube-api-access-r5tpw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:54:10 crc kubenswrapper[4958]: I1206 05:54:10.047860 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9664ab8f-eb78-4177-8847-54af6ae2fce5-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "9664ab8f-eb78-4177-8847-54af6ae2fce5" (UID: "9664ab8f-eb78-4177-8847-54af6ae2fce5"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 05:54:10 crc kubenswrapper[4958]: I1206 05:54:10.048043 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9664ab8f-eb78-4177-8847-54af6ae2fce5-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "9664ab8f-eb78-4177-8847-54af6ae2fce5" (UID: "9664ab8f-eb78-4177-8847-54af6ae2fce5"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 05:54:10 crc kubenswrapper[4958]: I1206 05:54:10.048188 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9664ab8f-eb78-4177-8847-54af6ae2fce5-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "9664ab8f-eb78-4177-8847-54af6ae2fce5" (UID: "9664ab8f-eb78-4177-8847-54af6ae2fce5"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:54:10 crc kubenswrapper[4958]: I1206 05:54:10.051122 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9664ab8f-eb78-4177-8847-54af6ae2fce5-scripts" (OuterVolumeSpecName: "scripts") pod "9664ab8f-eb78-4177-8847-54af6ae2fce5" (UID: "9664ab8f-eb78-4177-8847-54af6ae2fce5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:54:10 crc kubenswrapper[4958]: I1206 05:54:10.072827 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9664ab8f-eb78-4177-8847-54af6ae2fce5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9664ab8f-eb78-4177-8847-54af6ae2fce5" (UID: "9664ab8f-eb78-4177-8847-54af6ae2fce5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:54:10 crc kubenswrapper[4958]: I1206 05:54:10.074302 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9664ab8f-eb78-4177-8847-54af6ae2fce5-config-data" (OuterVolumeSpecName: "config-data") pod "9664ab8f-eb78-4177-8847-54af6ae2fce5" (UID: "9664ab8f-eb78-4177-8847-54af6ae2fce5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:54:10 crc kubenswrapper[4958]: I1206 05:54:10.143436 4958 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9664ab8f-eb78-4177-8847-54af6ae2fce5-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 06 05:54:10 crc kubenswrapper[4958]: I1206 05:54:10.143503 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9664ab8f-eb78-4177-8847-54af6ae2fce5-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 05:54:10 crc kubenswrapper[4958]: I1206 05:54:10.143519 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r5tpw\" (UniqueName: \"kubernetes.io/projected/9664ab8f-eb78-4177-8847-54af6ae2fce5-kube-api-access-r5tpw\") on node \"crc\" DevicePath \"\"" Dec 06 05:54:10 crc kubenswrapper[4958]: I1206 05:54:10.143538 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9664ab8f-eb78-4177-8847-54af6ae2fce5-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 05:54:10 crc kubenswrapper[4958]: I1206 05:54:10.143550 4958 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9664ab8f-eb78-4177-8847-54af6ae2fce5-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 06 05:54:10 crc kubenswrapper[4958]: I1206 05:54:10.143559 4958 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9664ab8f-eb78-4177-8847-54af6ae2fce5-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 06 05:54:10 crc kubenswrapper[4958]: I1206 05:54:10.143567 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9664ab8f-eb78-4177-8847-54af6ae2fce5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 05:54:10 crc kubenswrapper[4958]: I1206 05:54:10.478721 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-btkdc" podUID="83d76c67-52ad-480c-9f07-9620f6ed6a42" containerName="registry-server" probeResult="failure" output=< Dec 06 05:54:10 crc kubenswrapper[4958]: timeout: failed to connect service ":50051" within 1s Dec 06 05:54:10 crc kubenswrapper[4958]: > Dec 06 05:54:10 crc kubenswrapper[4958]: I1206 05:54:10.840996 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9664ab8f-eb78-4177-8847-54af6ae2fce5","Type":"ContainerDied","Data":"1c24d751e159f00588939c1816d45dd8ec03dab3fbf8924f1534cf14f9ec15bf"} Dec 06 05:54:10 crc kubenswrapper[4958]: I1206 05:54:10.841056 4958 scope.go:117] "RemoveContainer" containerID="2dde5f6c9c5df2e54ec2f9bb92e38f692bf32d7fbcc3ff87afd8b2726499226e" Dec 06 05:54:10 crc kubenswrapper[4958]: I1206 05:54:10.841247 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 06 05:54:10 crc kubenswrapper[4958]: I1206 05:54:10.901493 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 06 05:54:10 crc kubenswrapper[4958]: I1206 05:54:10.908241 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 06 05:54:10 crc kubenswrapper[4958]: I1206 05:54:10.927519 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 06 05:54:10 crc kubenswrapper[4958]: E1206 05:54:10.927842 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9664ab8f-eb78-4177-8847-54af6ae2fce5" containerName="ceilometer-notification-agent" Dec 06 05:54:10 crc kubenswrapper[4958]: I1206 05:54:10.927855 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="9664ab8f-eb78-4177-8847-54af6ae2fce5" containerName="ceilometer-notification-agent" Dec 06 05:54:10 crc kubenswrapper[4958]: E1206 05:54:10.927882 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3d20216-bf3b-43e0-b212-e05057a211fd" containerName="horizon-log" Dec 06 05:54:10 crc kubenswrapper[4958]: I1206 05:54:10.927887 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3d20216-bf3b-43e0-b212-e05057a211fd" containerName="horizon-log" Dec 06 05:54:10 crc kubenswrapper[4958]: E1206 05:54:10.927904 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3d20216-bf3b-43e0-b212-e05057a211fd" containerName="horizon" Dec 06 05:54:10 crc kubenswrapper[4958]: I1206 05:54:10.927910 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3d20216-bf3b-43e0-b212-e05057a211fd" containerName="horizon" Dec 06 05:54:10 crc kubenswrapper[4958]: I1206 05:54:10.928063 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3d20216-bf3b-43e0-b212-e05057a211fd" containerName="horizon-log" Dec 06 05:54:10 crc kubenswrapper[4958]: I1206 05:54:10.928076 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3d20216-bf3b-43e0-b212-e05057a211fd" containerName="horizon" Dec 06 05:54:10 crc kubenswrapper[4958]: I1206 05:54:10.928093 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="9664ab8f-eb78-4177-8847-54af6ae2fce5" containerName="ceilometer-notification-agent" Dec 06 05:54:10 crc kubenswrapper[4958]: I1206 05:54:10.929819 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 06 05:54:10 crc kubenswrapper[4958]: I1206 05:54:10.931655 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 06 05:54:10 crc kubenswrapper[4958]: I1206 05:54:10.933920 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 06 05:54:10 crc kubenswrapper[4958]: I1206 05:54:10.960708 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6176dbaf-8ccf-41de-8d4b-0b5ac9328569-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6176dbaf-8ccf-41de-8d4b-0b5ac9328569\") " pod="openstack/ceilometer-0" Dec 06 05:54:10 crc kubenswrapper[4958]: I1206 05:54:10.960755 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6176dbaf-8ccf-41de-8d4b-0b5ac9328569-config-data\") pod \"ceilometer-0\" (UID: \"6176dbaf-8ccf-41de-8d4b-0b5ac9328569\") " pod="openstack/ceilometer-0" Dec 06 05:54:10 crc kubenswrapper[4958]: I1206 05:54:10.960776 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6176dbaf-8ccf-41de-8d4b-0b5ac9328569-run-httpd\") pod \"ceilometer-0\" (UID: \"6176dbaf-8ccf-41de-8d4b-0b5ac9328569\") " pod="openstack/ceilometer-0" Dec 06 05:54:10 crc kubenswrapper[4958]: I1206 05:54:10.960793 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6176dbaf-8ccf-41de-8d4b-0b5ac9328569-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6176dbaf-8ccf-41de-8d4b-0b5ac9328569\") " pod="openstack/ceilometer-0" Dec 06 05:54:10 crc kubenswrapper[4958]: I1206 05:54:10.960818 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kn9rg\" (UniqueName: \"kubernetes.io/projected/6176dbaf-8ccf-41de-8d4b-0b5ac9328569-kube-api-access-kn9rg\") pod \"ceilometer-0\" (UID: \"6176dbaf-8ccf-41de-8d4b-0b5ac9328569\") " pod="openstack/ceilometer-0" Dec 06 05:54:10 crc kubenswrapper[4958]: I1206 05:54:10.960842 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6176dbaf-8ccf-41de-8d4b-0b5ac9328569-scripts\") pod \"ceilometer-0\" (UID: \"6176dbaf-8ccf-41de-8d4b-0b5ac9328569\") " pod="openstack/ceilometer-0" Dec 06 05:54:10 crc kubenswrapper[4958]: I1206 05:54:10.960887 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6176dbaf-8ccf-41de-8d4b-0b5ac9328569-log-httpd\") pod \"ceilometer-0\" (UID: \"6176dbaf-8ccf-41de-8d4b-0b5ac9328569\") " pod="openstack/ceilometer-0" Dec 06 05:54:10 crc kubenswrapper[4958]: I1206 05:54:10.965815 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 06 05:54:11 crc kubenswrapper[4958]: I1206 05:54:11.062417 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6176dbaf-8ccf-41de-8d4b-0b5ac9328569-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6176dbaf-8ccf-41de-8d4b-0b5ac9328569\") " pod="openstack/ceilometer-0" Dec 06 05:54:11 crc kubenswrapper[4958]: I1206 
05:54:11.062500 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6176dbaf-8ccf-41de-8d4b-0b5ac9328569-config-data\") pod \"ceilometer-0\" (UID: \"6176dbaf-8ccf-41de-8d4b-0b5ac9328569\") " pod="openstack/ceilometer-0" Dec 06 05:54:11 crc kubenswrapper[4958]: I1206 05:54:11.062529 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6176dbaf-8ccf-41de-8d4b-0b5ac9328569-run-httpd\") pod \"ceilometer-0\" (UID: \"6176dbaf-8ccf-41de-8d4b-0b5ac9328569\") " pod="openstack/ceilometer-0" Dec 06 05:54:11 crc kubenswrapper[4958]: I1206 05:54:11.062547 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6176dbaf-8ccf-41de-8d4b-0b5ac9328569-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6176dbaf-8ccf-41de-8d4b-0b5ac9328569\") " pod="openstack/ceilometer-0" Dec 06 05:54:11 crc kubenswrapper[4958]: I1206 05:54:11.062575 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kn9rg\" (UniqueName: \"kubernetes.io/projected/6176dbaf-8ccf-41de-8d4b-0b5ac9328569-kube-api-access-kn9rg\") pod \"ceilometer-0\" (UID: \"6176dbaf-8ccf-41de-8d4b-0b5ac9328569\") " pod="openstack/ceilometer-0" Dec 06 05:54:11 crc kubenswrapper[4958]: I1206 05:54:11.062613 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6176dbaf-8ccf-41de-8d4b-0b5ac9328569-scripts\") pod \"ceilometer-0\" (UID: \"6176dbaf-8ccf-41de-8d4b-0b5ac9328569\") " pod="openstack/ceilometer-0" Dec 06 05:54:11 crc kubenswrapper[4958]: I1206 05:54:11.062654 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6176dbaf-8ccf-41de-8d4b-0b5ac9328569-log-httpd\") pod \"ceilometer-0\" (UID: \"6176dbaf-8ccf-41de-8d4b-0b5ac9328569\") " pod="openstack/ceilometer-0" Dec 06 05:54:11 crc kubenswrapper[4958]: I1206 05:54:11.063213 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6176dbaf-8ccf-41de-8d4b-0b5ac9328569-log-httpd\") pod \"ceilometer-0\" (UID: \"6176dbaf-8ccf-41de-8d4b-0b5ac9328569\") " pod="openstack/ceilometer-0" Dec 06 05:54:11 crc kubenswrapper[4958]: I1206 05:54:11.063325 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6176dbaf-8ccf-41de-8d4b-0b5ac9328569-run-httpd\") pod \"ceilometer-0\" (UID: \"6176dbaf-8ccf-41de-8d4b-0b5ac9328569\") " pod="openstack/ceilometer-0" Dec 06 05:54:11 crc kubenswrapper[4958]: I1206 05:54:11.068938 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6176dbaf-8ccf-41de-8d4b-0b5ac9328569-scripts\") pod \"ceilometer-0\" (UID: \"6176dbaf-8ccf-41de-8d4b-0b5ac9328569\") " pod="openstack/ceilometer-0" Dec 06 05:54:11 crc kubenswrapper[4958]: I1206 05:54:11.069349 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6176dbaf-8ccf-41de-8d4b-0b5ac9328569-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"6176dbaf-8ccf-41de-8d4b-0b5ac9328569\") " pod="openstack/ceilometer-0" Dec 06 05:54:11 crc kubenswrapper[4958]: I1206 05:54:11.069378 4958 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6176dbaf-8ccf-41de-8d4b-0b5ac9328569-config-data\") pod \"ceilometer-0\" (UID: \"6176dbaf-8ccf-41de-8d4b-0b5ac9328569\") " pod="openstack/ceilometer-0" Dec 06 05:54:11 crc kubenswrapper[4958]: I1206 05:54:11.078160 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6176dbaf-8ccf-41de-8d4b-0b5ac9328569-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"6176dbaf-8ccf-41de-8d4b-0b5ac9328569\") " pod="openstack/ceilometer-0" Dec 06 05:54:11 crc kubenswrapper[4958]: I1206 05:54:11.086959 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kn9rg\" (UniqueName: \"kubernetes.io/projected/6176dbaf-8ccf-41de-8d4b-0b5ac9328569-kube-api-access-kn9rg\") pod \"ceilometer-0\" (UID: \"6176dbaf-8ccf-41de-8d4b-0b5ac9328569\") " pod="openstack/ceilometer-0" Dec 06 05:54:11 crc kubenswrapper[4958]: I1206 05:54:11.248340 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 06 05:54:11 crc kubenswrapper[4958]: I1206 05:54:11.773353 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9664ab8f-eb78-4177-8847-54af6ae2fce5" path="/var/lib/kubelet/pods/9664ab8f-eb78-4177-8847-54af6ae2fce5/volumes" Dec 06 05:54:16 crc kubenswrapper[4958]: I1206 05:54:16.613547 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 06 05:54:16 crc kubenswrapper[4958]: W1206 05:54:16.625081 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6176dbaf_8ccf_41de_8d4b_0b5ac9328569.slice/crio-21296cba31780c1eeac583151ff6fd4e0a612fbad5551a04efaaf17e10d5bfdc WatchSource:0}: Error finding container 21296cba31780c1eeac583151ff6fd4e0a612fbad5551a04efaaf17e10d5bfdc: Status 404 returned error can't find the container with id 21296cba31780c1eeac583151ff6fd4e0a612fbad5551a04efaaf17e10d5bfdc Dec 06 05:54:16 crc kubenswrapper[4958]: I1206 05:54:16.907775 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6176dbaf-8ccf-41de-8d4b-0b5ac9328569","Type":"ContainerStarted","Data":"21296cba31780c1eeac583151ff6fd4e0a612fbad5551a04efaaf17e10d5bfdc"} Dec 06 05:54:16 crc kubenswrapper[4958]: I1206 05:54:16.913507 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9s9jv" event={"ID":"1865345c-50a3-47fe-90b6-ee8e165c2391","Type":"ContainerStarted","Data":"cc11b8f30f1c5734851984fad09b0b26fbe3a1416bd6f80b64eef4793535ebb1"} Dec 06 05:54:17 crc kubenswrapper[4958]: I1206 05:54:17.938983 4958 generic.go:334] "Generic (PLEG): container finished" podID="1865345c-50a3-47fe-90b6-ee8e165c2391" containerID="cc11b8f30f1c5734851984fad09b0b26fbe3a1416bd6f80b64eef4793535ebb1" exitCode=0 Dec 06 05:54:17 crc kubenswrapper[4958]: I1206 05:54:17.939156 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9s9jv" event={"ID":"1865345c-50a3-47fe-90b6-ee8e165c2391","Type":"ContainerDied","Data":"cc11b8f30f1c5734851984fad09b0b26fbe3a1416bd6f80b64eef4793535ebb1"} Dec 06 05:54:17 crc kubenswrapper[4958]: I1206 05:54:17.943261 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6176dbaf-8ccf-41de-8d4b-0b5ac9328569","Type":"ContainerStarted","Data":"004bb2fd7f87e2e15fac9ea5bf183f7d1f2234ecc5ce5f35eadd3cf7a3ee67a0"} Dec 06 
05:54:20 crc kubenswrapper[4958]: I1206 05:54:20.483531 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-btkdc" podUID="83d76c67-52ad-480c-9f07-9620f6ed6a42" containerName="registry-server" probeResult="failure" output=< Dec 06 05:54:20 crc kubenswrapper[4958]: timeout: failed to connect service ":50051" within 1s Dec 06 05:54:20 crc kubenswrapper[4958]: > Dec 06 05:54:20 crc kubenswrapper[4958]: I1206 05:54:20.975111 4958 generic.go:334] "Generic (PLEG): container finished" podID="dcbe2099-3d41-4f69-be20-47d96498cb25" containerID="4616aa5ceafa9d22dad86de24f0a7df5f712f14e053fa799b5b835b94b06efea" exitCode=1 Dec 06 05:54:20 crc kubenswrapper[4958]: I1206 05:54:20.975164 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"dcbe2099-3d41-4f69-be20-47d96498cb25","Type":"ContainerDied","Data":"4616aa5ceafa9d22dad86de24f0a7df5f712f14e053fa799b5b835b94b06efea"} Dec 06 05:54:20 crc kubenswrapper[4958]: I1206 05:54:20.976075 4958 scope.go:117] "RemoveContainer" containerID="4616aa5ceafa9d22dad86de24f0a7df5f712f14e053fa799b5b835b94b06efea" Dec 06 05:54:21 crc kubenswrapper[4958]: I1206 05:54:21.985045 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"dcbe2099-3d41-4f69-be20-47d96498cb25","Type":"ContainerStarted","Data":"5accb9146183b617214aef75ec6ce06d37c205b60714d23c9c3badf2f7c49850"} Dec 06 05:54:23 crc kubenswrapper[4958]: I1206 05:54:23.017951 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Dec 06 05:54:23 crc kubenswrapper[4958]: I1206 05:54:23.018334 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Dec 06 05:54:23 crc kubenswrapper[4958]: I1206 05:54:23.074461 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Dec 06 05:54:23 crc kubenswrapper[4958]: I1206 05:54:23.385293 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-cdf79dfd7-5h4gn" Dec 06 05:54:24 crc kubenswrapper[4958]: I1206 05:54:24.031793 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Dec 06 05:54:29 crc kubenswrapper[4958]: I1206 05:54:29.731017 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="0523eb0f-9fe1-49d4-a3b4-6a872317c136" containerName="galera" probeResult="failure" output="command timed out" Dec 06 05:54:29 crc kubenswrapper[4958]: I1206 05:54:29.732592 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="0523eb0f-9fe1-49d4-a3b4-6a872317c136" containerName="galera" probeResult="failure" output="command timed out" Dec 06 05:54:29 crc kubenswrapper[4958]: I1206 05:54:29.776161 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Dec 06 05:54:29 crc kubenswrapper[4958]: I1206 05:54:29.777259 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Dec 06 05:54:29 crc kubenswrapper[4958]: I1206 05:54:29.778328 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Dec 06 05:54:29 crc kubenswrapper[4958]: I1206 05:54:29.781266 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Dec 06 05:54:29 crc kubenswrapper[4958]: I1206 05:54:29.781668 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-29t88" Dec 06 05:54:29 crc kubenswrapper[4958]: I1206 05:54:29.781827 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Dec 06 05:54:29 crc kubenswrapper[4958]: I1206 05:54:29.916439 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pqpj\" (UniqueName: \"kubernetes.io/projected/5e310020-e259-46c5-8928-f587abdf0577-kube-api-access-7pqpj\") pod \"openstackclient\" (UID: \"5e310020-e259-46c5-8928-f587abdf0577\") " pod="openstack/openstackclient" Dec 06 05:54:29 crc kubenswrapper[4958]: I1206 05:54:29.916521 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5e310020-e259-46c5-8928-f587abdf0577-openstack-config-secret\") pod \"openstackclient\" (UID: \"5e310020-e259-46c5-8928-f587abdf0577\") " pod="openstack/openstackclient" Dec 06 05:54:29 crc kubenswrapper[4958]: I1206 05:54:29.916696 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5e310020-e259-46c5-8928-f587abdf0577-openstack-config\") pod \"openstackclient\" (UID: \"5e310020-e259-46c5-8928-f587abdf0577\") " pod="openstack/openstackclient" Dec 06 05:54:29 crc kubenswrapper[4958]: I1206 05:54:29.916731 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e310020-e259-46c5-8928-f587abdf0577-combined-ca-bundle\") pod \"openstackclient\" (UID: \"5e310020-e259-46c5-8928-f587abdf0577\") " pod="openstack/openstackclient" Dec 06 05:54:30 crc kubenswrapper[4958]: I1206 05:54:30.017949 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7pqpj\" (UniqueName: \"kubernetes.io/projected/5e310020-e259-46c5-8928-f587abdf0577-kube-api-access-7pqpj\") pod \"openstackclient\" (UID: \"5e310020-e259-46c5-8928-f587abdf0577\") " pod="openstack/openstackclient" Dec 06 05:54:30 crc kubenswrapper[4958]: I1206 05:54:30.018021 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5e310020-e259-46c5-8928-f587abdf0577-openstack-config-secret\") pod \"openstackclient\" (UID: \"5e310020-e259-46c5-8928-f587abdf0577\") " pod="openstack/openstackclient" Dec 06 05:54:30 crc kubenswrapper[4958]: I1206 05:54:30.018143 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5e310020-e259-46c5-8928-f587abdf0577-openstack-config\") pod \"openstackclient\" (UID: \"5e310020-e259-46c5-8928-f587abdf0577\") " pod="openstack/openstackclient" Dec 06 05:54:30 crc kubenswrapper[4958]: I1206 05:54:30.018179 4958 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e310020-e259-46c5-8928-f587abdf0577-combined-ca-bundle\") pod \"openstackclient\" (UID: \"5e310020-e259-46c5-8928-f587abdf0577\") " pod="openstack/openstackclient" Dec 06 05:54:30 crc kubenswrapper[4958]: I1206 05:54:30.020273 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/5e310020-e259-46c5-8928-f587abdf0577-openstack-config\") pod \"openstackclient\" (UID: \"5e310020-e259-46c5-8928-f587abdf0577\") " pod="openstack/openstackclient" Dec 06 05:54:30 crc kubenswrapper[4958]: I1206 05:54:30.024901 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/5e310020-e259-46c5-8928-f587abdf0577-openstack-config-secret\") pod \"openstackclient\" (UID: \"5e310020-e259-46c5-8928-f587abdf0577\") " pod="openstack/openstackclient" Dec 06 05:54:30 crc kubenswrapper[4958]: I1206 05:54:30.032906 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7pqpj\" (UniqueName: \"kubernetes.io/projected/5e310020-e259-46c5-8928-f587abdf0577-kube-api-access-7pqpj\") pod \"openstackclient\" (UID: \"5e310020-e259-46c5-8928-f587abdf0577\") " pod="openstack/openstackclient" Dec 06 05:54:30 crc kubenswrapper[4958]: I1206 05:54:30.041217 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5e310020-e259-46c5-8928-f587abdf0577-combined-ca-bundle\") pod \"openstackclient\" (UID: \"5e310020-e259-46c5-8928-f587abdf0577\") " pod="openstack/openstackclient" Dec 06 05:54:30 crc kubenswrapper[4958]: I1206 05:54:30.110945 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Dec 06 05:54:30 crc kubenswrapper[4958]: I1206 05:54:30.495502 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-btkdc" podUID="83d76c67-52ad-480c-9f07-9620f6ed6a42" containerName="registry-server" probeResult="failure" output=< Dec 06 05:54:30 crc kubenswrapper[4958]: timeout: failed to connect service ":50051" within 1s Dec 06 05:54:30 crc kubenswrapper[4958]: > Dec 06 05:54:31 crc kubenswrapper[4958]: I1206 05:54:31.064262 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9s9jv" event={"ID":"1865345c-50a3-47fe-90b6-ee8e165c2391","Type":"ContainerStarted","Data":"2ef9ba1435162f23ebc047d6d5d91b7f850217368ffaafe05f04bc1ef187a159"} Dec 06 05:54:31 crc kubenswrapper[4958]: I1206 05:54:31.434706 4958 patch_prober.go:28] interesting pod/router-default-5444994796-ph6g5 container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 06 05:54:31 crc kubenswrapper[4958]: I1206 05:54:31.434770 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-ph6g5" podUID="fc16acb8-14a0-4b1d-ba72-9a53f2bdb622" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 06 05:54:34 crc kubenswrapper[4958]: I1206 05:54:34.812217 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9s9jv" Dec 06 05:54:34 crc kubenswrapper[4958]: I1206 05:54:34.812673 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9s9jv" Dec 06 05:54:35 crc kubenswrapper[4958]: I1206 05:54:35.857721 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-9s9jv" podUID="1865345c-50a3-47fe-90b6-ee8e165c2391" containerName="registry-server" probeResult="failure" output=< Dec 06 05:54:35 crc kubenswrapper[4958]: timeout: failed to connect service ":50051" within 1s Dec 06 05:54:35 crc kubenswrapper[4958]: > Dec 06 05:54:40 crc kubenswrapper[4958]: I1206 05:54:40.479753 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-btkdc" podUID="83d76c67-52ad-480c-9f07-9620f6ed6a42" containerName="registry-server" probeResult="failure" output=< Dec 06 05:54:40 crc kubenswrapper[4958]: timeout: failed to connect service ":50051" within 1s Dec 06 05:54:40 crc kubenswrapper[4958]: > Dec 06 05:54:42 crc kubenswrapper[4958]: I1206 05:54:42.298770 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9s9jv" podStartSLOduration=29.473769025 podStartE2EDuration="48.298745965s" podCreationTimestamp="2025-12-06 05:53:54 +0000 UTC" firstStartedPulling="2025-12-06 05:54:04.904849689 +0000 UTC m=+1555.438620452" lastFinishedPulling="2025-12-06 05:54:23.729826629 +0000 UTC m=+1574.263597392" observedRunningTime="2025-12-06 05:54:34.123379797 +0000 UTC m=+1584.657150560" watchObservedRunningTime="2025-12-06 05:54:42.298745965 +0000 UTC m=+1592.832516728" Dec 06 05:54:42 crc kubenswrapper[4958]: I1206 05:54:42.301658 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] 
Dec 06 05:54:43 crc kubenswrapper[4958]: I1206 05:54:43.177524 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"5e310020-e259-46c5-8928-f587abdf0577","Type":"ContainerStarted","Data":"aad5920467e9a7231e64bd0ca0b63c7120a0f11a11423aff00e3f2f6c773965d"} Dec 06 05:54:44 crc kubenswrapper[4958]: I1206 05:54:44.188857 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6176dbaf-8ccf-41de-8d4b-0b5ac9328569","Type":"ContainerStarted","Data":"e5cbd5221e430779ce7691325df6a1c16b3ba0b95d835c7bdb4574f3a49a5aab"} Dec 06 05:54:44 crc kubenswrapper[4958]: I1206 05:54:44.859626 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9s9jv" Dec 06 05:54:44 crc kubenswrapper[4958]: I1206 05:54:44.921044 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9s9jv" Dec 06 05:54:45 crc kubenswrapper[4958]: I1206 05:54:45.953910 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9s9jv"] Dec 06 05:54:45 crc kubenswrapper[4958]: I1206 05:54:45.990380 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bhln7"] Dec 06 05:54:45 crc kubenswrapper[4958]: I1206 05:54:45.990634 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bhln7" podUID="4dcd0eed-120b-4aec-bb22-3fabed037aae" containerName="registry-server" containerID="cri-o://539a60fdc12f3a57f4857e1a64b2631bbf583a7071163f1613ad907bfef92d10" gracePeriod=2 Dec 06 05:54:50 crc kubenswrapper[4958]: E1206 05:54:50.310364 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 539a60fdc12f3a57f4857e1a64b2631bbf583a7071163f1613ad907bfef92d10 is running failed: container process not found" containerID="539a60fdc12f3a57f4857e1a64b2631bbf583a7071163f1613ad907bfef92d10" cmd=["grpc_health_probe","-addr=:50051"] Dec 06 05:54:50 crc kubenswrapper[4958]: E1206 05:54:50.311335 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 539a60fdc12f3a57f4857e1a64b2631bbf583a7071163f1613ad907bfef92d10 is running failed: container process not found" containerID="539a60fdc12f3a57f4857e1a64b2631bbf583a7071163f1613ad907bfef92d10" cmd=["grpc_health_probe","-addr=:50051"] Dec 06 05:54:50 crc kubenswrapper[4958]: E1206 05:54:50.311822 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 539a60fdc12f3a57f4857e1a64b2631bbf583a7071163f1613ad907bfef92d10 is running failed: container process not found" containerID="539a60fdc12f3a57f4857e1a64b2631bbf583a7071163f1613ad907bfef92d10" cmd=["grpc_health_probe","-addr=:50051"] Dec 06 05:54:50 crc kubenswrapper[4958]: E1206 05:54:50.311853 4958 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 539a60fdc12f3a57f4857e1a64b2631bbf583a7071163f1613ad907bfef92d10 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-bhln7" podUID="4dcd0eed-120b-4aec-bb22-3fabed037aae" containerName="registry-server" Dec 06 05:54:50 crc kubenswrapper[4958]: 
I1206 05:54:50.473729 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-btkdc" podUID="83d76c67-52ad-480c-9f07-9620f6ed6a42" containerName="registry-server" probeResult="failure" output=< Dec 06 05:54:50 crc kubenswrapper[4958]: timeout: failed to connect service ":50051" within 1s Dec 06 05:54:50 crc kubenswrapper[4958]: > Dec 06 05:54:52 crc kubenswrapper[4958]: I1206 05:54:52.070879 4958 generic.go:334] "Generic (PLEG): container finished" podID="4dcd0eed-120b-4aec-bb22-3fabed037aae" containerID="539a60fdc12f3a57f4857e1a64b2631bbf583a7071163f1613ad907bfef92d10" exitCode=0 Dec 06 05:54:52 crc kubenswrapper[4958]: I1206 05:54:52.070973 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bhln7" event={"ID":"4dcd0eed-120b-4aec-bb22-3fabed037aae","Type":"ContainerDied","Data":"539a60fdc12f3a57f4857e1a64b2631bbf583a7071163f1613ad907bfef92d10"} Dec 06 05:55:00 crc kubenswrapper[4958]: E1206 05:55:00.310526 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 539a60fdc12f3a57f4857e1a64b2631bbf583a7071163f1613ad907bfef92d10 is running failed: container process not found" containerID="539a60fdc12f3a57f4857e1a64b2631bbf583a7071163f1613ad907bfef92d10" cmd=["grpc_health_probe","-addr=:50051"] Dec 06 05:55:00 crc kubenswrapper[4958]: E1206 05:55:00.312927 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 539a60fdc12f3a57f4857e1a64b2631bbf583a7071163f1613ad907bfef92d10 is running failed: container process not found" containerID="539a60fdc12f3a57f4857e1a64b2631bbf583a7071163f1613ad907bfef92d10" cmd=["grpc_health_probe","-addr=:50051"] Dec 06 05:55:00 crc kubenswrapper[4958]: E1206 05:55:00.313151 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 539a60fdc12f3a57f4857e1a64b2631bbf583a7071163f1613ad907bfef92d10 is running failed: container process not found" containerID="539a60fdc12f3a57f4857e1a64b2631bbf583a7071163f1613ad907bfef92d10" cmd=["grpc_health_probe","-addr=:50051"] Dec 06 05:55:00 crc kubenswrapper[4958]: E1206 05:55:00.313183 4958 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 539a60fdc12f3a57f4857e1a64b2631bbf583a7071163f1613ad907bfef92d10 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-bhln7" podUID="4dcd0eed-120b-4aec-bb22-3fabed037aae" containerName="registry-server" Dec 06 05:55:00 crc kubenswrapper[4958]: I1206 05:55:00.483990 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-btkdc" podUID="83d76c67-52ad-480c-9f07-9620f6ed6a42" containerName="registry-server" probeResult="failure" output=< Dec 06 05:55:00 crc kubenswrapper[4958]: timeout: failed to connect service ":50051" within 1s Dec 06 05:55:00 crc kubenswrapper[4958]: > Dec 06 05:55:09 crc kubenswrapper[4958]: I1206 05:55:09.481824 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-btkdc" Dec 06 05:55:09 crc kubenswrapper[4958]: I1206 05:55:09.541915 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-btkdc" 
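For reference, the cadence above (failures at 05:54:20, :30, :40, :50 and 05:55:00, then "started" at 05:55:09) is consistent with an exec-action startup probe evaluated every 10s. A sketch of the probe shape these entries imply, built with the k8s.io/api types: timeoutSeconds matches the "within 1s" output, while periodSeconds and failureThreshold are assumptions inferred from the timing, not values read from this cluster.

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    func main() {
    	startup := corev1.Probe{
    		ProbeHandler: corev1.ProbeHandler{
    			Exec: &corev1.ExecAction{
    				// Same command the kubelet's ExecSync errors show for these pods.
    				Command: []string{"grpc_health_probe", "-addr=:50051"},
    			},
    		},
    		TimeoutSeconds:   1,  // matches "within 1s" in the failure output
    		PeriodSeconds:    10, // assumed: failures above arrive 10s apart
    		FailureThreshold: 15, // assumed headroom while catalog content extracts
    	}
    	fmt.Printf("startup probe: %+v\n", startup)
    }

Kubelet semantics explain the immediate follow-up entries: readiness probing only begins once the startup probe succeeds, which is why "readiness ... ready" for redhat-operators-btkdc appears right after the "startup ... started" transition.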
Dec 06 05:55:09 crc kubenswrapper[4958]: I1206 05:55:09.722346 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-btkdc"] Dec 06 05:55:10 crc kubenswrapper[4958]: E1206 05:55:10.310673 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 539a60fdc12f3a57f4857e1a64b2631bbf583a7071163f1613ad907bfef92d10 is running failed: container process not found" containerID="539a60fdc12f3a57f4857e1a64b2631bbf583a7071163f1613ad907bfef92d10" cmd=["grpc_health_probe","-addr=:50051"] Dec 06 05:55:10 crc kubenswrapper[4958]: E1206 05:55:10.311356 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 539a60fdc12f3a57f4857e1a64b2631bbf583a7071163f1613ad907bfef92d10 is running failed: container process not found" containerID="539a60fdc12f3a57f4857e1a64b2631bbf583a7071163f1613ad907bfef92d10" cmd=["grpc_health_probe","-addr=:50051"] Dec 06 05:55:10 crc kubenswrapper[4958]: E1206 05:55:10.311769 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 539a60fdc12f3a57f4857e1a64b2631bbf583a7071163f1613ad907bfef92d10 is running failed: container process not found" containerID="539a60fdc12f3a57f4857e1a64b2631bbf583a7071163f1613ad907bfef92d10" cmd=["grpc_health_probe","-addr=:50051"] Dec 06 05:55:10 crc kubenswrapper[4958]: E1206 05:55:10.311806 4958 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 539a60fdc12f3a57f4857e1a64b2631bbf583a7071163f1613ad907bfef92d10 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-bhln7" podUID="4dcd0eed-120b-4aec-bb22-3fabed037aae" containerName="registry-server" Dec 06 05:55:11 crc kubenswrapper[4958]: I1206 05:55:11.340054 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-btkdc" podUID="83d76c67-52ad-480c-9f07-9620f6ed6a42" containerName="registry-server" containerID="cri-o://810f8021bd3954d5718f6edc72365abd45900c1feec1bb146ac02670440c2609" gracePeriod=2 Dec 06 05:55:11 crc kubenswrapper[4958]: E1206 05:55:11.720334 4958 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-openstackclient:current" Dec 06 05:55:11 crc kubenswrapper[4958]: E1206 05:55:11.720397 4958 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-openstackclient:current" Dec 06 05:55:11 crc kubenswrapper[4958]: E1206 05:55:11.720617 4958 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:openstackclient,Image:quay.rdoproject.org/podified-master-centos10/openstack-openstackclient:current,Command:[/bin/sleep],Args:[infinity],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n568hf6h686hbdh648h5c6h65fh699h569h5d5h658h5dch65dh674h544h9h9fh66ch9h5d4h6h59bh686hf4h5bdh5bch65fh64fhddh68dh7bh5d8q,ValueFrom:nil,},EnvVar{Name:OS_CLOUD,Value:default,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_CA_CERT,Value:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_HOST,Value:metric-storage-prometheus.openstack.svc,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_PORT,Value:9090,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:openstack-config,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/cloudrc,SubPath:cloudrc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7pqpj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42401,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42401,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstackclient_openstack(5e310020-e259-46c5-8928-f587abdf0577): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 05:55:11 crc kubenswrapper[4958]: E1206 05:55:11.721900 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstackclient" podUID="5e310020-e259-46c5-8928-f587abdf0577" Dec 06 05:55:12 crc kubenswrapper[4958]: I1206 05:55:12.354763 4958 generic.go:334] "Generic (PLEG): container finished" podID="83d76c67-52ad-480c-9f07-9620f6ed6a42" containerID="810f8021bd3954d5718f6edc72365abd45900c1feec1bb146ac02670440c2609" exitCode=0 Dec 06 05:55:12 crc kubenswrapper[4958]: I1206 05:55:12.354896 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-btkdc" event={"ID":"83d76c67-52ad-480c-9f07-9620f6ed6a42","Type":"ContainerDied","Data":"810f8021bd3954d5718f6edc72365abd45900c1feec1bb146ac02670440c2609"} Dec 06 05:55:14 crc kubenswrapper[4958]: E1206 05:55:14.839644 4958 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-openstackclient:current\\\"\"" pod="openstack/openstackclient" podUID="5e310020-e259-46c5-8928-f587abdf0577" Dec 06 05:55:14 crc kubenswrapper[4958]: I1206 05:55:14.939097 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bhln7" Dec 06 05:55:15 crc kubenswrapper[4958]: I1206 05:55:15.005353 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f7btj\" (UniqueName: \"kubernetes.io/projected/4dcd0eed-120b-4aec-bb22-3fabed037aae-kube-api-access-f7btj\") pod \"4dcd0eed-120b-4aec-bb22-3fabed037aae\" (UID: \"4dcd0eed-120b-4aec-bb22-3fabed037aae\") " Dec 06 05:55:15 crc kubenswrapper[4958]: I1206 05:55:15.005464 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4dcd0eed-120b-4aec-bb22-3fabed037aae-catalog-content\") pod \"4dcd0eed-120b-4aec-bb22-3fabed037aae\" (UID: \"4dcd0eed-120b-4aec-bb22-3fabed037aae\") " Dec 06 05:55:15 crc kubenswrapper[4958]: I1206 05:55:15.005686 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4dcd0eed-120b-4aec-bb22-3fabed037aae-utilities\") pod \"4dcd0eed-120b-4aec-bb22-3fabed037aae\" (UID: \"4dcd0eed-120b-4aec-bb22-3fabed037aae\") " Dec 06 05:55:15 crc kubenswrapper[4958]: I1206 05:55:15.006503 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4dcd0eed-120b-4aec-bb22-3fabed037aae-utilities" (OuterVolumeSpecName: "utilities") pod "4dcd0eed-120b-4aec-bb22-3fabed037aae" (UID: "4dcd0eed-120b-4aec-bb22-3fabed037aae"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 05:55:15 crc kubenswrapper[4958]: I1206 05:55:15.012224 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4dcd0eed-120b-4aec-bb22-3fabed037aae-kube-api-access-f7btj" (OuterVolumeSpecName: "kube-api-access-f7btj") pod "4dcd0eed-120b-4aec-bb22-3fabed037aae" (UID: "4dcd0eed-120b-4aec-bb22-3fabed037aae"). InnerVolumeSpecName "kube-api-access-f7btj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:55:15 crc kubenswrapper[4958]: I1206 05:55:15.057135 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4dcd0eed-120b-4aec-bb22-3fabed037aae-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4dcd0eed-120b-4aec-bb22-3fabed037aae" (UID: "4dcd0eed-120b-4aec-bb22-3fabed037aae"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 05:55:15 crc kubenswrapper[4958]: I1206 05:55:15.107757 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4dcd0eed-120b-4aec-bb22-3fabed037aae-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 05:55:15 crc kubenswrapper[4958]: I1206 05:55:15.107790 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f7btj\" (UniqueName: \"kubernetes.io/projected/4dcd0eed-120b-4aec-bb22-3fabed037aae-kube-api-access-f7btj\") on node \"crc\" DevicePath \"\"" Dec 06 05:55:15 crc kubenswrapper[4958]: I1206 05:55:15.107801 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4dcd0eed-120b-4aec-bb22-3fabed037aae-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 05:55:15 crc kubenswrapper[4958]: I1206 05:55:15.392731 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bhln7" event={"ID":"4dcd0eed-120b-4aec-bb22-3fabed037aae","Type":"ContainerDied","Data":"8b801e410acb5299fd508e1fd0377f0a596551b96c24df4f5d1fe0fee6814d07"} Dec 06 05:55:15 crc kubenswrapper[4958]: I1206 05:55:15.393143 4958 scope.go:117] "RemoveContainer" containerID="539a60fdc12f3a57f4857e1a64b2631bbf583a7071163f1613ad907bfef92d10" Dec 06 05:55:15 crc kubenswrapper[4958]: I1206 05:55:15.392900 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bhln7" Dec 06 05:55:15 crc kubenswrapper[4958]: I1206 05:55:15.455519 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bhln7"] Dec 06 05:55:15 crc kubenswrapper[4958]: I1206 05:55:15.463193 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bhln7"] Dec 06 05:55:15 crc kubenswrapper[4958]: I1206 05:55:15.673416 4958 scope.go:117] "RemoveContainer" containerID="7067492963bd2e3a9cf70633b2a1e0286921c8ba6a95cbfafe95a12742c8145e" Dec 06 05:55:15 crc kubenswrapper[4958]: I1206 05:55:15.774650 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4dcd0eed-120b-4aec-bb22-3fabed037aae" path="/var/lib/kubelet/pods/4dcd0eed-120b-4aec-bb22-3fabed037aae/volumes" Dec 06 05:55:15 crc kubenswrapper[4958]: I1206 05:55:15.791825 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-btkdc" Dec 06 05:55:15 crc kubenswrapper[4958]: I1206 05:55:15.841300 4958 scope.go:117] "RemoveContainer" containerID="abbcca7cb91bb63abb0753e3efbb68d551e25621c88e30d5fba93512d2fbb043" Dec 06 05:55:15 crc kubenswrapper[4958]: I1206 05:55:15.921762 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83d76c67-52ad-480c-9f07-9620f6ed6a42-utilities\") pod \"83d76c67-52ad-480c-9f07-9620f6ed6a42\" (UID: \"83d76c67-52ad-480c-9f07-9620f6ed6a42\") " Dec 06 05:55:15 crc kubenswrapper[4958]: I1206 05:55:15.921877 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83d76c67-52ad-480c-9f07-9620f6ed6a42-catalog-content\") pod \"83d76c67-52ad-480c-9f07-9620f6ed6a42\" (UID: \"83d76c67-52ad-480c-9f07-9620f6ed6a42\") " Dec 06 05:55:15 crc kubenswrapper[4958]: I1206 05:55:15.922077 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d65rj\" (UniqueName: \"kubernetes.io/projected/83d76c67-52ad-480c-9f07-9620f6ed6a42-kube-api-access-d65rj\") pod \"83d76c67-52ad-480c-9f07-9620f6ed6a42\" (UID: \"83d76c67-52ad-480c-9f07-9620f6ed6a42\") " Dec 06 05:55:15 crc kubenswrapper[4958]: I1206 05:55:15.923340 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83d76c67-52ad-480c-9f07-9620f6ed6a42-utilities" (OuterVolumeSpecName: "utilities") pod "83d76c67-52ad-480c-9f07-9620f6ed6a42" (UID: "83d76c67-52ad-480c-9f07-9620f6ed6a42"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 05:55:15 crc kubenswrapper[4958]: I1206 05:55:15.925635 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83d76c67-52ad-480c-9f07-9620f6ed6a42-kube-api-access-d65rj" (OuterVolumeSpecName: "kube-api-access-d65rj") pod "83d76c67-52ad-480c-9f07-9620f6ed6a42" (UID: "83d76c67-52ad-480c-9f07-9620f6ed6a42"). InnerVolumeSpecName "kube-api-access-d65rj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:55:16 crc kubenswrapper[4958]: I1206 05:55:16.024165 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83d76c67-52ad-480c-9f07-9620f6ed6a42-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 05:55:16 crc kubenswrapper[4958]: I1206 05:55:16.024462 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d65rj\" (UniqueName: \"kubernetes.io/projected/83d76c67-52ad-480c-9f07-9620f6ed6a42-kube-api-access-d65rj\") on node \"crc\" DevicePath \"\"" Dec 06 05:55:16 crc kubenswrapper[4958]: I1206 05:55:16.048149 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83d76c67-52ad-480c-9f07-9620f6ed6a42-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "83d76c67-52ad-480c-9f07-9620f6ed6a42" (UID: "83d76c67-52ad-480c-9f07-9620f6ed6a42"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 05:55:16 crc kubenswrapper[4958]: I1206 05:55:16.126635 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83d76c67-52ad-480c-9f07-9620f6ed6a42-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 05:55:16 crc kubenswrapper[4958]: I1206 05:55:16.408827 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-btkdc" event={"ID":"83d76c67-52ad-480c-9f07-9620f6ed6a42","Type":"ContainerDied","Data":"4ee00a71eeacd8af9617dc193def6755d2f629da4c983d9f7ebc7e96a96623db"} Dec 06 05:55:16 crc kubenswrapper[4958]: I1206 05:55:16.408923 4958 scope.go:117] "RemoveContainer" containerID="810f8021bd3954d5718f6edc72365abd45900c1feec1bb146ac02670440c2609" Dec 06 05:55:16 crc kubenswrapper[4958]: I1206 05:55:16.409126 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-btkdc" Dec 06 05:55:16 crc kubenswrapper[4958]: I1206 05:55:16.436367 4958 scope.go:117] "RemoveContainer" containerID="1ffef4148dd9bc7bfc6cf96cb27788096f601ce115e56b4020889a6231909df0" Dec 06 05:55:16 crc kubenswrapper[4958]: I1206 05:55:16.478655 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-btkdc"] Dec 06 05:55:16 crc kubenswrapper[4958]: I1206 05:55:16.486638 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-btkdc"] Dec 06 05:55:16 crc kubenswrapper[4958]: I1206 05:55:16.486950 4958 scope.go:117] "RemoveContainer" containerID="9c1397d07a3e63cbad57931dc557ad164fe1616cbc9f246ee594865fee46f07f" Dec 06 05:55:17 crc kubenswrapper[4958]: I1206 05:55:17.772645 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83d76c67-52ad-480c-9f07-9620f6ed6a42" path="/var/lib/kubelet/pods/83d76c67-52ad-480c-9f07-9620f6ed6a42/volumes" Dec 06 05:55:20 crc kubenswrapper[4958]: I1206 05:55:20.445915 4958 generic.go:334] "Generic (PLEG): container finished" podID="dcbe2099-3d41-4f69-be20-47d96498cb25" containerID="5accb9146183b617214aef75ec6ce06d37c205b60714d23c9c3badf2f7c49850" exitCode=1 Dec 06 05:55:20 crc kubenswrapper[4958]: I1206 05:55:20.446063 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"dcbe2099-3d41-4f69-be20-47d96498cb25","Type":"ContainerDied","Data":"5accb9146183b617214aef75ec6ce06d37c205b60714d23c9c3badf2f7c49850"} Dec 06 05:55:20 crc kubenswrapper[4958]: I1206 05:55:20.446117 4958 scope.go:117] "RemoveContainer" containerID="4616aa5ceafa9d22dad86de24f0a7df5f712f14e053fa799b5b835b94b06efea" Dec 06 05:55:20 crc kubenswrapper[4958]: I1206 05:55:20.447035 4958 scope.go:117] "RemoveContainer" containerID="5accb9146183b617214aef75ec6ce06d37c205b60714d23c9c3badf2f7c49850" Dec 06 05:55:20 crc kubenswrapper[4958]: E1206 05:55:20.447521 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 10s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(dcbe2099-3d41-4f69-be20-47d96498cb25)\"" pod="openstack/watcher-decision-engine-0" podUID="dcbe2099-3d41-4f69-be20-47d96498cb25" Dec 06 05:55:20 crc kubenswrapper[4958]: I1206 05:55:20.450951 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"6176dbaf-8ccf-41de-8d4b-0b5ac9328569","Type":"ContainerStarted","Data":"909408dd3cfb2d5d33aa97ce174185081bce41e9df0654a72c6b1fd2b43736d3"} Dec 06 05:55:23 crc kubenswrapper[4958]: I1206 05:55:23.018555 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Dec 06 05:55:23 crc kubenswrapper[4958]: I1206 05:55:23.019269 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Dec 06 05:55:23 crc kubenswrapper[4958]: I1206 05:55:23.019287 4958 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/watcher-decision-engine-0" Dec 06 05:55:23 crc kubenswrapper[4958]: I1206 05:55:23.020099 4958 scope.go:117] "RemoveContainer" containerID="5accb9146183b617214aef75ec6ce06d37c205b60714d23c9c3badf2f7c49850" Dec 06 05:55:23 crc kubenswrapper[4958]: E1206 05:55:23.020435 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 10s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(dcbe2099-3d41-4f69-be20-47d96498cb25)\"" pod="openstack/watcher-decision-engine-0" podUID="dcbe2099-3d41-4f69-be20-47d96498cb25" Dec 06 05:55:26 crc kubenswrapper[4958]: I1206 05:55:26.516577 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6176dbaf-8ccf-41de-8d4b-0b5ac9328569","Type":"ContainerStarted","Data":"11dece826e622429bfc2a8a05c616aa19c8b40e5f512ca7bc93425c6c6eae60c"} Dec 06 05:55:26 crc kubenswrapper[4958]: I1206 05:55:26.517252 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 06 05:55:26 crc kubenswrapper[4958]: I1206 05:55:26.539196 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=7.01314106 podStartE2EDuration="1m16.539178133s" podCreationTimestamp="2025-12-06 05:54:10 +0000 UTC" firstStartedPulling="2025-12-06 05:54:16.63413663 +0000 UTC m=+1567.167907383" lastFinishedPulling="2025-12-06 05:55:26.160173693 +0000 UTC m=+1636.693944456" observedRunningTime="2025-12-06 05:55:26.538000122 +0000 UTC m=+1637.071770885" watchObservedRunningTime="2025-12-06 05:55:26.539178133 +0000 UTC m=+1637.072948906" Dec 06 05:55:29 crc kubenswrapper[4958]: I1206 05:55:29.542742 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"5e310020-e259-46c5-8928-f587abdf0577","Type":"ContainerStarted","Data":"35a624b120aa9dca96ea98da8eff6234c2c3a55853aadfaa0eec1963ccd534c5"} Dec 06 05:55:29 crc kubenswrapper[4958]: I1206 05:55:29.569666 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=14.319044367 podStartE2EDuration="1m0.569647492s" podCreationTimestamp="2025-12-06 05:54:29 +0000 UTC" firstStartedPulling="2025-12-06 05:54:42.306441072 +0000 UTC m=+1592.840211825" lastFinishedPulling="2025-12-06 05:55:28.557044187 +0000 UTC m=+1639.090814950" observedRunningTime="2025-12-06 05:55:29.561546024 +0000 UTC m=+1640.095316807" watchObservedRunningTime="2025-12-06 05:55:29.569647492 +0000 UTC m=+1640.103418255" Dec 06 05:55:30 crc kubenswrapper[4958]: I1206 05:55:30.788908 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-588cbd45c9-xblwx"] Dec 06 05:55:30 crc kubenswrapper[4958]: E1206 05:55:30.789348 4958 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="83d76c67-52ad-480c-9f07-9620f6ed6a42" containerName="registry-server" Dec 06 05:55:30 crc kubenswrapper[4958]: I1206 05:55:30.789363 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="83d76c67-52ad-480c-9f07-9620f6ed6a42" containerName="registry-server" Dec 06 05:55:30 crc kubenswrapper[4958]: E1206 05:55:30.789375 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83d76c67-52ad-480c-9f07-9620f6ed6a42" containerName="extract-content" Dec 06 05:55:30 crc kubenswrapper[4958]: I1206 05:55:30.789380 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="83d76c67-52ad-480c-9f07-9620f6ed6a42" containerName="extract-content" Dec 06 05:55:30 crc kubenswrapper[4958]: E1206 05:55:30.789391 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4dcd0eed-120b-4aec-bb22-3fabed037aae" containerName="registry-server" Dec 06 05:55:30 crc kubenswrapper[4958]: I1206 05:55:30.789397 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="4dcd0eed-120b-4aec-bb22-3fabed037aae" containerName="registry-server" Dec 06 05:55:30 crc kubenswrapper[4958]: E1206 05:55:30.789411 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83d76c67-52ad-480c-9f07-9620f6ed6a42" containerName="extract-utilities" Dec 06 05:55:30 crc kubenswrapper[4958]: I1206 05:55:30.789417 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="83d76c67-52ad-480c-9f07-9620f6ed6a42" containerName="extract-utilities" Dec 06 05:55:30 crc kubenswrapper[4958]: E1206 05:55:30.789430 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4dcd0eed-120b-4aec-bb22-3fabed037aae" containerName="extract-utilities" Dec 06 05:55:30 crc kubenswrapper[4958]: I1206 05:55:30.789436 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="4dcd0eed-120b-4aec-bb22-3fabed037aae" containerName="extract-utilities" Dec 06 05:55:30 crc kubenswrapper[4958]: E1206 05:55:30.789462 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4dcd0eed-120b-4aec-bb22-3fabed037aae" containerName="extract-content" Dec 06 05:55:30 crc kubenswrapper[4958]: I1206 05:55:30.789488 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="4dcd0eed-120b-4aec-bb22-3fabed037aae" containerName="extract-content" Dec 06 05:55:30 crc kubenswrapper[4958]: I1206 05:55:30.789682 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="83d76c67-52ad-480c-9f07-9620f6ed6a42" containerName="registry-server" Dec 06 05:55:30 crc kubenswrapper[4958]: I1206 05:55:30.789698 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="4dcd0eed-120b-4aec-bb22-3fabed037aae" containerName="registry-server" Dec 06 05:55:30 crc kubenswrapper[4958]: I1206 05:55:30.790648 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-588cbd45c9-xblwx" Dec 06 05:55:30 crc kubenswrapper[4958]: I1206 05:55:30.792801 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Dec 06 05:55:30 crc kubenswrapper[4958]: I1206 05:55:30.797174 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Dec 06 05:55:30 crc kubenswrapper[4958]: I1206 05:55:30.805059 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Dec 06 05:55:30 crc kubenswrapper[4958]: I1206 05:55:30.825656 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-588cbd45c9-xblwx"] Dec 06 05:55:30 crc kubenswrapper[4958]: I1206 05:55:30.894185 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqctv\" (UniqueName: \"kubernetes.io/projected/aae69e62-83f7-47d4-aecd-e883ed84a6ac-kube-api-access-cqctv\") pod \"swift-proxy-588cbd45c9-xblwx\" (UID: \"aae69e62-83f7-47d4-aecd-e883ed84a6ac\") " pod="openstack/swift-proxy-588cbd45c9-xblwx" Dec 06 05:55:30 crc kubenswrapper[4958]: I1206 05:55:30.894250 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aae69e62-83f7-47d4-aecd-e883ed84a6ac-combined-ca-bundle\") pod \"swift-proxy-588cbd45c9-xblwx\" (UID: \"aae69e62-83f7-47d4-aecd-e883ed84a6ac\") " pod="openstack/swift-proxy-588cbd45c9-xblwx" Dec 06 05:55:30 crc kubenswrapper[4958]: I1206 05:55:30.894331 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aae69e62-83f7-47d4-aecd-e883ed84a6ac-config-data\") pod \"swift-proxy-588cbd45c9-xblwx\" (UID: \"aae69e62-83f7-47d4-aecd-e883ed84a6ac\") " pod="openstack/swift-proxy-588cbd45c9-xblwx" Dec 06 05:55:30 crc kubenswrapper[4958]: I1206 05:55:30.894404 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aae69e62-83f7-47d4-aecd-e883ed84a6ac-public-tls-certs\") pod \"swift-proxy-588cbd45c9-xblwx\" (UID: \"aae69e62-83f7-47d4-aecd-e883ed84a6ac\") " pod="openstack/swift-proxy-588cbd45c9-xblwx" Dec 06 05:55:30 crc kubenswrapper[4958]: I1206 05:55:30.894465 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/aae69e62-83f7-47d4-aecd-e883ed84a6ac-etc-swift\") pod \"swift-proxy-588cbd45c9-xblwx\" (UID: \"aae69e62-83f7-47d4-aecd-e883ed84a6ac\") " pod="openstack/swift-proxy-588cbd45c9-xblwx" Dec 06 05:55:30 crc kubenswrapper[4958]: I1206 05:55:30.894526 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aae69e62-83f7-47d4-aecd-e883ed84a6ac-log-httpd\") pod \"swift-proxy-588cbd45c9-xblwx\" (UID: \"aae69e62-83f7-47d4-aecd-e883ed84a6ac\") " pod="openstack/swift-proxy-588cbd45c9-xblwx" Dec 06 05:55:30 crc kubenswrapper[4958]: I1206 05:55:30.894623 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aae69e62-83f7-47d4-aecd-e883ed84a6ac-run-httpd\") pod \"swift-proxy-588cbd45c9-xblwx\" (UID: \"aae69e62-83f7-47d4-aecd-e883ed84a6ac\") " 
pod="openstack/swift-proxy-588cbd45c9-xblwx" Dec 06 05:55:30 crc kubenswrapper[4958]: I1206 05:55:30.894651 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aae69e62-83f7-47d4-aecd-e883ed84a6ac-internal-tls-certs\") pod \"swift-proxy-588cbd45c9-xblwx\" (UID: \"aae69e62-83f7-47d4-aecd-e883ed84a6ac\") " pod="openstack/swift-proxy-588cbd45c9-xblwx" Dec 06 05:55:30 crc kubenswrapper[4958]: I1206 05:55:30.996443 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aae69e62-83f7-47d4-aecd-e883ed84a6ac-run-httpd\") pod \"swift-proxy-588cbd45c9-xblwx\" (UID: \"aae69e62-83f7-47d4-aecd-e883ed84a6ac\") " pod="openstack/swift-proxy-588cbd45c9-xblwx" Dec 06 05:55:30 crc kubenswrapper[4958]: I1206 05:55:30.996761 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aae69e62-83f7-47d4-aecd-e883ed84a6ac-internal-tls-certs\") pod \"swift-proxy-588cbd45c9-xblwx\" (UID: \"aae69e62-83f7-47d4-aecd-e883ed84a6ac\") " pod="openstack/swift-proxy-588cbd45c9-xblwx" Dec 06 05:55:30 crc kubenswrapper[4958]: I1206 05:55:30.996898 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqctv\" (UniqueName: \"kubernetes.io/projected/aae69e62-83f7-47d4-aecd-e883ed84a6ac-kube-api-access-cqctv\") pod \"swift-proxy-588cbd45c9-xblwx\" (UID: \"aae69e62-83f7-47d4-aecd-e883ed84a6ac\") " pod="openstack/swift-proxy-588cbd45c9-xblwx" Dec 06 05:55:30 crc kubenswrapper[4958]: I1206 05:55:30.996980 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aae69e62-83f7-47d4-aecd-e883ed84a6ac-run-httpd\") pod \"swift-proxy-588cbd45c9-xblwx\" (UID: \"aae69e62-83f7-47d4-aecd-e883ed84a6ac\") " pod="openstack/swift-proxy-588cbd45c9-xblwx" Dec 06 05:55:30 crc kubenswrapper[4958]: I1206 05:55:30.997147 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aae69e62-83f7-47d4-aecd-e883ed84a6ac-combined-ca-bundle\") pod \"swift-proxy-588cbd45c9-xblwx\" (UID: \"aae69e62-83f7-47d4-aecd-e883ed84a6ac\") " pod="openstack/swift-proxy-588cbd45c9-xblwx" Dec 06 05:55:30 crc kubenswrapper[4958]: I1206 05:55:30.997313 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aae69e62-83f7-47d4-aecd-e883ed84a6ac-config-data\") pod \"swift-proxy-588cbd45c9-xblwx\" (UID: \"aae69e62-83f7-47d4-aecd-e883ed84a6ac\") " pod="openstack/swift-proxy-588cbd45c9-xblwx" Dec 06 05:55:30 crc kubenswrapper[4958]: I1206 05:55:30.997439 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aae69e62-83f7-47d4-aecd-e883ed84a6ac-public-tls-certs\") pod \"swift-proxy-588cbd45c9-xblwx\" (UID: \"aae69e62-83f7-47d4-aecd-e883ed84a6ac\") " pod="openstack/swift-proxy-588cbd45c9-xblwx" Dec 06 05:55:30 crc kubenswrapper[4958]: I1206 05:55:30.997616 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/aae69e62-83f7-47d4-aecd-e883ed84a6ac-etc-swift\") pod \"swift-proxy-588cbd45c9-xblwx\" (UID: \"aae69e62-83f7-47d4-aecd-e883ed84a6ac\") " pod="openstack/swift-proxy-588cbd45c9-xblwx" Dec 06 
05:55:30 crc kubenswrapper[4958]: I1206 05:55:30.997752 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aae69e62-83f7-47d4-aecd-e883ed84a6ac-log-httpd\") pod \"swift-proxy-588cbd45c9-xblwx\" (UID: \"aae69e62-83f7-47d4-aecd-e883ed84a6ac\") " pod="openstack/swift-proxy-588cbd45c9-xblwx" Dec 06 05:55:30 crc kubenswrapper[4958]: I1206 05:55:30.998081 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aae69e62-83f7-47d4-aecd-e883ed84a6ac-log-httpd\") pod \"swift-proxy-588cbd45c9-xblwx\" (UID: \"aae69e62-83f7-47d4-aecd-e883ed84a6ac\") " pod="openstack/swift-proxy-588cbd45c9-xblwx" Dec 06 05:55:31 crc kubenswrapper[4958]: I1206 05:55:31.004067 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/aae69e62-83f7-47d4-aecd-e883ed84a6ac-public-tls-certs\") pod \"swift-proxy-588cbd45c9-xblwx\" (UID: \"aae69e62-83f7-47d4-aecd-e883ed84a6ac\") " pod="openstack/swift-proxy-588cbd45c9-xblwx" Dec 06 05:55:31 crc kubenswrapper[4958]: I1206 05:55:31.004802 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aae69e62-83f7-47d4-aecd-e883ed84a6ac-config-data\") pod \"swift-proxy-588cbd45c9-xblwx\" (UID: \"aae69e62-83f7-47d4-aecd-e883ed84a6ac\") " pod="openstack/swift-proxy-588cbd45c9-xblwx" Dec 06 05:55:31 crc kubenswrapper[4958]: I1206 05:55:31.008152 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/aae69e62-83f7-47d4-aecd-e883ed84a6ac-etc-swift\") pod \"swift-proxy-588cbd45c9-xblwx\" (UID: \"aae69e62-83f7-47d4-aecd-e883ed84a6ac\") " pod="openstack/swift-proxy-588cbd45c9-xblwx" Dec 06 05:55:31 crc kubenswrapper[4958]: I1206 05:55:31.009192 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/aae69e62-83f7-47d4-aecd-e883ed84a6ac-internal-tls-certs\") pod \"swift-proxy-588cbd45c9-xblwx\" (UID: \"aae69e62-83f7-47d4-aecd-e883ed84a6ac\") " pod="openstack/swift-proxy-588cbd45c9-xblwx" Dec 06 05:55:31 crc kubenswrapper[4958]: I1206 05:55:31.013690 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aae69e62-83f7-47d4-aecd-e883ed84a6ac-combined-ca-bundle\") pod \"swift-proxy-588cbd45c9-xblwx\" (UID: \"aae69e62-83f7-47d4-aecd-e883ed84a6ac\") " pod="openstack/swift-proxy-588cbd45c9-xblwx" Dec 06 05:55:31 crc kubenswrapper[4958]: I1206 05:55:31.016748 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqctv\" (UniqueName: \"kubernetes.io/projected/aae69e62-83f7-47d4-aecd-e883ed84a6ac-kube-api-access-cqctv\") pod \"swift-proxy-588cbd45c9-xblwx\" (UID: \"aae69e62-83f7-47d4-aecd-e883ed84a6ac\") " pod="openstack/swift-proxy-588cbd45c9-xblwx" Dec 06 05:55:31 crc kubenswrapper[4958]: I1206 05:55:31.119371 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-588cbd45c9-xblwx" Dec 06 05:55:31 crc kubenswrapper[4958]: I1206 05:55:31.640640 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-588cbd45c9-xblwx"] Dec 06 05:55:32 crc kubenswrapper[4958]: I1206 05:55:32.571715 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-588cbd45c9-xblwx" event={"ID":"aae69e62-83f7-47d4-aecd-e883ed84a6ac","Type":"ContainerStarted","Data":"4a9d0f0ac90e678dddad40c86d29dcd3d51232acb3f40c4f8bdbdf448cfc8f0f"} Dec 06 05:55:32 crc kubenswrapper[4958]: I1206 05:55:32.572270 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-588cbd45c9-xblwx" event={"ID":"aae69e62-83f7-47d4-aecd-e883ed84a6ac","Type":"ContainerStarted","Data":"9f4caff7076e63fb1f43923846c1fb4a3eb4627951e2cc90d406e4e3aa05d2af"} Dec 06 05:55:33 crc kubenswrapper[4958]: I1206 05:55:33.581739 4958 generic.go:334] "Generic (PLEG): container finished" podID="7f28060d-c759-4b4b-a643-bf8acb76c1b2" containerID="7d2b63b1bf05453d97ce26f361e83bb04e992444900d0ab59480d44b3a6e6148" exitCode=0 Dec 06 05:55:33 crc kubenswrapper[4958]: I1206 05:55:33.581975 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-hn59h" event={"ID":"7f28060d-c759-4b4b-a643-bf8acb76c1b2","Type":"ContainerDied","Data":"7d2b63b1bf05453d97ce26f361e83bb04e992444900d0ab59480d44b3a6e6148"} Dec 06 05:55:33 crc kubenswrapper[4958]: I1206 05:55:33.586249 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-588cbd45c9-xblwx" event={"ID":"aae69e62-83f7-47d4-aecd-e883ed84a6ac","Type":"ContainerStarted","Data":"6ebb9a045ed8ad0b0c8fcfa046ed394f4d9d3df4141dd19dc6c8dc7af7b37396"} Dec 06 05:55:33 crc kubenswrapper[4958]: I1206 05:55:33.586375 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-588cbd45c9-xblwx" Dec 06 05:55:33 crc kubenswrapper[4958]: I1206 05:55:33.586393 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-588cbd45c9-xblwx" Dec 06 05:55:33 crc kubenswrapper[4958]: I1206 05:55:33.639651 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-588cbd45c9-xblwx" podStartSLOduration=3.639634291 podStartE2EDuration="3.639634291s" podCreationTimestamp="2025-12-06 05:55:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:55:33.629395785 +0000 UTC m=+1644.163166548" watchObservedRunningTime="2025-12-06 05:55:33.639634291 +0000 UTC m=+1644.173405044" Dec 06 05:55:34 crc kubenswrapper[4958]: I1206 05:55:34.880858 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-hn59h" Dec 06 05:55:34 crc kubenswrapper[4958]: I1206 05:55:34.982893 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f28060d-c759-4b4b-a643-bf8acb76c1b2-scripts\") pod \"7f28060d-c759-4b4b-a643-bf8acb76c1b2\" (UID: \"7f28060d-c759-4b4b-a643-bf8acb76c1b2\") " Dec 06 05:55:34 crc kubenswrapper[4958]: I1206 05:55:34.983006 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f28060d-c759-4b4b-a643-bf8acb76c1b2-logs\") pod \"7f28060d-c759-4b4b-a643-bf8acb76c1b2\" (UID: \"7f28060d-c759-4b4b-a643-bf8acb76c1b2\") " Dec 06 05:55:34 crc kubenswrapper[4958]: I1206 05:55:34.983132 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t78d6\" (UniqueName: \"kubernetes.io/projected/7f28060d-c759-4b4b-a643-bf8acb76c1b2-kube-api-access-t78d6\") pod \"7f28060d-c759-4b4b-a643-bf8acb76c1b2\" (UID: \"7f28060d-c759-4b4b-a643-bf8acb76c1b2\") " Dec 06 05:55:34 crc kubenswrapper[4958]: I1206 05:55:34.983195 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f28060d-c759-4b4b-a643-bf8acb76c1b2-config-data\") pod \"7f28060d-c759-4b4b-a643-bf8acb76c1b2\" (UID: \"7f28060d-c759-4b4b-a643-bf8acb76c1b2\") " Dec 06 05:55:34 crc kubenswrapper[4958]: I1206 05:55:34.983225 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f28060d-c759-4b4b-a643-bf8acb76c1b2-combined-ca-bundle\") pod \"7f28060d-c759-4b4b-a643-bf8acb76c1b2\" (UID: \"7f28060d-c759-4b4b-a643-bf8acb76c1b2\") " Dec 06 05:55:34 crc kubenswrapper[4958]: I1206 05:55:34.983424 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f28060d-c759-4b4b-a643-bf8acb76c1b2-logs" (OuterVolumeSpecName: "logs") pod "7f28060d-c759-4b4b-a643-bf8acb76c1b2" (UID: "7f28060d-c759-4b4b-a643-bf8acb76c1b2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 05:55:34 crc kubenswrapper[4958]: I1206 05:55:34.983979 4958 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7f28060d-c759-4b4b-a643-bf8acb76c1b2-logs\") on node \"crc\" DevicePath \"\"" Dec 06 05:55:34 crc kubenswrapper[4958]: I1206 05:55:34.988542 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f28060d-c759-4b4b-a643-bf8acb76c1b2-kube-api-access-t78d6" (OuterVolumeSpecName: "kube-api-access-t78d6") pod "7f28060d-c759-4b4b-a643-bf8acb76c1b2" (UID: "7f28060d-c759-4b4b-a643-bf8acb76c1b2"). InnerVolumeSpecName "kube-api-access-t78d6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:55:34 crc kubenswrapper[4958]: I1206 05:55:34.992749 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f28060d-c759-4b4b-a643-bf8acb76c1b2-scripts" (OuterVolumeSpecName: "scripts") pod "7f28060d-c759-4b4b-a643-bf8acb76c1b2" (UID: "7f28060d-c759-4b4b-a643-bf8acb76c1b2"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:55:35 crc kubenswrapper[4958]: I1206 05:55:35.010531 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f28060d-c759-4b4b-a643-bf8acb76c1b2-config-data" (OuterVolumeSpecName: "config-data") pod "7f28060d-c759-4b4b-a643-bf8acb76c1b2" (UID: "7f28060d-c759-4b4b-a643-bf8acb76c1b2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:55:35 crc kubenswrapper[4958]: I1206 05:55:35.020349 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f28060d-c759-4b4b-a643-bf8acb76c1b2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7f28060d-c759-4b4b-a643-bf8acb76c1b2" (UID: "7f28060d-c759-4b4b-a643-bf8acb76c1b2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:55:35 crc kubenswrapper[4958]: I1206 05:55:35.085636 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7f28060d-c759-4b4b-a643-bf8acb76c1b2-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 05:55:35 crc kubenswrapper[4958]: I1206 05:55:35.085698 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t78d6\" (UniqueName: \"kubernetes.io/projected/7f28060d-c759-4b4b-a643-bf8acb76c1b2-kube-api-access-t78d6\") on node \"crc\" DevicePath \"\"" Dec 06 05:55:35 crc kubenswrapper[4958]: I1206 05:55:35.085769 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7f28060d-c759-4b4b-a643-bf8acb76c1b2-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 05:55:35 crc kubenswrapper[4958]: I1206 05:55:35.085782 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f28060d-c759-4b4b-a643-bf8acb76c1b2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 05:55:35 crc kubenswrapper[4958]: I1206 05:55:35.436905 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 06 05:55:35 crc kubenswrapper[4958]: I1206 05:55:35.437235 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6176dbaf-8ccf-41de-8d4b-0b5ac9328569" containerName="ceilometer-central-agent" containerID="cri-o://004bb2fd7f87e2e15fac9ea5bf183f7d1f2234ecc5ce5f35eadd3cf7a3ee67a0" gracePeriod=30 Dec 06 05:55:35 crc kubenswrapper[4958]: I1206 05:55:35.437768 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6176dbaf-8ccf-41de-8d4b-0b5ac9328569" containerName="proxy-httpd" containerID="cri-o://11dece826e622429bfc2a8a05c616aa19c8b40e5f512ca7bc93425c6c6eae60c" gracePeriod=30 Dec 06 05:55:35 crc kubenswrapper[4958]: I1206 05:55:35.437834 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6176dbaf-8ccf-41de-8d4b-0b5ac9328569" containerName="sg-core" containerID="cri-o://909408dd3cfb2d5d33aa97ce174185081bce41e9df0654a72c6b1fd2b43736d3" gracePeriod=30 Dec 06 05:55:35 crc kubenswrapper[4958]: I1206 05:55:35.437885 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="6176dbaf-8ccf-41de-8d4b-0b5ac9328569" containerName="ceilometer-notification-agent" containerID="cri-o://e5cbd5221e430779ce7691325df6a1c16b3ba0b95d835c7bdb4574f3a49a5aab" gracePeriod=30 Dec 06 
05:55:35 crc kubenswrapper[4958]: I1206 05:55:35.606670 4958 generic.go:334] "Generic (PLEG): container finished" podID="6176dbaf-8ccf-41de-8d4b-0b5ac9328569" containerID="909408dd3cfb2d5d33aa97ce174185081bce41e9df0654a72c6b1fd2b43736d3" exitCode=2 Dec 06 05:55:35 crc kubenswrapper[4958]: I1206 05:55:35.606750 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6176dbaf-8ccf-41de-8d4b-0b5ac9328569","Type":"ContainerDied","Data":"909408dd3cfb2d5d33aa97ce174185081bce41e9df0654a72c6b1fd2b43736d3"} Dec 06 05:55:35 crc kubenswrapper[4958]: I1206 05:55:35.608962 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-hn59h" event={"ID":"7f28060d-c759-4b4b-a643-bf8acb76c1b2","Type":"ContainerDied","Data":"f5ae2b7b1a48ef150d317e2a95ed471ea4cf0d282ff7b3b1225d50a1a5e97aba"} Dec 06 05:55:35 crc kubenswrapper[4958]: I1206 05:55:35.608996 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f5ae2b7b1a48ef150d317e2a95ed471ea4cf0d282ff7b3b1225d50a1a5e97aba" Dec 06 05:55:35 crc kubenswrapper[4958]: I1206 05:55:35.609009 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-hn59h" Dec 06 05:55:35 crc kubenswrapper[4958]: I1206 05:55:35.740899 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-754c8966f6-f7t66"] Dec 06 05:55:35 crc kubenswrapper[4958]: E1206 05:55:35.741378 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f28060d-c759-4b4b-a643-bf8acb76c1b2" containerName="placement-db-sync" Dec 06 05:55:35 crc kubenswrapper[4958]: I1206 05:55:35.741398 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f28060d-c759-4b4b-a643-bf8acb76c1b2" containerName="placement-db-sync" Dec 06 05:55:35 crc kubenswrapper[4958]: I1206 05:55:35.741755 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f28060d-c759-4b4b-a643-bf8acb76c1b2" containerName="placement-db-sync" Dec 06 05:55:35 crc kubenswrapper[4958]: I1206 05:55:35.742952 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-754c8966f6-f7t66" Dec 06 05:55:35 crc kubenswrapper[4958]: I1206 05:55:35.746063 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Dec 06 05:55:35 crc kubenswrapper[4958]: I1206 05:55:35.746063 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Dec 06 05:55:35 crc kubenswrapper[4958]: I1206 05:55:35.746385 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Dec 06 05:55:35 crc kubenswrapper[4958]: I1206 05:55:35.746400 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-czzwd" Dec 06 05:55:35 crc kubenswrapper[4958]: I1206 05:55:35.749339 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Dec 06 05:55:35 crc kubenswrapper[4958]: I1206 05:55:35.754115 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-754c8966f6-f7t66"] Dec 06 05:55:35 crc kubenswrapper[4958]: I1206 05:55:35.762618 4958 scope.go:117] "RemoveContainer" containerID="5accb9146183b617214aef75ec6ce06d37c205b60714d23c9c3badf2f7c49850" Dec 06 05:55:35 crc kubenswrapper[4958]: I1206 05:55:35.797160 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fb4l\" (UniqueName: \"kubernetes.io/projected/796f2f84-aca5-4fb0-8963-fcfd0a851130-kube-api-access-9fb4l\") pod \"placement-754c8966f6-f7t66\" (UID: \"796f2f84-aca5-4fb0-8963-fcfd0a851130\") " pod="openstack/placement-754c8966f6-f7t66" Dec 06 05:55:35 crc kubenswrapper[4958]: I1206 05:55:35.797210 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/796f2f84-aca5-4fb0-8963-fcfd0a851130-scripts\") pod \"placement-754c8966f6-f7t66\" (UID: \"796f2f84-aca5-4fb0-8963-fcfd0a851130\") " pod="openstack/placement-754c8966f6-f7t66" Dec 06 05:55:35 crc kubenswrapper[4958]: I1206 05:55:35.797328 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/796f2f84-aca5-4fb0-8963-fcfd0a851130-combined-ca-bundle\") pod \"placement-754c8966f6-f7t66\" (UID: \"796f2f84-aca5-4fb0-8963-fcfd0a851130\") " pod="openstack/placement-754c8966f6-f7t66" Dec 06 05:55:35 crc kubenswrapper[4958]: I1206 05:55:35.797348 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/796f2f84-aca5-4fb0-8963-fcfd0a851130-logs\") pod \"placement-754c8966f6-f7t66\" (UID: \"796f2f84-aca5-4fb0-8963-fcfd0a851130\") " pod="openstack/placement-754c8966f6-f7t66" Dec 06 05:55:35 crc kubenswrapper[4958]: I1206 05:55:35.797454 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/796f2f84-aca5-4fb0-8963-fcfd0a851130-internal-tls-certs\") pod \"placement-754c8966f6-f7t66\" (UID: \"796f2f84-aca5-4fb0-8963-fcfd0a851130\") " pod="openstack/placement-754c8966f6-f7t66" Dec 06 05:55:35 crc kubenswrapper[4958]: I1206 05:55:35.798027 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/796f2f84-aca5-4fb0-8963-fcfd0a851130-config-data\") pod 
\"placement-754c8966f6-f7t66\" (UID: \"796f2f84-aca5-4fb0-8963-fcfd0a851130\") " pod="openstack/placement-754c8966f6-f7t66" Dec 06 05:55:35 crc kubenswrapper[4958]: I1206 05:55:35.798073 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/796f2f84-aca5-4fb0-8963-fcfd0a851130-public-tls-certs\") pod \"placement-754c8966f6-f7t66\" (UID: \"796f2f84-aca5-4fb0-8963-fcfd0a851130\") " pod="openstack/placement-754c8966f6-f7t66" Dec 06 05:55:35 crc kubenswrapper[4958]: I1206 05:55:35.900013 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fb4l\" (UniqueName: \"kubernetes.io/projected/796f2f84-aca5-4fb0-8963-fcfd0a851130-kube-api-access-9fb4l\") pod \"placement-754c8966f6-f7t66\" (UID: \"796f2f84-aca5-4fb0-8963-fcfd0a851130\") " pod="openstack/placement-754c8966f6-f7t66" Dec 06 05:55:35 crc kubenswrapper[4958]: I1206 05:55:35.900072 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/796f2f84-aca5-4fb0-8963-fcfd0a851130-scripts\") pod \"placement-754c8966f6-f7t66\" (UID: \"796f2f84-aca5-4fb0-8963-fcfd0a851130\") " pod="openstack/placement-754c8966f6-f7t66" Dec 06 05:55:35 crc kubenswrapper[4958]: I1206 05:55:35.900380 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/796f2f84-aca5-4fb0-8963-fcfd0a851130-combined-ca-bundle\") pod \"placement-754c8966f6-f7t66\" (UID: \"796f2f84-aca5-4fb0-8963-fcfd0a851130\") " pod="openstack/placement-754c8966f6-f7t66" Dec 06 05:55:35 crc kubenswrapper[4958]: I1206 05:55:35.900410 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/796f2f84-aca5-4fb0-8963-fcfd0a851130-logs\") pod \"placement-754c8966f6-f7t66\" (UID: \"796f2f84-aca5-4fb0-8963-fcfd0a851130\") " pod="openstack/placement-754c8966f6-f7t66" Dec 06 05:55:35 crc kubenswrapper[4958]: I1206 05:55:35.900523 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/796f2f84-aca5-4fb0-8963-fcfd0a851130-internal-tls-certs\") pod \"placement-754c8966f6-f7t66\" (UID: \"796f2f84-aca5-4fb0-8963-fcfd0a851130\") " pod="openstack/placement-754c8966f6-f7t66" Dec 06 05:55:35 crc kubenswrapper[4958]: I1206 05:55:35.900569 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/796f2f84-aca5-4fb0-8963-fcfd0a851130-config-data\") pod \"placement-754c8966f6-f7t66\" (UID: \"796f2f84-aca5-4fb0-8963-fcfd0a851130\") " pod="openstack/placement-754c8966f6-f7t66" Dec 06 05:55:35 crc kubenswrapper[4958]: I1206 05:55:35.900618 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/796f2f84-aca5-4fb0-8963-fcfd0a851130-public-tls-certs\") pod \"placement-754c8966f6-f7t66\" (UID: \"796f2f84-aca5-4fb0-8963-fcfd0a851130\") " pod="openstack/placement-754c8966f6-f7t66" Dec 06 05:55:35 crc kubenswrapper[4958]: I1206 05:55:35.901393 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/796f2f84-aca5-4fb0-8963-fcfd0a851130-logs\") pod \"placement-754c8966f6-f7t66\" (UID: \"796f2f84-aca5-4fb0-8963-fcfd0a851130\") " 
pod="openstack/placement-754c8966f6-f7t66" Dec 06 05:55:35 crc kubenswrapper[4958]: I1206 05:55:35.905324 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/796f2f84-aca5-4fb0-8963-fcfd0a851130-combined-ca-bundle\") pod \"placement-754c8966f6-f7t66\" (UID: \"796f2f84-aca5-4fb0-8963-fcfd0a851130\") " pod="openstack/placement-754c8966f6-f7t66" Dec 06 05:55:35 crc kubenswrapper[4958]: I1206 05:55:35.905496 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/796f2f84-aca5-4fb0-8963-fcfd0a851130-internal-tls-certs\") pod \"placement-754c8966f6-f7t66\" (UID: \"796f2f84-aca5-4fb0-8963-fcfd0a851130\") " pod="openstack/placement-754c8966f6-f7t66" Dec 06 05:55:35 crc kubenswrapper[4958]: I1206 05:55:35.905547 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/796f2f84-aca5-4fb0-8963-fcfd0a851130-config-data\") pod \"placement-754c8966f6-f7t66\" (UID: \"796f2f84-aca5-4fb0-8963-fcfd0a851130\") " pod="openstack/placement-754c8966f6-f7t66" Dec 06 05:55:35 crc kubenswrapper[4958]: I1206 05:55:35.905701 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/796f2f84-aca5-4fb0-8963-fcfd0a851130-scripts\") pod \"placement-754c8966f6-f7t66\" (UID: \"796f2f84-aca5-4fb0-8963-fcfd0a851130\") " pod="openstack/placement-754c8966f6-f7t66" Dec 06 05:55:35 crc kubenswrapper[4958]: I1206 05:55:35.906146 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/796f2f84-aca5-4fb0-8963-fcfd0a851130-public-tls-certs\") pod \"placement-754c8966f6-f7t66\" (UID: \"796f2f84-aca5-4fb0-8963-fcfd0a851130\") " pod="openstack/placement-754c8966f6-f7t66" Dec 06 05:55:35 crc kubenswrapper[4958]: I1206 05:55:35.915515 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fb4l\" (UniqueName: \"kubernetes.io/projected/796f2f84-aca5-4fb0-8963-fcfd0a851130-kube-api-access-9fb4l\") pod \"placement-754c8966f6-f7t66\" (UID: \"796f2f84-aca5-4fb0-8963-fcfd0a851130\") " pod="openstack/placement-754c8966f6-f7t66" Dec 06 05:55:36 crc kubenswrapper[4958]: E1206 05:55:36.050854 4958 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6176dbaf_8ccf_41de_8d4b_0b5ac9328569.slice/crio-conmon-004bb2fd7f87e2e15fac9ea5bf183f7d1f2234ecc5ce5f35eadd3cf7a3ee67a0.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6176dbaf_8ccf_41de_8d4b_0b5ac9328569.slice/crio-004bb2fd7f87e2e15fac9ea5bf183f7d1f2234ecc5ce5f35eadd3cf7a3ee67a0.scope\": RecentStats: unable to find data in memory cache]" Dec 06 05:55:36 crc kubenswrapper[4958]: I1206 05:55:36.071828 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-754c8966f6-f7t66" Dec 06 05:55:36 crc kubenswrapper[4958]: I1206 05:55:36.130418 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-588cbd45c9-xblwx" Dec 06 05:55:36 crc kubenswrapper[4958]: I1206 05:55:36.547134 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-754c8966f6-f7t66"] Dec 06 05:55:36 crc kubenswrapper[4958]: W1206 05:55:36.549032 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod796f2f84_aca5_4fb0_8963_fcfd0a851130.slice/crio-28bbf9ee2e33a98fe152a9b6f15a3069cb4aa17c9630c7f6faf2acb7688718ca WatchSource:0}: Error finding container 28bbf9ee2e33a98fe152a9b6f15a3069cb4aa17c9630c7f6faf2acb7688718ca: Status 404 returned error can't find the container with id 28bbf9ee2e33a98fe152a9b6f15a3069cb4aa17c9630c7f6faf2acb7688718ca Dec 06 05:55:36 crc kubenswrapper[4958]: I1206 05:55:36.618235 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-754c8966f6-f7t66" event={"ID":"796f2f84-aca5-4fb0-8963-fcfd0a851130","Type":"ContainerStarted","Data":"28bbf9ee2e33a98fe152a9b6f15a3069cb4aa17c9630c7f6faf2acb7688718ca"} Dec 06 05:55:36 crc kubenswrapper[4958]: I1206 05:55:36.620323 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"dcbe2099-3d41-4f69-be20-47d96498cb25","Type":"ContainerStarted","Data":"13d78c5d489ba01cdb45efd404a73638f5106ef6ecd2a0166cc67db5a8883d7f"} Dec 06 05:55:36 crc kubenswrapper[4958]: I1206 05:55:36.629414 4958 generic.go:334] "Generic (PLEG): container finished" podID="6176dbaf-8ccf-41de-8d4b-0b5ac9328569" containerID="11dece826e622429bfc2a8a05c616aa19c8b40e5f512ca7bc93425c6c6eae60c" exitCode=0 Dec 06 05:55:36 crc kubenswrapper[4958]: I1206 05:55:36.629546 4958 generic.go:334] "Generic (PLEG): container finished" podID="6176dbaf-8ccf-41de-8d4b-0b5ac9328569" containerID="004bb2fd7f87e2e15fac9ea5bf183f7d1f2234ecc5ce5f35eadd3cf7a3ee67a0" exitCode=0 Dec 06 05:55:36 crc kubenswrapper[4958]: I1206 05:55:36.629617 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6176dbaf-8ccf-41de-8d4b-0b5ac9328569","Type":"ContainerDied","Data":"11dece826e622429bfc2a8a05c616aa19c8b40e5f512ca7bc93425c6c6eae60c"} Dec 06 05:55:36 crc kubenswrapper[4958]: I1206 05:55:36.629692 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6176dbaf-8ccf-41de-8d4b-0b5ac9328569","Type":"ContainerDied","Data":"004bb2fd7f87e2e15fac9ea5bf183f7d1f2234ecc5ce5f35eadd3cf7a3ee67a0"} Dec 06 05:55:37 crc kubenswrapper[4958]: I1206 05:55:37.642095 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-754c8966f6-f7t66" event={"ID":"796f2f84-aca5-4fb0-8963-fcfd0a851130","Type":"ContainerStarted","Data":"e0e01d1805fb5a0be141389f2304198500f99f00971220b2da94fe3e1c1bd5b4"} Dec 06 05:55:37 crc kubenswrapper[4958]: I1206 05:55:37.642708 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-754c8966f6-f7t66" event={"ID":"796f2f84-aca5-4fb0-8963-fcfd0a851130","Type":"ContainerStarted","Data":"1f08729f051d6dd56bd1f9f5e63c81ab48ebb6fd568dfff3340c0c9aae199218"} Dec 06 05:55:38 crc kubenswrapper[4958]: I1206 05:55:38.649711 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-754c8966f6-f7t66" Dec 06 05:55:38 crc kubenswrapper[4958]: I1206 05:55:38.649756 4958 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-754c8966f6-f7t66" Dec 06 05:55:38 crc kubenswrapper[4958]: I1206 05:55:38.678252 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-754c8966f6-f7t66" podStartSLOduration=3.678227711 podStartE2EDuration="3.678227711s" podCreationTimestamp="2025-12-06 05:55:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:55:38.669036405 +0000 UTC m=+1649.202807168" watchObservedRunningTime="2025-12-06 05:55:38.678227711 +0000 UTC m=+1649.211998474" Dec 06 05:55:39 crc kubenswrapper[4958]: I1206 05:55:39.867165 4958 patch_prober.go:28] interesting pod/machine-config-daemon-5ktnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 05:55:39 crc kubenswrapper[4958]: I1206 05:55:39.867558 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 05:55:41 crc kubenswrapper[4958]: I1206 05:55:41.125384 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-588cbd45c9-xblwx" Dec 06 05:55:41 crc kubenswrapper[4958]: I1206 05:55:41.251089 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="6176dbaf-8ccf-41de-8d4b-0b5ac9328569" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.174:3000/\": dial tcp 10.217.0.174:3000: connect: connection refused" Dec 06 05:55:42 crc kubenswrapper[4958]: I1206 05:55:42.691290 4958 generic.go:334] "Generic (PLEG): container finished" podID="6176dbaf-8ccf-41de-8d4b-0b5ac9328569" containerID="e5cbd5221e430779ce7691325df6a1c16b3ba0b95d835c7bdb4574f3a49a5aab" exitCode=0 Dec 06 05:55:42 crc kubenswrapper[4958]: I1206 05:55:42.691335 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6176dbaf-8ccf-41de-8d4b-0b5ac9328569","Type":"ContainerDied","Data":"e5cbd5221e430779ce7691325df6a1c16b3ba0b95d835c7bdb4574f3a49a5aab"} Dec 06 05:55:43 crc kubenswrapper[4958]: I1206 05:55:43.018370 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Dec 06 05:55:43 crc kubenswrapper[4958]: I1206 05:55:43.057252 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Dec 06 05:55:43 crc kubenswrapper[4958]: I1206 05:55:43.208836 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 06 05:55:43 crc kubenswrapper[4958]: I1206 05:55:43.334644 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6176dbaf-8ccf-41de-8d4b-0b5ac9328569-combined-ca-bundle\") pod \"6176dbaf-8ccf-41de-8d4b-0b5ac9328569\" (UID: \"6176dbaf-8ccf-41de-8d4b-0b5ac9328569\") " Dec 06 05:55:43 crc kubenswrapper[4958]: I1206 05:55:43.335080 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6176dbaf-8ccf-41de-8d4b-0b5ac9328569-log-httpd\") pod \"6176dbaf-8ccf-41de-8d4b-0b5ac9328569\" (UID: \"6176dbaf-8ccf-41de-8d4b-0b5ac9328569\") " Dec 06 05:55:43 crc kubenswrapper[4958]: I1206 05:55:43.335315 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6176dbaf-8ccf-41de-8d4b-0b5ac9328569-sg-core-conf-yaml\") pod \"6176dbaf-8ccf-41de-8d4b-0b5ac9328569\" (UID: \"6176dbaf-8ccf-41de-8d4b-0b5ac9328569\") " Dec 06 05:55:43 crc kubenswrapper[4958]: I1206 05:55:43.335361 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6176dbaf-8ccf-41de-8d4b-0b5ac9328569-scripts\") pod \"6176dbaf-8ccf-41de-8d4b-0b5ac9328569\" (UID: \"6176dbaf-8ccf-41de-8d4b-0b5ac9328569\") " Dec 06 05:55:43 crc kubenswrapper[4958]: I1206 05:55:43.335414 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6176dbaf-8ccf-41de-8d4b-0b5ac9328569-run-httpd\") pod \"6176dbaf-8ccf-41de-8d4b-0b5ac9328569\" (UID: \"6176dbaf-8ccf-41de-8d4b-0b5ac9328569\") " Dec 06 05:55:43 crc kubenswrapper[4958]: I1206 05:55:43.335523 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kn9rg\" (UniqueName: \"kubernetes.io/projected/6176dbaf-8ccf-41de-8d4b-0b5ac9328569-kube-api-access-kn9rg\") pod \"6176dbaf-8ccf-41de-8d4b-0b5ac9328569\" (UID: \"6176dbaf-8ccf-41de-8d4b-0b5ac9328569\") " Dec 06 05:55:43 crc kubenswrapper[4958]: I1206 05:55:43.335560 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6176dbaf-8ccf-41de-8d4b-0b5ac9328569-config-data\") pod \"6176dbaf-8ccf-41de-8d4b-0b5ac9328569\" (UID: \"6176dbaf-8ccf-41de-8d4b-0b5ac9328569\") " Dec 06 05:55:43 crc kubenswrapper[4958]: I1206 05:55:43.335576 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6176dbaf-8ccf-41de-8d4b-0b5ac9328569-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "6176dbaf-8ccf-41de-8d4b-0b5ac9328569" (UID: "6176dbaf-8ccf-41de-8d4b-0b5ac9328569"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 05:55:43 crc kubenswrapper[4958]: I1206 05:55:43.335955 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6176dbaf-8ccf-41de-8d4b-0b5ac9328569-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "6176dbaf-8ccf-41de-8d4b-0b5ac9328569" (UID: "6176dbaf-8ccf-41de-8d4b-0b5ac9328569"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 05:55:43 crc kubenswrapper[4958]: I1206 05:55:43.336144 4958 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6176dbaf-8ccf-41de-8d4b-0b5ac9328569-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 06 05:55:43 crc kubenswrapper[4958]: I1206 05:55:43.336166 4958 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6176dbaf-8ccf-41de-8d4b-0b5ac9328569-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 06 05:55:43 crc kubenswrapper[4958]: I1206 05:55:43.346641 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6176dbaf-8ccf-41de-8d4b-0b5ac9328569-scripts" (OuterVolumeSpecName: "scripts") pod "6176dbaf-8ccf-41de-8d4b-0b5ac9328569" (UID: "6176dbaf-8ccf-41de-8d4b-0b5ac9328569"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:55:43 crc kubenswrapper[4958]: I1206 05:55:43.346730 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6176dbaf-8ccf-41de-8d4b-0b5ac9328569-kube-api-access-kn9rg" (OuterVolumeSpecName: "kube-api-access-kn9rg") pod "6176dbaf-8ccf-41de-8d4b-0b5ac9328569" (UID: "6176dbaf-8ccf-41de-8d4b-0b5ac9328569"). InnerVolumeSpecName "kube-api-access-kn9rg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:55:43 crc kubenswrapper[4958]: I1206 05:55:43.371558 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6176dbaf-8ccf-41de-8d4b-0b5ac9328569-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "6176dbaf-8ccf-41de-8d4b-0b5ac9328569" (UID: "6176dbaf-8ccf-41de-8d4b-0b5ac9328569"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:55:43 crc kubenswrapper[4958]: I1206 05:55:43.411985 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6176dbaf-8ccf-41de-8d4b-0b5ac9328569-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6176dbaf-8ccf-41de-8d4b-0b5ac9328569" (UID: "6176dbaf-8ccf-41de-8d4b-0b5ac9328569"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:55:43 crc kubenswrapper[4958]: I1206 05:55:43.438022 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6176dbaf-8ccf-41de-8d4b-0b5ac9328569-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 05:55:43 crc kubenswrapper[4958]: I1206 05:55:43.438159 4958 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/6176dbaf-8ccf-41de-8d4b-0b5ac9328569-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 06 05:55:43 crc kubenswrapper[4958]: I1206 05:55:43.438260 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6176dbaf-8ccf-41de-8d4b-0b5ac9328569-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 05:55:43 crc kubenswrapper[4958]: I1206 05:55:43.438327 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kn9rg\" (UniqueName: \"kubernetes.io/projected/6176dbaf-8ccf-41de-8d4b-0b5ac9328569-kube-api-access-kn9rg\") on node \"crc\" DevicePath \"\"" Dec 06 05:55:43 crc kubenswrapper[4958]: I1206 05:55:43.441314 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6176dbaf-8ccf-41de-8d4b-0b5ac9328569-config-data" (OuterVolumeSpecName: "config-data") pod "6176dbaf-8ccf-41de-8d4b-0b5ac9328569" (UID: "6176dbaf-8ccf-41de-8d4b-0b5ac9328569"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:55:43 crc kubenswrapper[4958]: I1206 05:55:43.540036 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6176dbaf-8ccf-41de-8d4b-0b5ac9328569-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 05:55:43 crc kubenswrapper[4958]: I1206 05:55:43.705161 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 06 05:55:43 crc kubenswrapper[4958]: I1206 05:55:43.705186 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"6176dbaf-8ccf-41de-8d4b-0b5ac9328569","Type":"ContainerDied","Data":"21296cba31780c1eeac583151ff6fd4e0a612fbad5551a04efaaf17e10d5bfdc"} Dec 06 05:55:43 crc kubenswrapper[4958]: I1206 05:55:43.705233 4958 scope.go:117] "RemoveContainer" containerID="11dece826e622429bfc2a8a05c616aa19c8b40e5f512ca7bc93425c6c6eae60c" Dec 06 05:55:43 crc kubenswrapper[4958]: I1206 05:55:43.705449 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Dec 06 05:55:43 crc kubenswrapper[4958]: E1206 05:55:43.705750 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 13d78c5d489ba01cdb45efd404a73638f5106ef6ecd2a0166cc67db5a8883d7f is running failed: container process not found" containerID="13d78c5d489ba01cdb45efd404a73638f5106ef6ecd2a0166cc67db5a8883d7f" cmd=["/usr/bin/pgrep","-f","-r","DRST","watcher-decision-engine"] Dec 06 05:55:43 crc kubenswrapper[4958]: E1206 05:55:43.706619 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 13d78c5d489ba01cdb45efd404a73638f5106ef6ecd2a0166cc67db5a8883d7f is running failed: container process not found" containerID="13d78c5d489ba01cdb45efd404a73638f5106ef6ecd2a0166cc67db5a8883d7f" cmd=["/usr/bin/pgrep","-f","-r","DRST","watcher-decision-engine"] Dec 06 05:55:43 crc kubenswrapper[4958]: E1206 05:55:43.707633 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 13d78c5d489ba01cdb45efd404a73638f5106ef6ecd2a0166cc67db5a8883d7f is running failed: container process not found" containerID="13d78c5d489ba01cdb45efd404a73638f5106ef6ecd2a0166cc67db5a8883d7f" cmd=["/usr/bin/pgrep","-f","-r","DRST","watcher-decision-engine"] Dec 06 05:55:43 crc kubenswrapper[4958]: E1206 05:55:43.707687 4958 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 13d78c5d489ba01cdb45efd404a73638f5106ef6ecd2a0166cc67db5a8883d7f is running failed: container process not found" probeType="Readiness" pod="openstack/watcher-decision-engine-0" podUID="dcbe2099-3d41-4f69-be20-47d96498cb25" containerName="watcher-decision-engine" Dec 06 05:55:43 crc kubenswrapper[4958]: I1206 05:55:43.730290 4958 scope.go:117] "RemoveContainer" containerID="909408dd3cfb2d5d33aa97ce174185081bce41e9df0654a72c6b1fd2b43736d3" Dec 06 05:55:43 crc kubenswrapper[4958]: I1206 05:55:43.778904 4958 scope.go:117] "RemoveContainer" containerID="e5cbd5221e430779ce7691325df6a1c16b3ba0b95d835c7bdb4574f3a49a5aab" Dec 06 05:55:43 crc kubenswrapper[4958]: I1206 05:55:43.783939 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 06 05:55:43 crc kubenswrapper[4958]: I1206 05:55:43.791416 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 06 05:55:43 crc kubenswrapper[4958]: I1206 05:55:43.808889 4958 scope.go:117] "RemoveContainer" containerID="004bb2fd7f87e2e15fac9ea5bf183f7d1f2234ecc5ce5f35eadd3cf7a3ee67a0" Dec 06 05:55:43 crc kubenswrapper[4958]: I1206 05:55:43.810969 4958 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/ceilometer-0"] Dec 06 05:55:43 crc kubenswrapper[4958]: E1206 05:55:43.812938 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6176dbaf-8ccf-41de-8d4b-0b5ac9328569" containerName="ceilometer-notification-agent" Dec 06 05:55:43 crc kubenswrapper[4958]: I1206 05:55:43.812964 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="6176dbaf-8ccf-41de-8d4b-0b5ac9328569" containerName="ceilometer-notification-agent" Dec 06 05:55:43 crc kubenswrapper[4958]: E1206 05:55:43.812998 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6176dbaf-8ccf-41de-8d4b-0b5ac9328569" containerName="ceilometer-central-agent" Dec 06 05:55:43 crc kubenswrapper[4958]: I1206 05:55:43.813006 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="6176dbaf-8ccf-41de-8d4b-0b5ac9328569" containerName="ceilometer-central-agent" Dec 06 05:55:43 crc kubenswrapper[4958]: E1206 05:55:43.813037 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6176dbaf-8ccf-41de-8d4b-0b5ac9328569" containerName="proxy-httpd" Dec 06 05:55:43 crc kubenswrapper[4958]: I1206 05:55:43.813046 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="6176dbaf-8ccf-41de-8d4b-0b5ac9328569" containerName="proxy-httpd" Dec 06 05:55:43 crc kubenswrapper[4958]: E1206 05:55:43.813059 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6176dbaf-8ccf-41de-8d4b-0b5ac9328569" containerName="sg-core" Dec 06 05:55:43 crc kubenswrapper[4958]: I1206 05:55:43.813066 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="6176dbaf-8ccf-41de-8d4b-0b5ac9328569" containerName="sg-core" Dec 06 05:55:43 crc kubenswrapper[4958]: I1206 05:55:43.813318 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="6176dbaf-8ccf-41de-8d4b-0b5ac9328569" containerName="ceilometer-notification-agent" Dec 06 05:55:43 crc kubenswrapper[4958]: I1206 05:55:43.813349 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="6176dbaf-8ccf-41de-8d4b-0b5ac9328569" containerName="sg-core" Dec 06 05:55:43 crc kubenswrapper[4958]: I1206 05:55:43.813372 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="6176dbaf-8ccf-41de-8d4b-0b5ac9328569" containerName="proxy-httpd" Dec 06 05:55:43 crc kubenswrapper[4958]: I1206 05:55:43.813390 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="6176dbaf-8ccf-41de-8d4b-0b5ac9328569" containerName="ceilometer-central-agent" Dec 06 05:55:43 crc kubenswrapper[4958]: I1206 05:55:43.816785 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 06 05:55:43 crc kubenswrapper[4958]: I1206 05:55:43.819149 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 06 05:55:43 crc kubenswrapper[4958]: I1206 05:55:43.819267 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 06 05:55:43 crc kubenswrapper[4958]: I1206 05:55:43.826777 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 06 05:55:43 crc kubenswrapper[4958]: I1206 05:55:43.948923 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a1295d8-a29c-4393-aea8-d2a7323e26d6-scripts\") pod \"ceilometer-0\" (UID: \"5a1295d8-a29c-4393-aea8-d2a7323e26d6\") " pod="openstack/ceilometer-0" Dec 06 05:55:43 crc kubenswrapper[4958]: I1206 05:55:43.949010 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5a1295d8-a29c-4393-aea8-d2a7323e26d6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5a1295d8-a29c-4393-aea8-d2a7323e26d6\") " pod="openstack/ceilometer-0" Dec 06 05:55:43 crc kubenswrapper[4958]: I1206 05:55:43.949138 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a1295d8-a29c-4393-aea8-d2a7323e26d6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5a1295d8-a29c-4393-aea8-d2a7323e26d6\") " pod="openstack/ceilometer-0" Dec 06 05:55:43 crc kubenswrapper[4958]: I1206 05:55:43.949180 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5a1295d8-a29c-4393-aea8-d2a7323e26d6-run-httpd\") pod \"ceilometer-0\" (UID: \"5a1295d8-a29c-4393-aea8-d2a7323e26d6\") " pod="openstack/ceilometer-0" Dec 06 05:55:43 crc kubenswrapper[4958]: I1206 05:55:43.949216 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brzbq\" (UniqueName: \"kubernetes.io/projected/5a1295d8-a29c-4393-aea8-d2a7323e26d6-kube-api-access-brzbq\") pod \"ceilometer-0\" (UID: \"5a1295d8-a29c-4393-aea8-d2a7323e26d6\") " pod="openstack/ceilometer-0" Dec 06 05:55:43 crc kubenswrapper[4958]: I1206 05:55:43.949300 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a1295d8-a29c-4393-aea8-d2a7323e26d6-config-data\") pod \"ceilometer-0\" (UID: \"5a1295d8-a29c-4393-aea8-d2a7323e26d6\") " pod="openstack/ceilometer-0" Dec 06 05:55:43 crc kubenswrapper[4958]: I1206 05:55:43.949344 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5a1295d8-a29c-4393-aea8-d2a7323e26d6-log-httpd\") pod \"ceilometer-0\" (UID: \"5a1295d8-a29c-4393-aea8-d2a7323e26d6\") " pod="openstack/ceilometer-0" Dec 06 05:55:44 crc kubenswrapper[4958]: I1206 05:55:44.051238 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a1295d8-a29c-4393-aea8-d2a7323e26d6-config-data\") pod \"ceilometer-0\" (UID: \"5a1295d8-a29c-4393-aea8-d2a7323e26d6\") " pod="openstack/ceilometer-0" Dec 06 05:55:44 crc kubenswrapper[4958]: I1206 05:55:44.051654 
4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5a1295d8-a29c-4393-aea8-d2a7323e26d6-log-httpd\") pod \"ceilometer-0\" (UID: \"5a1295d8-a29c-4393-aea8-d2a7323e26d6\") " pod="openstack/ceilometer-0" Dec 06 05:55:44 crc kubenswrapper[4958]: I1206 05:55:44.051907 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a1295d8-a29c-4393-aea8-d2a7323e26d6-scripts\") pod \"ceilometer-0\" (UID: \"5a1295d8-a29c-4393-aea8-d2a7323e26d6\") " pod="openstack/ceilometer-0" Dec 06 05:55:44 crc kubenswrapper[4958]: I1206 05:55:44.052104 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5a1295d8-a29c-4393-aea8-d2a7323e26d6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5a1295d8-a29c-4393-aea8-d2a7323e26d6\") " pod="openstack/ceilometer-0" Dec 06 05:55:44 crc kubenswrapper[4958]: I1206 05:55:44.052278 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a1295d8-a29c-4393-aea8-d2a7323e26d6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5a1295d8-a29c-4393-aea8-d2a7323e26d6\") " pod="openstack/ceilometer-0" Dec 06 05:55:44 crc kubenswrapper[4958]: I1206 05:55:44.052124 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5a1295d8-a29c-4393-aea8-d2a7323e26d6-log-httpd\") pod \"ceilometer-0\" (UID: \"5a1295d8-a29c-4393-aea8-d2a7323e26d6\") " pod="openstack/ceilometer-0" Dec 06 05:55:44 crc kubenswrapper[4958]: I1206 05:55:44.052399 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5a1295d8-a29c-4393-aea8-d2a7323e26d6-run-httpd\") pod \"ceilometer-0\" (UID: \"5a1295d8-a29c-4393-aea8-d2a7323e26d6\") " pod="openstack/ceilometer-0" Dec 06 05:55:44 crc kubenswrapper[4958]: I1206 05:55:44.052693 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brzbq\" (UniqueName: \"kubernetes.io/projected/5a1295d8-a29c-4393-aea8-d2a7323e26d6-kube-api-access-brzbq\") pod \"ceilometer-0\" (UID: \"5a1295d8-a29c-4393-aea8-d2a7323e26d6\") " pod="openstack/ceilometer-0" Dec 06 05:55:44 crc kubenswrapper[4958]: I1206 05:55:44.053249 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5a1295d8-a29c-4393-aea8-d2a7323e26d6-run-httpd\") pod \"ceilometer-0\" (UID: \"5a1295d8-a29c-4393-aea8-d2a7323e26d6\") " pod="openstack/ceilometer-0" Dec 06 05:55:44 crc kubenswrapper[4958]: I1206 05:55:44.058054 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a1295d8-a29c-4393-aea8-d2a7323e26d6-config-data\") pod \"ceilometer-0\" (UID: \"5a1295d8-a29c-4393-aea8-d2a7323e26d6\") " pod="openstack/ceilometer-0" Dec 06 05:55:44 crc kubenswrapper[4958]: I1206 05:55:44.058255 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a1295d8-a29c-4393-aea8-d2a7323e26d6-scripts\") pod \"ceilometer-0\" (UID: \"5a1295d8-a29c-4393-aea8-d2a7323e26d6\") " pod="openstack/ceilometer-0" Dec 06 05:55:44 crc kubenswrapper[4958]: I1206 05:55:44.059564 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a1295d8-a29c-4393-aea8-d2a7323e26d6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5a1295d8-a29c-4393-aea8-d2a7323e26d6\") " pod="openstack/ceilometer-0" Dec 06 05:55:44 crc kubenswrapper[4958]: I1206 05:55:44.061149 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5a1295d8-a29c-4393-aea8-d2a7323e26d6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5a1295d8-a29c-4393-aea8-d2a7323e26d6\") " pod="openstack/ceilometer-0" Dec 06 05:55:44 crc kubenswrapper[4958]: I1206 05:55:44.083278 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brzbq\" (UniqueName: \"kubernetes.io/projected/5a1295d8-a29c-4393-aea8-d2a7323e26d6-kube-api-access-brzbq\") pod \"ceilometer-0\" (UID: \"5a1295d8-a29c-4393-aea8-d2a7323e26d6\") " pod="openstack/ceilometer-0" Dec 06 05:55:44 crc kubenswrapper[4958]: I1206 05:55:44.136145 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 06 05:55:44 crc kubenswrapper[4958]: I1206 05:55:44.580516 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 06 05:55:44 crc kubenswrapper[4958]: I1206 05:55:44.715646 4958 generic.go:334] "Generic (PLEG): container finished" podID="dcbe2099-3d41-4f69-be20-47d96498cb25" containerID="13d78c5d489ba01cdb45efd404a73638f5106ef6ecd2a0166cc67db5a8883d7f" exitCode=1 Dec 06 05:55:44 crc kubenswrapper[4958]: I1206 05:55:44.715705 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"dcbe2099-3d41-4f69-be20-47d96498cb25","Type":"ContainerDied","Data":"13d78c5d489ba01cdb45efd404a73638f5106ef6ecd2a0166cc67db5a8883d7f"} Dec 06 05:55:44 crc kubenswrapper[4958]: I1206 05:55:44.716040 4958 scope.go:117] "RemoveContainer" containerID="5accb9146183b617214aef75ec6ce06d37c205b60714d23c9c3badf2f7c49850" Dec 06 05:55:44 crc kubenswrapper[4958]: I1206 05:55:44.716207 4958 scope.go:117] "RemoveContainer" containerID="13d78c5d489ba01cdb45efd404a73638f5106ef6ecd2a0166cc67db5a8883d7f" Dec 06 05:55:44 crc kubenswrapper[4958]: E1206 05:55:44.716443 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 20s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(dcbe2099-3d41-4f69-be20-47d96498cb25)\"" pod="openstack/watcher-decision-engine-0" podUID="dcbe2099-3d41-4f69-be20-47d96498cb25" Dec 06 05:55:44 crc kubenswrapper[4958]: I1206 05:55:44.719533 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5a1295d8-a29c-4393-aea8-d2a7323e26d6","Type":"ContainerStarted","Data":"f9a09bfe6335fe741dc03c28ff2f2218cd40407fbecfbad3a6f7a674d96b4a06"} Dec 06 05:55:45 crc kubenswrapper[4958]: I1206 05:55:45.730248 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5a1295d8-a29c-4393-aea8-d2a7323e26d6","Type":"ContainerStarted","Data":"2bcd8f82e026dfacf922a88cc7296a59d3b0ded8c305de509071ba3a6dd9f631"} Dec 06 05:55:45 crc kubenswrapper[4958]: I1206 05:55:45.732570 4958 scope.go:117] "RemoveContainer" containerID="13d78c5d489ba01cdb45efd404a73638f5106ef6ecd2a0166cc67db5a8883d7f" Dec 06 05:55:45 crc kubenswrapper[4958]: E1206 05:55:45.732896 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 20s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(dcbe2099-3d41-4f69-be20-47d96498cb25)\"" pod="openstack/watcher-decision-engine-0" podUID="dcbe2099-3d41-4f69-be20-47d96498cb25" Dec 06 05:55:45 crc kubenswrapper[4958]: I1206 05:55:45.773888 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6176dbaf-8ccf-41de-8d4b-0b5ac9328569" path="/var/lib/kubelet/pods/6176dbaf-8ccf-41de-8d4b-0b5ac9328569/volumes" Dec 06 05:55:46 crc kubenswrapper[4958]: I1206 05:55:46.750677 4958 generic.go:334] "Generic (PLEG): container finished" podID="94a6d712-4bb0-458b-878a-99dd8d47a8f9" containerID="cddddc08f5e105cd63270d231de952de1e1f69bce1a7d233bef7a08f7d99bec4" exitCode=0 Dec 06 05:55:46 crc kubenswrapper[4958]: I1206 05:55:46.751224 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-znqzx" event={"ID":"94a6d712-4bb0-458b-878a-99dd8d47a8f9","Type":"ContainerDied","Data":"cddddc08f5e105cd63270d231de952de1e1f69bce1a7d233bef7a08f7d99bec4"} Dec 06 05:55:46 crc kubenswrapper[4958]: I1206 05:55:46.753801 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5a1295d8-a29c-4393-aea8-d2a7323e26d6","Type":"ContainerStarted","Data":"3bfa7e1eca04947d840f735dfe050884f6a62c3136420f2e3fc425ab403d10e2"} Dec 06 05:55:47 crc kubenswrapper[4958]: I1206 05:55:47.798098 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5a1295d8-a29c-4393-aea8-d2a7323e26d6","Type":"ContainerStarted","Data":"3a5f0559b455759c28bff87006d56b761feade8078f5f08a0f17459752f95f7b"} Dec 06 05:55:48 crc kubenswrapper[4958]: I1206 05:55:48.155053 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-znqzx" Dec 06 05:55:48 crc kubenswrapper[4958]: I1206 05:55:48.236314 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/94a6d712-4bb0-458b-878a-99dd8d47a8f9-db-sync-config-data\") pod \"94a6d712-4bb0-458b-878a-99dd8d47a8f9\" (UID: \"94a6d712-4bb0-458b-878a-99dd8d47a8f9\") " Dec 06 05:55:48 crc kubenswrapper[4958]: I1206 05:55:48.236404 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94a6d712-4bb0-458b-878a-99dd8d47a8f9-combined-ca-bundle\") pod \"94a6d712-4bb0-458b-878a-99dd8d47a8f9\" (UID: \"94a6d712-4bb0-458b-878a-99dd8d47a8f9\") " Dec 06 05:55:48 crc kubenswrapper[4958]: I1206 05:55:48.236513 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4q92\" (UniqueName: \"kubernetes.io/projected/94a6d712-4bb0-458b-878a-99dd8d47a8f9-kube-api-access-q4q92\") pod \"94a6d712-4bb0-458b-878a-99dd8d47a8f9\" (UID: \"94a6d712-4bb0-458b-878a-99dd8d47a8f9\") " Dec 06 05:55:48 crc kubenswrapper[4958]: I1206 05:55:48.246749 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94a6d712-4bb0-458b-878a-99dd8d47a8f9-kube-api-access-q4q92" (OuterVolumeSpecName: "kube-api-access-q4q92") pod "94a6d712-4bb0-458b-878a-99dd8d47a8f9" (UID: "94a6d712-4bb0-458b-878a-99dd8d47a8f9"). InnerVolumeSpecName "kube-api-access-q4q92". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:55:48 crc kubenswrapper[4958]: I1206 05:55:48.247061 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94a6d712-4bb0-458b-878a-99dd8d47a8f9-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "94a6d712-4bb0-458b-878a-99dd8d47a8f9" (UID: "94a6d712-4bb0-458b-878a-99dd8d47a8f9"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:55:48 crc kubenswrapper[4958]: I1206 05:55:48.290566 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94a6d712-4bb0-458b-878a-99dd8d47a8f9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "94a6d712-4bb0-458b-878a-99dd8d47a8f9" (UID: "94a6d712-4bb0-458b-878a-99dd8d47a8f9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:55:48 crc kubenswrapper[4958]: I1206 05:55:48.338854 4958 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/94a6d712-4bb0-458b-878a-99dd8d47a8f9-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 05:55:48 crc kubenswrapper[4958]: I1206 05:55:48.339168 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94a6d712-4bb0-458b-878a-99dd8d47a8f9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 05:55:48 crc kubenswrapper[4958]: I1206 05:55:48.339227 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q4q92\" (UniqueName: \"kubernetes.io/projected/94a6d712-4bb0-458b-878a-99dd8d47a8f9-kube-api-access-q4q92\") on node \"crc\" DevicePath \"\"" Dec 06 05:55:48 crc kubenswrapper[4958]: I1206 05:55:48.808413 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-znqzx" Dec 06 05:55:48 crc kubenswrapper[4958]: I1206 05:55:48.812808 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-znqzx" event={"ID":"94a6d712-4bb0-458b-878a-99dd8d47a8f9","Type":"ContainerDied","Data":"31ac9df547dac43f420871ee6844f754c7a12632497d4e329d342f1c589ea52f"} Dec 06 05:55:48 crc kubenswrapper[4958]: I1206 05:55:48.813015 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="31ac9df547dac43f420871ee6844f754c7a12632497d4e329d342f1c589ea52f" Dec 06 05:55:49 crc kubenswrapper[4958]: I1206 05:55:49.094730 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-68ff9f8f57-qzqld"] Dec 06 05:55:49 crc kubenswrapper[4958]: E1206 05:55:49.095277 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94a6d712-4bb0-458b-878a-99dd8d47a8f9" containerName="barbican-db-sync" Dec 06 05:55:49 crc kubenswrapper[4958]: I1206 05:55:49.095286 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="94a6d712-4bb0-458b-878a-99dd8d47a8f9" containerName="barbican-db-sync" Dec 06 05:55:49 crc kubenswrapper[4958]: I1206 05:55:49.095480 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="94a6d712-4bb0-458b-878a-99dd8d47a8f9" containerName="barbican-db-sync" Dec 06 05:55:49 crc kubenswrapper[4958]: I1206 05:55:49.096334 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-68ff9f8f57-qzqld" Dec 06 05:55:49 crc kubenswrapper[4958]: I1206 05:55:49.100465 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-55pkr" Dec 06 05:55:49 crc kubenswrapper[4958]: I1206 05:55:49.100483 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Dec 06 05:55:49 crc kubenswrapper[4958]: I1206 05:55:49.109202 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Dec 06 05:55:49 crc kubenswrapper[4958]: I1206 05:55:49.123612 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-68ff9f8f57-qzqld"] Dec 06 05:55:49 crc kubenswrapper[4958]: I1206 05:55:49.141542 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-575d9fc686-xrcwg"] Dec 06 05:55:49 crc kubenswrapper[4958]: I1206 05:55:49.143605 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-575d9fc686-xrcwg" Dec 06 05:55:49 crc kubenswrapper[4958]: I1206 05:55:49.145743 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Dec 06 05:55:49 crc kubenswrapper[4958]: I1206 05:55:49.153210 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bv9m\" (UniqueName: \"kubernetes.io/projected/19b32164-d135-4d2b-9f69-bf4f1c986fa5-kube-api-access-5bv9m\") pod \"barbican-keystone-listener-575d9fc686-xrcwg\" (UID: \"19b32164-d135-4d2b-9f69-bf4f1c986fa5\") " pod="openstack/barbican-keystone-listener-575d9fc686-xrcwg" Dec 06 05:55:49 crc kubenswrapper[4958]: I1206 05:55:49.153249 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7df9cb8-058d-4f26-8444-808fd8fd554c-config-data\") pod \"barbican-worker-68ff9f8f57-qzqld\" (UID: \"b7df9cb8-058d-4f26-8444-808fd8fd554c\") " pod="openstack/barbican-worker-68ff9f8f57-qzqld" Dec 06 05:55:49 crc kubenswrapper[4958]: I1206 05:55:49.153279 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7df9cb8-058d-4f26-8444-808fd8fd554c-combined-ca-bundle\") pod \"barbican-worker-68ff9f8f57-qzqld\" (UID: \"b7df9cb8-058d-4f26-8444-808fd8fd554c\") " pod="openstack/barbican-worker-68ff9f8f57-qzqld" Dec 06 05:55:49 crc kubenswrapper[4958]: I1206 05:55:49.153750 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/19b32164-d135-4d2b-9f69-bf4f1c986fa5-config-data-custom\") pod \"barbican-keystone-listener-575d9fc686-xrcwg\" (UID: \"19b32164-d135-4d2b-9f69-bf4f1c986fa5\") " pod="openstack/barbican-keystone-listener-575d9fc686-xrcwg" Dec 06 05:55:49 crc kubenswrapper[4958]: I1206 05:55:49.153899 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19b32164-d135-4d2b-9f69-bf4f1c986fa5-config-data\") pod \"barbican-keystone-listener-575d9fc686-xrcwg\" (UID: \"19b32164-d135-4d2b-9f69-bf4f1c986fa5\") " pod="openstack/barbican-keystone-listener-575d9fc686-xrcwg" Dec 06 05:55:49 crc kubenswrapper[4958]: I1206 05:55:49.153954 4958 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19b32164-d135-4d2b-9f69-bf4f1c986fa5-combined-ca-bundle\") pod \"barbican-keystone-listener-575d9fc686-xrcwg\" (UID: \"19b32164-d135-4d2b-9f69-bf4f1c986fa5\") " pod="openstack/barbican-keystone-listener-575d9fc686-xrcwg" Dec 06 05:55:49 crc kubenswrapper[4958]: I1206 05:55:49.154006 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b7df9cb8-058d-4f26-8444-808fd8fd554c-config-data-custom\") pod \"barbican-worker-68ff9f8f57-qzqld\" (UID: \"b7df9cb8-058d-4f26-8444-808fd8fd554c\") " pod="openstack/barbican-worker-68ff9f8f57-qzqld" Dec 06 05:55:49 crc kubenswrapper[4958]: I1206 05:55:49.154170 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b7df9cb8-058d-4f26-8444-808fd8fd554c-logs\") pod \"barbican-worker-68ff9f8f57-qzqld\" (UID: \"b7df9cb8-058d-4f26-8444-808fd8fd554c\") " pod="openstack/barbican-worker-68ff9f8f57-qzqld" Dec 06 05:55:49 crc kubenswrapper[4958]: I1206 05:55:49.154255 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/19b32164-d135-4d2b-9f69-bf4f1c986fa5-logs\") pod \"barbican-keystone-listener-575d9fc686-xrcwg\" (UID: \"19b32164-d135-4d2b-9f69-bf4f1c986fa5\") " pod="openstack/barbican-keystone-listener-575d9fc686-xrcwg" Dec 06 05:55:49 crc kubenswrapper[4958]: I1206 05:55:49.154284 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgxlg\" (UniqueName: \"kubernetes.io/projected/b7df9cb8-058d-4f26-8444-808fd8fd554c-kube-api-access-kgxlg\") pod \"barbican-worker-68ff9f8f57-qzqld\" (UID: \"b7df9cb8-058d-4f26-8444-808fd8fd554c\") " pod="openstack/barbican-worker-68ff9f8f57-qzqld" Dec 06 05:55:49 crc kubenswrapper[4958]: I1206 05:55:49.164161 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-575d9fc686-xrcwg"] Dec 06 05:55:49 crc kubenswrapper[4958]: I1206 05:55:49.198531 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-549c96b4c7-gnwjx"] Dec 06 05:55:49 crc kubenswrapper[4958]: I1206 05:55:49.200290 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-549c96b4c7-gnwjx" Dec 06 05:55:49 crc kubenswrapper[4958]: I1206 05:55:49.221930 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-549c96b4c7-gnwjx"] Dec 06 05:55:49 crc kubenswrapper[4958]: I1206 05:55:49.256222 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwhm2\" (UniqueName: \"kubernetes.io/projected/8066f868-45e9-4e89-a9ea-e1e269f19696-kube-api-access-qwhm2\") pod \"dnsmasq-dns-549c96b4c7-gnwjx\" (UID: \"8066f868-45e9-4e89-a9ea-e1e269f19696\") " pod="openstack/dnsmasq-dns-549c96b4c7-gnwjx" Dec 06 05:55:49 crc kubenswrapper[4958]: I1206 05:55:49.256283 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8066f868-45e9-4e89-a9ea-e1e269f19696-ovsdbserver-nb\") pod \"dnsmasq-dns-549c96b4c7-gnwjx\" (UID: \"8066f868-45e9-4e89-a9ea-e1e269f19696\") " pod="openstack/dnsmasq-dns-549c96b4c7-gnwjx" Dec 06 05:55:49 crc kubenswrapper[4958]: I1206 05:55:49.256314 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5bv9m\" (UniqueName: \"kubernetes.io/projected/19b32164-d135-4d2b-9f69-bf4f1c986fa5-kube-api-access-5bv9m\") pod \"barbican-keystone-listener-575d9fc686-xrcwg\" (UID: \"19b32164-d135-4d2b-9f69-bf4f1c986fa5\") " pod="openstack/barbican-keystone-listener-575d9fc686-xrcwg" Dec 06 05:55:49 crc kubenswrapper[4958]: I1206 05:55:49.256385 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7df9cb8-058d-4f26-8444-808fd8fd554c-config-data\") pod \"barbican-worker-68ff9f8f57-qzqld\" (UID: \"b7df9cb8-058d-4f26-8444-808fd8fd554c\") " pod="openstack/barbican-worker-68ff9f8f57-qzqld" Dec 06 05:55:49 crc kubenswrapper[4958]: I1206 05:55:49.256560 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7df9cb8-058d-4f26-8444-808fd8fd554c-combined-ca-bundle\") pod \"barbican-worker-68ff9f8f57-qzqld\" (UID: \"b7df9cb8-058d-4f26-8444-808fd8fd554c\") " pod="openstack/barbican-worker-68ff9f8f57-qzqld" Dec 06 05:55:49 crc kubenswrapper[4958]: I1206 05:55:49.256637 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/19b32164-d135-4d2b-9f69-bf4f1c986fa5-config-data-custom\") pod \"barbican-keystone-listener-575d9fc686-xrcwg\" (UID: \"19b32164-d135-4d2b-9f69-bf4f1c986fa5\") " pod="openstack/barbican-keystone-listener-575d9fc686-xrcwg" Dec 06 05:55:49 crc kubenswrapper[4958]: I1206 05:55:49.256789 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8066f868-45e9-4e89-a9ea-e1e269f19696-config\") pod \"dnsmasq-dns-549c96b4c7-gnwjx\" (UID: \"8066f868-45e9-4e89-a9ea-e1e269f19696\") " pod="openstack/dnsmasq-dns-549c96b4c7-gnwjx" Dec 06 05:55:49 crc kubenswrapper[4958]: I1206 05:55:49.256815 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19b32164-d135-4d2b-9f69-bf4f1c986fa5-config-data\") pod \"barbican-keystone-listener-575d9fc686-xrcwg\" (UID: \"19b32164-d135-4d2b-9f69-bf4f1c986fa5\") " pod="openstack/barbican-keystone-listener-575d9fc686-xrcwg" Dec 06 05:55:49 crc kubenswrapper[4958]: 
I1206 05:55:49.256839 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8066f868-45e9-4e89-a9ea-e1e269f19696-dns-svc\") pod \"dnsmasq-dns-549c96b4c7-gnwjx\" (UID: \"8066f868-45e9-4e89-a9ea-e1e269f19696\") " pod="openstack/dnsmasq-dns-549c96b4c7-gnwjx" Dec 06 05:55:49 crc kubenswrapper[4958]: I1206 05:55:49.256883 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19b32164-d135-4d2b-9f69-bf4f1c986fa5-combined-ca-bundle\") pod \"barbican-keystone-listener-575d9fc686-xrcwg\" (UID: \"19b32164-d135-4d2b-9f69-bf4f1c986fa5\") " pod="openstack/barbican-keystone-listener-575d9fc686-xrcwg" Dec 06 05:55:49 crc kubenswrapper[4958]: I1206 05:55:49.256922 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8066f868-45e9-4e89-a9ea-e1e269f19696-dns-swift-storage-0\") pod \"dnsmasq-dns-549c96b4c7-gnwjx\" (UID: \"8066f868-45e9-4e89-a9ea-e1e269f19696\") " pod="openstack/dnsmasq-dns-549c96b4c7-gnwjx" Dec 06 05:55:49 crc kubenswrapper[4958]: I1206 05:55:49.256958 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b7df9cb8-058d-4f26-8444-808fd8fd554c-config-data-custom\") pod \"barbican-worker-68ff9f8f57-qzqld\" (UID: \"b7df9cb8-058d-4f26-8444-808fd8fd554c\") " pod="openstack/barbican-worker-68ff9f8f57-qzqld" Dec 06 05:55:49 crc kubenswrapper[4958]: I1206 05:55:49.257009 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b7df9cb8-058d-4f26-8444-808fd8fd554c-logs\") pod \"barbican-worker-68ff9f8f57-qzqld\" (UID: \"b7df9cb8-058d-4f26-8444-808fd8fd554c\") " pod="openstack/barbican-worker-68ff9f8f57-qzqld" Dec 06 05:55:49 crc kubenswrapper[4958]: I1206 05:55:49.257068 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/19b32164-d135-4d2b-9f69-bf4f1c986fa5-logs\") pod \"barbican-keystone-listener-575d9fc686-xrcwg\" (UID: \"19b32164-d135-4d2b-9f69-bf4f1c986fa5\") " pod="openstack/barbican-keystone-listener-575d9fc686-xrcwg" Dec 06 05:55:49 crc kubenswrapper[4958]: I1206 05:55:49.257098 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgxlg\" (UniqueName: \"kubernetes.io/projected/b7df9cb8-058d-4f26-8444-808fd8fd554c-kube-api-access-kgxlg\") pod \"barbican-worker-68ff9f8f57-qzqld\" (UID: \"b7df9cb8-058d-4f26-8444-808fd8fd554c\") " pod="openstack/barbican-worker-68ff9f8f57-qzqld" Dec 06 05:55:49 crc kubenswrapper[4958]: I1206 05:55:49.257113 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8066f868-45e9-4e89-a9ea-e1e269f19696-ovsdbserver-sb\") pod \"dnsmasq-dns-549c96b4c7-gnwjx\" (UID: \"8066f868-45e9-4e89-a9ea-e1e269f19696\") " pod="openstack/dnsmasq-dns-549c96b4c7-gnwjx" Dec 06 05:55:49 crc kubenswrapper[4958]: I1206 05:55:49.257733 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b7df9cb8-058d-4f26-8444-808fd8fd554c-logs\") pod \"barbican-worker-68ff9f8f57-qzqld\" (UID: \"b7df9cb8-058d-4f26-8444-808fd8fd554c\") " 
pod="openstack/barbican-worker-68ff9f8f57-qzqld" Dec 06 05:55:49 crc kubenswrapper[4958]: I1206 05:55:49.260435 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7df9cb8-058d-4f26-8444-808fd8fd554c-combined-ca-bundle\") pod \"barbican-worker-68ff9f8f57-qzqld\" (UID: \"b7df9cb8-058d-4f26-8444-808fd8fd554c\") " pod="openstack/barbican-worker-68ff9f8f57-qzqld" Dec 06 05:55:49 crc kubenswrapper[4958]: I1206 05:55:49.260836 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/19b32164-d135-4d2b-9f69-bf4f1c986fa5-logs\") pod \"barbican-keystone-listener-575d9fc686-xrcwg\" (UID: \"19b32164-d135-4d2b-9f69-bf4f1c986fa5\") " pod="openstack/barbican-keystone-listener-575d9fc686-xrcwg" Dec 06 05:55:49 crc kubenswrapper[4958]: I1206 05:55:49.262273 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7df9cb8-058d-4f26-8444-808fd8fd554c-config-data\") pod \"barbican-worker-68ff9f8f57-qzqld\" (UID: \"b7df9cb8-058d-4f26-8444-808fd8fd554c\") " pod="openstack/barbican-worker-68ff9f8f57-qzqld" Dec 06 05:55:49 crc kubenswrapper[4958]: I1206 05:55:49.262365 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/19b32164-d135-4d2b-9f69-bf4f1c986fa5-config-data-custom\") pod \"barbican-keystone-listener-575d9fc686-xrcwg\" (UID: \"19b32164-d135-4d2b-9f69-bf4f1c986fa5\") " pod="openstack/barbican-keystone-listener-575d9fc686-xrcwg" Dec 06 05:55:49 crc kubenswrapper[4958]: I1206 05:55:49.265983 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19b32164-d135-4d2b-9f69-bf4f1c986fa5-combined-ca-bundle\") pod \"barbican-keystone-listener-575d9fc686-xrcwg\" (UID: \"19b32164-d135-4d2b-9f69-bf4f1c986fa5\") " pod="openstack/barbican-keystone-listener-575d9fc686-xrcwg" Dec 06 05:55:49 crc kubenswrapper[4958]: I1206 05:55:49.266718 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/19b32164-d135-4d2b-9f69-bf4f1c986fa5-config-data\") pod \"barbican-keystone-listener-575d9fc686-xrcwg\" (UID: \"19b32164-d135-4d2b-9f69-bf4f1c986fa5\") " pod="openstack/barbican-keystone-listener-575d9fc686-xrcwg" Dec 06 05:55:49 crc kubenswrapper[4958]: I1206 05:55:49.277157 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b7df9cb8-058d-4f26-8444-808fd8fd554c-config-data-custom\") pod \"barbican-worker-68ff9f8f57-qzqld\" (UID: \"b7df9cb8-058d-4f26-8444-808fd8fd554c\") " pod="openstack/barbican-worker-68ff9f8f57-qzqld" Dec 06 05:55:49 crc kubenswrapper[4958]: I1206 05:55:49.278548 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bv9m\" (UniqueName: \"kubernetes.io/projected/19b32164-d135-4d2b-9f69-bf4f1c986fa5-kube-api-access-5bv9m\") pod \"barbican-keystone-listener-575d9fc686-xrcwg\" (UID: \"19b32164-d135-4d2b-9f69-bf4f1c986fa5\") " pod="openstack/barbican-keystone-listener-575d9fc686-xrcwg" Dec 06 05:55:49 crc kubenswrapper[4958]: I1206 05:55:49.287912 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgxlg\" (UniqueName: \"kubernetes.io/projected/b7df9cb8-058d-4f26-8444-808fd8fd554c-kube-api-access-kgxlg\") pod 
\"barbican-worker-68ff9f8f57-qzqld\" (UID: \"b7df9cb8-058d-4f26-8444-808fd8fd554c\") " pod="openstack/barbican-worker-68ff9f8f57-qzqld" Dec 06 05:55:49 crc kubenswrapper[4958]: I1206 05:55:49.359175 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwhm2\" (UniqueName: \"kubernetes.io/projected/8066f868-45e9-4e89-a9ea-e1e269f19696-kube-api-access-qwhm2\") pod \"dnsmasq-dns-549c96b4c7-gnwjx\" (UID: \"8066f868-45e9-4e89-a9ea-e1e269f19696\") " pod="openstack/dnsmasq-dns-549c96b4c7-gnwjx" Dec 06 05:55:49 crc kubenswrapper[4958]: I1206 05:55:49.359236 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8066f868-45e9-4e89-a9ea-e1e269f19696-ovsdbserver-nb\") pod \"dnsmasq-dns-549c96b4c7-gnwjx\" (UID: \"8066f868-45e9-4e89-a9ea-e1e269f19696\") " pod="openstack/dnsmasq-dns-549c96b4c7-gnwjx" Dec 06 05:55:49 crc kubenswrapper[4958]: I1206 05:55:49.359326 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8066f868-45e9-4e89-a9ea-e1e269f19696-config\") pod \"dnsmasq-dns-549c96b4c7-gnwjx\" (UID: \"8066f868-45e9-4e89-a9ea-e1e269f19696\") " pod="openstack/dnsmasq-dns-549c96b4c7-gnwjx" Dec 06 05:55:49 crc kubenswrapper[4958]: I1206 05:55:49.359358 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8066f868-45e9-4e89-a9ea-e1e269f19696-dns-svc\") pod \"dnsmasq-dns-549c96b4c7-gnwjx\" (UID: \"8066f868-45e9-4e89-a9ea-e1e269f19696\") " pod="openstack/dnsmasq-dns-549c96b4c7-gnwjx" Dec 06 05:55:49 crc kubenswrapper[4958]: I1206 05:55:49.359404 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8066f868-45e9-4e89-a9ea-e1e269f19696-dns-swift-storage-0\") pod \"dnsmasq-dns-549c96b4c7-gnwjx\" (UID: \"8066f868-45e9-4e89-a9ea-e1e269f19696\") " pod="openstack/dnsmasq-dns-549c96b4c7-gnwjx" Dec 06 05:55:49 crc kubenswrapper[4958]: I1206 05:55:49.359489 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8066f868-45e9-4e89-a9ea-e1e269f19696-ovsdbserver-sb\") pod \"dnsmasq-dns-549c96b4c7-gnwjx\" (UID: \"8066f868-45e9-4e89-a9ea-e1e269f19696\") " pod="openstack/dnsmasq-dns-549c96b4c7-gnwjx" Dec 06 05:55:49 crc kubenswrapper[4958]: I1206 05:55:49.360548 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8066f868-45e9-4e89-a9ea-e1e269f19696-ovsdbserver-sb\") pod \"dnsmasq-dns-549c96b4c7-gnwjx\" (UID: \"8066f868-45e9-4e89-a9ea-e1e269f19696\") " pod="openstack/dnsmasq-dns-549c96b4c7-gnwjx" Dec 06 05:55:49 crc kubenswrapper[4958]: I1206 05:55:49.360805 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8066f868-45e9-4e89-a9ea-e1e269f19696-ovsdbserver-nb\") pod \"dnsmasq-dns-549c96b4c7-gnwjx\" (UID: \"8066f868-45e9-4e89-a9ea-e1e269f19696\") " pod="openstack/dnsmasq-dns-549c96b4c7-gnwjx" Dec 06 05:55:49 crc kubenswrapper[4958]: I1206 05:55:49.361066 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8066f868-45e9-4e89-a9ea-e1e269f19696-dns-svc\") pod \"dnsmasq-dns-549c96b4c7-gnwjx\" (UID: \"8066f868-45e9-4e89-a9ea-e1e269f19696\") " 
pod="openstack/dnsmasq-dns-549c96b4c7-gnwjx" Dec 06 05:55:49 crc kubenswrapper[4958]: I1206 05:55:49.361886 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8066f868-45e9-4e89-a9ea-e1e269f19696-config\") pod \"dnsmasq-dns-549c96b4c7-gnwjx\" (UID: \"8066f868-45e9-4e89-a9ea-e1e269f19696\") " pod="openstack/dnsmasq-dns-549c96b4c7-gnwjx" Dec 06 05:55:49 crc kubenswrapper[4958]: I1206 05:55:49.361929 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8066f868-45e9-4e89-a9ea-e1e269f19696-dns-swift-storage-0\") pod \"dnsmasq-dns-549c96b4c7-gnwjx\" (UID: \"8066f868-45e9-4e89-a9ea-e1e269f19696\") " pod="openstack/dnsmasq-dns-549c96b4c7-gnwjx" Dec 06 05:55:49 crc kubenswrapper[4958]: I1206 05:55:49.392401 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwhm2\" (UniqueName: \"kubernetes.io/projected/8066f868-45e9-4e89-a9ea-e1e269f19696-kube-api-access-qwhm2\") pod \"dnsmasq-dns-549c96b4c7-gnwjx\" (UID: \"8066f868-45e9-4e89-a9ea-e1e269f19696\") " pod="openstack/dnsmasq-dns-549c96b4c7-gnwjx" Dec 06 05:55:49 crc kubenswrapper[4958]: I1206 05:55:49.396285 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-76c7c6dc4b-9jzd4"] Dec 06 05:55:49 crc kubenswrapper[4958]: I1206 05:55:49.398258 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-76c7c6dc4b-9jzd4" Dec 06 05:55:49 crc kubenswrapper[4958]: I1206 05:55:49.402105 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Dec 06 05:55:49 crc kubenswrapper[4958]: I1206 05:55:49.420650 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-76c7c6dc4b-9jzd4"] Dec 06 05:55:49 crc kubenswrapper[4958]: I1206 05:55:49.430588 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-68ff9f8f57-qzqld" Dec 06 05:55:49 crc kubenswrapper[4958]: I1206 05:55:49.473869 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-575d9fc686-xrcwg" Dec 06 05:55:49 crc kubenswrapper[4958]: I1206 05:55:49.518889 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-549c96b4c7-gnwjx" Dec 06 05:55:49 crc kubenswrapper[4958]: I1206 05:55:49.566280 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8c45202-b5a4-47bd-9754-bc5ae17b1208-config-data\") pod \"barbican-api-76c7c6dc4b-9jzd4\" (UID: \"b8c45202-b5a4-47bd-9754-bc5ae17b1208\") " pod="openstack/barbican-api-76c7c6dc4b-9jzd4" Dec 06 05:55:49 crc kubenswrapper[4958]: I1206 05:55:49.566380 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9c6k\" (UniqueName: \"kubernetes.io/projected/b8c45202-b5a4-47bd-9754-bc5ae17b1208-kube-api-access-j9c6k\") pod \"barbican-api-76c7c6dc4b-9jzd4\" (UID: \"b8c45202-b5a4-47bd-9754-bc5ae17b1208\") " pod="openstack/barbican-api-76c7c6dc4b-9jzd4" Dec 06 05:55:49 crc kubenswrapper[4958]: I1206 05:55:49.566402 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b8c45202-b5a4-47bd-9754-bc5ae17b1208-logs\") pod \"barbican-api-76c7c6dc4b-9jzd4\" (UID: \"b8c45202-b5a4-47bd-9754-bc5ae17b1208\") " pod="openstack/barbican-api-76c7c6dc4b-9jzd4" Dec 06 05:55:49 crc kubenswrapper[4958]: I1206 05:55:49.566452 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8c45202-b5a4-47bd-9754-bc5ae17b1208-combined-ca-bundle\") pod \"barbican-api-76c7c6dc4b-9jzd4\" (UID: \"b8c45202-b5a4-47bd-9754-bc5ae17b1208\") " pod="openstack/barbican-api-76c7c6dc4b-9jzd4" Dec 06 05:55:49 crc kubenswrapper[4958]: I1206 05:55:49.568295 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b8c45202-b5a4-47bd-9754-bc5ae17b1208-config-data-custom\") pod \"barbican-api-76c7c6dc4b-9jzd4\" (UID: \"b8c45202-b5a4-47bd-9754-bc5ae17b1208\") " pod="openstack/barbican-api-76c7c6dc4b-9jzd4" Dec 06 05:55:49 crc kubenswrapper[4958]: I1206 05:55:49.669764 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b8c45202-b5a4-47bd-9754-bc5ae17b1208-config-data-custom\") pod \"barbican-api-76c7c6dc4b-9jzd4\" (UID: \"b8c45202-b5a4-47bd-9754-bc5ae17b1208\") " pod="openstack/barbican-api-76c7c6dc4b-9jzd4" Dec 06 05:55:49 crc kubenswrapper[4958]: I1206 05:55:49.669860 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8c45202-b5a4-47bd-9754-bc5ae17b1208-config-data\") pod \"barbican-api-76c7c6dc4b-9jzd4\" (UID: \"b8c45202-b5a4-47bd-9754-bc5ae17b1208\") " pod="openstack/barbican-api-76c7c6dc4b-9jzd4" Dec 06 05:55:49 crc kubenswrapper[4958]: I1206 05:55:49.669956 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9c6k\" (UniqueName: \"kubernetes.io/projected/b8c45202-b5a4-47bd-9754-bc5ae17b1208-kube-api-access-j9c6k\") pod \"barbican-api-76c7c6dc4b-9jzd4\" (UID: \"b8c45202-b5a4-47bd-9754-bc5ae17b1208\") " pod="openstack/barbican-api-76c7c6dc4b-9jzd4" Dec 06 05:55:49 crc kubenswrapper[4958]: I1206 05:55:49.669980 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b8c45202-b5a4-47bd-9754-bc5ae17b1208-logs\") pod 
\"barbican-api-76c7c6dc4b-9jzd4\" (UID: \"b8c45202-b5a4-47bd-9754-bc5ae17b1208\") " pod="openstack/barbican-api-76c7c6dc4b-9jzd4" Dec 06 05:55:49 crc kubenswrapper[4958]: I1206 05:55:49.670044 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8c45202-b5a4-47bd-9754-bc5ae17b1208-combined-ca-bundle\") pod \"barbican-api-76c7c6dc4b-9jzd4\" (UID: \"b8c45202-b5a4-47bd-9754-bc5ae17b1208\") " pod="openstack/barbican-api-76c7c6dc4b-9jzd4" Dec 06 05:55:49 crc kubenswrapper[4958]: I1206 05:55:49.671260 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b8c45202-b5a4-47bd-9754-bc5ae17b1208-logs\") pod \"barbican-api-76c7c6dc4b-9jzd4\" (UID: \"b8c45202-b5a4-47bd-9754-bc5ae17b1208\") " pod="openstack/barbican-api-76c7c6dc4b-9jzd4" Dec 06 05:55:49 crc kubenswrapper[4958]: I1206 05:55:49.677895 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8c45202-b5a4-47bd-9754-bc5ae17b1208-combined-ca-bundle\") pod \"barbican-api-76c7c6dc4b-9jzd4\" (UID: \"b8c45202-b5a4-47bd-9754-bc5ae17b1208\") " pod="openstack/barbican-api-76c7c6dc4b-9jzd4" Dec 06 05:55:49 crc kubenswrapper[4958]: I1206 05:55:49.680717 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8c45202-b5a4-47bd-9754-bc5ae17b1208-config-data\") pod \"barbican-api-76c7c6dc4b-9jzd4\" (UID: \"b8c45202-b5a4-47bd-9754-bc5ae17b1208\") " pod="openstack/barbican-api-76c7c6dc4b-9jzd4" Dec 06 05:55:49 crc kubenswrapper[4958]: I1206 05:55:49.681163 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b8c45202-b5a4-47bd-9754-bc5ae17b1208-config-data-custom\") pod \"barbican-api-76c7c6dc4b-9jzd4\" (UID: \"b8c45202-b5a4-47bd-9754-bc5ae17b1208\") " pod="openstack/barbican-api-76c7c6dc4b-9jzd4" Dec 06 05:55:49 crc kubenswrapper[4958]: I1206 05:55:49.702421 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9c6k\" (UniqueName: \"kubernetes.io/projected/b8c45202-b5a4-47bd-9754-bc5ae17b1208-kube-api-access-j9c6k\") pod \"barbican-api-76c7c6dc4b-9jzd4\" (UID: \"b8c45202-b5a4-47bd-9754-bc5ae17b1208\") " pod="openstack/barbican-api-76c7c6dc4b-9jzd4" Dec 06 05:55:49 crc kubenswrapper[4958]: I1206 05:55:49.860952 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-76c7c6dc4b-9jzd4" Dec 06 05:55:50 crc kubenswrapper[4958]: W1206 05:55:50.004823 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb7df9cb8_058d_4f26_8444_808fd8fd554c.slice/crio-3887df1c905df658f9dbeea3c6f7fedf524a5745bf76de46a0e81430edc606ab WatchSource:0}: Error finding container 3887df1c905df658f9dbeea3c6f7fedf524a5745bf76de46a0e81430edc606ab: Status 404 returned error can't find the container with id 3887df1c905df658f9dbeea3c6f7fedf524a5745bf76de46a0e81430edc606ab Dec 06 05:55:50 crc kubenswrapper[4958]: I1206 05:55:50.005950 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-68ff9f8f57-qzqld"] Dec 06 05:55:50 crc kubenswrapper[4958]: W1206 05:55:50.137519 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod19b32164_d135_4d2b_9f69_bf4f1c986fa5.slice/crio-78dda90a53e470cb74b81cbdd0c1a3037db18896de857acaf486f33df9c0c632 WatchSource:0}: Error finding container 78dda90a53e470cb74b81cbdd0c1a3037db18896de857acaf486f33df9c0c632: Status 404 returned error can't find the container with id 78dda90a53e470cb74b81cbdd0c1a3037db18896de857acaf486f33df9c0c632 Dec 06 05:55:50 crc kubenswrapper[4958]: I1206 05:55:50.139589 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-575d9fc686-xrcwg"] Dec 06 05:55:50 crc kubenswrapper[4958]: I1206 05:55:50.334955 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-549c96b4c7-gnwjx"] Dec 06 05:55:50 crc kubenswrapper[4958]: W1206 05:55:50.341028 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8066f868_45e9_4e89_a9ea_e1e269f19696.slice/crio-8114dd39a3da3922363b48d88365b53841cb02137fa1efb152f999c67f0e45b4 WatchSource:0}: Error finding container 8114dd39a3da3922363b48d88365b53841cb02137fa1efb152f999c67f0e45b4: Status 404 returned error can't find the container with id 8114dd39a3da3922363b48d88365b53841cb02137fa1efb152f999c67f0e45b4 Dec 06 05:55:50 crc kubenswrapper[4958]: I1206 05:55:50.529832 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-76c7c6dc4b-9jzd4"] Dec 06 05:55:50 crc kubenswrapper[4958]: I1206 05:55:50.837757 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-76c7c6dc4b-9jzd4" event={"ID":"b8c45202-b5a4-47bd-9754-bc5ae17b1208","Type":"ContainerStarted","Data":"adbbea2393542cbb683d23f4fc158b140a1bfe6bf001a9a8962381473d55fc6e"} Dec 06 05:55:50 crc kubenswrapper[4958]: I1206 05:55:50.838840 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-549c96b4c7-gnwjx" event={"ID":"8066f868-45e9-4e89-a9ea-e1e269f19696","Type":"ContainerStarted","Data":"8114dd39a3da3922363b48d88365b53841cb02137fa1efb152f999c67f0e45b4"} Dec 06 05:55:50 crc kubenswrapper[4958]: I1206 05:55:50.839916 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-68ff9f8f57-qzqld" event={"ID":"b7df9cb8-058d-4f26-8444-808fd8fd554c","Type":"ContainerStarted","Data":"3887df1c905df658f9dbeea3c6f7fedf524a5745bf76de46a0e81430edc606ab"} Dec 06 05:55:50 crc kubenswrapper[4958]: I1206 05:55:50.840836 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-575d9fc686-xrcwg" 
event={"ID":"19b32164-d135-4d2b-9f69-bf4f1c986fa5","Type":"ContainerStarted","Data":"78dda90a53e470cb74b81cbdd0c1a3037db18896de857acaf486f33df9c0c632"} Dec 06 05:55:51 crc kubenswrapper[4958]: I1206 05:55:51.572001 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-m2lhs"] Dec 06 05:55:51 crc kubenswrapper[4958]: I1206 05:55:51.574410 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-m2lhs" Dec 06 05:55:51 crc kubenswrapper[4958]: I1206 05:55:51.584996 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-m2lhs"] Dec 06 05:55:51 crc kubenswrapper[4958]: I1206 05:55:51.660988 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-6hdqq"] Dec 06 05:55:51 crc kubenswrapper[4958]: I1206 05:55:51.663242 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-6hdqq" Dec 06 05:55:51 crc kubenswrapper[4958]: I1206 05:55:51.668659 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-6hdqq"] Dec 06 05:55:51 crc kubenswrapper[4958]: I1206 05:55:51.728441 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dd66n\" (UniqueName: \"kubernetes.io/projected/76d9f5a2-d6d4-4390-965a-b5d29b134dc1-kube-api-access-dd66n\") pod \"nova-api-db-create-m2lhs\" (UID: \"76d9f5a2-d6d4-4390-965a-b5d29b134dc1\") " pod="openstack/nova-api-db-create-m2lhs" Dec 06 05:55:51 crc kubenswrapper[4958]: I1206 05:55:51.728596 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/76d9f5a2-d6d4-4390-965a-b5d29b134dc1-operator-scripts\") pod \"nova-api-db-create-m2lhs\" (UID: \"76d9f5a2-d6d4-4390-965a-b5d29b134dc1\") " pod="openstack/nova-api-db-create-m2lhs" Dec 06 05:55:51 crc kubenswrapper[4958]: I1206 05:55:51.775984 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-x68cs"] Dec 06 05:55:51 crc kubenswrapper[4958]: I1206 05:55:51.777366 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-x68cs" Dec 06 05:55:51 crc kubenswrapper[4958]: I1206 05:55:51.781672 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-x68cs"] Dec 06 05:55:51 crc kubenswrapper[4958]: I1206 05:55:51.832381 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wwrv\" (UniqueName: \"kubernetes.io/projected/8e23f416-b2db-49d9-8a31-f68d32ff9b51-kube-api-access-8wwrv\") pod \"nova-cell0-db-create-6hdqq\" (UID: \"8e23f416-b2db-49d9-8a31-f68d32ff9b51\") " pod="openstack/nova-cell0-db-create-6hdqq" Dec 06 05:55:51 crc kubenswrapper[4958]: I1206 05:55:51.832498 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/76d9f5a2-d6d4-4390-965a-b5d29b134dc1-operator-scripts\") pod \"nova-api-db-create-m2lhs\" (UID: \"76d9f5a2-d6d4-4390-965a-b5d29b134dc1\") " pod="openstack/nova-api-db-create-m2lhs" Dec 06 05:55:51 crc kubenswrapper[4958]: I1206 05:55:51.832530 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e23f416-b2db-49d9-8a31-f68d32ff9b51-operator-scripts\") pod \"nova-cell0-db-create-6hdqq\" (UID: \"8e23f416-b2db-49d9-8a31-f68d32ff9b51\") " pod="openstack/nova-cell0-db-create-6hdqq" Dec 06 05:55:51 crc kubenswrapper[4958]: I1206 05:55:51.832665 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dd66n\" (UniqueName: \"kubernetes.io/projected/76d9f5a2-d6d4-4390-965a-b5d29b134dc1-kube-api-access-dd66n\") pod \"nova-api-db-create-m2lhs\" (UID: \"76d9f5a2-d6d4-4390-965a-b5d29b134dc1\") " pod="openstack/nova-api-db-create-m2lhs" Dec 06 05:55:51 crc kubenswrapper[4958]: I1206 05:55:51.833852 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/76d9f5a2-d6d4-4390-965a-b5d29b134dc1-operator-scripts\") pod \"nova-api-db-create-m2lhs\" (UID: \"76d9f5a2-d6d4-4390-965a-b5d29b134dc1\") " pod="openstack/nova-api-db-create-m2lhs" Dec 06 05:55:51 crc kubenswrapper[4958]: I1206 05:55:51.862184 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dd66n\" (UniqueName: \"kubernetes.io/projected/76d9f5a2-d6d4-4390-965a-b5d29b134dc1-kube-api-access-dd66n\") pod \"nova-api-db-create-m2lhs\" (UID: \"76d9f5a2-d6d4-4390-965a-b5d29b134dc1\") " pod="openstack/nova-api-db-create-m2lhs" Dec 06 05:55:51 crc kubenswrapper[4958]: I1206 05:55:51.873323 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-02e5-account-create-update-c75bh"] Dec 06 05:55:51 crc kubenswrapper[4958]: I1206 05:55:51.874857 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-02e5-account-create-update-c75bh" Dec 06 05:55:51 crc kubenswrapper[4958]: I1206 05:55:51.880826 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Dec 06 05:55:51 crc kubenswrapper[4958]: I1206 05:55:51.881927 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5a1295d8-a29c-4393-aea8-d2a7323e26d6","Type":"ContainerStarted","Data":"61e9d68ae142b9b30cbeb599b86a38d0919e127050d42f3b236d464eed01b144"} Dec 06 05:55:51 crc kubenswrapper[4958]: I1206 05:55:51.882198 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 06 05:55:51 crc kubenswrapper[4958]: I1206 05:55:51.885325 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-76c7c6dc4b-9jzd4" event={"ID":"b8c45202-b5a4-47bd-9754-bc5ae17b1208","Type":"ContainerStarted","Data":"a47f761466377e6fe42c23e78bba0822d5a1f0e659b7c891dc1c3b002a6a246b"} Dec 06 05:55:51 crc kubenswrapper[4958]: I1206 05:55:51.897520 4958 generic.go:334] "Generic (PLEG): container finished" podID="8066f868-45e9-4e89-a9ea-e1e269f19696" containerID="4ff3ee395fd1958f055bea3af91132905b3b93c06945682d2406d1b2d6b71bd9" exitCode=0 Dec 06 05:55:51 crc kubenswrapper[4958]: I1206 05:55:51.897566 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-549c96b4c7-gnwjx" event={"ID":"8066f868-45e9-4e89-a9ea-e1e269f19696","Type":"ContainerDied","Data":"4ff3ee395fd1958f055bea3af91132905b3b93c06945682d2406d1b2d6b71bd9"} Dec 06 05:55:51 crc kubenswrapper[4958]: I1206 05:55:51.902533 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-02e5-account-create-update-c75bh"] Dec 06 05:55:51 crc kubenswrapper[4958]: I1206 05:55:51.908762 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-m2lhs" Dec 06 05:55:51 crc kubenswrapper[4958]: I1206 05:55:51.934523 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wwrv\" (UniqueName: \"kubernetes.io/projected/8e23f416-b2db-49d9-8a31-f68d32ff9b51-kube-api-access-8wwrv\") pod \"nova-cell0-db-create-6hdqq\" (UID: \"8e23f416-b2db-49d9-8a31-f68d32ff9b51\") " pod="openstack/nova-cell0-db-create-6hdqq" Dec 06 05:55:51 crc kubenswrapper[4958]: I1206 05:55:51.934577 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e23f416-b2db-49d9-8a31-f68d32ff9b51-operator-scripts\") pod \"nova-cell0-db-create-6hdqq\" (UID: \"8e23f416-b2db-49d9-8a31-f68d32ff9b51\") " pod="openstack/nova-cell0-db-create-6hdqq" Dec 06 05:55:51 crc kubenswrapper[4958]: I1206 05:55:51.934628 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48ba018f-da06-4772-835f-257269ffed57-operator-scripts\") pod \"nova-cell1-db-create-x68cs\" (UID: \"48ba018f-da06-4772-835f-257269ffed57\") " pod="openstack/nova-cell1-db-create-x68cs" Dec 06 05:55:51 crc kubenswrapper[4958]: I1206 05:55:51.934652 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bhgw\" (UniqueName: \"kubernetes.io/projected/48ba018f-da06-4772-835f-257269ffed57-kube-api-access-9bhgw\") pod \"nova-cell1-db-create-x68cs\" (UID: \"48ba018f-da06-4772-835f-257269ffed57\") " pod="openstack/nova-cell1-db-create-x68cs" Dec 06 05:55:51 crc kubenswrapper[4958]: I1206 05:55:51.935835 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e23f416-b2db-49d9-8a31-f68d32ff9b51-operator-scripts\") pod \"nova-cell0-db-create-6hdqq\" (UID: \"8e23f416-b2db-49d9-8a31-f68d32ff9b51\") " pod="openstack/nova-cell0-db-create-6hdqq" Dec 06 05:55:51 crc kubenswrapper[4958]: I1206 05:55:51.982425 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wwrv\" (UniqueName: \"kubernetes.io/projected/8e23f416-b2db-49d9-8a31-f68d32ff9b51-kube-api-access-8wwrv\") pod \"nova-cell0-db-create-6hdqq\" (UID: \"8e23f416-b2db-49d9-8a31-f68d32ff9b51\") " pod="openstack/nova-cell0-db-create-6hdqq" Dec 06 05:55:51 crc kubenswrapper[4958]: I1206 05:55:51.995943 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=4.257680345 podStartE2EDuration="8.99592258s" podCreationTimestamp="2025-12-06 05:55:43 +0000 UTC" firstStartedPulling="2025-12-06 05:55:44.583961937 +0000 UTC m=+1655.117732710" lastFinishedPulling="2025-12-06 05:55:49.322204182 +0000 UTC m=+1659.855974945" observedRunningTime="2025-12-06 05:55:51.947396805 +0000 UTC m=+1662.481167568" watchObservedRunningTime="2025-12-06 05:55:51.99592258 +0000 UTC m=+1662.529693343" Dec 06 05:55:52 crc kubenswrapper[4958]: I1206 05:55:52.038921 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51b06b73-d14d-47de-84f1-feae2b3a1c9d-operator-scripts\") pod \"nova-api-02e5-account-create-update-c75bh\" (UID: \"51b06b73-d14d-47de-84f1-feae2b3a1c9d\") " pod="openstack/nova-api-02e5-account-create-update-c75bh" Dec 06 05:55:52 crc kubenswrapper[4958]: I1206 
05:55:52.039122 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48ba018f-da06-4772-835f-257269ffed57-operator-scripts\") pod \"nova-cell1-db-create-x68cs\" (UID: \"48ba018f-da06-4772-835f-257269ffed57\") " pod="openstack/nova-cell1-db-create-x68cs" Dec 06 05:55:52 crc kubenswrapper[4958]: I1206 05:55:52.039172 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bhgw\" (UniqueName: \"kubernetes.io/projected/48ba018f-da06-4772-835f-257269ffed57-kube-api-access-9bhgw\") pod \"nova-cell1-db-create-x68cs\" (UID: \"48ba018f-da06-4772-835f-257269ffed57\") " pod="openstack/nova-cell1-db-create-x68cs" Dec 06 05:55:52 crc kubenswrapper[4958]: I1206 05:55:52.039233 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9mwr\" (UniqueName: \"kubernetes.io/projected/51b06b73-d14d-47de-84f1-feae2b3a1c9d-kube-api-access-w9mwr\") pod \"nova-api-02e5-account-create-update-c75bh\" (UID: \"51b06b73-d14d-47de-84f1-feae2b3a1c9d\") " pod="openstack/nova-api-02e5-account-create-update-c75bh" Dec 06 05:55:52 crc kubenswrapper[4958]: I1206 05:55:52.040621 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48ba018f-da06-4772-835f-257269ffed57-operator-scripts\") pod \"nova-cell1-db-create-x68cs\" (UID: \"48ba018f-da06-4772-835f-257269ffed57\") " pod="openstack/nova-cell1-db-create-x68cs" Dec 06 05:55:52 crc kubenswrapper[4958]: I1206 05:55:52.069006 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-d745-account-create-update-nn9n6"] Dec 06 05:55:52 crc kubenswrapper[4958]: I1206 05:55:52.070616 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-d745-account-create-update-nn9n6" Dec 06 05:55:52 crc kubenswrapper[4958]: I1206 05:55:52.077783 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Dec 06 05:55:52 crc kubenswrapper[4958]: I1206 05:55:52.085964 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bhgw\" (UniqueName: \"kubernetes.io/projected/48ba018f-da06-4772-835f-257269ffed57-kube-api-access-9bhgw\") pod \"nova-cell1-db-create-x68cs\" (UID: \"48ba018f-da06-4772-835f-257269ffed57\") " pod="openstack/nova-cell1-db-create-x68cs" Dec 06 05:55:52 crc kubenswrapper[4958]: I1206 05:55:52.091224 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-d745-account-create-update-nn9n6"] Dec 06 05:55:52 crc kubenswrapper[4958]: I1206 05:55:52.104007 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-x68cs" Dec 06 05:55:52 crc kubenswrapper[4958]: I1206 05:55:52.142496 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9mwr\" (UniqueName: \"kubernetes.io/projected/51b06b73-d14d-47de-84f1-feae2b3a1c9d-kube-api-access-w9mwr\") pod \"nova-api-02e5-account-create-update-c75bh\" (UID: \"51b06b73-d14d-47de-84f1-feae2b3a1c9d\") " pod="openstack/nova-api-02e5-account-create-update-c75bh" Dec 06 05:55:52 crc kubenswrapper[4958]: I1206 05:55:52.142634 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51b06b73-d14d-47de-84f1-feae2b3a1c9d-operator-scripts\") pod \"nova-api-02e5-account-create-update-c75bh\" (UID: \"51b06b73-d14d-47de-84f1-feae2b3a1c9d\") " pod="openstack/nova-api-02e5-account-create-update-c75bh" Dec 06 05:55:52 crc kubenswrapper[4958]: I1206 05:55:52.143341 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51b06b73-d14d-47de-84f1-feae2b3a1c9d-operator-scripts\") pod \"nova-api-02e5-account-create-update-c75bh\" (UID: \"51b06b73-d14d-47de-84f1-feae2b3a1c9d\") " pod="openstack/nova-api-02e5-account-create-update-c75bh" Dec 06 05:55:52 crc kubenswrapper[4958]: I1206 05:55:52.174917 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9mwr\" (UniqueName: \"kubernetes.io/projected/51b06b73-d14d-47de-84f1-feae2b3a1c9d-kube-api-access-w9mwr\") pod \"nova-api-02e5-account-create-update-c75bh\" (UID: \"51b06b73-d14d-47de-84f1-feae2b3a1c9d\") " pod="openstack/nova-api-02e5-account-create-update-c75bh" Dec 06 05:55:52 crc kubenswrapper[4958]: I1206 05:55:52.244103 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j98ck\" (UniqueName: \"kubernetes.io/projected/e1602702-6e01-4d7b-ba0e-dc3dcc22b6ab-kube-api-access-j98ck\") pod \"nova-cell0-d745-account-create-update-nn9n6\" (UID: \"e1602702-6e01-4d7b-ba0e-dc3dcc22b6ab\") " pod="openstack/nova-cell0-d745-account-create-update-nn9n6" Dec 06 05:55:52 crc kubenswrapper[4958]: I1206 05:55:52.244159 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1602702-6e01-4d7b-ba0e-dc3dcc22b6ab-operator-scripts\") pod \"nova-cell0-d745-account-create-update-nn9n6\" (UID: \"e1602702-6e01-4d7b-ba0e-dc3dcc22b6ab\") " pod="openstack/nova-cell0-d745-account-create-update-nn9n6" Dec 06 05:55:52 crc kubenswrapper[4958]: I1206 05:55:52.288991 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-6hdqq" Dec 06 05:55:52 crc kubenswrapper[4958]: I1206 05:55:52.289051 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-02e5-account-create-update-c75bh" Dec 06 05:55:52 crc kubenswrapper[4958]: I1206 05:55:52.332396 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-7249-account-create-update-bw52q"] Dec 06 05:55:52 crc kubenswrapper[4958]: I1206 05:55:52.334016 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-7249-account-create-update-bw52q" Dec 06 05:55:52 crc kubenswrapper[4958]: I1206 05:55:52.336301 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Dec 06 05:55:52 crc kubenswrapper[4958]: I1206 05:55:52.346487 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1602702-6e01-4d7b-ba0e-dc3dcc22b6ab-operator-scripts\") pod \"nova-cell0-d745-account-create-update-nn9n6\" (UID: \"e1602702-6e01-4d7b-ba0e-dc3dcc22b6ab\") " pod="openstack/nova-cell0-d745-account-create-update-nn9n6" Dec 06 05:55:52 crc kubenswrapper[4958]: I1206 05:55:52.346684 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j98ck\" (UniqueName: \"kubernetes.io/projected/e1602702-6e01-4d7b-ba0e-dc3dcc22b6ab-kube-api-access-j98ck\") pod \"nova-cell0-d745-account-create-update-nn9n6\" (UID: \"e1602702-6e01-4d7b-ba0e-dc3dcc22b6ab\") " pod="openstack/nova-cell0-d745-account-create-update-nn9n6" Dec 06 05:55:52 crc kubenswrapper[4958]: I1206 05:55:52.347580 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1602702-6e01-4d7b-ba0e-dc3dcc22b6ab-operator-scripts\") pod \"nova-cell0-d745-account-create-update-nn9n6\" (UID: \"e1602702-6e01-4d7b-ba0e-dc3dcc22b6ab\") " pod="openstack/nova-cell0-d745-account-create-update-nn9n6" Dec 06 05:55:52 crc kubenswrapper[4958]: I1206 05:55:52.357338 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-7249-account-create-update-bw52q"] Dec 06 05:55:52 crc kubenswrapper[4958]: I1206 05:55:52.389121 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j98ck\" (UniqueName: \"kubernetes.io/projected/e1602702-6e01-4d7b-ba0e-dc3dcc22b6ab-kube-api-access-j98ck\") pod \"nova-cell0-d745-account-create-update-nn9n6\" (UID: \"e1602702-6e01-4d7b-ba0e-dc3dcc22b6ab\") " pod="openstack/nova-cell0-d745-account-create-update-nn9n6" Dec 06 05:55:52 crc kubenswrapper[4958]: I1206 05:55:52.404211 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-d745-account-create-update-nn9n6" Dec 06 05:55:52 crc kubenswrapper[4958]: I1206 05:55:52.448184 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a9cb1a1-4312-4bbf-b731-de283fd78834-operator-scripts\") pod \"nova-cell1-7249-account-create-update-bw52q\" (UID: \"4a9cb1a1-4312-4bbf-b731-de283fd78834\") " pod="openstack/nova-cell1-7249-account-create-update-bw52q" Dec 06 05:55:52 crc kubenswrapper[4958]: I1206 05:55:52.448392 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwzns\" (UniqueName: \"kubernetes.io/projected/4a9cb1a1-4312-4bbf-b731-de283fd78834-kube-api-access-wwzns\") pod \"nova-cell1-7249-account-create-update-bw52q\" (UID: \"4a9cb1a1-4312-4bbf-b731-de283fd78834\") " pod="openstack/nova-cell1-7249-account-create-update-bw52q" Dec 06 05:55:52 crc kubenswrapper[4958]: I1206 05:55:52.461942 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-m2lhs"] Dec 06 05:55:52 crc kubenswrapper[4958]: W1206 05:55:52.478610 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod76d9f5a2_d6d4_4390_965a_b5d29b134dc1.slice/crio-92ddd4e6837be52de16f1b18764e09d158a7b16c44b6bfbc550e40db0dab7d92 WatchSource:0}: Error finding container 92ddd4e6837be52de16f1b18764e09d158a7b16c44b6bfbc550e40db0dab7d92: Status 404 returned error can't find the container with id 92ddd4e6837be52de16f1b18764e09d158a7b16c44b6bfbc550e40db0dab7d92 Dec 06 05:55:52 crc kubenswrapper[4958]: I1206 05:55:52.573177 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwzns\" (UniqueName: \"kubernetes.io/projected/4a9cb1a1-4312-4bbf-b731-de283fd78834-kube-api-access-wwzns\") pod \"nova-cell1-7249-account-create-update-bw52q\" (UID: \"4a9cb1a1-4312-4bbf-b731-de283fd78834\") " pod="openstack/nova-cell1-7249-account-create-update-bw52q" Dec 06 05:55:52 crc kubenswrapper[4958]: I1206 05:55:52.573665 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a9cb1a1-4312-4bbf-b731-de283fd78834-operator-scripts\") pod \"nova-cell1-7249-account-create-update-bw52q\" (UID: \"4a9cb1a1-4312-4bbf-b731-de283fd78834\") " pod="openstack/nova-cell1-7249-account-create-update-bw52q" Dec 06 05:55:52 crc kubenswrapper[4958]: I1206 05:55:52.580684 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a9cb1a1-4312-4bbf-b731-de283fd78834-operator-scripts\") pod \"nova-cell1-7249-account-create-update-bw52q\" (UID: \"4a9cb1a1-4312-4bbf-b731-de283fd78834\") " pod="openstack/nova-cell1-7249-account-create-update-bw52q" Dec 06 05:55:52 crc kubenswrapper[4958]: I1206 05:55:52.605213 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwzns\" (UniqueName: \"kubernetes.io/projected/4a9cb1a1-4312-4bbf-b731-de283fd78834-kube-api-access-wwzns\") pod \"nova-cell1-7249-account-create-update-bw52q\" (UID: \"4a9cb1a1-4312-4bbf-b731-de283fd78834\") " pod="openstack/nova-cell1-7249-account-create-update-bw52q" Dec 06 05:55:52 crc kubenswrapper[4958]: I1206 05:55:52.675085 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-7249-account-create-update-bw52q" Dec 06 05:55:52 crc kubenswrapper[4958]: I1206 05:55:52.759668 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-x68cs"] Dec 06 05:55:52 crc kubenswrapper[4958]: W1206 05:55:52.773265 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod48ba018f_da06_4772_835f_257269ffed57.slice/crio-4a55e79a966e2e170a08c8e22201b692ce183a941289ceb10bf871ca77a302b0 WatchSource:0}: Error finding container 4a55e79a966e2e170a08c8e22201b692ce183a941289ceb10bf871ca77a302b0: Status 404 returned error can't find the container with id 4a55e79a966e2e170a08c8e22201b692ce183a941289ceb10bf871ca77a302b0 Dec 06 05:55:52 crc kubenswrapper[4958]: I1206 05:55:52.907830 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-m2lhs" event={"ID":"76d9f5a2-d6d4-4390-965a-b5d29b134dc1","Type":"ContainerStarted","Data":"92ddd4e6837be52de16f1b18764e09d158a7b16c44b6bfbc550e40db0dab7d92"} Dec 06 05:55:52 crc kubenswrapper[4958]: I1206 05:55:52.908989 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-x68cs" event={"ID":"48ba018f-da06-4772-835f-257269ffed57","Type":"ContainerStarted","Data":"4a55e79a966e2e170a08c8e22201b692ce183a941289ceb10bf871ca77a302b0"} Dec 06 05:55:53 crc kubenswrapper[4958]: I1206 05:55:53.000804 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-02e5-account-create-update-c75bh"] Dec 06 05:55:53 crc kubenswrapper[4958]: W1206 05:55:53.001289 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod51b06b73_d14d_47de_84f1_feae2b3a1c9d.slice/crio-d6dcdc1d90c9b2862498de5080d93971983614633078c3a8c6c4e9ffcf33baca WatchSource:0}: Error finding container d6dcdc1d90c9b2862498de5080d93971983614633078c3a8c6c4e9ffcf33baca: Status 404 returned error can't find the container with id d6dcdc1d90c9b2862498de5080d93971983614633078c3a8c6c4e9ffcf33baca Dec 06 05:55:53 crc kubenswrapper[4958]: I1206 05:55:53.018047 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Dec 06 05:55:53 crc kubenswrapper[4958]: I1206 05:55:53.018104 4958 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/watcher-decision-engine-0" Dec 06 05:55:53 crc kubenswrapper[4958]: I1206 05:55:53.018855 4958 scope.go:117] "RemoveContainer" containerID="13d78c5d489ba01cdb45efd404a73638f5106ef6ecd2a0166cc67db5a8883d7f" Dec 06 05:55:53 crc kubenswrapper[4958]: E1206 05:55:53.019129 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"watcher-decision-engine\" with CrashLoopBackOff: \"back-off 20s restarting failed container=watcher-decision-engine pod=watcher-decision-engine-0_openstack(dcbe2099-3d41-4f69-be20-47d96498cb25)\"" pod="openstack/watcher-decision-engine-0" podUID="dcbe2099-3d41-4f69-be20-47d96498cb25" Dec 06 05:55:53 crc kubenswrapper[4958]: I1206 05:55:53.029870 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-d745-account-create-update-nn9n6"] Dec 06 05:55:53 crc kubenswrapper[4958]: I1206 05:55:53.046302 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-6hdqq"] Dec 06 05:55:53 crc kubenswrapper[4958]: W1206 05:55:53.049096 4958 manager.go:1169] Failed to 
process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode1602702_6e01_4d7b_ba0e_dc3dcc22b6ab.slice/crio-dbbe956356bb9b8942d23eeef97894bbb6d0534687adf0d1ab353602aff0ab96 WatchSource:0}: Error finding container dbbe956356bb9b8942d23eeef97894bbb6d0534687adf0d1ab353602aff0ab96: Status 404 returned error can't find the container with id dbbe956356bb9b8942d23eeef97894bbb6d0534687adf0d1ab353602aff0ab96 Dec 06 05:55:53 crc kubenswrapper[4958]: I1206 05:55:53.237234 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-85d57ddd5d-pth8g"] Dec 06 05:55:53 crc kubenswrapper[4958]: I1206 05:55:53.238952 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-85d57ddd5d-pth8g" Dec 06 05:55:53 crc kubenswrapper[4958]: I1206 05:55:53.241880 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Dec 06 05:55:53 crc kubenswrapper[4958]: I1206 05:55:53.243662 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Dec 06 05:55:53 crc kubenswrapper[4958]: I1206 05:55:53.273530 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-85d57ddd5d-pth8g"] Dec 06 05:55:53 crc kubenswrapper[4958]: I1206 05:55:53.318363 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7770f7e-3112-4c47-8631-a19d269c3ffc-config-data\") pod \"barbican-api-85d57ddd5d-pth8g\" (UID: \"b7770f7e-3112-4c47-8631-a19d269c3ffc\") " pod="openstack/barbican-api-85d57ddd5d-pth8g" Dec 06 05:55:53 crc kubenswrapper[4958]: I1206 05:55:53.318438 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b7770f7e-3112-4c47-8631-a19d269c3ffc-logs\") pod \"barbican-api-85d57ddd5d-pth8g\" (UID: \"b7770f7e-3112-4c47-8631-a19d269c3ffc\") " pod="openstack/barbican-api-85d57ddd5d-pth8g" Dec 06 05:55:53 crc kubenswrapper[4958]: I1206 05:55:53.318501 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7770f7e-3112-4c47-8631-a19d269c3ffc-public-tls-certs\") pod \"barbican-api-85d57ddd5d-pth8g\" (UID: \"b7770f7e-3112-4c47-8631-a19d269c3ffc\") " pod="openstack/barbican-api-85d57ddd5d-pth8g" Dec 06 05:55:53 crc kubenswrapper[4958]: I1206 05:55:53.318536 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdrgx\" (UniqueName: \"kubernetes.io/projected/b7770f7e-3112-4c47-8631-a19d269c3ffc-kube-api-access-sdrgx\") pod \"barbican-api-85d57ddd5d-pth8g\" (UID: \"b7770f7e-3112-4c47-8631-a19d269c3ffc\") " pod="openstack/barbican-api-85d57ddd5d-pth8g" Dec 06 05:55:53 crc kubenswrapper[4958]: I1206 05:55:53.318554 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7770f7e-3112-4c47-8631-a19d269c3ffc-combined-ca-bundle\") pod \"barbican-api-85d57ddd5d-pth8g\" (UID: \"b7770f7e-3112-4c47-8631-a19d269c3ffc\") " pod="openstack/barbican-api-85d57ddd5d-pth8g" Dec 06 05:55:53 crc kubenswrapper[4958]: I1206 05:55:53.318577 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/b7770f7e-3112-4c47-8631-a19d269c3ffc-internal-tls-certs\") pod \"barbican-api-85d57ddd5d-pth8g\" (UID: \"b7770f7e-3112-4c47-8631-a19d269c3ffc\") " pod="openstack/barbican-api-85d57ddd5d-pth8g" Dec 06 05:55:53 crc kubenswrapper[4958]: I1206 05:55:53.318631 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b7770f7e-3112-4c47-8631-a19d269c3ffc-config-data-custom\") pod \"barbican-api-85d57ddd5d-pth8g\" (UID: \"b7770f7e-3112-4c47-8631-a19d269c3ffc\") " pod="openstack/barbican-api-85d57ddd5d-pth8g" Dec 06 05:55:53 crc kubenswrapper[4958]: I1206 05:55:53.420389 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7770f7e-3112-4c47-8631-a19d269c3ffc-config-data\") pod \"barbican-api-85d57ddd5d-pth8g\" (UID: \"b7770f7e-3112-4c47-8631-a19d269c3ffc\") " pod="openstack/barbican-api-85d57ddd5d-pth8g" Dec 06 05:55:53 crc kubenswrapper[4958]: I1206 05:55:53.420492 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b7770f7e-3112-4c47-8631-a19d269c3ffc-logs\") pod \"barbican-api-85d57ddd5d-pth8g\" (UID: \"b7770f7e-3112-4c47-8631-a19d269c3ffc\") " pod="openstack/barbican-api-85d57ddd5d-pth8g" Dec 06 05:55:53 crc kubenswrapper[4958]: I1206 05:55:53.420527 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7770f7e-3112-4c47-8631-a19d269c3ffc-public-tls-certs\") pod \"barbican-api-85d57ddd5d-pth8g\" (UID: \"b7770f7e-3112-4c47-8631-a19d269c3ffc\") " pod="openstack/barbican-api-85d57ddd5d-pth8g" Dec 06 05:55:53 crc kubenswrapper[4958]: I1206 05:55:53.420555 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdrgx\" (UniqueName: \"kubernetes.io/projected/b7770f7e-3112-4c47-8631-a19d269c3ffc-kube-api-access-sdrgx\") pod \"barbican-api-85d57ddd5d-pth8g\" (UID: \"b7770f7e-3112-4c47-8631-a19d269c3ffc\") " pod="openstack/barbican-api-85d57ddd5d-pth8g" Dec 06 05:55:53 crc kubenswrapper[4958]: I1206 05:55:53.420574 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7770f7e-3112-4c47-8631-a19d269c3ffc-combined-ca-bundle\") pod \"barbican-api-85d57ddd5d-pth8g\" (UID: \"b7770f7e-3112-4c47-8631-a19d269c3ffc\") " pod="openstack/barbican-api-85d57ddd5d-pth8g" Dec 06 05:55:53 crc kubenswrapper[4958]: I1206 05:55:53.420597 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7770f7e-3112-4c47-8631-a19d269c3ffc-internal-tls-certs\") pod \"barbican-api-85d57ddd5d-pth8g\" (UID: \"b7770f7e-3112-4c47-8631-a19d269c3ffc\") " pod="openstack/barbican-api-85d57ddd5d-pth8g" Dec 06 05:55:53 crc kubenswrapper[4958]: I1206 05:55:53.420646 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b7770f7e-3112-4c47-8631-a19d269c3ffc-config-data-custom\") pod \"barbican-api-85d57ddd5d-pth8g\" (UID: \"b7770f7e-3112-4c47-8631-a19d269c3ffc\") " pod="openstack/barbican-api-85d57ddd5d-pth8g" Dec 06 05:55:53 crc kubenswrapper[4958]: I1206 05:55:53.426011 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/b7770f7e-3112-4c47-8631-a19d269c3ffc-logs\") pod \"barbican-api-85d57ddd5d-pth8g\" (UID: \"b7770f7e-3112-4c47-8631-a19d269c3ffc\") " pod="openstack/barbican-api-85d57ddd5d-pth8g" Dec 06 05:55:53 crc kubenswrapper[4958]: I1206 05:55:53.458399 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7770f7e-3112-4c47-8631-a19d269c3ffc-public-tls-certs\") pod \"barbican-api-85d57ddd5d-pth8g\" (UID: \"b7770f7e-3112-4c47-8631-a19d269c3ffc\") " pod="openstack/barbican-api-85d57ddd5d-pth8g" Dec 06 05:55:53 crc kubenswrapper[4958]: I1206 05:55:53.462064 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b7770f7e-3112-4c47-8631-a19d269c3ffc-combined-ca-bundle\") pod \"barbican-api-85d57ddd5d-pth8g\" (UID: \"b7770f7e-3112-4c47-8631-a19d269c3ffc\") " pod="openstack/barbican-api-85d57ddd5d-pth8g" Dec 06 05:55:53 crc kubenswrapper[4958]: I1206 05:55:53.462741 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b7770f7e-3112-4c47-8631-a19d269c3ffc-config-data\") pod \"barbican-api-85d57ddd5d-pth8g\" (UID: \"b7770f7e-3112-4c47-8631-a19d269c3ffc\") " pod="openstack/barbican-api-85d57ddd5d-pth8g" Dec 06 05:55:53 crc kubenswrapper[4958]: I1206 05:55:53.463105 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b7770f7e-3112-4c47-8631-a19d269c3ffc-config-data-custom\") pod \"barbican-api-85d57ddd5d-pth8g\" (UID: \"b7770f7e-3112-4c47-8631-a19d269c3ffc\") " pod="openstack/barbican-api-85d57ddd5d-pth8g" Dec 06 05:55:53 crc kubenswrapper[4958]: I1206 05:55:53.463441 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b7770f7e-3112-4c47-8631-a19d269c3ffc-internal-tls-certs\") pod \"barbican-api-85d57ddd5d-pth8g\" (UID: \"b7770f7e-3112-4c47-8631-a19d269c3ffc\") " pod="openstack/barbican-api-85d57ddd5d-pth8g" Dec 06 05:55:53 crc kubenswrapper[4958]: I1206 05:55:53.469493 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdrgx\" (UniqueName: \"kubernetes.io/projected/b7770f7e-3112-4c47-8631-a19d269c3ffc-kube-api-access-sdrgx\") pod \"barbican-api-85d57ddd5d-pth8g\" (UID: \"b7770f7e-3112-4c47-8631-a19d269c3ffc\") " pod="openstack/barbican-api-85d57ddd5d-pth8g" Dec 06 05:55:53 crc kubenswrapper[4958]: I1206 05:55:53.558938 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-85d57ddd5d-pth8g" Dec 06 05:55:53 crc kubenswrapper[4958]: I1206 05:55:53.951830 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-x68cs" event={"ID":"48ba018f-da06-4772-835f-257269ffed57","Type":"ContainerStarted","Data":"12492a359b84a81d1d70381059e12ee74b7ca073c393f8a2dd0c93b82fed3506"} Dec 06 05:55:53 crc kubenswrapper[4958]: I1206 05:55:53.973824 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-549c96b4c7-gnwjx" event={"ID":"8066f868-45e9-4e89-a9ea-e1e269f19696","Type":"ContainerStarted","Data":"c875f65c6d69d8a7bff1222c5a1842c71bdc83ff81052f08b01b31fbf735550f"} Dec 06 05:55:53 crc kubenswrapper[4958]: I1206 05:55:53.974783 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-549c96b4c7-gnwjx" Dec 06 05:55:53 crc kubenswrapper[4958]: I1206 05:55:53.983744 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-d745-account-create-update-nn9n6" event={"ID":"e1602702-6e01-4d7b-ba0e-dc3dcc22b6ab","Type":"ContainerStarted","Data":"9e2f9575aab360650618540ed4ad74b301812a33fa9f5ed553ac84c13b344270"} Dec 06 05:55:53 crc kubenswrapper[4958]: I1206 05:55:53.983979 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-d745-account-create-update-nn9n6" event={"ID":"e1602702-6e01-4d7b-ba0e-dc3dcc22b6ab","Type":"ContainerStarted","Data":"dbbe956356bb9b8942d23eeef97894bbb6d0534687adf0d1ab353602aff0ab96"} Dec 06 05:55:53 crc kubenswrapper[4958]: I1206 05:55:53.986077 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-6hdqq" event={"ID":"8e23f416-b2db-49d9-8a31-f68d32ff9b51","Type":"ContainerStarted","Data":"f1339a33b0b4b7caede7fbe80382f194526f94dc0db2eaf4bfc6d2f79edf19ca"} Dec 06 05:55:53 crc kubenswrapper[4958]: I1206 05:55:53.986219 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-6hdqq" event={"ID":"8e23f416-b2db-49d9-8a31-f68d32ff9b51","Type":"ContainerStarted","Data":"be43f0ae09e2f6ffa0b60db875de9b017ca1bdbc0d1148678040fb8ed4aee6cd"} Dec 06 05:55:53 crc kubenswrapper[4958]: I1206 05:55:53.989560 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-x68cs" podStartSLOduration=2.989541901 podStartE2EDuration="2.989541901s" podCreationTimestamp="2025-12-06 05:55:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:55:53.968463945 +0000 UTC m=+1664.502234708" watchObservedRunningTime="2025-12-06 05:55:53.989541901 +0000 UTC m=+1664.523312664" Dec 06 05:55:53 crc kubenswrapper[4958]: I1206 05:55:53.990726 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-02e5-account-create-update-c75bh" event={"ID":"51b06b73-d14d-47de-84f1-feae2b3a1c9d","Type":"ContainerStarted","Data":"faf28f5f435a469ec7a4f687e61fd7c193ea9907cb3499c1d0d6a9849c2d47fc"} Dec 06 05:55:53 crc kubenswrapper[4958]: I1206 05:55:53.990855 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-02e5-account-create-update-c75bh" event={"ID":"51b06b73-d14d-47de-84f1-feae2b3a1c9d","Type":"ContainerStarted","Data":"d6dcdc1d90c9b2862498de5080d93971983614633078c3a8c6c4e9ffcf33baca"} Dec 06 05:55:53 crc kubenswrapper[4958]: I1206 05:55:53.999821 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-76c7c6dc4b-9jzd4" 
event={"ID":"b8c45202-b5a4-47bd-9754-bc5ae17b1208","Type":"ContainerStarted","Data":"d4bf75dfb957bd22b663ebd59e2e2b74ee4d61ca34eff55b54c822e146631dc2"} Dec 06 05:55:54 crc kubenswrapper[4958]: I1206 05:55:54.000668 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-76c7c6dc4b-9jzd4" Dec 06 05:55:54 crc kubenswrapper[4958]: I1206 05:55:54.000703 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-76c7c6dc4b-9jzd4" Dec 06 05:55:54 crc kubenswrapper[4958]: I1206 05:55:54.003545 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-m2lhs" event={"ID":"76d9f5a2-d6d4-4390-965a-b5d29b134dc1","Type":"ContainerStarted","Data":"ac3b752366f04d66b5e3823c17b5bd6eba379760acf58e23ff30951e5465504c"} Dec 06 05:55:54 crc kubenswrapper[4958]: I1206 05:55:54.018081 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-7249-account-create-update-bw52q"] Dec 06 05:55:54 crc kubenswrapper[4958]: I1206 05:55:54.039701 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-549c96b4c7-gnwjx" podStartSLOduration=5.03968434 podStartE2EDuration="5.03968434s" podCreationTimestamp="2025-12-06 05:55:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:55:53.998996055 +0000 UTC m=+1664.532766818" watchObservedRunningTime="2025-12-06 05:55:54.03968434 +0000 UTC m=+1664.573455103" Dec 06 05:55:54 crc kubenswrapper[4958]: I1206 05:55:54.058043 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-6hdqq" podStartSLOduration=3.058027553 podStartE2EDuration="3.058027553s" podCreationTimestamp="2025-12-06 05:55:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:55:54.014025219 +0000 UTC m=+1664.547795982" watchObservedRunningTime="2025-12-06 05:55:54.058027553 +0000 UTC m=+1664.591798316" Dec 06 05:55:54 crc kubenswrapper[4958]: I1206 05:55:54.062214 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-02e5-account-create-update-c75bh" podStartSLOduration=3.062200635 podStartE2EDuration="3.062200635s" podCreationTimestamp="2025-12-06 05:55:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:55:54.026848054 +0000 UTC m=+1664.560618817" watchObservedRunningTime="2025-12-06 05:55:54.062200635 +0000 UTC m=+1664.595971398" Dec 06 05:55:54 crc kubenswrapper[4958]: I1206 05:55:54.100391 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-d745-account-create-update-nn9n6" podStartSLOduration=2.100367041 podStartE2EDuration="2.100367041s" podCreationTimestamp="2025-12-06 05:55:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:55:54.048573419 +0000 UTC m=+1664.582344182" watchObservedRunningTime="2025-12-06 05:55:54.100367041 +0000 UTC m=+1664.634137804" Dec 06 05:55:54 crc kubenswrapper[4958]: I1206 05:55:54.110326 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-76c7c6dc4b-9jzd4" podStartSLOduration=5.110303278 podStartE2EDuration="5.110303278s" 
podCreationTimestamp="2025-12-06 05:55:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:55:54.06722371 +0000 UTC m=+1664.600994483" watchObservedRunningTime="2025-12-06 05:55:54.110303278 +0000 UTC m=+1664.644074051" Dec 06 05:55:54 crc kubenswrapper[4958]: I1206 05:55:54.124833 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-m2lhs" podStartSLOduration=3.124809068 podStartE2EDuration="3.124809068s" podCreationTimestamp="2025-12-06 05:55:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:55:54.084670019 +0000 UTC m=+1664.618440782" watchObservedRunningTime="2025-12-06 05:55:54.124809068 +0000 UTC m=+1664.658579831" Dec 06 05:55:54 crc kubenswrapper[4958]: I1206 05:55:54.226532 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-85d57ddd5d-pth8g"] Dec 06 05:55:55 crc kubenswrapper[4958]: I1206 05:55:55.014494 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-7249-account-create-update-bw52q" event={"ID":"4a9cb1a1-4312-4bbf-b731-de283fd78834","Type":"ContainerStarted","Data":"ef24e7e3f2b70cd10319956873b0aa54c98819acdcd1be9bc6ff169701f02cae"} Dec 06 05:55:55 crc kubenswrapper[4958]: I1206 05:55:55.164952 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 06 05:55:55 crc kubenswrapper[4958]: I1206 05:55:55.165430 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5a1295d8-a29c-4393-aea8-d2a7323e26d6" containerName="ceilometer-central-agent" containerID="cri-o://2bcd8f82e026dfacf922a88cc7296a59d3b0ded8c305de509071ba3a6dd9f631" gracePeriod=30 Dec 06 05:55:55 crc kubenswrapper[4958]: I1206 05:55:55.165930 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5a1295d8-a29c-4393-aea8-d2a7323e26d6" containerName="proxy-httpd" containerID="cri-o://61e9d68ae142b9b30cbeb599b86a38d0919e127050d42f3b236d464eed01b144" gracePeriod=30 Dec 06 05:55:55 crc kubenswrapper[4958]: I1206 05:55:55.166005 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5a1295d8-a29c-4393-aea8-d2a7323e26d6" containerName="sg-core" containerID="cri-o://3a5f0559b455759c28bff87006d56b761feade8078f5f08a0f17459752f95f7b" gracePeriod=30 Dec 06 05:55:55 crc kubenswrapper[4958]: I1206 05:55:55.166003 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5a1295d8-a29c-4393-aea8-d2a7323e26d6" containerName="ceilometer-notification-agent" containerID="cri-o://3bfa7e1eca04947d840f735dfe050884f6a62c3136420f2e3fc425ab403d10e2" gracePeriod=30 Dec 06 05:55:55 crc kubenswrapper[4958]: W1206 05:55:55.804101 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb7770f7e_3112_4c47_8631_a19d269c3ffc.slice/crio-5fb5cb70151f2ee1847b898ba5a921099933106c93a9f5dbdca6c7a010b66267 WatchSource:0}: Error finding container 5fb5cb70151f2ee1847b898ba5a921099933106c93a9f5dbdca6c7a010b66267: Status 404 returned error can't find the container with id 5fb5cb70151f2ee1847b898ba5a921099933106c93a9f5dbdca6c7a010b66267 Dec 06 05:55:56 crc kubenswrapper[4958]: I1206 05:55:56.026903 4958 generic.go:334] 
"Generic (PLEG): container finished" podID="5a1295d8-a29c-4393-aea8-d2a7323e26d6" containerID="61e9d68ae142b9b30cbeb599b86a38d0919e127050d42f3b236d464eed01b144" exitCode=0 Dec 06 05:55:56 crc kubenswrapper[4958]: I1206 05:55:56.026944 4958 generic.go:334] "Generic (PLEG): container finished" podID="5a1295d8-a29c-4393-aea8-d2a7323e26d6" containerID="3a5f0559b455759c28bff87006d56b761feade8078f5f08a0f17459752f95f7b" exitCode=2 Dec 06 05:55:56 crc kubenswrapper[4958]: I1206 05:55:56.026955 4958 generic.go:334] "Generic (PLEG): container finished" podID="5a1295d8-a29c-4393-aea8-d2a7323e26d6" containerID="3bfa7e1eca04947d840f735dfe050884f6a62c3136420f2e3fc425ab403d10e2" exitCode=0 Dec 06 05:55:56 crc kubenswrapper[4958]: I1206 05:55:56.026965 4958 generic.go:334] "Generic (PLEG): container finished" podID="5a1295d8-a29c-4393-aea8-d2a7323e26d6" containerID="2bcd8f82e026dfacf922a88cc7296a59d3b0ded8c305de509071ba3a6dd9f631" exitCode=0 Dec 06 05:55:56 crc kubenswrapper[4958]: I1206 05:55:56.026955 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5a1295d8-a29c-4393-aea8-d2a7323e26d6","Type":"ContainerDied","Data":"61e9d68ae142b9b30cbeb599b86a38d0919e127050d42f3b236d464eed01b144"} Dec 06 05:55:56 crc kubenswrapper[4958]: I1206 05:55:56.027000 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5a1295d8-a29c-4393-aea8-d2a7323e26d6","Type":"ContainerDied","Data":"3a5f0559b455759c28bff87006d56b761feade8078f5f08a0f17459752f95f7b"} Dec 06 05:55:56 crc kubenswrapper[4958]: I1206 05:55:56.027014 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5a1295d8-a29c-4393-aea8-d2a7323e26d6","Type":"ContainerDied","Data":"3bfa7e1eca04947d840f735dfe050884f6a62c3136420f2e3fc425ab403d10e2"} Dec 06 05:55:56 crc kubenswrapper[4958]: I1206 05:55:56.027026 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5a1295d8-a29c-4393-aea8-d2a7323e26d6","Type":"ContainerDied","Data":"2bcd8f82e026dfacf922a88cc7296a59d3b0ded8c305de509071ba3a6dd9f631"} Dec 06 05:55:56 crc kubenswrapper[4958]: I1206 05:55:56.028840 4958 generic.go:334] "Generic (PLEG): container finished" podID="8e23f416-b2db-49d9-8a31-f68d32ff9b51" containerID="f1339a33b0b4b7caede7fbe80382f194526f94dc0db2eaf4bfc6d2f79edf19ca" exitCode=0 Dec 06 05:55:56 crc kubenswrapper[4958]: I1206 05:55:56.028880 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-6hdqq" event={"ID":"8e23f416-b2db-49d9-8a31-f68d32ff9b51","Type":"ContainerDied","Data":"f1339a33b0b4b7caede7fbe80382f194526f94dc0db2eaf4bfc6d2f79edf19ca"} Dec 06 05:55:56 crc kubenswrapper[4958]: I1206 05:55:56.030854 4958 generic.go:334] "Generic (PLEG): container finished" podID="76d9f5a2-d6d4-4390-965a-b5d29b134dc1" containerID="ac3b752366f04d66b5e3823c17b5bd6eba379760acf58e23ff30951e5465504c" exitCode=0 Dec 06 05:55:56 crc kubenswrapper[4958]: I1206 05:55:56.030952 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-m2lhs" event={"ID":"76d9f5a2-d6d4-4390-965a-b5d29b134dc1","Type":"ContainerDied","Data":"ac3b752366f04d66b5e3823c17b5bd6eba379760acf58e23ff30951e5465504c"} Dec 06 05:55:56 crc kubenswrapper[4958]: I1206 05:55:56.033147 4958 generic.go:334] "Generic (PLEG): container finished" podID="48ba018f-da06-4772-835f-257269ffed57" containerID="12492a359b84a81d1d70381059e12ee74b7ca073c393f8a2dd0c93b82fed3506" exitCode=0 Dec 06 05:55:56 crc kubenswrapper[4958]: 
I1206 05:55:56.033234 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-x68cs" event={"ID":"48ba018f-da06-4772-835f-257269ffed57","Type":"ContainerDied","Data":"12492a359b84a81d1d70381059e12ee74b7ca073c393f8a2dd0c93b82fed3506"} Dec 06 05:55:56 crc kubenswrapper[4958]: I1206 05:55:56.035048 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-85d57ddd5d-pth8g" event={"ID":"b7770f7e-3112-4c47-8631-a19d269c3ffc","Type":"ContainerStarted","Data":"5fb5cb70151f2ee1847b898ba5a921099933106c93a9f5dbdca6c7a010b66267"} Dec 06 05:55:56 crc kubenswrapper[4958]: I1206 05:55:56.539013 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 06 05:55:56 crc kubenswrapper[4958]: I1206 05:55:56.705243 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a1295d8-a29c-4393-aea8-d2a7323e26d6-scripts\") pod \"5a1295d8-a29c-4393-aea8-d2a7323e26d6\" (UID: \"5a1295d8-a29c-4393-aea8-d2a7323e26d6\") " Dec 06 05:55:56 crc kubenswrapper[4958]: I1206 05:55:56.705299 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a1295d8-a29c-4393-aea8-d2a7323e26d6-config-data\") pod \"5a1295d8-a29c-4393-aea8-d2a7323e26d6\" (UID: \"5a1295d8-a29c-4393-aea8-d2a7323e26d6\") " Dec 06 05:55:56 crc kubenswrapper[4958]: I1206 05:55:56.705327 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5a1295d8-a29c-4393-aea8-d2a7323e26d6-run-httpd\") pod \"5a1295d8-a29c-4393-aea8-d2a7323e26d6\" (UID: \"5a1295d8-a29c-4393-aea8-d2a7323e26d6\") " Dec 06 05:55:56 crc kubenswrapper[4958]: I1206 05:55:56.705377 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-brzbq\" (UniqueName: \"kubernetes.io/projected/5a1295d8-a29c-4393-aea8-d2a7323e26d6-kube-api-access-brzbq\") pod \"5a1295d8-a29c-4393-aea8-d2a7323e26d6\" (UID: \"5a1295d8-a29c-4393-aea8-d2a7323e26d6\") " Dec 06 05:55:56 crc kubenswrapper[4958]: I1206 05:55:56.705395 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5a1295d8-a29c-4393-aea8-d2a7323e26d6-log-httpd\") pod \"5a1295d8-a29c-4393-aea8-d2a7323e26d6\" (UID: \"5a1295d8-a29c-4393-aea8-d2a7323e26d6\") " Dec 06 05:55:56 crc kubenswrapper[4958]: I1206 05:55:56.705453 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a1295d8-a29c-4393-aea8-d2a7323e26d6-combined-ca-bundle\") pod \"5a1295d8-a29c-4393-aea8-d2a7323e26d6\" (UID: \"5a1295d8-a29c-4393-aea8-d2a7323e26d6\") " Dec 06 05:55:56 crc kubenswrapper[4958]: I1206 05:55:56.705487 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5a1295d8-a29c-4393-aea8-d2a7323e26d6-sg-core-conf-yaml\") pod \"5a1295d8-a29c-4393-aea8-d2a7323e26d6\" (UID: \"5a1295d8-a29c-4393-aea8-d2a7323e26d6\") " Dec 06 05:55:56 crc kubenswrapper[4958]: I1206 05:55:56.705926 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a1295d8-a29c-4393-aea8-d2a7323e26d6-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "5a1295d8-a29c-4393-aea8-d2a7323e26d6" (UID: "5a1295d8-a29c-4393-aea8-d2a7323e26d6"). 
InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 05:55:56 crc kubenswrapper[4958]: I1206 05:55:56.706335 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a1295d8-a29c-4393-aea8-d2a7323e26d6-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "5a1295d8-a29c-4393-aea8-d2a7323e26d6" (UID: "5a1295d8-a29c-4393-aea8-d2a7323e26d6"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 05:55:56 crc kubenswrapper[4958]: I1206 05:55:56.708637 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a1295d8-a29c-4393-aea8-d2a7323e26d6-scripts" (OuterVolumeSpecName: "scripts") pod "5a1295d8-a29c-4393-aea8-d2a7323e26d6" (UID: "5a1295d8-a29c-4393-aea8-d2a7323e26d6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:55:56 crc kubenswrapper[4958]: I1206 05:55:56.711524 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a1295d8-a29c-4393-aea8-d2a7323e26d6-kube-api-access-brzbq" (OuterVolumeSpecName: "kube-api-access-brzbq") pod "5a1295d8-a29c-4393-aea8-d2a7323e26d6" (UID: "5a1295d8-a29c-4393-aea8-d2a7323e26d6"). InnerVolumeSpecName "kube-api-access-brzbq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:55:56 crc kubenswrapper[4958]: I1206 05:55:56.740706 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a1295d8-a29c-4393-aea8-d2a7323e26d6-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "5a1295d8-a29c-4393-aea8-d2a7323e26d6" (UID: "5a1295d8-a29c-4393-aea8-d2a7323e26d6"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:55:56 crc kubenswrapper[4958]: I1206 05:55:56.808753 4958 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5a1295d8-a29c-4393-aea8-d2a7323e26d6-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 06 05:55:56 crc kubenswrapper[4958]: I1206 05:55:56.808780 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5a1295d8-a29c-4393-aea8-d2a7323e26d6-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 05:55:56 crc kubenswrapper[4958]: I1206 05:55:56.808788 4958 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5a1295d8-a29c-4393-aea8-d2a7323e26d6-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 06 05:55:56 crc kubenswrapper[4958]: I1206 05:55:56.808797 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-brzbq\" (UniqueName: \"kubernetes.io/projected/5a1295d8-a29c-4393-aea8-d2a7323e26d6-kube-api-access-brzbq\") on node \"crc\" DevicePath \"\"" Dec 06 05:55:56 crc kubenswrapper[4958]: I1206 05:55:56.808808 4958 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5a1295d8-a29c-4393-aea8-d2a7323e26d6-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 06 05:55:56 crc kubenswrapper[4958]: I1206 05:55:56.838768 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a1295d8-a29c-4393-aea8-d2a7323e26d6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5a1295d8-a29c-4393-aea8-d2a7323e26d6" (UID: "5a1295d8-a29c-4393-aea8-d2a7323e26d6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:55:56 crc kubenswrapper[4958]: I1206 05:55:56.907156 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a1295d8-a29c-4393-aea8-d2a7323e26d6-config-data" (OuterVolumeSpecName: "config-data") pod "5a1295d8-a29c-4393-aea8-d2a7323e26d6" (UID: "5a1295d8-a29c-4393-aea8-d2a7323e26d6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:55:56 crc kubenswrapper[4958]: I1206 05:55:56.911003 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5a1295d8-a29c-4393-aea8-d2a7323e26d6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 05:55:56 crc kubenswrapper[4958]: I1206 05:55:56.911026 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5a1295d8-a29c-4393-aea8-d2a7323e26d6-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 05:55:57 crc kubenswrapper[4958]: I1206 05:55:57.045213 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-68ff9f8f57-qzqld" event={"ID":"b7df9cb8-058d-4f26-8444-808fd8fd554c","Type":"ContainerStarted","Data":"11470cfe44001ce60209587842eecb0970ca1616331dbac7fa2a9558c6977c6a"} Dec 06 05:55:57 crc kubenswrapper[4958]: I1206 05:55:57.045268 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-68ff9f8f57-qzqld" event={"ID":"b7df9cb8-058d-4f26-8444-808fd8fd554c","Type":"ContainerStarted","Data":"507e24357907ca724ca415999888e4b64c4f5326be71f8b119b425a32551d0f6"} Dec 06 05:55:57 crc kubenswrapper[4958]: I1206 05:55:57.047558 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-575d9fc686-xrcwg" event={"ID":"19b32164-d135-4d2b-9f69-bf4f1c986fa5","Type":"ContainerStarted","Data":"148b2dc27aa5ef6a212850637be67fcf281612b3103dad61799d7858b8c6bb15"} Dec 06 05:55:57 crc kubenswrapper[4958]: I1206 05:55:57.047594 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-575d9fc686-xrcwg" event={"ID":"19b32164-d135-4d2b-9f69-bf4f1c986fa5","Type":"ContainerStarted","Data":"81df19a9fc24b16ce819833b116a68a46bfeead03ffe75a13c73f3504160c52d"} Dec 06 05:55:57 crc kubenswrapper[4958]: I1206 05:55:57.050242 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-85d57ddd5d-pth8g" event={"ID":"b7770f7e-3112-4c47-8631-a19d269c3ffc","Type":"ContainerStarted","Data":"bac38279c59ee1f691c309de7bcb770b83eaabcaf9bada66d5fb93e2260ad359"} Dec 06 05:55:57 crc kubenswrapper[4958]: I1206 05:55:57.050269 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-85d57ddd5d-pth8g" event={"ID":"b7770f7e-3112-4c47-8631-a19d269c3ffc","Type":"ContainerStarted","Data":"defd1d5f7e90ee1dec1da2c58765229e3833bcdab15bd39bab91fc3c6a6575f9"} Dec 06 05:55:57 crc kubenswrapper[4958]: I1206 05:55:57.050759 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-85d57ddd5d-pth8g" Dec 06 05:55:57 crc kubenswrapper[4958]: I1206 05:55:57.050783 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-85d57ddd5d-pth8g" Dec 06 05:55:57 crc kubenswrapper[4958]: I1206 05:55:57.052189 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-7249-account-create-update-bw52q" 
event={"ID":"4a9cb1a1-4312-4bbf-b731-de283fd78834","Type":"ContainerStarted","Data":"bac26e11f92b1a749db0c8aaeb3924a7ddc85e6689f47a46dedc5e75fd102321"} Dec 06 05:55:57 crc kubenswrapper[4958]: I1206 05:55:57.055044 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 06 05:55:57 crc kubenswrapper[4958]: I1206 05:55:57.055549 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5a1295d8-a29c-4393-aea8-d2a7323e26d6","Type":"ContainerDied","Data":"f9a09bfe6335fe741dc03c28ff2f2218cd40407fbecfbad3a6f7a674d96b4a06"} Dec 06 05:55:57 crc kubenswrapper[4958]: I1206 05:55:57.055622 4958 scope.go:117] "RemoveContainer" containerID="61e9d68ae142b9b30cbeb599b86a38d0919e127050d42f3b236d464eed01b144" Dec 06 05:55:57 crc kubenswrapper[4958]: I1206 05:55:57.069532 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-68ff9f8f57-qzqld" podStartSLOduration=1.96680768 podStartE2EDuration="8.069516031s" podCreationTimestamp="2025-12-06 05:55:49 +0000 UTC" firstStartedPulling="2025-12-06 05:55:50.007362844 +0000 UTC m=+1660.541133607" lastFinishedPulling="2025-12-06 05:55:56.110071195 +0000 UTC m=+1666.643841958" observedRunningTime="2025-12-06 05:55:57.060271163 +0000 UTC m=+1667.594041926" watchObservedRunningTime="2025-12-06 05:55:57.069516031 +0000 UTC m=+1667.603286794" Dec 06 05:55:57 crc kubenswrapper[4958]: I1206 05:55:57.091237 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-575d9fc686-xrcwg" podStartSLOduration=2.116221318 podStartE2EDuration="8.091222065s" podCreationTimestamp="2025-12-06 05:55:49 +0000 UTC" firstStartedPulling="2025-12-06 05:55:50.145122218 +0000 UTC m=+1660.678892981" lastFinishedPulling="2025-12-06 05:55:56.120122965 +0000 UTC m=+1666.653893728" observedRunningTime="2025-12-06 05:55:57.085109831 +0000 UTC m=+1667.618880604" watchObservedRunningTime="2025-12-06 05:55:57.091222065 +0000 UTC m=+1667.624992828" Dec 06 05:55:57 crc kubenswrapper[4958]: I1206 05:55:57.120298 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-85d57ddd5d-pth8g" podStartSLOduration=4.120282916 podStartE2EDuration="4.120282916s" podCreationTimestamp="2025-12-06 05:55:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:55:57.119652409 +0000 UTC m=+1667.653423172" watchObservedRunningTime="2025-12-06 05:55:57.120282916 +0000 UTC m=+1667.654053679" Dec 06 05:55:57 crc kubenswrapper[4958]: I1206 05:55:57.144557 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-7249-account-create-update-bw52q" podStartSLOduration=5.144507277 podStartE2EDuration="5.144507277s" podCreationTimestamp="2025-12-06 05:55:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:55:57.141009164 +0000 UTC m=+1667.674779927" watchObservedRunningTime="2025-12-06 05:55:57.144507277 +0000 UTC m=+1667.678278050" Dec 06 05:55:57 crc kubenswrapper[4958]: I1206 05:55:57.157336 4958 scope.go:117] "RemoveContainer" containerID="3a5f0559b455759c28bff87006d56b761feade8078f5f08a0f17459752f95f7b" Dec 06 05:55:57 crc kubenswrapper[4958]: I1206 05:55:57.188535 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] 
Dec 06 05:55:57 crc kubenswrapper[4958]: I1206 05:55:57.244026 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 06 05:55:57 crc kubenswrapper[4958]: I1206 05:55:57.267991 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 06 05:55:57 crc kubenswrapper[4958]: I1206 05:55:57.268754 4958 scope.go:117] "RemoveContainer" containerID="3bfa7e1eca04947d840f735dfe050884f6a62c3136420f2e3fc425ab403d10e2" Dec 06 05:55:57 crc kubenswrapper[4958]: E1206 05:55:57.269852 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a1295d8-a29c-4393-aea8-d2a7323e26d6" containerName="proxy-httpd" Dec 06 05:55:57 crc kubenswrapper[4958]: I1206 05:55:57.269884 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a1295d8-a29c-4393-aea8-d2a7323e26d6" containerName="proxy-httpd" Dec 06 05:55:57 crc kubenswrapper[4958]: E1206 05:55:57.269926 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a1295d8-a29c-4393-aea8-d2a7323e26d6" containerName="ceilometer-notification-agent" Dec 06 05:55:57 crc kubenswrapper[4958]: I1206 05:55:57.269939 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a1295d8-a29c-4393-aea8-d2a7323e26d6" containerName="ceilometer-notification-agent" Dec 06 05:55:57 crc kubenswrapper[4958]: E1206 05:55:57.269961 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a1295d8-a29c-4393-aea8-d2a7323e26d6" containerName="sg-core" Dec 06 05:55:57 crc kubenswrapper[4958]: I1206 05:55:57.269969 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a1295d8-a29c-4393-aea8-d2a7323e26d6" containerName="sg-core" Dec 06 05:55:57 crc kubenswrapper[4958]: E1206 05:55:57.269989 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a1295d8-a29c-4393-aea8-d2a7323e26d6" containerName="ceilometer-central-agent" Dec 06 05:55:57 crc kubenswrapper[4958]: I1206 05:55:57.269997 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a1295d8-a29c-4393-aea8-d2a7323e26d6" containerName="ceilometer-central-agent" Dec 06 05:55:57 crc kubenswrapper[4958]: I1206 05:55:57.270518 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a1295d8-a29c-4393-aea8-d2a7323e26d6" containerName="sg-core" Dec 06 05:55:57 crc kubenswrapper[4958]: I1206 05:55:57.270552 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a1295d8-a29c-4393-aea8-d2a7323e26d6" containerName="ceilometer-central-agent" Dec 06 05:55:57 crc kubenswrapper[4958]: I1206 05:55:57.270578 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a1295d8-a29c-4393-aea8-d2a7323e26d6" containerName="proxy-httpd" Dec 06 05:55:57 crc kubenswrapper[4958]: I1206 05:55:57.270608 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a1295d8-a29c-4393-aea8-d2a7323e26d6" containerName="ceilometer-notification-agent" Dec 06 05:55:57 crc kubenswrapper[4958]: I1206 05:55:57.279340 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 06 05:55:57 crc kubenswrapper[4958]: I1206 05:55:57.279468 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 06 05:55:57 crc kubenswrapper[4958]: I1206 05:55:57.289072 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 06 05:55:57 crc kubenswrapper[4958]: I1206 05:55:57.293537 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 06 05:55:57 crc kubenswrapper[4958]: I1206 05:55:57.333375 4958 scope.go:117] "RemoveContainer" containerID="2bcd8f82e026dfacf922a88cc7296a59d3b0ded8c305de509071ba3a6dd9f631" Dec 06 05:55:57 crc kubenswrapper[4958]: I1206 05:55:57.346159 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aeb4440b-3f18-4066-bb27-ec75e09208f9-config-data\") pod \"ceilometer-0\" (UID: \"aeb4440b-3f18-4066-bb27-ec75e09208f9\") " pod="openstack/ceilometer-0" Dec 06 05:55:57 crc kubenswrapper[4958]: I1206 05:55:57.346219 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aeb4440b-3f18-4066-bb27-ec75e09208f9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"aeb4440b-3f18-4066-bb27-ec75e09208f9\") " pod="openstack/ceilometer-0" Dec 06 05:55:57 crc kubenswrapper[4958]: I1206 05:55:57.346247 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aeb4440b-3f18-4066-bb27-ec75e09208f9-log-httpd\") pod \"ceilometer-0\" (UID: \"aeb4440b-3f18-4066-bb27-ec75e09208f9\") " pod="openstack/ceilometer-0" Dec 06 05:55:57 crc kubenswrapper[4958]: I1206 05:55:57.346288 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aeb4440b-3f18-4066-bb27-ec75e09208f9-scripts\") pod \"ceilometer-0\" (UID: \"aeb4440b-3f18-4066-bb27-ec75e09208f9\") " pod="openstack/ceilometer-0" Dec 06 05:55:57 crc kubenswrapper[4958]: I1206 05:55:57.346327 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aeb4440b-3f18-4066-bb27-ec75e09208f9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"aeb4440b-3f18-4066-bb27-ec75e09208f9\") " pod="openstack/ceilometer-0" Dec 06 05:55:57 crc kubenswrapper[4958]: I1206 05:55:57.346353 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aeb4440b-3f18-4066-bb27-ec75e09208f9-run-httpd\") pod \"ceilometer-0\" (UID: \"aeb4440b-3f18-4066-bb27-ec75e09208f9\") " pod="openstack/ceilometer-0" Dec 06 05:55:57 crc kubenswrapper[4958]: I1206 05:55:57.346384 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrhm6\" (UniqueName: \"kubernetes.io/projected/aeb4440b-3f18-4066-bb27-ec75e09208f9-kube-api-access-zrhm6\") pod \"ceilometer-0\" (UID: \"aeb4440b-3f18-4066-bb27-ec75e09208f9\") " pod="openstack/ceilometer-0" Dec 06 05:55:57 crc kubenswrapper[4958]: I1206 05:55:57.453608 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aeb4440b-3f18-4066-bb27-ec75e09208f9-scripts\") pod \"ceilometer-0\" (UID: \"aeb4440b-3f18-4066-bb27-ec75e09208f9\") " pod="openstack/ceilometer-0" Dec 06 05:55:57 crc kubenswrapper[4958]: 
I1206 05:55:57.453682 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aeb4440b-3f18-4066-bb27-ec75e09208f9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"aeb4440b-3f18-4066-bb27-ec75e09208f9\") " pod="openstack/ceilometer-0" Dec 06 05:55:57 crc kubenswrapper[4958]: I1206 05:55:57.453715 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aeb4440b-3f18-4066-bb27-ec75e09208f9-run-httpd\") pod \"ceilometer-0\" (UID: \"aeb4440b-3f18-4066-bb27-ec75e09208f9\") " pod="openstack/ceilometer-0" Dec 06 05:55:57 crc kubenswrapper[4958]: I1206 05:55:57.453748 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrhm6\" (UniqueName: \"kubernetes.io/projected/aeb4440b-3f18-4066-bb27-ec75e09208f9-kube-api-access-zrhm6\") pod \"ceilometer-0\" (UID: \"aeb4440b-3f18-4066-bb27-ec75e09208f9\") " pod="openstack/ceilometer-0" Dec 06 05:55:57 crc kubenswrapper[4958]: I1206 05:55:57.453819 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aeb4440b-3f18-4066-bb27-ec75e09208f9-config-data\") pod \"ceilometer-0\" (UID: \"aeb4440b-3f18-4066-bb27-ec75e09208f9\") " pod="openstack/ceilometer-0" Dec 06 05:55:57 crc kubenswrapper[4958]: I1206 05:55:57.453852 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aeb4440b-3f18-4066-bb27-ec75e09208f9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"aeb4440b-3f18-4066-bb27-ec75e09208f9\") " pod="openstack/ceilometer-0" Dec 06 05:55:57 crc kubenswrapper[4958]: I1206 05:55:57.453874 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aeb4440b-3f18-4066-bb27-ec75e09208f9-log-httpd\") pod \"ceilometer-0\" (UID: \"aeb4440b-3f18-4066-bb27-ec75e09208f9\") " pod="openstack/ceilometer-0" Dec 06 05:55:57 crc kubenswrapper[4958]: I1206 05:55:57.454272 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aeb4440b-3f18-4066-bb27-ec75e09208f9-log-httpd\") pod \"ceilometer-0\" (UID: \"aeb4440b-3f18-4066-bb27-ec75e09208f9\") " pod="openstack/ceilometer-0" Dec 06 05:55:57 crc kubenswrapper[4958]: I1206 05:55:57.454937 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aeb4440b-3f18-4066-bb27-ec75e09208f9-run-httpd\") pod \"ceilometer-0\" (UID: \"aeb4440b-3f18-4066-bb27-ec75e09208f9\") " pod="openstack/ceilometer-0" Dec 06 05:55:57 crc kubenswrapper[4958]: I1206 05:55:57.462690 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aeb4440b-3f18-4066-bb27-ec75e09208f9-config-data\") pod \"ceilometer-0\" (UID: \"aeb4440b-3f18-4066-bb27-ec75e09208f9\") " pod="openstack/ceilometer-0" Dec 06 05:55:57 crc kubenswrapper[4958]: I1206 05:55:57.462810 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aeb4440b-3f18-4066-bb27-ec75e09208f9-scripts\") pod \"ceilometer-0\" (UID: \"aeb4440b-3f18-4066-bb27-ec75e09208f9\") " pod="openstack/ceilometer-0" Dec 06 05:55:57 crc kubenswrapper[4958]: I1206 05:55:57.463021 4958 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aeb4440b-3f18-4066-bb27-ec75e09208f9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"aeb4440b-3f18-4066-bb27-ec75e09208f9\") " pod="openstack/ceilometer-0" Dec 06 05:55:57 crc kubenswrapper[4958]: I1206 05:55:57.463868 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aeb4440b-3f18-4066-bb27-ec75e09208f9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"aeb4440b-3f18-4066-bb27-ec75e09208f9\") " pod="openstack/ceilometer-0" Dec 06 05:55:57 crc kubenswrapper[4958]: I1206 05:55:57.480110 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrhm6\" (UniqueName: \"kubernetes.io/projected/aeb4440b-3f18-4066-bb27-ec75e09208f9-kube-api-access-zrhm6\") pod \"ceilometer-0\" (UID: \"aeb4440b-3f18-4066-bb27-ec75e09208f9\") " pod="openstack/ceilometer-0" Dec 06 05:55:57 crc kubenswrapper[4958]: I1206 05:55:57.615318 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 06 05:55:57 crc kubenswrapper[4958]: I1206 05:55:57.780056 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-m2lhs" Dec 06 05:55:57 crc kubenswrapper[4958]: I1206 05:55:57.782013 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a1295d8-a29c-4393-aea8-d2a7323e26d6" path="/var/lib/kubelet/pods/5a1295d8-a29c-4393-aea8-d2a7323e26d6/volumes" Dec 06 05:55:57 crc kubenswrapper[4958]: I1206 05:55:57.832917 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-6hdqq" Dec 06 05:55:57 crc kubenswrapper[4958]: I1206 05:55:57.838179 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-x68cs" Dec 06 05:55:57 crc kubenswrapper[4958]: I1206 05:55:57.863321 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9bhgw\" (UniqueName: \"kubernetes.io/projected/48ba018f-da06-4772-835f-257269ffed57-kube-api-access-9bhgw\") pod \"48ba018f-da06-4772-835f-257269ffed57\" (UID: \"48ba018f-da06-4772-835f-257269ffed57\") " Dec 06 05:55:57 crc kubenswrapper[4958]: I1206 05:55:57.863443 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48ba018f-da06-4772-835f-257269ffed57-operator-scripts\") pod \"48ba018f-da06-4772-835f-257269ffed57\" (UID: \"48ba018f-da06-4772-835f-257269ffed57\") " Dec 06 05:55:57 crc kubenswrapper[4958]: I1206 05:55:57.863506 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8wwrv\" (UniqueName: \"kubernetes.io/projected/8e23f416-b2db-49d9-8a31-f68d32ff9b51-kube-api-access-8wwrv\") pod \"8e23f416-b2db-49d9-8a31-f68d32ff9b51\" (UID: \"8e23f416-b2db-49d9-8a31-f68d32ff9b51\") " Dec 06 05:55:57 crc kubenswrapper[4958]: I1206 05:55:57.863554 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dd66n\" (UniqueName: \"kubernetes.io/projected/76d9f5a2-d6d4-4390-965a-b5d29b134dc1-kube-api-access-dd66n\") pod \"76d9f5a2-d6d4-4390-965a-b5d29b134dc1\" (UID: \"76d9f5a2-d6d4-4390-965a-b5d29b134dc1\") " Dec 06 05:55:57 crc kubenswrapper[4958]: I1206 05:55:57.863637 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e23f416-b2db-49d9-8a31-f68d32ff9b51-operator-scripts\") pod \"8e23f416-b2db-49d9-8a31-f68d32ff9b51\" (UID: \"8e23f416-b2db-49d9-8a31-f68d32ff9b51\") " Dec 06 05:55:57 crc kubenswrapper[4958]: I1206 05:55:57.863678 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/76d9f5a2-d6d4-4390-965a-b5d29b134dc1-operator-scripts\") pod \"76d9f5a2-d6d4-4390-965a-b5d29b134dc1\" (UID: \"76d9f5a2-d6d4-4390-965a-b5d29b134dc1\") " Dec 06 05:55:57 crc kubenswrapper[4958]: I1206 05:55:57.865988 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e23f416-b2db-49d9-8a31-f68d32ff9b51-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8e23f416-b2db-49d9-8a31-f68d32ff9b51" (UID: "8e23f416-b2db-49d9-8a31-f68d32ff9b51"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:55:57 crc kubenswrapper[4958]: I1206 05:55:57.866173 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/48ba018f-da06-4772-835f-257269ffed57-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "48ba018f-da06-4772-835f-257269ffed57" (UID: "48ba018f-da06-4772-835f-257269ffed57"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:55:57 crc kubenswrapper[4958]: I1206 05:55:57.866374 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76d9f5a2-d6d4-4390-965a-b5d29b134dc1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "76d9f5a2-d6d4-4390-965a-b5d29b134dc1" (UID: "76d9f5a2-d6d4-4390-965a-b5d29b134dc1"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:55:57 crc kubenswrapper[4958]: I1206 05:55:57.871621 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76d9f5a2-d6d4-4390-965a-b5d29b134dc1-kube-api-access-dd66n" (OuterVolumeSpecName: "kube-api-access-dd66n") pod "76d9f5a2-d6d4-4390-965a-b5d29b134dc1" (UID: "76d9f5a2-d6d4-4390-965a-b5d29b134dc1"). InnerVolumeSpecName "kube-api-access-dd66n". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:55:57 crc kubenswrapper[4958]: I1206 05:55:57.887088 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e23f416-b2db-49d9-8a31-f68d32ff9b51-kube-api-access-8wwrv" (OuterVolumeSpecName: "kube-api-access-8wwrv") pod "8e23f416-b2db-49d9-8a31-f68d32ff9b51" (UID: "8e23f416-b2db-49d9-8a31-f68d32ff9b51"). InnerVolumeSpecName "kube-api-access-8wwrv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:55:57 crc kubenswrapper[4958]: I1206 05:55:57.887182 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48ba018f-da06-4772-835f-257269ffed57-kube-api-access-9bhgw" (OuterVolumeSpecName: "kube-api-access-9bhgw") pod "48ba018f-da06-4772-835f-257269ffed57" (UID: "48ba018f-da06-4772-835f-257269ffed57"). InnerVolumeSpecName "kube-api-access-9bhgw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:55:57 crc kubenswrapper[4958]: I1206 05:55:57.966868 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9bhgw\" (UniqueName: \"kubernetes.io/projected/48ba018f-da06-4772-835f-257269ffed57-kube-api-access-9bhgw\") on node \"crc\" DevicePath \"\"" Dec 06 05:55:57 crc kubenswrapper[4958]: I1206 05:55:57.966909 4958 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/48ba018f-da06-4772-835f-257269ffed57-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 05:55:57 crc kubenswrapper[4958]: I1206 05:55:57.966921 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8wwrv\" (UniqueName: \"kubernetes.io/projected/8e23f416-b2db-49d9-8a31-f68d32ff9b51-kube-api-access-8wwrv\") on node \"crc\" DevicePath \"\"" Dec 06 05:55:57 crc kubenswrapper[4958]: I1206 05:55:57.966934 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dd66n\" (UniqueName: \"kubernetes.io/projected/76d9f5a2-d6d4-4390-965a-b5d29b134dc1-kube-api-access-dd66n\") on node \"crc\" DevicePath \"\"" Dec 06 05:55:57 crc kubenswrapper[4958]: I1206 05:55:57.966949 4958 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8e23f416-b2db-49d9-8a31-f68d32ff9b51-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 05:55:57 crc kubenswrapper[4958]: I1206 05:55:57.966961 4958 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/76d9f5a2-d6d4-4390-965a-b5d29b134dc1-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 05:55:58 crc kubenswrapper[4958]: I1206 05:55:58.123082 4958 generic.go:334] "Generic (PLEG): container finished" podID="4a9cb1a1-4312-4bbf-b731-de283fd78834" containerID="bac26e11f92b1a749db0c8aaeb3924a7ddc85e6689f47a46dedc5e75fd102321" exitCode=0 Dec 06 05:55:58 crc kubenswrapper[4958]: I1206 05:55:58.123159 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-7249-account-create-update-bw52q" 
event={"ID":"4a9cb1a1-4312-4bbf-b731-de283fd78834","Type":"ContainerDied","Data":"bac26e11f92b1a749db0c8aaeb3924a7ddc85e6689f47a46dedc5e75fd102321"} Dec 06 05:55:58 crc kubenswrapper[4958]: I1206 05:55:58.174716 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-6hdqq" event={"ID":"8e23f416-b2db-49d9-8a31-f68d32ff9b51","Type":"ContainerDied","Data":"be43f0ae09e2f6ffa0b60db875de9b017ca1bdbc0d1148678040fb8ed4aee6cd"} Dec 06 05:55:58 crc kubenswrapper[4958]: I1206 05:55:58.174757 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be43f0ae09e2f6ffa0b60db875de9b017ca1bdbc0d1148678040fb8ed4aee6cd" Dec 06 05:55:58 crc kubenswrapper[4958]: I1206 05:55:58.174818 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-6hdqq" Dec 06 05:55:58 crc kubenswrapper[4958]: I1206 05:55:58.202338 4958 generic.go:334] "Generic (PLEG): container finished" podID="51b06b73-d14d-47de-84f1-feae2b3a1c9d" containerID="faf28f5f435a469ec7a4f687e61fd7c193ea9907cb3499c1d0d6a9849c2d47fc" exitCode=0 Dec 06 05:55:58 crc kubenswrapper[4958]: I1206 05:55:58.202412 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-02e5-account-create-update-c75bh" event={"ID":"51b06b73-d14d-47de-84f1-feae2b3a1c9d","Type":"ContainerDied","Data":"faf28f5f435a469ec7a4f687e61fd7c193ea9907cb3499c1d0d6a9849c2d47fc"} Dec 06 05:55:58 crc kubenswrapper[4958]: I1206 05:55:58.221993 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-m2lhs" Dec 06 05:55:58 crc kubenswrapper[4958]: I1206 05:55:58.221988 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-m2lhs" event={"ID":"76d9f5a2-d6d4-4390-965a-b5d29b134dc1","Type":"ContainerDied","Data":"92ddd4e6837be52de16f1b18764e09d158a7b16c44b6bfbc550e40db0dab7d92"} Dec 06 05:55:58 crc kubenswrapper[4958]: I1206 05:55:58.225541 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="92ddd4e6837be52de16f1b18764e09d158a7b16c44b6bfbc550e40db0dab7d92" Dec 06 05:55:58 crc kubenswrapper[4958]: I1206 05:55:58.230567 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-x68cs" event={"ID":"48ba018f-da06-4772-835f-257269ffed57","Type":"ContainerDied","Data":"4a55e79a966e2e170a08c8e22201b692ce183a941289ceb10bf871ca77a302b0"} Dec 06 05:55:58 crc kubenswrapper[4958]: I1206 05:55:58.230605 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a55e79a966e2e170a08c8e22201b692ce183a941289ceb10bf871ca77a302b0" Dec 06 05:55:58 crc kubenswrapper[4958]: I1206 05:55:58.230663 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-x68cs" Dec 06 05:55:58 crc kubenswrapper[4958]: I1206 05:55:58.247870 4958 generic.go:334] "Generic (PLEG): container finished" podID="e1602702-6e01-4d7b-ba0e-dc3dcc22b6ab" containerID="9e2f9575aab360650618540ed4ad74b301812a33fa9f5ed553ac84c13b344270" exitCode=0 Dec 06 05:55:58 crc kubenswrapper[4958]: I1206 05:55:58.248028 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-d745-account-create-update-nn9n6" event={"ID":"e1602702-6e01-4d7b-ba0e-dc3dcc22b6ab","Type":"ContainerDied","Data":"9e2f9575aab360650618540ed4ad74b301812a33fa9f5ed553ac84c13b344270"} Dec 06 05:55:58 crc kubenswrapper[4958]: I1206 05:55:58.306806 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 06 05:55:58 crc kubenswrapper[4958]: I1206 05:55:58.729623 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-76c7c6dc4b-9jzd4" Dec 06 05:55:59 crc kubenswrapper[4958]: I1206 05:55:59.257507 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aeb4440b-3f18-4066-bb27-ec75e09208f9","Type":"ContainerStarted","Data":"5f188daea121015bce08bd9d0bda43000eb7d11ffbbeb6bf9173f575809dcf5e"} Dec 06 05:55:59 crc kubenswrapper[4958]: I1206 05:55:59.257857 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aeb4440b-3f18-4066-bb27-ec75e09208f9","Type":"ContainerStarted","Data":"9ed5ac52b2e3b80f1535139877054ab4eeaec852edc5364ec13d795c8af1a0eb"} Dec 06 05:55:59 crc kubenswrapper[4958]: I1206 05:55:59.520671 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-549c96b4c7-gnwjx" Dec 06 05:55:59 crc kubenswrapper[4958]: I1206 05:55:59.670244 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cf77b4997-jq68d"] Dec 06 05:55:59 crc kubenswrapper[4958]: I1206 05:55:59.681227 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7cf77b4997-jq68d" podUID="382db5d4-9ab1-4fe8-a35e-1f10c1690f06" containerName="dnsmasq-dns" containerID="cri-o://ab61d954e4ba23150e0e3e0a36f2406f94134a9e29045175c273bd95cf01c8f3" gracePeriod=10 Dec 06 05:55:59 crc kubenswrapper[4958]: I1206 05:55:59.789035 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-02e5-account-create-update-c75bh" Dec 06 05:55:59 crc kubenswrapper[4958]: I1206 05:55:59.819618 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9mwr\" (UniqueName: \"kubernetes.io/projected/51b06b73-d14d-47de-84f1-feae2b3a1c9d-kube-api-access-w9mwr\") pod \"51b06b73-d14d-47de-84f1-feae2b3a1c9d\" (UID: \"51b06b73-d14d-47de-84f1-feae2b3a1c9d\") " Dec 06 05:55:59 crc kubenswrapper[4958]: I1206 05:55:59.819725 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51b06b73-d14d-47de-84f1-feae2b3a1c9d-operator-scripts\") pod \"51b06b73-d14d-47de-84f1-feae2b3a1c9d\" (UID: \"51b06b73-d14d-47de-84f1-feae2b3a1c9d\") " Dec 06 05:55:59 crc kubenswrapper[4958]: I1206 05:55:59.820691 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51b06b73-d14d-47de-84f1-feae2b3a1c9d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "51b06b73-d14d-47de-84f1-feae2b3a1c9d" (UID: "51b06b73-d14d-47de-84f1-feae2b3a1c9d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:55:59 crc kubenswrapper[4958]: I1206 05:55:59.858700 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51b06b73-d14d-47de-84f1-feae2b3a1c9d-kube-api-access-w9mwr" (OuterVolumeSpecName: "kube-api-access-w9mwr") pod "51b06b73-d14d-47de-84f1-feae2b3a1c9d" (UID: "51b06b73-d14d-47de-84f1-feae2b3a1c9d"). InnerVolumeSpecName "kube-api-access-w9mwr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:55:59 crc kubenswrapper[4958]: I1206 05:55:59.921638 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9mwr\" (UniqueName: \"kubernetes.io/projected/51b06b73-d14d-47de-84f1-feae2b3a1c9d-kube-api-access-w9mwr\") on node \"crc\" DevicePath \"\"" Dec 06 05:55:59 crc kubenswrapper[4958]: I1206 05:55:59.921684 4958 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51b06b73-d14d-47de-84f1-feae2b3a1c9d-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 05:56:00 crc kubenswrapper[4958]: I1206 05:56:00.097272 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-7249-account-create-update-bw52q" Dec 06 05:56:00 crc kubenswrapper[4958]: I1206 05:56:00.117590 4958 util.go:48] "No ready sandbox for pod can be found. 
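[editor's note] The generic.go:334 "container finished" lines above are where the PLEG reports each container's exit code before the corresponding ContainerDied event is dispatched. A small sketch for pulling (pod UID, container ID, exit code) triples out of such a log; the field order is an assumption read off these lines, not a documented format.

```python
import re

# Shape taken from the generic.go:334 entries in this log.
FINISHED = re.compile(
    r'container finished" podID="([0-9a-f-]+)" '
    r'containerID="([0-9a-f]+)" exitCode=(\d+)'
)

def finished_containers(lines):
    """Yield (pod_uid, container_id, exit_code) for each PLEG completion."""
    for line in lines:
        m = FINISHED.search(line)
        if m:
            yield m.group(1), m.group(2), int(m.group(3))
```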
Dec 06 05:56:00 crc kubenswrapper[4958]: I1206 05:56:00.125521 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wwzns\" (UniqueName: \"kubernetes.io/projected/4a9cb1a1-4312-4bbf-b731-de283fd78834-kube-api-access-wwzns\") pod \"4a9cb1a1-4312-4bbf-b731-de283fd78834\" (UID: \"4a9cb1a1-4312-4bbf-b731-de283fd78834\") "
Dec 06 05:56:00 crc kubenswrapper[4958]: I1206 05:56:00.125744 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1602702-6e01-4d7b-ba0e-dc3dcc22b6ab-operator-scripts\") pod \"e1602702-6e01-4d7b-ba0e-dc3dcc22b6ab\" (UID: \"e1602702-6e01-4d7b-ba0e-dc3dcc22b6ab\") "
Dec 06 05:56:00 crc kubenswrapper[4958]: I1206 05:56:00.125826 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a9cb1a1-4312-4bbf-b731-de283fd78834-operator-scripts\") pod \"4a9cb1a1-4312-4bbf-b731-de283fd78834\" (UID: \"4a9cb1a1-4312-4bbf-b731-de283fd78834\") "
Dec 06 05:56:00 crc kubenswrapper[4958]: I1206 05:56:00.125915 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j98ck\" (UniqueName: \"kubernetes.io/projected/e1602702-6e01-4d7b-ba0e-dc3dcc22b6ab-kube-api-access-j98ck\") pod \"e1602702-6e01-4d7b-ba0e-dc3dcc22b6ab\" (UID: \"e1602702-6e01-4d7b-ba0e-dc3dcc22b6ab\") "
Dec 06 05:56:00 crc kubenswrapper[4958]: I1206 05:56:00.126812 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1602702-6e01-4d7b-ba0e-dc3dcc22b6ab-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e1602702-6e01-4d7b-ba0e-dc3dcc22b6ab" (UID: "e1602702-6e01-4d7b-ba0e-dc3dcc22b6ab"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 05:56:00 crc kubenswrapper[4958]: I1206 05:56:00.127504 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a9cb1a1-4312-4bbf-b731-de283fd78834-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4a9cb1a1-4312-4bbf-b731-de283fd78834" (UID: "4a9cb1a1-4312-4bbf-b731-de283fd78834"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 05:56:00 crc kubenswrapper[4958]: I1206 05:56:00.133804 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a9cb1a1-4312-4bbf-b731-de283fd78834-kube-api-access-wwzns" (OuterVolumeSpecName: "kube-api-access-wwzns") pod "4a9cb1a1-4312-4bbf-b731-de283fd78834" (UID: "4a9cb1a1-4312-4bbf-b731-de283fd78834"). InnerVolumeSpecName "kube-api-access-wwzns". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 05:56:00 crc kubenswrapper[4958]: I1206 05:56:00.135345 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1602702-6e01-4d7b-ba0e-dc3dcc22b6ab-kube-api-access-j98ck" (OuterVolumeSpecName: "kube-api-access-j98ck") pod "e1602702-6e01-4d7b-ba0e-dc3dcc22b6ab" (UID: "e1602702-6e01-4d7b-ba0e-dc3dcc22b6ab"). InnerVolumeSpecName "kube-api-access-j98ck". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 05:56:00 crc kubenswrapper[4958]: I1206 05:56:00.231326 4958 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e1602702-6e01-4d7b-ba0e-dc3dcc22b6ab-operator-scripts\") on node \"crc\" DevicePath \"\""
Dec 06 05:56:00 crc kubenswrapper[4958]: I1206 05:56:00.231366 4958 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4a9cb1a1-4312-4bbf-b731-de283fd78834-operator-scripts\") on node \"crc\" DevicePath \"\""
Dec 06 05:56:00 crc kubenswrapper[4958]: I1206 05:56:00.231375 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j98ck\" (UniqueName: \"kubernetes.io/projected/e1602702-6e01-4d7b-ba0e-dc3dcc22b6ab-kube-api-access-j98ck\") on node \"crc\" DevicePath \"\""
Dec 06 05:56:00 crc kubenswrapper[4958]: I1206 05:56:00.231385 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wwzns\" (UniqueName: \"kubernetes.io/projected/4a9cb1a1-4312-4bbf-b731-de283fd78834-kube-api-access-wwzns\") on node \"crc\" DevicePath \"\""
Dec 06 05:56:00 crc kubenswrapper[4958]: I1206 05:56:00.258851 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7cf77b4997-jq68d" podUID="382db5d4-9ab1-4fe8-a35e-1f10c1690f06" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.163:5353: connect: connection refused"
Dec 06 05:56:00 crc kubenswrapper[4958]: I1206 05:56:00.280405 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-d745-account-create-update-nn9n6"
Dec 06 05:56:00 crc kubenswrapper[4958]: I1206 05:56:00.280690 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-d745-account-create-update-nn9n6" event={"ID":"e1602702-6e01-4d7b-ba0e-dc3dcc22b6ab","Type":"ContainerDied","Data":"dbbe956356bb9b8942d23eeef97894bbb6d0534687adf0d1ab353602aff0ab96"}
Dec 06 05:56:00 crc kubenswrapper[4958]: I1206 05:56:00.280740 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dbbe956356bb9b8942d23eeef97894bbb6d0534687adf0d1ab353602aff0ab96"
Dec 06 05:56:00 crc kubenswrapper[4958]: I1206 05:56:00.305880 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-7249-account-create-update-bw52q" event={"ID":"4a9cb1a1-4312-4bbf-b731-de283fd78834","Type":"ContainerDied","Data":"ef24e7e3f2b70cd10319956873b0aa54c98819acdcd1be9bc6ff169701f02cae"}
Dec 06 05:56:00 crc kubenswrapper[4958]: I1206 05:56:00.305941 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef24e7e3f2b70cd10319956873b0aa54c98819acdcd1be9bc6ff169701f02cae"
Dec 06 05:56:00 crc kubenswrapper[4958]: I1206 05:56:00.306030 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-7249-account-create-update-bw52q"
Dec 06 05:56:00 crc kubenswrapper[4958]: I1206 05:56:00.309825 4958 generic.go:334] "Generic (PLEG): container finished" podID="382db5d4-9ab1-4fe8-a35e-1f10c1690f06" containerID="ab61d954e4ba23150e0e3e0a36f2406f94134a9e29045175c273bd95cf01c8f3" exitCode=0
Dec 06 05:56:00 crc kubenswrapper[4958]: I1206 05:56:00.309878 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cf77b4997-jq68d" event={"ID":"382db5d4-9ab1-4fe8-a35e-1f10c1690f06","Type":"ContainerDied","Data":"ab61d954e4ba23150e0e3e0a36f2406f94134a9e29045175c273bd95cf01c8f3"}
Dec 06 05:56:00 crc kubenswrapper[4958]: I1206 05:56:00.318505 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-02e5-account-create-update-c75bh" event={"ID":"51b06b73-d14d-47de-84f1-feae2b3a1c9d","Type":"ContainerDied","Data":"d6dcdc1d90c9b2862498de5080d93971983614633078c3a8c6c4e9ffcf33baca"}
Dec 06 05:56:00 crc kubenswrapper[4958]: I1206 05:56:00.318550 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d6dcdc1d90c9b2862498de5080d93971983614633078c3a8c6c4e9ffcf33baca"
Dec 06 05:56:00 crc kubenswrapper[4958]: I1206 05:56:00.318618 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-02e5-account-create-update-c75bh"
Dec 06 05:56:00 crc kubenswrapper[4958]: I1206 05:56:00.792522 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cf77b4997-jq68d"
Dec 06 05:56:00 crc kubenswrapper[4958]: I1206 05:56:00.958391 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/382db5d4-9ab1-4fe8-a35e-1f10c1690f06-dns-swift-storage-0\") pod \"382db5d4-9ab1-4fe8-a35e-1f10c1690f06\" (UID: \"382db5d4-9ab1-4fe8-a35e-1f10c1690f06\") "
Dec 06 05:56:00 crc kubenswrapper[4958]: I1206 05:56:00.958456 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/382db5d4-9ab1-4fe8-a35e-1f10c1690f06-ovsdbserver-sb\") pod \"382db5d4-9ab1-4fe8-a35e-1f10c1690f06\" (UID: \"382db5d4-9ab1-4fe8-a35e-1f10c1690f06\") "
Dec 06 05:56:00 crc kubenswrapper[4958]: I1206 05:56:00.958561 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/382db5d4-9ab1-4fe8-a35e-1f10c1690f06-ovsdbserver-nb\") pod \"382db5d4-9ab1-4fe8-a35e-1f10c1690f06\" (UID: \"382db5d4-9ab1-4fe8-a35e-1f10c1690f06\") "
Dec 06 05:56:00 crc kubenswrapper[4958]: I1206 05:56:00.958644 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zvxmr\" (UniqueName: \"kubernetes.io/projected/382db5d4-9ab1-4fe8-a35e-1f10c1690f06-kube-api-access-zvxmr\") pod \"382db5d4-9ab1-4fe8-a35e-1f10c1690f06\" (UID: \"382db5d4-9ab1-4fe8-a35e-1f10c1690f06\") "
Dec 06 05:56:00 crc kubenswrapper[4958]: I1206 05:56:00.958678 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/382db5d4-9ab1-4fe8-a35e-1f10c1690f06-dns-svc\") pod \"382db5d4-9ab1-4fe8-a35e-1f10c1690f06\" (UID: \"382db5d4-9ab1-4fe8-a35e-1f10c1690f06\") "
Dec 06 05:56:00 crc kubenswrapper[4958]: I1206 05:56:00.958779 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/382db5d4-9ab1-4fe8-a35e-1f10c1690f06-config\") pod \"382db5d4-9ab1-4fe8-a35e-1f10c1690f06\" (UID: \"382db5d4-9ab1-4fe8-a35e-1f10c1690f06\") "
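[editor's note] From here the teardown of dnsmasq-dns-7cf77b4997-jq68d (UID 382db5d4-...) interleaves with the other pods: API DELETE, grace-period kill, readiness-probe failure on the now-closed port, ContainerDied, volume unmounts, REMOVE, and orphan cleanup. A trivial filter, sketched below, is often all that is needed to reconstruct one pod's shutdown sequence from such a log, since the kubelet tags each step with the pod UID.

```python
def pod_timeline(lines, pod_uid):
    """Collect, in log order, every line mentioning the given pod UID."""
    return [line for line in lines if pod_uid in line]

# e.g. pod_timeline(lines, "382db5d4-9ab1-4fe8-a35e-1f10c1690f06")
```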
\"kubernetes.io/configmap/382db5d4-9ab1-4fe8-a35e-1f10c1690f06-config\") pod \"382db5d4-9ab1-4fe8-a35e-1f10c1690f06\" (UID: \"382db5d4-9ab1-4fe8-a35e-1f10c1690f06\") " Dec 06 05:56:00 crc kubenswrapper[4958]: I1206 05:56:00.968099 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/382db5d4-9ab1-4fe8-a35e-1f10c1690f06-kube-api-access-zvxmr" (OuterVolumeSpecName: "kube-api-access-zvxmr") pod "382db5d4-9ab1-4fe8-a35e-1f10c1690f06" (UID: "382db5d4-9ab1-4fe8-a35e-1f10c1690f06"). InnerVolumeSpecName "kube-api-access-zvxmr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:56:01 crc kubenswrapper[4958]: I1206 05:56:01.024681 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-76c7c6dc4b-9jzd4" Dec 06 05:56:01 crc kubenswrapper[4958]: I1206 05:56:01.049121 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/382db5d4-9ab1-4fe8-a35e-1f10c1690f06-config" (OuterVolumeSpecName: "config") pod "382db5d4-9ab1-4fe8-a35e-1f10c1690f06" (UID: "382db5d4-9ab1-4fe8-a35e-1f10c1690f06"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:56:01 crc kubenswrapper[4958]: I1206 05:56:01.060903 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/382db5d4-9ab1-4fe8-a35e-1f10c1690f06-config\") on node \"crc\" DevicePath \"\"" Dec 06 05:56:01 crc kubenswrapper[4958]: I1206 05:56:01.060936 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zvxmr\" (UniqueName: \"kubernetes.io/projected/382db5d4-9ab1-4fe8-a35e-1f10c1690f06-kube-api-access-zvxmr\") on node \"crc\" DevicePath \"\"" Dec 06 05:56:01 crc kubenswrapper[4958]: I1206 05:56:01.118513 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/382db5d4-9ab1-4fe8-a35e-1f10c1690f06-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "382db5d4-9ab1-4fe8-a35e-1f10c1690f06" (UID: "382db5d4-9ab1-4fe8-a35e-1f10c1690f06"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:56:01 crc kubenswrapper[4958]: I1206 05:56:01.155863 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/382db5d4-9ab1-4fe8-a35e-1f10c1690f06-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "382db5d4-9ab1-4fe8-a35e-1f10c1690f06" (UID: "382db5d4-9ab1-4fe8-a35e-1f10c1690f06"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:56:01 crc kubenswrapper[4958]: I1206 05:56:01.165294 4958 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/382db5d4-9ab1-4fe8-a35e-1f10c1690f06-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 06 05:56:01 crc kubenswrapper[4958]: I1206 05:56:01.166337 4958 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/382db5d4-9ab1-4fe8-a35e-1f10c1690f06-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 06 05:56:01 crc kubenswrapper[4958]: I1206 05:56:01.186582 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/382db5d4-9ab1-4fe8-a35e-1f10c1690f06-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "382db5d4-9ab1-4fe8-a35e-1f10c1690f06" (UID: "382db5d4-9ab1-4fe8-a35e-1f10c1690f06"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:56:01 crc kubenswrapper[4958]: I1206 05:56:01.267397 4958 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/382db5d4-9ab1-4fe8-a35e-1f10c1690f06-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 06 05:56:01 crc kubenswrapper[4958]: I1206 05:56:01.312932 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/382db5d4-9ab1-4fe8-a35e-1f10c1690f06-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "382db5d4-9ab1-4fe8-a35e-1f10c1690f06" (UID: "382db5d4-9ab1-4fe8-a35e-1f10c1690f06"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:56:01 crc kubenswrapper[4958]: I1206 05:56:01.327942 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aeb4440b-3f18-4066-bb27-ec75e09208f9","Type":"ContainerStarted","Data":"560b1acbb592eb7f2de4769063025e3f325a2d91df8b8fcd173327e395d7dec7"} Dec 06 05:56:01 crc kubenswrapper[4958]: I1206 05:56:01.330894 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cf77b4997-jq68d" event={"ID":"382db5d4-9ab1-4fe8-a35e-1f10c1690f06","Type":"ContainerDied","Data":"b52bdeff0ba5ec6fbff8f9d9b29a6789ec9d525cd0467d4464731411e88c6380"} Dec 06 05:56:01 crc kubenswrapper[4958]: I1206 05:56:01.331042 4958 scope.go:117] "RemoveContainer" containerID="ab61d954e4ba23150e0e3e0a36f2406f94134a9e29045175c273bd95cf01c8f3" Dec 06 05:56:01 crc kubenswrapper[4958]: I1206 05:56:01.330977 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cf77b4997-jq68d" Dec 06 05:56:01 crc kubenswrapper[4958]: I1206 05:56:01.363168 4958 scope.go:117] "RemoveContainer" containerID="8fe92a9e6cba02192f864527db0d76fc7168b9fc5fc1d2c627d232a575c8d7b3" Dec 06 05:56:01 crc kubenswrapper[4958]: I1206 05:56:01.369128 4958 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/382db5d4-9ab1-4fe8-a35e-1f10c1690f06-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 06 05:56:01 crc kubenswrapper[4958]: I1206 05:56:01.392035 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cf77b4997-jq68d"] Dec 06 05:56:01 crc kubenswrapper[4958]: I1206 05:56:01.413744 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7cf77b4997-jq68d"] Dec 06 05:56:01 crc kubenswrapper[4958]: I1206 05:56:01.780430 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="382db5d4-9ab1-4fe8-a35e-1f10c1690f06" path="/var/lib/kubelet/pods/382db5d4-9ab1-4fe8-a35e-1f10c1690f06/volumes" Dec 06 05:56:02 crc kubenswrapper[4958]: I1206 05:56:02.312082 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-tw6sr"] Dec 06 05:56:02 crc kubenswrapper[4958]: E1206 05:56:02.312646 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76d9f5a2-d6d4-4390-965a-b5d29b134dc1" containerName="mariadb-database-create" Dec 06 05:56:02 crc kubenswrapper[4958]: I1206 05:56:02.312662 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="76d9f5a2-d6d4-4390-965a-b5d29b134dc1" containerName="mariadb-database-create" Dec 06 05:56:02 crc kubenswrapper[4958]: E1206 05:56:02.312674 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="382db5d4-9ab1-4fe8-a35e-1f10c1690f06" containerName="init" 
Dec 06 05:56:02 crc kubenswrapper[4958]: I1206 05:56:02.312682 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="382db5d4-9ab1-4fe8-a35e-1f10c1690f06" containerName="init" Dec 06 05:56:02 crc kubenswrapper[4958]: E1206 05:56:02.312695 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="382db5d4-9ab1-4fe8-a35e-1f10c1690f06" containerName="dnsmasq-dns" Dec 06 05:56:02 crc kubenswrapper[4958]: I1206 05:56:02.312701 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="382db5d4-9ab1-4fe8-a35e-1f10c1690f06" containerName="dnsmasq-dns" Dec 06 05:56:02 crc kubenswrapper[4958]: E1206 05:56:02.312710 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a9cb1a1-4312-4bbf-b731-de283fd78834" containerName="mariadb-account-create-update" Dec 06 05:56:02 crc kubenswrapper[4958]: I1206 05:56:02.312715 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a9cb1a1-4312-4bbf-b731-de283fd78834" containerName="mariadb-account-create-update" Dec 06 05:56:02 crc kubenswrapper[4958]: E1206 05:56:02.312731 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1602702-6e01-4d7b-ba0e-dc3dcc22b6ab" containerName="mariadb-account-create-update" Dec 06 05:56:02 crc kubenswrapper[4958]: I1206 05:56:02.312736 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1602702-6e01-4d7b-ba0e-dc3dcc22b6ab" containerName="mariadb-account-create-update" Dec 06 05:56:02 crc kubenswrapper[4958]: E1206 05:56:02.312748 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e23f416-b2db-49d9-8a31-f68d32ff9b51" containerName="mariadb-database-create" Dec 06 05:56:02 crc kubenswrapper[4958]: I1206 05:56:02.312754 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e23f416-b2db-49d9-8a31-f68d32ff9b51" containerName="mariadb-database-create" Dec 06 05:56:02 crc kubenswrapper[4958]: E1206 05:56:02.312774 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48ba018f-da06-4772-835f-257269ffed57" containerName="mariadb-database-create" Dec 06 05:56:02 crc kubenswrapper[4958]: I1206 05:56:02.312780 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="48ba018f-da06-4772-835f-257269ffed57" containerName="mariadb-database-create" Dec 06 05:56:02 crc kubenswrapper[4958]: E1206 05:56:02.312798 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51b06b73-d14d-47de-84f1-feae2b3a1c9d" containerName="mariadb-account-create-update" Dec 06 05:56:02 crc kubenswrapper[4958]: I1206 05:56:02.312805 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="51b06b73-d14d-47de-84f1-feae2b3a1c9d" containerName="mariadb-account-create-update" Dec 06 05:56:02 crc kubenswrapper[4958]: I1206 05:56:02.313012 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="76d9f5a2-d6d4-4390-965a-b5d29b134dc1" containerName="mariadb-database-create" Dec 06 05:56:02 crc kubenswrapper[4958]: I1206 05:56:02.313030 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a9cb1a1-4312-4bbf-b731-de283fd78834" containerName="mariadb-account-create-update" Dec 06 05:56:02 crc kubenswrapper[4958]: I1206 05:56:02.313039 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="48ba018f-da06-4772-835f-257269ffed57" containerName="mariadb-database-create" Dec 06 05:56:02 crc kubenswrapper[4958]: I1206 05:56:02.313056 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="382db5d4-9ab1-4fe8-a35e-1f10c1690f06" containerName="dnsmasq-dns" Dec 06 05:56:02 crc kubenswrapper[4958]: 
I1206 05:56:02.313070 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e23f416-b2db-49d9-8a31-f68d32ff9b51" containerName="mariadb-database-create" Dec 06 05:56:02 crc kubenswrapper[4958]: I1206 05:56:02.313077 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="51b06b73-d14d-47de-84f1-feae2b3a1c9d" containerName="mariadb-account-create-update" Dec 06 05:56:02 crc kubenswrapper[4958]: I1206 05:56:02.313091 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1602702-6e01-4d7b-ba0e-dc3dcc22b6ab" containerName="mariadb-account-create-update" Dec 06 05:56:02 crc kubenswrapper[4958]: I1206 05:56:02.313834 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-tw6sr" Dec 06 05:56:02 crc kubenswrapper[4958]: I1206 05:56:02.316117 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Dec 06 05:56:02 crc kubenswrapper[4958]: I1206 05:56:02.316724 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Dec 06 05:56:02 crc kubenswrapper[4958]: I1206 05:56:02.316764 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-p6jjb" Dec 06 05:56:02 crc kubenswrapper[4958]: I1206 05:56:02.328555 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-tw6sr"] Dec 06 05:56:02 crc kubenswrapper[4958]: I1206 05:56:02.341858 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aeb4440b-3f18-4066-bb27-ec75e09208f9","Type":"ContainerStarted","Data":"1bad73066a008d25c27243825f98ad8fb6175004a21dc8a8f8c582021806cabf"} Dec 06 05:56:02 crc kubenswrapper[4958]: I1206 05:56:02.384071 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d874f46b-0e0f-4304-8b7d-43a68d87dd5d-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-tw6sr\" (UID: \"d874f46b-0e0f-4304-8b7d-43a68d87dd5d\") " pod="openstack/nova-cell0-conductor-db-sync-tw6sr" Dec 06 05:56:02 crc kubenswrapper[4958]: I1206 05:56:02.384238 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5vb4\" (UniqueName: \"kubernetes.io/projected/d874f46b-0e0f-4304-8b7d-43a68d87dd5d-kube-api-access-w5vb4\") pod \"nova-cell0-conductor-db-sync-tw6sr\" (UID: \"d874f46b-0e0f-4304-8b7d-43a68d87dd5d\") " pod="openstack/nova-cell0-conductor-db-sync-tw6sr" Dec 06 05:56:02 crc kubenswrapper[4958]: I1206 05:56:02.384309 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d874f46b-0e0f-4304-8b7d-43a68d87dd5d-scripts\") pod \"nova-cell0-conductor-db-sync-tw6sr\" (UID: \"d874f46b-0e0f-4304-8b7d-43a68d87dd5d\") " pod="openstack/nova-cell0-conductor-db-sync-tw6sr" Dec 06 05:56:02 crc kubenswrapper[4958]: I1206 05:56:02.384407 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d874f46b-0e0f-4304-8b7d-43a68d87dd5d-config-data\") pod \"nova-cell0-conductor-db-sync-tw6sr\" (UID: \"d874f46b-0e0f-4304-8b7d-43a68d87dd5d\") " pod="openstack/nova-cell0-conductor-db-sync-tw6sr" Dec 06 05:56:02 crc kubenswrapper[4958]: I1206 05:56:02.485104 4958 reconciler_common.go:218] 
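[editor's note] When the nova-cell0-conductor-db-sync pod is admitted above, the CPU and memory managers purge per-container resource state for pods that no longer exist; each E-level cpu_manager "RemoveStaleState: removing container" line is paired with an I-level "Deleted CPUSet assignment" and, later, a memory_manager "RemoveStaleState removing state" line, which reads as routine housekeeping despite the error severity. A sketch for grouping those removals by pod UID; the pattern is an assumption based on these lines.

```python
import re
from collections import defaultdict

# Matches both "RemoveStaleState: removing container" (cpu_manager) and
# "RemoveStaleState removing state" (memory_manager) message shapes above.
STALE = re.compile(r'RemoveStaleState[^"]*" podUID="([0-9a-f-]+)" containerName="([^"]+)"')

def stale_state_cleanups(lines):
    """Group stale cpu/memory-manager state removals by pod UID."""
    by_pod = defaultdict(set)
    for line in lines:
        m = STALE.search(line)
        if m:
            by_pod[m.group(1)].add(m.group(2))
    return by_pod
```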
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d874f46b-0e0f-4304-8b7d-43a68d87dd5d-config-data\") pod \"nova-cell0-conductor-db-sync-tw6sr\" (UID: \"d874f46b-0e0f-4304-8b7d-43a68d87dd5d\") " pod="openstack/nova-cell0-conductor-db-sync-tw6sr" Dec 06 05:56:02 crc kubenswrapper[4958]: I1206 05:56:02.485463 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d874f46b-0e0f-4304-8b7d-43a68d87dd5d-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-tw6sr\" (UID: \"d874f46b-0e0f-4304-8b7d-43a68d87dd5d\") " pod="openstack/nova-cell0-conductor-db-sync-tw6sr" Dec 06 05:56:02 crc kubenswrapper[4958]: I1206 05:56:02.485547 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5vb4\" (UniqueName: \"kubernetes.io/projected/d874f46b-0e0f-4304-8b7d-43a68d87dd5d-kube-api-access-w5vb4\") pod \"nova-cell0-conductor-db-sync-tw6sr\" (UID: \"d874f46b-0e0f-4304-8b7d-43a68d87dd5d\") " pod="openstack/nova-cell0-conductor-db-sync-tw6sr" Dec 06 05:56:02 crc kubenswrapper[4958]: I1206 05:56:02.485590 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d874f46b-0e0f-4304-8b7d-43a68d87dd5d-scripts\") pod \"nova-cell0-conductor-db-sync-tw6sr\" (UID: \"d874f46b-0e0f-4304-8b7d-43a68d87dd5d\") " pod="openstack/nova-cell0-conductor-db-sync-tw6sr" Dec 06 05:56:02 crc kubenswrapper[4958]: I1206 05:56:02.494907 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d874f46b-0e0f-4304-8b7d-43a68d87dd5d-scripts\") pod \"nova-cell0-conductor-db-sync-tw6sr\" (UID: \"d874f46b-0e0f-4304-8b7d-43a68d87dd5d\") " pod="openstack/nova-cell0-conductor-db-sync-tw6sr" Dec 06 05:56:02 crc kubenswrapper[4958]: I1206 05:56:02.496372 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d874f46b-0e0f-4304-8b7d-43a68d87dd5d-config-data\") pod \"nova-cell0-conductor-db-sync-tw6sr\" (UID: \"d874f46b-0e0f-4304-8b7d-43a68d87dd5d\") " pod="openstack/nova-cell0-conductor-db-sync-tw6sr" Dec 06 05:56:02 crc kubenswrapper[4958]: I1206 05:56:02.498890 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d874f46b-0e0f-4304-8b7d-43a68d87dd5d-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-tw6sr\" (UID: \"d874f46b-0e0f-4304-8b7d-43a68d87dd5d\") " pod="openstack/nova-cell0-conductor-db-sync-tw6sr" Dec 06 05:56:02 crc kubenswrapper[4958]: I1206 05:56:02.505534 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5vb4\" (UniqueName: \"kubernetes.io/projected/d874f46b-0e0f-4304-8b7d-43a68d87dd5d-kube-api-access-w5vb4\") pod \"nova-cell0-conductor-db-sync-tw6sr\" (UID: \"d874f46b-0e0f-4304-8b7d-43a68d87dd5d\") " pod="openstack/nova-cell0-conductor-db-sync-tw6sr" Dec 06 05:56:02 crc kubenswrapper[4958]: I1206 05:56:02.579067 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 06 05:56:02 crc kubenswrapper[4958]: I1206 05:56:02.633513 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-tw6sr" Dec 06 05:56:03 crc kubenswrapper[4958]: W1206 05:56:03.205221 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd874f46b_0e0f_4304_8b7d_43a68d87dd5d.slice/crio-a9ebca2656080156486c2dbb78b8e5eb616ee4983e55d4e4817462eb86eb57c7 WatchSource:0}: Error finding container a9ebca2656080156486c2dbb78b8e5eb616ee4983e55d4e4817462eb86eb57c7: Status 404 returned error can't find the container with id a9ebca2656080156486c2dbb78b8e5eb616ee4983e55d4e4817462eb86eb57c7 Dec 06 05:56:03 crc kubenswrapper[4958]: I1206 05:56:03.216071 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-tw6sr"] Dec 06 05:56:03 crc kubenswrapper[4958]: I1206 05:56:03.357069 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aeb4440b-3f18-4066-bb27-ec75e09208f9","Type":"ContainerStarted","Data":"47bbb8c1d726518f7c149b4c164f0469f702d4eb071e97f0d7009ed2de2d7ed5"} Dec 06 05:56:03 crc kubenswrapper[4958]: I1206 05:56:03.357117 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="aeb4440b-3f18-4066-bb27-ec75e09208f9" containerName="ceilometer-central-agent" containerID="cri-o://5f188daea121015bce08bd9d0bda43000eb7d11ffbbeb6bf9173f575809dcf5e" gracePeriod=30 Dec 06 05:56:03 crc kubenswrapper[4958]: I1206 05:56:03.357156 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 06 05:56:03 crc kubenswrapper[4958]: I1206 05:56:03.357177 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="aeb4440b-3f18-4066-bb27-ec75e09208f9" containerName="sg-core" containerID="cri-o://1bad73066a008d25c27243825f98ad8fb6175004a21dc8a8f8c582021806cabf" gracePeriod=30 Dec 06 05:56:03 crc kubenswrapper[4958]: I1206 05:56:03.357189 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="aeb4440b-3f18-4066-bb27-ec75e09208f9" containerName="proxy-httpd" containerID="cri-o://47bbb8c1d726518f7c149b4c164f0469f702d4eb071e97f0d7009ed2de2d7ed5" gracePeriod=30 Dec 06 05:56:03 crc kubenswrapper[4958]: I1206 05:56:03.357222 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="aeb4440b-3f18-4066-bb27-ec75e09208f9" containerName="ceilometer-notification-agent" containerID="cri-o://560b1acbb592eb7f2de4769063025e3f325a2d91df8b8fcd173327e395d7dec7" gracePeriod=30 Dec 06 05:56:03 crc kubenswrapper[4958]: I1206 05:56:03.359550 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-tw6sr" event={"ID":"d874f46b-0e0f-4304-8b7d-43a68d87dd5d","Type":"ContainerStarted","Data":"a9ebca2656080156486c2dbb78b8e5eb616ee4983e55d4e4817462eb86eb57c7"} Dec 06 05:56:03 crc kubenswrapper[4958]: I1206 05:56:03.387800 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.881033607 podStartE2EDuration="6.387781238s" podCreationTimestamp="2025-12-06 05:55:57 +0000 UTC" firstStartedPulling="2025-12-06 05:55:58.306441218 +0000 UTC m=+1668.840211971" lastFinishedPulling="2025-12-06 05:56:02.813188829 +0000 UTC m=+1673.346959602" observedRunningTime="2025-12-06 05:56:03.387279185 +0000 UTC m=+1673.921049958" watchObservedRunningTime="2025-12-06 05:56:03.387781238 +0000 
UTC m=+1673.921552001" Dec 06 05:56:04 crc kubenswrapper[4958]: I1206 05:56:04.374555 4958 generic.go:334] "Generic (PLEG): container finished" podID="aeb4440b-3f18-4066-bb27-ec75e09208f9" containerID="47bbb8c1d726518f7c149b4c164f0469f702d4eb071e97f0d7009ed2de2d7ed5" exitCode=0 Dec 06 05:56:04 crc kubenswrapper[4958]: I1206 05:56:04.374583 4958 generic.go:334] "Generic (PLEG): container finished" podID="aeb4440b-3f18-4066-bb27-ec75e09208f9" containerID="1bad73066a008d25c27243825f98ad8fb6175004a21dc8a8f8c582021806cabf" exitCode=2 Dec 06 05:56:04 crc kubenswrapper[4958]: I1206 05:56:04.374594 4958 generic.go:334] "Generic (PLEG): container finished" podID="aeb4440b-3f18-4066-bb27-ec75e09208f9" containerID="560b1acbb592eb7f2de4769063025e3f325a2d91df8b8fcd173327e395d7dec7" exitCode=0 Dec 06 05:56:04 crc kubenswrapper[4958]: I1206 05:56:04.374624 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aeb4440b-3f18-4066-bb27-ec75e09208f9","Type":"ContainerDied","Data":"47bbb8c1d726518f7c149b4c164f0469f702d4eb071e97f0d7009ed2de2d7ed5"} Dec 06 05:56:04 crc kubenswrapper[4958]: I1206 05:56:04.374663 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aeb4440b-3f18-4066-bb27-ec75e09208f9","Type":"ContainerDied","Data":"1bad73066a008d25c27243825f98ad8fb6175004a21dc8a8f8c582021806cabf"} Dec 06 05:56:04 crc kubenswrapper[4958]: I1206 05:56:04.374676 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aeb4440b-3f18-4066-bb27-ec75e09208f9","Type":"ContainerDied","Data":"560b1acbb592eb7f2de4769063025e3f325a2d91df8b8fcd173327e395d7dec7"} Dec 06 05:56:04 crc kubenswrapper[4958]: I1206 05:56:04.377010 4958 generic.go:334] "Generic (PLEG): container finished" podID="00f464ea-7983-4ab2-b2b1-07bf67c76e31" containerID="0ee33f13d8a587dadbec3566136aeffe22163de9bdcb7be217d5121e270b9643" exitCode=0 Dec 06 05:56:04 crc kubenswrapper[4958]: I1206 05:56:04.377041 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-2scr7" event={"ID":"00f464ea-7983-4ab2-b2b1-07bf67c76e31","Type":"ContainerDied","Data":"0ee33f13d8a587dadbec3566136aeffe22163de9bdcb7be217d5121e270b9643"} Dec 06 05:56:05 crc kubenswrapper[4958]: I1206 05:56:05.388004 4958 generic.go:334] "Generic (PLEG): container finished" podID="fee2c3d7-24fe-4966-878b-90147b8f5cfb" containerID="25420bf258570f8bfa0e1683097b68eb745bc3426ee0c7a15ed415e9b11b16a6" exitCode=0 Dec 06 05:56:05 crc kubenswrapper[4958]: I1206 05:56:05.388080 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-lmfxp" event={"ID":"fee2c3d7-24fe-4966-878b-90147b8f5cfb","Type":"ContainerDied","Data":"25420bf258570f8bfa0e1683097b68eb745bc3426ee0c7a15ed415e9b11b16a6"} Dec 06 05:56:05 crc kubenswrapper[4958]: I1206 05:56:05.747388 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-85d57ddd5d-pth8g" Dec 06 05:56:05 crc kubenswrapper[4958]: I1206 05:56:05.827089 4958 util.go:48] "No ready sandbox for pod can be found. 
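[editor's note] The pod_startup_latency_tracker entry for ceilometer-0 above is self-consistent: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration appears to be that end-to-end figure minus the image-pull window. The monotonic m=+... offsets it reports reproduce the numbers exactly, as this short check shows (all constants copied from the log line above):

```python
# Monotonic offsets (seconds) from the pod_startup_latency_tracker entry.
first_pull = 1668.840211971   # firstStartedPulling
last_pull  = 1673.346959602   # lastFinishedPulling
e2e        = 6.387781238      # podStartE2EDuration (observedRunningTime - podCreationTimestamp)

pull_time = last_pull - first_pull   # 4.506747631 s spent pulling images
slo = e2e - pull_time                # startup latency excluding image pulls
print(f"{slo:.9f}")                  # 1.881033607, matching podStartSLOduration
```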
Dec 06 05:56:05 crc kubenswrapper[4958]: I1206 05:56:05.961425 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/00f464ea-7983-4ab2-b2b1-07bf67c76e31-db-sync-config-data\") pod \"00f464ea-7983-4ab2-b2b1-07bf67c76e31\" (UID: \"00f464ea-7983-4ab2-b2b1-07bf67c76e31\") "
Dec 06 05:56:05 crc kubenswrapper[4958]: I1206 05:56:05.961537 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00f464ea-7983-4ab2-b2b1-07bf67c76e31-scripts\") pod \"00f464ea-7983-4ab2-b2b1-07bf67c76e31\" (UID: \"00f464ea-7983-4ab2-b2b1-07bf67c76e31\") "
Dec 06 05:56:05 crc kubenswrapper[4958]: I1206 05:56:05.961580 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00f464ea-7983-4ab2-b2b1-07bf67c76e31-combined-ca-bundle\") pod \"00f464ea-7983-4ab2-b2b1-07bf67c76e31\" (UID: \"00f464ea-7983-4ab2-b2b1-07bf67c76e31\") "
Dec 06 05:56:05 crc kubenswrapper[4958]: I1206 05:56:05.961643 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00f464ea-7983-4ab2-b2b1-07bf67c76e31-config-data\") pod \"00f464ea-7983-4ab2-b2b1-07bf67c76e31\" (UID: \"00f464ea-7983-4ab2-b2b1-07bf67c76e31\") "
Dec 06 05:56:05 crc kubenswrapper[4958]: I1206 05:56:05.961677 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5kss\" (UniqueName: \"kubernetes.io/projected/00f464ea-7983-4ab2-b2b1-07bf67c76e31-kube-api-access-m5kss\") pod \"00f464ea-7983-4ab2-b2b1-07bf67c76e31\" (UID: \"00f464ea-7983-4ab2-b2b1-07bf67c76e31\") "
Dec 06 05:56:05 crc kubenswrapper[4958]: I1206 05:56:05.961703 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/00f464ea-7983-4ab2-b2b1-07bf67c76e31-etc-machine-id\") pod \"00f464ea-7983-4ab2-b2b1-07bf67c76e31\" (UID: \"00f464ea-7983-4ab2-b2b1-07bf67c76e31\") "
Dec 06 05:56:05 crc kubenswrapper[4958]: I1206 05:56:05.963201 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/00f464ea-7983-4ab2-b2b1-07bf67c76e31-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "00f464ea-7983-4ab2-b2b1-07bf67c76e31" (UID: "00f464ea-7983-4ab2-b2b1-07bf67c76e31"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 06 05:56:05 crc kubenswrapper[4958]: I1206 05:56:05.968132 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00f464ea-7983-4ab2-b2b1-07bf67c76e31-scripts" (OuterVolumeSpecName: "scripts") pod "00f464ea-7983-4ab2-b2b1-07bf67c76e31" (UID: "00f464ea-7983-4ab2-b2b1-07bf67c76e31"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 05:56:05 crc kubenswrapper[4958]: I1206 05:56:05.973351 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00f464ea-7983-4ab2-b2b1-07bf67c76e31-kube-api-access-m5kss" (OuterVolumeSpecName: "kube-api-access-m5kss") pod "00f464ea-7983-4ab2-b2b1-07bf67c76e31" (UID: "00f464ea-7983-4ab2-b2b1-07bf67c76e31"). InnerVolumeSpecName "kube-api-access-m5kss". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 05:56:05 crc kubenswrapper[4958]: I1206 05:56:05.981431 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00f464ea-7983-4ab2-b2b1-07bf67c76e31-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "00f464ea-7983-4ab2-b2b1-07bf67c76e31" (UID: "00f464ea-7983-4ab2-b2b1-07bf67c76e31"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 05:56:05 crc kubenswrapper[4958]: I1206 05:56:05.982069 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-85d57ddd5d-pth8g"
Dec 06 05:56:06 crc kubenswrapper[4958]: I1206 05:56:06.020889 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00f464ea-7983-4ab2-b2b1-07bf67c76e31-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "00f464ea-7983-4ab2-b2b1-07bf67c76e31" (UID: "00f464ea-7983-4ab2-b2b1-07bf67c76e31"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 05:56:06 crc kubenswrapper[4958]: I1206 05:56:06.063943 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00f464ea-7983-4ab2-b2b1-07bf67c76e31-scripts\") on node \"crc\" DevicePath \"\""
Dec 06 05:56:06 crc kubenswrapper[4958]: I1206 05:56:06.063981 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00f464ea-7983-4ab2-b2b1-07bf67c76e31-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 06 05:56:06 crc kubenswrapper[4958]: I1206 05:56:06.063995 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m5kss\" (UniqueName: \"kubernetes.io/projected/00f464ea-7983-4ab2-b2b1-07bf67c76e31-kube-api-access-m5kss\") on node \"crc\" DevicePath \"\""
Dec 06 05:56:06 crc kubenswrapper[4958]: I1206 05:56:06.064006 4958 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/00f464ea-7983-4ab2-b2b1-07bf67c76e31-etc-machine-id\") on node \"crc\" DevicePath \"\""
Dec 06 05:56:06 crc kubenswrapper[4958]: I1206 05:56:06.064014 4958 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/00f464ea-7983-4ab2-b2b1-07bf67c76e31-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Dec 06 05:56:06 crc kubenswrapper[4958]: I1206 05:56:06.068282 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-76c7c6dc4b-9jzd4"]
Dec 06 05:56:06 crc kubenswrapper[4958]: I1206 05:56:06.069012 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-76c7c6dc4b-9jzd4" podUID="b8c45202-b5a4-47bd-9754-bc5ae17b1208" containerName="barbican-api-log" containerID="cri-o://a47f761466377e6fe42c23e78bba0822d5a1f0e659b7c891dc1c3b002a6a246b" gracePeriod=30
Dec 06 05:56:06 crc kubenswrapper[4958]: I1206 05:56:06.068404 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00f464ea-7983-4ab2-b2b1-07bf67c76e31-config-data" (OuterVolumeSpecName: "config-data") pod "00f464ea-7983-4ab2-b2b1-07bf67c76e31" (UID: "00f464ea-7983-4ab2-b2b1-07bf67c76e31"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 05:56:06 crc kubenswrapper[4958]: I1206 05:56:06.069852 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-76c7c6dc4b-9jzd4" podUID="b8c45202-b5a4-47bd-9754-bc5ae17b1208" containerName="barbican-api" containerID="cri-o://d4bf75dfb957bd22b663ebd59e2e2b74ee4d61ca34eff55b54c822e146631dc2" gracePeriod=30
Dec 06 05:56:06 crc kubenswrapper[4958]: I1206 05:56:06.166924 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00f464ea-7983-4ab2-b2b1-07bf67c76e31-config-data\") on node \"crc\" DevicePath \"\""
Dec 06 05:56:06 crc kubenswrapper[4958]: I1206 05:56:06.413968 4958 generic.go:334] "Generic (PLEG): container finished" podID="b8c45202-b5a4-47bd-9754-bc5ae17b1208" containerID="a47f761466377e6fe42c23e78bba0822d5a1f0e659b7c891dc1c3b002a6a246b" exitCode=143
Dec 06 05:56:06 crc kubenswrapper[4958]: I1206 05:56:06.414045 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-76c7c6dc4b-9jzd4" event={"ID":"b8c45202-b5a4-47bd-9754-bc5ae17b1208","Type":"ContainerDied","Data":"a47f761466377e6fe42c23e78bba0822d5a1f0e659b7c891dc1c3b002a6a246b"}
Dec 06 05:56:06 crc kubenswrapper[4958]: I1206 05:56:06.428868 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-2scr7"
Dec 06 05:56:06 crc kubenswrapper[4958]: I1206 05:56:06.430595 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-2scr7" event={"ID":"00f464ea-7983-4ab2-b2b1-07bf67c76e31","Type":"ContainerDied","Data":"f65ed53fce799899a0065acb405f0661507c6f062717606d4673d2c739f139c3"}
Dec 06 05:56:06 crc kubenswrapper[4958]: I1206 05:56:06.430635 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f65ed53fce799899a0065acb405f0661507c6f062717606d4673d2c739f139c3"
Dec 06 05:56:06 crc kubenswrapper[4958]: I1206 05:56:06.633243 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"]
Dec 06 05:56:06 crc kubenswrapper[4958]: E1206 05:56:06.636968 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00f464ea-7983-4ab2-b2b1-07bf67c76e31" containerName="cinder-db-sync"
Dec 06 05:56:06 crc kubenswrapper[4958]: I1206 05:56:06.637049 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="00f464ea-7983-4ab2-b2b1-07bf67c76e31" containerName="cinder-db-sync"
Dec 06 05:56:06 crc kubenswrapper[4958]: I1206 05:56:06.637828 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="00f464ea-7983-4ab2-b2b1-07bf67c76e31" containerName="cinder-db-sync"
Dec 06 05:56:06 crc kubenswrapper[4958]: I1206 05:56:06.638891 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Dec 06 05:56:06 crc kubenswrapper[4958]: I1206 05:56:06.648126 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-nphrn"
Dec 06 05:56:06 crc kubenswrapper[4958]: I1206 05:56:06.653905 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts"
Dec 06 05:56:06 crc kubenswrapper[4958]: I1206 05:56:06.654425 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data"
Dec 06 05:56:06 crc kubenswrapper[4958]: I1206 05:56:06.654461 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Dec 06 05:56:06 crc kubenswrapper[4958]: I1206 05:56:06.683493 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Dec 06 05:56:06 crc kubenswrapper[4958]: I1206 05:56:06.694752 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a7d1df9-4e7c-4d3c-b287-d70070167c9a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"6a7d1df9-4e7c-4d3c-b287-d70070167c9a\") " pod="openstack/cinder-scheduler-0"
Dec 06 05:56:06 crc kubenswrapper[4958]: I1206 05:56:06.694790 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6a7d1df9-4e7c-4d3c-b287-d70070167c9a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"6a7d1df9-4e7c-4d3c-b287-d70070167c9a\") " pod="openstack/cinder-scheduler-0"
Dec 06 05:56:06 crc kubenswrapper[4958]: I1206 05:56:06.694816 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a7d1df9-4e7c-4d3c-b287-d70070167c9a-config-data\") pod \"cinder-scheduler-0\" (UID: \"6a7d1df9-4e7c-4d3c-b287-d70070167c9a\") " pod="openstack/cinder-scheduler-0"
Dec 06 05:56:06 crc kubenswrapper[4958]: I1206 05:56:06.694855 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a7d1df9-4e7c-4d3c-b287-d70070167c9a-scripts\") pod \"cinder-scheduler-0\" (UID: \"6a7d1df9-4e7c-4d3c-b287-d70070167c9a\") " pod="openstack/cinder-scheduler-0"
Dec 06 05:56:06 crc kubenswrapper[4958]: I1206 05:56:06.694881 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6a7d1df9-4e7c-4d3c-b287-d70070167c9a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"6a7d1df9-4e7c-4d3c-b287-d70070167c9a\") " pod="openstack/cinder-scheduler-0"
Dec 06 05:56:06 crc kubenswrapper[4958]: I1206 05:56:06.694914 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57br7\" (UniqueName: \"kubernetes.io/projected/6a7d1df9-4e7c-4d3c-b287-d70070167c9a-kube-api-access-57br7\") pod \"cinder-scheduler-0\" (UID: \"6a7d1df9-4e7c-4d3c-b287-d70070167c9a\") " pod="openstack/cinder-scheduler-0"
Dec 06 05:56:06 crc kubenswrapper[4958]: I1206 05:56:06.725876 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5cd5fc4d55-7ks57"]
Dec 06 05:56:06 crc kubenswrapper[4958]: I1206 05:56:06.727423 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5cd5fc4d55-7ks57"
Dec 06 05:56:06 crc kubenswrapper[4958]: I1206 05:56:06.764242 4958 scope.go:117] "RemoveContainer" containerID="13d78c5d489ba01cdb45efd404a73638f5106ef6ecd2a0166cc67db5a8883d7f"
Dec 06 05:56:06 crc kubenswrapper[4958]: I1206 05:56:06.770369 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5cd5fc4d55-7ks57"]
Dec 06 05:56:06 crc kubenswrapper[4958]: I1206 05:56:06.797823 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/74c7564e-2945-4712-9087-f275ca61881b-dns-svc\") pod \"dnsmasq-dns-5cd5fc4d55-7ks57\" (UID: \"74c7564e-2945-4712-9087-f275ca61881b\") " pod="openstack/dnsmasq-dns-5cd5fc4d55-7ks57"
Dec 06 05:56:06 crc kubenswrapper[4958]: I1206 05:56:06.797875 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a7d1df9-4e7c-4d3c-b287-d70070167c9a-scripts\") pod \"cinder-scheduler-0\" (UID: \"6a7d1df9-4e7c-4d3c-b287-d70070167c9a\") " pod="openstack/cinder-scheduler-0"
Dec 06 05:56:06 crc kubenswrapper[4958]: I1206 05:56:06.797901 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6a7d1df9-4e7c-4d3c-b287-d70070167c9a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"6a7d1df9-4e7c-4d3c-b287-d70070167c9a\") " pod="openstack/cinder-scheduler-0"
Dec 06 05:56:06 crc kubenswrapper[4958]: I1206 05:56:06.797933 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/74c7564e-2945-4712-9087-f275ca61881b-dns-swift-storage-0\") pod \"dnsmasq-dns-5cd5fc4d55-7ks57\" (UID: \"74c7564e-2945-4712-9087-f275ca61881b\") " pod="openstack/dnsmasq-dns-5cd5fc4d55-7ks57"
Dec 06 05:56:06 crc kubenswrapper[4958]: I1206 05:56:06.797973 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57br7\" (UniqueName: \"kubernetes.io/projected/6a7d1df9-4e7c-4d3c-b287-d70070167c9a-kube-api-access-57br7\") pod \"cinder-scheduler-0\" (UID: \"6a7d1df9-4e7c-4d3c-b287-d70070167c9a\") " pod="openstack/cinder-scheduler-0"
Dec 06 05:56:06 crc kubenswrapper[4958]: I1206 05:56:06.798045 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfd6c\" (UniqueName: \"kubernetes.io/projected/74c7564e-2945-4712-9087-f275ca61881b-kube-api-access-dfd6c\") pod \"dnsmasq-dns-5cd5fc4d55-7ks57\" (UID: \"74c7564e-2945-4712-9087-f275ca61881b\") " pod="openstack/dnsmasq-dns-5cd5fc4d55-7ks57"
Dec 06 05:56:06 crc kubenswrapper[4958]: I1206 05:56:06.798088 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/74c7564e-2945-4712-9087-f275ca61881b-ovsdbserver-sb\") pod \"dnsmasq-dns-5cd5fc4d55-7ks57\" (UID: \"74c7564e-2945-4712-9087-f275ca61881b\") " pod="openstack/dnsmasq-dns-5cd5fc4d55-7ks57"
Dec 06 05:56:06 crc kubenswrapper[4958]: I1206 05:56:06.798137 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a7d1df9-4e7c-4d3c-b287-d70070167c9a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"6a7d1df9-4e7c-4d3c-b287-d70070167c9a\") " pod="openstack/cinder-scheduler-0"
Dec 06 05:56:06 crc kubenswrapper[4958]: I1206 05:56:06.798162 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6a7d1df9-4e7c-4d3c-b287-d70070167c9a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"6a7d1df9-4e7c-4d3c-b287-d70070167c9a\") " pod="openstack/cinder-scheduler-0"
Dec 06 05:56:06 crc kubenswrapper[4958]: I1206 05:56:06.798180 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/74c7564e-2945-4712-9087-f275ca61881b-ovsdbserver-nb\") pod \"dnsmasq-dns-5cd5fc4d55-7ks57\" (UID: \"74c7564e-2945-4712-9087-f275ca61881b\") " pod="openstack/dnsmasq-dns-5cd5fc4d55-7ks57"
Dec 06 05:56:06 crc kubenswrapper[4958]: I1206 05:56:06.798196 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74c7564e-2945-4712-9087-f275ca61881b-config\") pod \"dnsmasq-dns-5cd5fc4d55-7ks57\" (UID: \"74c7564e-2945-4712-9087-f275ca61881b\") " pod="openstack/dnsmasq-dns-5cd5fc4d55-7ks57"
Dec 06 05:56:06 crc kubenswrapper[4958]: I1206 05:56:06.798217 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a7d1df9-4e7c-4d3c-b287-d70070167c9a-config-data\") pod \"cinder-scheduler-0\" (UID: \"6a7d1df9-4e7c-4d3c-b287-d70070167c9a\") " pod="openstack/cinder-scheduler-0"
Dec 06 05:56:06 crc kubenswrapper[4958]: I1206 05:56:06.803792 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6a7d1df9-4e7c-4d3c-b287-d70070167c9a-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"6a7d1df9-4e7c-4d3c-b287-d70070167c9a\") " pod="openstack/cinder-scheduler-0"
Dec 06 05:56:06 crc kubenswrapper[4958]: I1206 05:56:06.830532 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a7d1df9-4e7c-4d3c-b287-d70070167c9a-scripts\") pod \"cinder-scheduler-0\" (UID: \"6a7d1df9-4e7c-4d3c-b287-d70070167c9a\") " pod="openstack/cinder-scheduler-0"
Dec 06 05:56:06 crc kubenswrapper[4958]: I1206 05:56:06.834913 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a7d1df9-4e7c-4d3c-b287-d70070167c9a-config-data\") pod \"cinder-scheduler-0\" (UID: \"6a7d1df9-4e7c-4d3c-b287-d70070167c9a\") " pod="openstack/cinder-scheduler-0"
Dec 06 05:56:06 crc kubenswrapper[4958]: I1206 05:56:06.848082 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6a7d1df9-4e7c-4d3c-b287-d70070167c9a-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"6a7d1df9-4e7c-4d3c-b287-d70070167c9a\") " pod="openstack/cinder-scheduler-0"
Dec 06 05:56:06 crc kubenswrapper[4958]: I1206 05:56:06.848972 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a7d1df9-4e7c-4d3c-b287-d70070167c9a-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"6a7d1df9-4e7c-4d3c-b287-d70070167c9a\") " pod="openstack/cinder-scheduler-0"
Dec 06 05:56:06 crc kubenswrapper[4958]: I1206 05:56:06.909050 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfd6c\" (UniqueName: \"kubernetes.io/projected/74c7564e-2945-4712-9087-f275ca61881b-kube-api-access-dfd6c\") pod \"dnsmasq-dns-5cd5fc4d55-7ks57\" (UID: \"74c7564e-2945-4712-9087-f275ca61881b\") " pod="openstack/dnsmasq-dns-5cd5fc4d55-7ks57"
Dec 06 05:56:06 crc kubenswrapper[4958]: I1206 05:56:06.919643 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/74c7564e-2945-4712-9087-f275ca61881b-ovsdbserver-sb\") pod \"dnsmasq-dns-5cd5fc4d55-7ks57\" (UID: \"74c7564e-2945-4712-9087-f275ca61881b\") " pod="openstack/dnsmasq-dns-5cd5fc4d55-7ks57"
Dec 06 05:56:06 crc kubenswrapper[4958]: I1206 05:56:06.926694 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/74c7564e-2945-4712-9087-f275ca61881b-ovsdbserver-nb\") pod \"dnsmasq-dns-5cd5fc4d55-7ks57\" (UID: \"74c7564e-2945-4712-9087-f275ca61881b\") " pod="openstack/dnsmasq-dns-5cd5fc4d55-7ks57"
Dec 06 05:56:06 crc kubenswrapper[4958]: I1206 05:56:06.926915 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74c7564e-2945-4712-9087-f275ca61881b-config\") pod \"dnsmasq-dns-5cd5fc4d55-7ks57\" (UID: \"74c7564e-2945-4712-9087-f275ca61881b\") " pod="openstack/dnsmasq-dns-5cd5fc4d55-7ks57"
Dec 06 05:56:06 crc kubenswrapper[4958]: I1206 05:56:06.927036 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/74c7564e-2945-4712-9087-f275ca61881b-dns-svc\") pod \"dnsmasq-dns-5cd5fc4d55-7ks57\" (UID: \"74c7564e-2945-4712-9087-f275ca61881b\") " pod="openstack/dnsmasq-dns-5cd5fc4d55-7ks57"
Dec 06 05:56:06 crc kubenswrapper[4958]: I1206 05:56:06.927145 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/74c7564e-2945-4712-9087-f275ca61881b-dns-swift-storage-0\") pod \"dnsmasq-dns-5cd5fc4d55-7ks57\" (UID: \"74c7564e-2945-4712-9087-f275ca61881b\") " pod="openstack/dnsmasq-dns-5cd5fc4d55-7ks57"
Dec 06 05:56:06 crc kubenswrapper[4958]: I1206 05:56:06.927995 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74c7564e-2945-4712-9087-f275ca61881b-config\") pod \"dnsmasq-dns-5cd5fc4d55-7ks57\" (UID: \"74c7564e-2945-4712-9087-f275ca61881b\") " pod="openstack/dnsmasq-dns-5cd5fc4d55-7ks57"
Dec 06 05:56:06 crc kubenswrapper[4958]: I1206 05:56:06.926515 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/74c7564e-2945-4712-9087-f275ca61881b-ovsdbserver-sb\") pod \"dnsmasq-dns-5cd5fc4d55-7ks57\" (UID: \"74c7564e-2945-4712-9087-f275ca61881b\") " pod="openstack/dnsmasq-dns-5cd5fc4d55-7ks57"
Dec 06 05:56:06 crc kubenswrapper[4958]: I1206 05:56:06.912528 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57br7\" (UniqueName: \"kubernetes.io/projected/6a7d1df9-4e7c-4d3c-b287-d70070167c9a-kube-api-access-57br7\") pod \"cinder-scheduler-0\" (UID: \"6a7d1df9-4e7c-4d3c-b287-d70070167c9a\") " pod="openstack/cinder-scheduler-0"
Dec 06 05:56:06 crc kubenswrapper[4958]: I1206 05:56:06.927503 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/74c7564e-2945-4712-9087-f275ca61881b-ovsdbserver-nb\") pod \"dnsmasq-dns-5cd5fc4d55-7ks57\"
(UID: \"74c7564e-2945-4712-9087-f275ca61881b\") " pod="openstack/dnsmasq-dns-5cd5fc4d55-7ks57" Dec 06 05:56:06 crc kubenswrapper[4958]: I1206 05:56:06.928789 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/74c7564e-2945-4712-9087-f275ca61881b-dns-swift-storage-0\") pod \"dnsmasq-dns-5cd5fc4d55-7ks57\" (UID: \"74c7564e-2945-4712-9087-f275ca61881b\") " pod="openstack/dnsmasq-dns-5cd5fc4d55-7ks57" Dec 06 05:56:06 crc kubenswrapper[4958]: I1206 05:56:06.929278 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/74c7564e-2945-4712-9087-f275ca61881b-dns-svc\") pod \"dnsmasq-dns-5cd5fc4d55-7ks57\" (UID: \"74c7564e-2945-4712-9087-f275ca61881b\") " pod="openstack/dnsmasq-dns-5cd5fc4d55-7ks57" Dec 06 05:56:06 crc kubenswrapper[4958]: I1206 05:56:06.967923 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Dec 06 05:56:06 crc kubenswrapper[4958]: I1206 05:56:06.969516 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 06 05:56:06 crc kubenswrapper[4958]: I1206 05:56:06.975826 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Dec 06 05:56:07 crc kubenswrapper[4958]: I1206 05:56:07.024628 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 06 05:56:07 crc kubenswrapper[4958]: I1206 05:56:07.051462 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/29f48971-668e-4d31-b0a1-b3c5088c3130-config-data-custom\") pod \"cinder-api-0\" (UID: \"29f48971-668e-4d31-b0a1-b3c5088c3130\") " pod="openstack/cinder-api-0" Dec 06 05:56:07 crc kubenswrapper[4958]: I1206 05:56:07.051533 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29f48971-668e-4d31-b0a1-b3c5088c3130-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"29f48971-668e-4d31-b0a1-b3c5088c3130\") " pod="openstack/cinder-api-0" Dec 06 05:56:07 crc kubenswrapper[4958]: I1206 05:56:07.051565 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29f48971-668e-4d31-b0a1-b3c5088c3130-logs\") pod \"cinder-api-0\" (UID: \"29f48971-668e-4d31-b0a1-b3c5088c3130\") " pod="openstack/cinder-api-0" Dec 06 05:56:07 crc kubenswrapper[4958]: I1206 05:56:07.051600 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4c7gq\" (UniqueName: \"kubernetes.io/projected/29f48971-668e-4d31-b0a1-b3c5088c3130-kube-api-access-4c7gq\") pod \"cinder-api-0\" (UID: \"29f48971-668e-4d31-b0a1-b3c5088c3130\") " pod="openstack/cinder-api-0" Dec 06 05:56:07 crc kubenswrapper[4958]: I1206 05:56:07.051627 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29f48971-668e-4d31-b0a1-b3c5088c3130-config-data\") pod \"cinder-api-0\" (UID: \"29f48971-668e-4d31-b0a1-b3c5088c3130\") " pod="openstack/cinder-api-0" Dec 06 05:56:07 crc kubenswrapper[4958]: I1206 05:56:07.052059 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/29f48971-668e-4d31-b0a1-b3c5088c3130-etc-machine-id\") pod \"cinder-api-0\" (UID: \"29f48971-668e-4d31-b0a1-b3c5088c3130\") " pod="openstack/cinder-api-0" Dec 06 05:56:07 crc kubenswrapper[4958]: I1206 05:56:07.052084 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29f48971-668e-4d31-b0a1-b3c5088c3130-scripts\") pod \"cinder-api-0\" (UID: \"29f48971-668e-4d31-b0a1-b3c5088c3130\") " pod="openstack/cinder-api-0" Dec 06 05:56:07 crc kubenswrapper[4958]: I1206 05:56:07.082075 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfd6c\" (UniqueName: \"kubernetes.io/projected/74c7564e-2945-4712-9087-f275ca61881b-kube-api-access-dfd6c\") pod \"dnsmasq-dns-5cd5fc4d55-7ks57\" (UID: \"74c7564e-2945-4712-9087-f275ca61881b\") " pod="openstack/dnsmasq-dns-5cd5fc4d55-7ks57" Dec 06 05:56:07 crc kubenswrapper[4958]: I1206 05:56:07.106524 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 06 05:56:07 crc kubenswrapper[4958]: I1206 05:56:07.157561 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/29f48971-668e-4d31-b0a1-b3c5088c3130-etc-machine-id\") pod \"cinder-api-0\" (UID: \"29f48971-668e-4d31-b0a1-b3c5088c3130\") " pod="openstack/cinder-api-0" Dec 06 05:56:07 crc kubenswrapper[4958]: I1206 05:56:07.157605 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29f48971-668e-4d31-b0a1-b3c5088c3130-scripts\") pod \"cinder-api-0\" (UID: \"29f48971-668e-4d31-b0a1-b3c5088c3130\") " pod="openstack/cinder-api-0" Dec 06 05:56:07 crc kubenswrapper[4958]: I1206 05:56:07.157696 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/29f48971-668e-4d31-b0a1-b3c5088c3130-config-data-custom\") pod \"cinder-api-0\" (UID: \"29f48971-668e-4d31-b0a1-b3c5088c3130\") " pod="openstack/cinder-api-0" Dec 06 05:56:07 crc kubenswrapper[4958]: I1206 05:56:07.157720 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29f48971-668e-4d31-b0a1-b3c5088c3130-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"29f48971-668e-4d31-b0a1-b3c5088c3130\") " pod="openstack/cinder-api-0" Dec 06 05:56:07 crc kubenswrapper[4958]: I1206 05:56:07.157738 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29f48971-668e-4d31-b0a1-b3c5088c3130-logs\") pod \"cinder-api-0\" (UID: \"29f48971-668e-4d31-b0a1-b3c5088c3130\") " pod="openstack/cinder-api-0" Dec 06 05:56:07 crc kubenswrapper[4958]: I1206 05:56:07.157762 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4c7gq\" (UniqueName: \"kubernetes.io/projected/29f48971-668e-4d31-b0a1-b3c5088c3130-kube-api-access-4c7gq\") pod \"cinder-api-0\" (UID: \"29f48971-668e-4d31-b0a1-b3c5088c3130\") " pod="openstack/cinder-api-0" Dec 06 05:56:07 crc kubenswrapper[4958]: I1206 05:56:07.157784 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29f48971-668e-4d31-b0a1-b3c5088c3130-config-data\") pod \"cinder-api-0\" (UID: 
\"29f48971-668e-4d31-b0a1-b3c5088c3130\") " pod="openstack/cinder-api-0" Dec 06 05:56:07 crc kubenswrapper[4958]: I1206 05:56:07.158575 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/29f48971-668e-4d31-b0a1-b3c5088c3130-etc-machine-id\") pod \"cinder-api-0\" (UID: \"29f48971-668e-4d31-b0a1-b3c5088c3130\") " pod="openstack/cinder-api-0" Dec 06 05:56:07 crc kubenswrapper[4958]: I1206 05:56:07.174322 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29f48971-668e-4d31-b0a1-b3c5088c3130-logs\") pod \"cinder-api-0\" (UID: \"29f48971-668e-4d31-b0a1-b3c5088c3130\") " pod="openstack/cinder-api-0" Dec 06 05:56:07 crc kubenswrapper[4958]: I1206 05:56:07.174911 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/29f48971-668e-4d31-b0a1-b3c5088c3130-config-data-custom\") pod \"cinder-api-0\" (UID: \"29f48971-668e-4d31-b0a1-b3c5088c3130\") " pod="openstack/cinder-api-0" Dec 06 05:56:07 crc kubenswrapper[4958]: I1206 05:56:07.176281 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29f48971-668e-4d31-b0a1-b3c5088c3130-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"29f48971-668e-4d31-b0a1-b3c5088c3130\") " pod="openstack/cinder-api-0" Dec 06 05:56:07 crc kubenswrapper[4958]: I1206 05:56:07.182380 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29f48971-668e-4d31-b0a1-b3c5088c3130-scripts\") pod \"cinder-api-0\" (UID: \"29f48971-668e-4d31-b0a1-b3c5088c3130\") " pod="openstack/cinder-api-0" Dec 06 05:56:07 crc kubenswrapper[4958]: I1206 05:56:07.186665 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29f48971-668e-4d31-b0a1-b3c5088c3130-config-data\") pod \"cinder-api-0\" (UID: \"29f48971-668e-4d31-b0a1-b3c5088c3130\") " pod="openstack/cinder-api-0" Dec 06 05:56:07 crc kubenswrapper[4958]: I1206 05:56:07.209091 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4c7gq\" (UniqueName: \"kubernetes.io/projected/29f48971-668e-4d31-b0a1-b3c5088c3130-kube-api-access-4c7gq\") pod \"cinder-api-0\" (UID: \"29f48971-668e-4d31-b0a1-b3c5088c3130\") " pod="openstack/cinder-api-0" Dec 06 05:56:07 crc kubenswrapper[4958]: I1206 05:56:07.223337 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5cd5fc4d55-7ks57" Dec 06 05:56:07 crc kubenswrapper[4958]: I1206 05:56:07.299014 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 06 05:56:07 crc kubenswrapper[4958]: I1206 05:56:07.424973 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-lmfxp" Dec 06 05:56:07 crc kubenswrapper[4958]: I1206 05:56:07.472286 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7wcwz\" (UniqueName: \"kubernetes.io/projected/fee2c3d7-24fe-4966-878b-90147b8f5cfb-kube-api-access-7wcwz\") pod \"fee2c3d7-24fe-4966-878b-90147b8f5cfb\" (UID: \"fee2c3d7-24fe-4966-878b-90147b8f5cfb\") " Dec 06 05:56:07 crc kubenswrapper[4958]: I1206 05:56:07.472435 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fee2c3d7-24fe-4966-878b-90147b8f5cfb-combined-ca-bundle\") pod \"fee2c3d7-24fe-4966-878b-90147b8f5cfb\" (UID: \"fee2c3d7-24fe-4966-878b-90147b8f5cfb\") " Dec 06 05:56:07 crc kubenswrapper[4958]: I1206 05:56:07.472518 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/fee2c3d7-24fe-4966-878b-90147b8f5cfb-config\") pod \"fee2c3d7-24fe-4966-878b-90147b8f5cfb\" (UID: \"fee2c3d7-24fe-4966-878b-90147b8f5cfb\") " Dec 06 05:56:07 crc kubenswrapper[4958]: I1206 05:56:07.482994 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fee2c3d7-24fe-4966-878b-90147b8f5cfb-kube-api-access-7wcwz" (OuterVolumeSpecName: "kube-api-access-7wcwz") pod "fee2c3d7-24fe-4966-878b-90147b8f5cfb" (UID: "fee2c3d7-24fe-4966-878b-90147b8f5cfb"). InnerVolumeSpecName "kube-api-access-7wcwz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:56:07 crc kubenswrapper[4958]: I1206 05:56:07.489127 4958 generic.go:334] "Generic (PLEG): container finished" podID="aeb4440b-3f18-4066-bb27-ec75e09208f9" containerID="5f188daea121015bce08bd9d0bda43000eb7d11ffbbeb6bf9173f575809dcf5e" exitCode=0 Dec 06 05:56:07 crc kubenswrapper[4958]: I1206 05:56:07.489205 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aeb4440b-3f18-4066-bb27-ec75e09208f9","Type":"ContainerDied","Data":"5f188daea121015bce08bd9d0bda43000eb7d11ffbbeb6bf9173f575809dcf5e"} Dec 06 05:56:07 crc kubenswrapper[4958]: I1206 05:56:07.516912 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-lmfxp" event={"ID":"fee2c3d7-24fe-4966-878b-90147b8f5cfb","Type":"ContainerDied","Data":"789b3f882c05d428259f945f97d432b83c0b8643a3ee6ba9f50a89f6e0f98d76"} Dec 06 05:56:07 crc kubenswrapper[4958]: I1206 05:56:07.516953 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="789b3f882c05d428259f945f97d432b83c0b8643a3ee6ba9f50a89f6e0f98d76" Dec 06 05:56:07 crc kubenswrapper[4958]: I1206 05:56:07.517011 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-lmfxp" Dec 06 05:56:07 crc kubenswrapper[4958]: I1206 05:56:07.528661 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fee2c3d7-24fe-4966-878b-90147b8f5cfb-config" (OuterVolumeSpecName: "config") pod "fee2c3d7-24fe-4966-878b-90147b8f5cfb" (UID: "fee2c3d7-24fe-4966-878b-90147b8f5cfb"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:56:07 crc kubenswrapper[4958]: I1206 05:56:07.549805 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fee2c3d7-24fe-4966-878b-90147b8f5cfb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fee2c3d7-24fe-4966-878b-90147b8f5cfb" (UID: "fee2c3d7-24fe-4966-878b-90147b8f5cfb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:56:07 crc kubenswrapper[4958]: I1206 05:56:07.577888 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/fee2c3d7-24fe-4966-878b-90147b8f5cfb-config\") on node \"crc\" DevicePath \"\"" Dec 06 05:56:07 crc kubenswrapper[4958]: I1206 05:56:07.577921 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7wcwz\" (UniqueName: \"kubernetes.io/projected/fee2c3d7-24fe-4966-878b-90147b8f5cfb-kube-api-access-7wcwz\") on node \"crc\" DevicePath \"\"" Dec 06 05:56:07 crc kubenswrapper[4958]: I1206 05:56:07.577932 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fee2c3d7-24fe-4966-878b-90147b8f5cfb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 05:56:07 crc kubenswrapper[4958]: I1206 05:56:07.882478 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 06 05:56:07 crc kubenswrapper[4958]: I1206 05:56:07.985704 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrhm6\" (UniqueName: \"kubernetes.io/projected/aeb4440b-3f18-4066-bb27-ec75e09208f9-kube-api-access-zrhm6\") pod \"aeb4440b-3f18-4066-bb27-ec75e09208f9\" (UID: \"aeb4440b-3f18-4066-bb27-ec75e09208f9\") " Dec 06 05:56:07 crc kubenswrapper[4958]: I1206 05:56:07.985769 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aeb4440b-3f18-4066-bb27-ec75e09208f9-log-httpd\") pod \"aeb4440b-3f18-4066-bb27-ec75e09208f9\" (UID: \"aeb4440b-3f18-4066-bb27-ec75e09208f9\") " Dec 06 05:56:07 crc kubenswrapper[4958]: I1206 05:56:07.985879 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aeb4440b-3f18-4066-bb27-ec75e09208f9-sg-core-conf-yaml\") pod \"aeb4440b-3f18-4066-bb27-ec75e09208f9\" (UID: \"aeb4440b-3f18-4066-bb27-ec75e09208f9\") " Dec 06 05:56:07 crc kubenswrapper[4958]: I1206 05:56:07.986015 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aeb4440b-3f18-4066-bb27-ec75e09208f9-run-httpd\") pod \"aeb4440b-3f18-4066-bb27-ec75e09208f9\" (UID: \"aeb4440b-3f18-4066-bb27-ec75e09208f9\") " Dec 06 05:56:07 crc kubenswrapper[4958]: I1206 05:56:07.986085 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aeb4440b-3f18-4066-bb27-ec75e09208f9-combined-ca-bundle\") pod \"aeb4440b-3f18-4066-bb27-ec75e09208f9\" (UID: \"aeb4440b-3f18-4066-bb27-ec75e09208f9\") " Dec 06 05:56:07 crc kubenswrapper[4958]: I1206 05:56:07.986120 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aeb4440b-3f18-4066-bb27-ec75e09208f9-config-data\") pod \"aeb4440b-3f18-4066-bb27-ec75e09208f9\" (UID: 
\"aeb4440b-3f18-4066-bb27-ec75e09208f9\") " Dec 06 05:56:07 crc kubenswrapper[4958]: I1206 05:56:07.986162 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aeb4440b-3f18-4066-bb27-ec75e09208f9-scripts\") pod \"aeb4440b-3f18-4066-bb27-ec75e09208f9\" (UID: \"aeb4440b-3f18-4066-bb27-ec75e09208f9\") " Dec 06 05:56:07 crc kubenswrapper[4958]: I1206 05:56:07.993685 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aeb4440b-3f18-4066-bb27-ec75e09208f9-scripts" (OuterVolumeSpecName: "scripts") pod "aeb4440b-3f18-4066-bb27-ec75e09208f9" (UID: "aeb4440b-3f18-4066-bb27-ec75e09208f9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:56:08 crc kubenswrapper[4958]: I1206 05:56:07.997959 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aeb4440b-3f18-4066-bb27-ec75e09208f9-kube-api-access-zrhm6" (OuterVolumeSpecName: "kube-api-access-zrhm6") pod "aeb4440b-3f18-4066-bb27-ec75e09208f9" (UID: "aeb4440b-3f18-4066-bb27-ec75e09208f9"). InnerVolumeSpecName "kube-api-access-zrhm6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:56:08 crc kubenswrapper[4958]: I1206 05:56:07.998319 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aeb4440b-3f18-4066-bb27-ec75e09208f9-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "aeb4440b-3f18-4066-bb27-ec75e09208f9" (UID: "aeb4440b-3f18-4066-bb27-ec75e09208f9"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 05:56:08 crc kubenswrapper[4958]: I1206 05:56:07.998671 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aeb4440b-3f18-4066-bb27-ec75e09208f9-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "aeb4440b-3f18-4066-bb27-ec75e09208f9" (UID: "aeb4440b-3f18-4066-bb27-ec75e09208f9"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 05:56:08 crc kubenswrapper[4958]: I1206 05:56:08.090181 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aeb4440b-3f18-4066-bb27-ec75e09208f9-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "aeb4440b-3f18-4066-bb27-ec75e09208f9" (UID: "aeb4440b-3f18-4066-bb27-ec75e09208f9"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:56:08 crc kubenswrapper[4958]: I1206 05:56:08.094990 4958 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aeb4440b-3f18-4066-bb27-ec75e09208f9-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 06 05:56:08 crc kubenswrapper[4958]: I1206 05:56:08.095022 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aeb4440b-3f18-4066-bb27-ec75e09208f9-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 05:56:08 crc kubenswrapper[4958]: I1206 05:56:08.095031 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zrhm6\" (UniqueName: \"kubernetes.io/projected/aeb4440b-3f18-4066-bb27-ec75e09208f9-kube-api-access-zrhm6\") on node \"crc\" DevicePath \"\"" Dec 06 05:56:08 crc kubenswrapper[4958]: I1206 05:56:08.095043 4958 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/aeb4440b-3f18-4066-bb27-ec75e09208f9-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 06 05:56:08 crc kubenswrapper[4958]: I1206 05:56:08.095051 4958 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/aeb4440b-3f18-4066-bb27-ec75e09208f9-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 06 05:56:08 crc kubenswrapper[4958]: I1206 05:56:08.151796 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aeb4440b-3f18-4066-bb27-ec75e09208f9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aeb4440b-3f18-4066-bb27-ec75e09208f9" (UID: "aeb4440b-3f18-4066-bb27-ec75e09208f9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:56:08 crc kubenswrapper[4958]: I1206 05:56:08.196801 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aeb4440b-3f18-4066-bb27-ec75e09208f9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 05:56:08 crc kubenswrapper[4958]: I1206 05:56:08.228769 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5cd5fc4d55-7ks57"] Dec 06 05:56:08 crc kubenswrapper[4958]: I1206 05:56:08.233590 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aeb4440b-3f18-4066-bb27-ec75e09208f9-config-data" (OuterVolumeSpecName: "config-data") pod "aeb4440b-3f18-4066-bb27-ec75e09208f9" (UID: "aeb4440b-3f18-4066-bb27-ec75e09208f9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:56:08 crc kubenswrapper[4958]: I1206 05:56:08.245101 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 06 05:56:08 crc kubenswrapper[4958]: I1206 05:56:08.258416 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 06 05:56:08 crc kubenswrapper[4958]: I1206 05:56:08.313137 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aeb4440b-3f18-4066-bb27-ec75e09208f9-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 05:56:08 crc kubenswrapper[4958]: I1206 05:56:08.538896 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"29f48971-668e-4d31-b0a1-b3c5088c3130","Type":"ContainerStarted","Data":"e77883bc11d4c9c23ffe73203c7c2684695b69ae76d5afaa38f376f387726f57"} Dec 06 05:56:08 crc kubenswrapper[4958]: I1206 05:56:08.544675 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"6a7d1df9-4e7c-4d3c-b287-d70070167c9a","Type":"ContainerStarted","Data":"5bacc62e1882e0ed61c9f105af7af97931a9456267066c5afbed2c0282549a66"} Dec 06 05:56:08 crc kubenswrapper[4958]: I1206 05:56:08.547567 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cd5fc4d55-7ks57" event={"ID":"74c7564e-2945-4712-9087-f275ca61881b","Type":"ContainerStarted","Data":"e289fa3bff55492f6ff0d122429c197fa8a9d9ea3ec81fa89f771a92e4689426"} Dec 06 05:56:08 crc kubenswrapper[4958]: I1206 05:56:08.552611 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"dcbe2099-3d41-4f69-be20-47d96498cb25","Type":"ContainerStarted","Data":"05c6f6d2100232ce1e36fba508c20bbc75ae020e55452b39deecbdc47a8242b3"} Dec 06 05:56:08 crc kubenswrapper[4958]: I1206 05:56:08.571076 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"aeb4440b-3f18-4066-bb27-ec75e09208f9","Type":"ContainerDied","Data":"9ed5ac52b2e3b80f1535139877054ab4eeaec852edc5364ec13d795c8af1a0eb"} Dec 06 05:56:08 crc kubenswrapper[4958]: I1206 05:56:08.571133 4958 scope.go:117] "RemoveContainer" containerID="47bbb8c1d726518f7c149b4c164f0469f702d4eb071e97f0d7009ed2de2d7ed5" Dec 06 05:56:08 crc kubenswrapper[4958]: I1206 05:56:08.571323 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 06 05:56:08 crc kubenswrapper[4958]: I1206 05:56:08.622890 4958 scope.go:117] "RemoveContainer" containerID="1bad73066a008d25c27243825f98ad8fb6175004a21dc8a8f8c582021806cabf" Dec 06 05:56:08 crc kubenswrapper[4958]: I1206 05:56:08.644593 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 06 05:56:08 crc kubenswrapper[4958]: I1206 05:56:08.651862 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 06 05:56:08 crc kubenswrapper[4958]: I1206 05:56:08.698645 4958 scope.go:117] "RemoveContainer" containerID="560b1acbb592eb7f2de4769063025e3f325a2d91df8b8fcd173327e395d7dec7" Dec 06 05:56:08 crc kubenswrapper[4958]: I1206 05:56:08.705593 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 06 05:56:08 crc kubenswrapper[4958]: E1206 05:56:08.706093 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aeb4440b-3f18-4066-bb27-ec75e09208f9" containerName="proxy-httpd" Dec 06 05:56:08 crc kubenswrapper[4958]: I1206 05:56:08.706114 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="aeb4440b-3f18-4066-bb27-ec75e09208f9" containerName="proxy-httpd" Dec 06 05:56:08 crc kubenswrapper[4958]: E1206 05:56:08.706130 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aeb4440b-3f18-4066-bb27-ec75e09208f9" containerName="ceilometer-central-agent" Dec 06 05:56:08 crc kubenswrapper[4958]: I1206 05:56:08.706137 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="aeb4440b-3f18-4066-bb27-ec75e09208f9" containerName="ceilometer-central-agent" Dec 06 05:56:08 crc kubenswrapper[4958]: E1206 05:56:08.706152 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fee2c3d7-24fe-4966-878b-90147b8f5cfb" containerName="neutron-db-sync" Dec 06 05:56:08 crc kubenswrapper[4958]: I1206 05:56:08.706160 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="fee2c3d7-24fe-4966-878b-90147b8f5cfb" containerName="neutron-db-sync" Dec 06 05:56:08 crc kubenswrapper[4958]: E1206 05:56:08.706180 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aeb4440b-3f18-4066-bb27-ec75e09208f9" containerName="sg-core" Dec 06 05:56:08 crc kubenswrapper[4958]: I1206 05:56:08.706187 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="aeb4440b-3f18-4066-bb27-ec75e09208f9" containerName="sg-core" Dec 06 05:56:08 crc kubenswrapper[4958]: E1206 05:56:08.706214 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aeb4440b-3f18-4066-bb27-ec75e09208f9" containerName="ceilometer-notification-agent" Dec 06 05:56:08 crc kubenswrapper[4958]: I1206 05:56:08.706221 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="aeb4440b-3f18-4066-bb27-ec75e09208f9" containerName="ceilometer-notification-agent" Dec 06 05:56:08 crc kubenswrapper[4958]: I1206 05:56:08.706386 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="aeb4440b-3f18-4066-bb27-ec75e09208f9" containerName="proxy-httpd" Dec 06 05:56:08 crc kubenswrapper[4958]: I1206 05:56:08.706399 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="fee2c3d7-24fe-4966-878b-90147b8f5cfb" containerName="neutron-db-sync" Dec 06 05:56:08 crc kubenswrapper[4958]: I1206 05:56:08.706409 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="aeb4440b-3f18-4066-bb27-ec75e09208f9" containerName="sg-core" Dec 06 05:56:08 crc kubenswrapper[4958]: I1206 05:56:08.706422 4958 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="aeb4440b-3f18-4066-bb27-ec75e09208f9" containerName="ceilometer-notification-agent" Dec 06 05:56:08 crc kubenswrapper[4958]: I1206 05:56:08.706436 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="aeb4440b-3f18-4066-bb27-ec75e09208f9" containerName="ceilometer-central-agent" Dec 06 05:56:08 crc kubenswrapper[4958]: I1206 05:56:08.708116 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 06 05:56:08 crc kubenswrapper[4958]: I1206 05:56:08.731646 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 06 05:56:08 crc kubenswrapper[4958]: I1206 05:56:08.731674 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 06 05:56:08 crc kubenswrapper[4958]: I1206 05:56:08.732965 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 06 05:56:08 crc kubenswrapper[4958]: I1206 05:56:08.835728 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5cd5fc4d55-7ks57"] Dec 06 05:56:08 crc kubenswrapper[4958]: I1206 05:56:08.848825 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64912e89-a0ce-4858-a22a-3f873669dfd2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"64912e89-a0ce-4858-a22a-3f873669dfd2\") " pod="openstack/ceilometer-0" Dec 06 05:56:08 crc kubenswrapper[4958]: I1206 05:56:08.848932 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64912e89-a0ce-4858-a22a-3f873669dfd2-config-data\") pod \"ceilometer-0\" (UID: \"64912e89-a0ce-4858-a22a-3f873669dfd2\") " pod="openstack/ceilometer-0" Dec 06 05:56:08 crc kubenswrapper[4958]: I1206 05:56:08.848999 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/64912e89-a0ce-4858-a22a-3f873669dfd2-run-httpd\") pod \"ceilometer-0\" (UID: \"64912e89-a0ce-4858-a22a-3f873669dfd2\") " pod="openstack/ceilometer-0" Dec 06 05:56:08 crc kubenswrapper[4958]: I1206 05:56:08.849139 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69hnl\" (UniqueName: \"kubernetes.io/projected/64912e89-a0ce-4858-a22a-3f873669dfd2-kube-api-access-69hnl\") pod \"ceilometer-0\" (UID: \"64912e89-a0ce-4858-a22a-3f873669dfd2\") " pod="openstack/ceilometer-0" Dec 06 05:56:08 crc kubenswrapper[4958]: I1206 05:56:08.849168 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/64912e89-a0ce-4858-a22a-3f873669dfd2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"64912e89-a0ce-4858-a22a-3f873669dfd2\") " pod="openstack/ceilometer-0" Dec 06 05:56:08 crc kubenswrapper[4958]: I1206 05:56:08.849219 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/64912e89-a0ce-4858-a22a-3f873669dfd2-log-httpd\") pod \"ceilometer-0\" (UID: \"64912e89-a0ce-4858-a22a-3f873669dfd2\") " pod="openstack/ceilometer-0" Dec 06 05:56:08 crc kubenswrapper[4958]: I1206 05:56:08.849275 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"scripts\" (UniqueName: \"kubernetes.io/secret/64912e89-a0ce-4858-a22a-3f873669dfd2-scripts\") pod \"ceilometer-0\" (UID: \"64912e89-a0ce-4858-a22a-3f873669dfd2\") " pod="openstack/ceilometer-0" Dec 06 05:56:08 crc kubenswrapper[4958]: I1206 05:56:08.869607 4958 scope.go:117] "RemoveContainer" containerID="5f188daea121015bce08bd9d0bda43000eb7d11ffbbeb6bf9173f575809dcf5e" Dec 06 05:56:08 crc kubenswrapper[4958]: I1206 05:56:08.872554 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-75958fc765-7vts5"] Dec 06 05:56:08 crc kubenswrapper[4958]: I1206 05:56:08.903731 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75958fc765-7vts5" Dec 06 05:56:08 crc kubenswrapper[4958]: I1206 05:56:08.916159 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-59469c77f6-x865t"] Dec 06 05:56:08 crc kubenswrapper[4958]: I1206 05:56:08.934869 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-59469c77f6-x865t" Dec 06 05:56:08 crc kubenswrapper[4958]: I1206 05:56:08.953767 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75958fc765-7vts5"] Dec 06 05:56:08 crc kubenswrapper[4958]: I1206 05:56:08.955637 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64912e89-a0ce-4858-a22a-3f873669dfd2-config-data\") pod \"ceilometer-0\" (UID: \"64912e89-a0ce-4858-a22a-3f873669dfd2\") " pod="openstack/ceilometer-0" Dec 06 05:56:08 crc kubenswrapper[4958]: I1206 05:56:08.955690 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/64912e89-a0ce-4858-a22a-3f873669dfd2-run-httpd\") pod \"ceilometer-0\" (UID: \"64912e89-a0ce-4858-a22a-3f873669dfd2\") " pod="openstack/ceilometer-0" Dec 06 05:56:08 crc kubenswrapper[4958]: I1206 05:56:08.955789 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69hnl\" (UniqueName: \"kubernetes.io/projected/64912e89-a0ce-4858-a22a-3f873669dfd2-kube-api-access-69hnl\") pod \"ceilometer-0\" (UID: \"64912e89-a0ce-4858-a22a-3f873669dfd2\") " pod="openstack/ceilometer-0" Dec 06 05:56:08 crc kubenswrapper[4958]: I1206 05:56:08.955825 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/64912e89-a0ce-4858-a22a-3f873669dfd2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"64912e89-a0ce-4858-a22a-3f873669dfd2\") " pod="openstack/ceilometer-0" Dec 06 05:56:08 crc kubenswrapper[4958]: I1206 05:56:08.955943 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Dec 06 05:56:08 crc kubenswrapper[4958]: I1206 05:56:08.956143 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-j4btr" Dec 06 05:56:08 crc kubenswrapper[4958]: I1206 05:56:08.956297 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Dec 06 05:56:08 crc kubenswrapper[4958]: I1206 05:56:08.956433 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Dec 06 05:56:08 crc kubenswrapper[4958]: I1206 05:56:08.963193 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/64912e89-a0ce-4858-a22a-3f873669dfd2-run-httpd\") pod \"ceilometer-0\" (UID: \"64912e89-a0ce-4858-a22a-3f873669dfd2\") " pod="openstack/ceilometer-0" Dec 06 05:56:08 crc kubenswrapper[4958]: I1206 05:56:08.964540 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-59469c77f6-x865t"] Dec 06 05:56:08 crc kubenswrapper[4958]: I1206 05:56:08.972675 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/64912e89-a0ce-4858-a22a-3f873669dfd2-log-httpd\") pod \"ceilometer-0\" (UID: \"64912e89-a0ce-4858-a22a-3f873669dfd2\") " pod="openstack/ceilometer-0" Dec 06 05:56:08 crc kubenswrapper[4958]: I1206 05:56:08.972777 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/64912e89-a0ce-4858-a22a-3f873669dfd2-scripts\") pod \"ceilometer-0\" (UID: \"64912e89-a0ce-4858-a22a-3f873669dfd2\") " pod="openstack/ceilometer-0" Dec 06 05:56:08 crc kubenswrapper[4958]: I1206 05:56:08.972859 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64912e89-a0ce-4858-a22a-3f873669dfd2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"64912e89-a0ce-4858-a22a-3f873669dfd2\") " pod="openstack/ceilometer-0" Dec 06 05:56:08 crc kubenswrapper[4958]: I1206 05:56:08.974106 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/64912e89-a0ce-4858-a22a-3f873669dfd2-log-httpd\") pod \"ceilometer-0\" (UID: \"64912e89-a0ce-4858-a22a-3f873669dfd2\") " pod="openstack/ceilometer-0" Dec 06 05:56:08 crc kubenswrapper[4958]: I1206 05:56:08.993035 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64912e89-a0ce-4858-a22a-3f873669dfd2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"64912e89-a0ce-4858-a22a-3f873669dfd2\") " pod="openstack/ceilometer-0" Dec 06 05:56:08 crc kubenswrapper[4958]: I1206 05:56:08.997876 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64912e89-a0ce-4858-a22a-3f873669dfd2-config-data\") pod \"ceilometer-0\" (UID: \"64912e89-a0ce-4858-a22a-3f873669dfd2\") " pod="openstack/ceilometer-0" Dec 06 05:56:09 crc kubenswrapper[4958]: I1206 05:56:09.008784 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69hnl\" (UniqueName: \"kubernetes.io/projected/64912e89-a0ce-4858-a22a-3f873669dfd2-kube-api-access-69hnl\") pod \"ceilometer-0\" (UID: \"64912e89-a0ce-4858-a22a-3f873669dfd2\") " pod="openstack/ceilometer-0" Dec 06 05:56:09 crc kubenswrapper[4958]: I1206 05:56:09.030274 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/64912e89-a0ce-4858-a22a-3f873669dfd2-scripts\") pod \"ceilometer-0\" (UID: \"64912e89-a0ce-4858-a22a-3f873669dfd2\") " pod="openstack/ceilometer-0" Dec 06 05:56:09 crc kubenswrapper[4958]: I1206 05:56:09.053846 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/64912e89-a0ce-4858-a22a-3f873669dfd2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"64912e89-a0ce-4858-a22a-3f873669dfd2\") " pod="openstack/ceilometer-0" Dec 06 05:56:09 crc kubenswrapper[4958]: I1206 05:56:09.074273 4958 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-www5f\" (UniqueName: \"kubernetes.io/projected/6f5bac58-b4db-42a1-a71b-4db14cf42c05-kube-api-access-www5f\") pod \"neutron-59469c77f6-x865t\" (UID: \"6f5bac58-b4db-42a1-a71b-4db14cf42c05\") " pod="openstack/neutron-59469c77f6-x865t" Dec 06 05:56:09 crc kubenswrapper[4958]: I1206 05:56:09.074320 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8171df7d-9d9e-4c16-ab55-8a2401a919d2-ovsdbserver-nb\") pod \"dnsmasq-dns-75958fc765-7vts5\" (UID: \"8171df7d-9d9e-4c16-ab55-8a2401a919d2\") " pod="openstack/dnsmasq-dns-75958fc765-7vts5" Dec 06 05:56:09 crc kubenswrapper[4958]: I1206 05:56:09.074363 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8171df7d-9d9e-4c16-ab55-8a2401a919d2-dns-svc\") pod \"dnsmasq-dns-75958fc765-7vts5\" (UID: \"8171df7d-9d9e-4c16-ab55-8a2401a919d2\") " pod="openstack/dnsmasq-dns-75958fc765-7vts5" Dec 06 05:56:09 crc kubenswrapper[4958]: I1206 05:56:09.074391 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8171df7d-9d9e-4c16-ab55-8a2401a919d2-dns-swift-storage-0\") pod \"dnsmasq-dns-75958fc765-7vts5\" (UID: \"8171df7d-9d9e-4c16-ab55-8a2401a919d2\") " pod="openstack/dnsmasq-dns-75958fc765-7vts5" Dec 06 05:56:09 crc kubenswrapper[4958]: I1206 05:56:09.074417 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8171df7d-9d9e-4c16-ab55-8a2401a919d2-ovsdbserver-sb\") pod \"dnsmasq-dns-75958fc765-7vts5\" (UID: \"8171df7d-9d9e-4c16-ab55-8a2401a919d2\") " pod="openstack/dnsmasq-dns-75958fc765-7vts5" Dec 06 05:56:09 crc kubenswrapper[4958]: I1206 05:56:09.074441 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f5bac58-b4db-42a1-a71b-4db14cf42c05-combined-ca-bundle\") pod \"neutron-59469c77f6-x865t\" (UID: \"6f5bac58-b4db-42a1-a71b-4db14cf42c05\") " pod="openstack/neutron-59469c77f6-x865t" Dec 06 05:56:09 crc kubenswrapper[4958]: I1206 05:56:09.074480 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6f5bac58-b4db-42a1-a71b-4db14cf42c05-config\") pod \"neutron-59469c77f6-x865t\" (UID: \"6f5bac58-b4db-42a1-a71b-4db14cf42c05\") " pod="openstack/neutron-59469c77f6-x865t" Dec 06 05:56:09 crc kubenswrapper[4958]: I1206 05:56:09.074520 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8171df7d-9d9e-4c16-ab55-8a2401a919d2-config\") pod \"dnsmasq-dns-75958fc765-7vts5\" (UID: \"8171df7d-9d9e-4c16-ab55-8a2401a919d2\") " pod="openstack/dnsmasq-dns-75958fc765-7vts5" Dec 06 05:56:09 crc kubenswrapper[4958]: I1206 05:56:09.074554 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6f5bac58-b4db-42a1-a71b-4db14cf42c05-httpd-config\") pod \"neutron-59469c77f6-x865t\" (UID: \"6f5bac58-b4db-42a1-a71b-4db14cf42c05\") " pod="openstack/neutron-59469c77f6-x865t" Dec 06 
05:56:09 crc kubenswrapper[4958]: I1206 05:56:09.074585 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f5bac58-b4db-42a1-a71b-4db14cf42c05-ovndb-tls-certs\") pod \"neutron-59469c77f6-x865t\" (UID: \"6f5bac58-b4db-42a1-a71b-4db14cf42c05\") " pod="openstack/neutron-59469c77f6-x865t" Dec 06 05:56:09 crc kubenswrapper[4958]: I1206 05:56:09.074604 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swm2w\" (UniqueName: \"kubernetes.io/projected/8171df7d-9d9e-4c16-ab55-8a2401a919d2-kube-api-access-swm2w\") pod \"dnsmasq-dns-75958fc765-7vts5\" (UID: \"8171df7d-9d9e-4c16-ab55-8a2401a919d2\") " pod="openstack/dnsmasq-dns-75958fc765-7vts5" Dec 06 05:56:09 crc kubenswrapper[4958]: I1206 05:56:09.177081 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 06 05:56:09 crc kubenswrapper[4958]: I1206 05:56:09.178550 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-www5f\" (UniqueName: \"kubernetes.io/projected/6f5bac58-b4db-42a1-a71b-4db14cf42c05-kube-api-access-www5f\") pod \"neutron-59469c77f6-x865t\" (UID: \"6f5bac58-b4db-42a1-a71b-4db14cf42c05\") " pod="openstack/neutron-59469c77f6-x865t" Dec 06 05:56:09 crc kubenswrapper[4958]: I1206 05:56:09.178622 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8171df7d-9d9e-4c16-ab55-8a2401a919d2-ovsdbserver-nb\") pod \"dnsmasq-dns-75958fc765-7vts5\" (UID: \"8171df7d-9d9e-4c16-ab55-8a2401a919d2\") " pod="openstack/dnsmasq-dns-75958fc765-7vts5" Dec 06 05:56:09 crc kubenswrapper[4958]: I1206 05:56:09.178694 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8171df7d-9d9e-4c16-ab55-8a2401a919d2-dns-svc\") pod \"dnsmasq-dns-75958fc765-7vts5\" (UID: \"8171df7d-9d9e-4c16-ab55-8a2401a919d2\") " pod="openstack/dnsmasq-dns-75958fc765-7vts5" Dec 06 05:56:09 crc kubenswrapper[4958]: I1206 05:56:09.178736 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8171df7d-9d9e-4c16-ab55-8a2401a919d2-dns-swift-storage-0\") pod \"dnsmasq-dns-75958fc765-7vts5\" (UID: \"8171df7d-9d9e-4c16-ab55-8a2401a919d2\") " pod="openstack/dnsmasq-dns-75958fc765-7vts5" Dec 06 05:56:09 crc kubenswrapper[4958]: I1206 05:56:09.178788 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8171df7d-9d9e-4c16-ab55-8a2401a919d2-ovsdbserver-sb\") pod \"dnsmasq-dns-75958fc765-7vts5\" (UID: \"8171df7d-9d9e-4c16-ab55-8a2401a919d2\") " pod="openstack/dnsmasq-dns-75958fc765-7vts5" Dec 06 05:56:09 crc kubenswrapper[4958]: I1206 05:56:09.178828 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f5bac58-b4db-42a1-a71b-4db14cf42c05-combined-ca-bundle\") pod \"neutron-59469c77f6-x865t\" (UID: \"6f5bac58-b4db-42a1-a71b-4db14cf42c05\") " pod="openstack/neutron-59469c77f6-x865t" Dec 06 05:56:09 crc kubenswrapper[4958]: I1206 05:56:09.178873 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6f5bac58-b4db-42a1-a71b-4db14cf42c05-config\") 
pod \"neutron-59469c77f6-x865t\" (UID: \"6f5bac58-b4db-42a1-a71b-4db14cf42c05\") " pod="openstack/neutron-59469c77f6-x865t" Dec 06 05:56:09 crc kubenswrapper[4958]: I1206 05:56:09.178943 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8171df7d-9d9e-4c16-ab55-8a2401a919d2-config\") pod \"dnsmasq-dns-75958fc765-7vts5\" (UID: \"8171df7d-9d9e-4c16-ab55-8a2401a919d2\") " pod="openstack/dnsmasq-dns-75958fc765-7vts5" Dec 06 05:56:09 crc kubenswrapper[4958]: I1206 05:56:09.179010 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6f5bac58-b4db-42a1-a71b-4db14cf42c05-httpd-config\") pod \"neutron-59469c77f6-x865t\" (UID: \"6f5bac58-b4db-42a1-a71b-4db14cf42c05\") " pod="openstack/neutron-59469c77f6-x865t" Dec 06 05:56:09 crc kubenswrapper[4958]: I1206 05:56:09.179073 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f5bac58-b4db-42a1-a71b-4db14cf42c05-ovndb-tls-certs\") pod \"neutron-59469c77f6-x865t\" (UID: \"6f5bac58-b4db-42a1-a71b-4db14cf42c05\") " pod="openstack/neutron-59469c77f6-x865t" Dec 06 05:56:09 crc kubenswrapper[4958]: I1206 05:56:09.179103 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swm2w\" (UniqueName: \"kubernetes.io/projected/8171df7d-9d9e-4c16-ab55-8a2401a919d2-kube-api-access-swm2w\") pod \"dnsmasq-dns-75958fc765-7vts5\" (UID: \"8171df7d-9d9e-4c16-ab55-8a2401a919d2\") " pod="openstack/dnsmasq-dns-75958fc765-7vts5" Dec 06 05:56:09 crc kubenswrapper[4958]: I1206 05:56:09.179508 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8171df7d-9d9e-4c16-ab55-8a2401a919d2-ovsdbserver-nb\") pod \"dnsmasq-dns-75958fc765-7vts5\" (UID: \"8171df7d-9d9e-4c16-ab55-8a2401a919d2\") " pod="openstack/dnsmasq-dns-75958fc765-7vts5" Dec 06 05:56:09 crc kubenswrapper[4958]: I1206 05:56:09.180270 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8171df7d-9d9e-4c16-ab55-8a2401a919d2-dns-svc\") pod \"dnsmasq-dns-75958fc765-7vts5\" (UID: \"8171df7d-9d9e-4c16-ab55-8a2401a919d2\") " pod="openstack/dnsmasq-dns-75958fc765-7vts5" Dec 06 05:56:09 crc kubenswrapper[4958]: I1206 05:56:09.180783 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8171df7d-9d9e-4c16-ab55-8a2401a919d2-config\") pod \"dnsmasq-dns-75958fc765-7vts5\" (UID: \"8171df7d-9d9e-4c16-ab55-8a2401a919d2\") " pod="openstack/dnsmasq-dns-75958fc765-7vts5" Dec 06 05:56:09 crc kubenswrapper[4958]: I1206 05:56:09.181289 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8171df7d-9d9e-4c16-ab55-8a2401a919d2-ovsdbserver-sb\") pod \"dnsmasq-dns-75958fc765-7vts5\" (UID: \"8171df7d-9d9e-4c16-ab55-8a2401a919d2\") " pod="openstack/dnsmasq-dns-75958fc765-7vts5" Dec 06 05:56:09 crc kubenswrapper[4958]: I1206 05:56:09.182208 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8171df7d-9d9e-4c16-ab55-8a2401a919d2-dns-swift-storage-0\") pod \"dnsmasq-dns-75958fc765-7vts5\" (UID: \"8171df7d-9d9e-4c16-ab55-8a2401a919d2\") " pod="openstack/dnsmasq-dns-75958fc765-7vts5" Dec 06 05:56:09 
crc kubenswrapper[4958]: I1206 05:56:09.187446 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f5bac58-b4db-42a1-a71b-4db14cf42c05-ovndb-tls-certs\") pod \"neutron-59469c77f6-x865t\" (UID: \"6f5bac58-b4db-42a1-a71b-4db14cf42c05\") " pod="openstack/neutron-59469c77f6-x865t" Dec 06 05:56:09 crc kubenswrapper[4958]: I1206 05:56:09.189957 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f5bac58-b4db-42a1-a71b-4db14cf42c05-combined-ca-bundle\") pod \"neutron-59469c77f6-x865t\" (UID: \"6f5bac58-b4db-42a1-a71b-4db14cf42c05\") " pod="openstack/neutron-59469c77f6-x865t" Dec 06 05:56:09 crc kubenswrapper[4958]: I1206 05:56:09.198196 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6f5bac58-b4db-42a1-a71b-4db14cf42c05-httpd-config\") pod \"neutron-59469c77f6-x865t\" (UID: \"6f5bac58-b4db-42a1-a71b-4db14cf42c05\") " pod="openstack/neutron-59469c77f6-x865t" Dec 06 05:56:09 crc kubenswrapper[4958]: I1206 05:56:09.209226 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/6f5bac58-b4db-42a1-a71b-4db14cf42c05-config\") pod \"neutron-59469c77f6-x865t\" (UID: \"6f5bac58-b4db-42a1-a71b-4db14cf42c05\") " pod="openstack/neutron-59469c77f6-x865t" Dec 06 05:56:09 crc kubenswrapper[4958]: I1206 05:56:09.212700 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-www5f\" (UniqueName: \"kubernetes.io/projected/6f5bac58-b4db-42a1-a71b-4db14cf42c05-kube-api-access-www5f\") pod \"neutron-59469c77f6-x865t\" (UID: \"6f5bac58-b4db-42a1-a71b-4db14cf42c05\") " pod="openstack/neutron-59469c77f6-x865t" Dec 06 05:56:09 crc kubenswrapper[4958]: I1206 05:56:09.215305 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swm2w\" (UniqueName: \"kubernetes.io/projected/8171df7d-9d9e-4c16-ab55-8a2401a919d2-kube-api-access-swm2w\") pod \"dnsmasq-dns-75958fc765-7vts5\" (UID: \"8171df7d-9d9e-4c16-ab55-8a2401a919d2\") " pod="openstack/dnsmasq-dns-75958fc765-7vts5" Dec 06 05:56:09 crc kubenswrapper[4958]: I1206 05:56:09.269862 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-75958fc765-7vts5" Dec 06 05:56:09 crc kubenswrapper[4958]: I1206 05:56:09.454065 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-59469c77f6-x865t" Dec 06 05:56:09 crc kubenswrapper[4958]: I1206 05:56:09.816883 4958 generic.go:334] "Generic (PLEG): container finished" podID="b8c45202-b5a4-47bd-9754-bc5ae17b1208" containerID="d4bf75dfb957bd22b663ebd59e2e2b74ee4d61ca34eff55b54c822e146631dc2" exitCode=0 Dec 06 05:56:09 crc kubenswrapper[4958]: I1206 05:56:09.827973 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aeb4440b-3f18-4066-bb27-ec75e09208f9" path="/var/lib/kubelet/pods/aeb4440b-3f18-4066-bb27-ec75e09208f9/volumes" Dec 06 05:56:09 crc kubenswrapper[4958]: I1206 05:56:09.829064 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-76c7c6dc4b-9jzd4" event={"ID":"b8c45202-b5a4-47bd-9754-bc5ae17b1208","Type":"ContainerDied","Data":"d4bf75dfb957bd22b663ebd59e2e2b74ee4d61ca34eff55b54c822e146631dc2"} Dec 06 05:56:09 crc kubenswrapper[4958]: I1206 05:56:09.835265 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cd5fc4d55-7ks57" event={"ID":"74c7564e-2945-4712-9087-f275ca61881b","Type":"ContainerStarted","Data":"5382d69285f46677e1fd7aac87c968605b36f0534dbd31068498e940114f14ac"} Dec 06 05:56:09 crc kubenswrapper[4958]: I1206 05:56:09.863628 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-76c7c6dc4b-9jzd4" podUID="b8c45202-b5a4-47bd-9754-bc5ae17b1208" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.182:9311/healthcheck\": dial tcp 10.217.0.182:9311: connect: connection refused" Dec 06 05:56:09 crc kubenswrapper[4958]: I1206 05:56:09.863799 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-76c7c6dc4b-9jzd4" podUID="b8c45202-b5a4-47bd-9754-bc5ae17b1208" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.182:9311/healthcheck\": dial tcp 10.217.0.182:9311: connect: connection refused" Dec 06 05:56:09 crc kubenswrapper[4958]: I1206 05:56:09.867198 4958 patch_prober.go:28] interesting pod/machine-config-daemon-5ktnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 05:56:09 crc kubenswrapper[4958]: I1206 05:56:09.867239 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 05:56:10 crc kubenswrapper[4958]: I1206 05:56:10.078713 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 06 05:56:10 crc kubenswrapper[4958]: I1206 05:56:10.202679 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-75958fc765-7vts5"] Dec 06 05:56:10 crc kubenswrapper[4958]: W1206 05:56:10.204740 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8171df7d_9d9e_4c16_ab55_8a2401a919d2.slice/crio-6ded9dd6dd594d817a5941b1e32bb94e6526962adaa9fb5a8a323714c211fa1a WatchSource:0}: Error finding container 6ded9dd6dd594d817a5941b1e32bb94e6526962adaa9fb5a8a323714c211fa1a: Status 404 returned error can't find the container with id 6ded9dd6dd594d817a5941b1e32bb94e6526962adaa9fb5a8a323714c211fa1a 
Dec 06 05:56:10 crc kubenswrapper[4958]: I1206 05:56:10.372497 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Dec 06 05:56:10 crc kubenswrapper[4958]: I1206 05:56:10.535562 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-59469c77f6-x865t"] Dec 06 05:56:10 crc kubenswrapper[4958]: I1206 05:56:10.570915 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-85d57ddd5d-pth8g" podUID="b7770f7e-3112-4c47-8631-a19d269c3ffc" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.189:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 06 05:56:10 crc kubenswrapper[4958]: I1206 05:56:10.571014 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-85d57ddd5d-pth8g" podUID="b7770f7e-3112-4c47-8631-a19d269c3ffc" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.189:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 06 05:56:10 crc kubenswrapper[4958]: I1206 05:56:10.855968 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75958fc765-7vts5" event={"ID":"8171df7d-9d9e-4c16-ab55-8a2401a919d2","Type":"ContainerStarted","Data":"6ded9dd6dd594d817a5941b1e32bb94e6526962adaa9fb5a8a323714c211fa1a"} Dec 06 05:56:10 crc kubenswrapper[4958]: I1206 05:56:10.859220 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"64912e89-a0ce-4858-a22a-3f873669dfd2","Type":"ContainerStarted","Data":"56b2a841f574c73e5612e30058171f1c1fad37e927a3a18ebb5f5f4a0cbbd4cb"} Dec 06 05:56:10 crc kubenswrapper[4958]: I1206 05:56:10.860729 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-59469c77f6-x865t" event={"ID":"6f5bac58-b4db-42a1-a71b-4db14cf42c05","Type":"ContainerStarted","Data":"2d7592b243eb6af927ff0da2300082a0b1aac1011399d19b07c7101e554bb42b"} Dec 06 05:56:10 crc kubenswrapper[4958]: I1206 05:56:10.865678 4958 generic.go:334] "Generic (PLEG): container finished" podID="74c7564e-2945-4712-9087-f275ca61881b" containerID="5382d69285f46677e1fd7aac87c968605b36f0534dbd31068498e940114f14ac" exitCode=0 Dec 06 05:56:10 crc kubenswrapper[4958]: I1206 05:56:10.865720 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cd5fc4d55-7ks57" event={"ID":"74c7564e-2945-4712-9087-f275ca61881b","Type":"ContainerDied","Data":"5382d69285f46677e1fd7aac87c968605b36f0534dbd31068498e940114f14ac"} Dec 06 05:56:11 crc kubenswrapper[4958]: I1206 05:56:11.472164 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-754c8966f6-f7t66" Dec 06 05:56:11 crc kubenswrapper[4958]: I1206 05:56:11.597676 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-754c8966f6-f7t66" Dec 06 05:56:11 crc kubenswrapper[4958]: I1206 05:56:11.894627 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"29f48971-668e-4d31-b0a1-b3c5088c3130","Type":"ContainerStarted","Data":"e7aedc66776098014e0d0491e638b756b68e5cc0a44abe91d87a8290974b4c9b"} Dec 06 05:56:11 crc kubenswrapper[4958]: I1206 05:56:11.917588 4958 generic.go:334] "Generic (PLEG): container finished" podID="8171df7d-9d9e-4c16-ab55-8a2401a919d2" containerID="f1e006dcf41270637f9f6e507be537e7df9e49d14e9cfc4f908af852edc1144c" exitCode=0 Dec 06 05:56:11 crc kubenswrapper[4958]: I1206 
05:56:11.917667 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75958fc765-7vts5" event={"ID":"8171df7d-9d9e-4c16-ab55-8a2401a919d2","Type":"ContainerDied","Data":"f1e006dcf41270637f9f6e507be537e7df9e49d14e9cfc4f908af852edc1144c"} Dec 06 05:56:11 crc kubenswrapper[4958]: I1206 05:56:11.936958 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-59469c77f6-x865t" event={"ID":"6f5bac58-b4db-42a1-a71b-4db14cf42c05","Type":"ContainerStarted","Data":"bb57c12fca33085e8168ef5101b4917a18cb1aeadb3a4944e8a9b34282a7edbf"} Dec 06 05:56:11 crc kubenswrapper[4958]: I1206 05:56:11.954844 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5cd5fc4d55-7ks57" event={"ID":"74c7564e-2945-4712-9087-f275ca61881b","Type":"ContainerDied","Data":"e289fa3bff55492f6ff0d122429c197fa8a9d9ea3ec81fa89f771a92e4689426"} Dec 06 05:56:11 crc kubenswrapper[4958]: I1206 05:56:11.954885 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e289fa3bff55492f6ff0d122429c197fa8a9d9ea3ec81fa89f771a92e4689426" Dec 06 05:56:12 crc kubenswrapper[4958]: I1206 05:56:12.179608 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5cd5fc4d55-7ks57" Dec 06 05:56:12 crc kubenswrapper[4958]: I1206 05:56:12.191796 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-76c7c6dc4b-9jzd4" Dec 06 05:56:12 crc kubenswrapper[4958]: I1206 05:56:12.351017 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dfd6c\" (UniqueName: \"kubernetes.io/projected/74c7564e-2945-4712-9087-f275ca61881b-kube-api-access-dfd6c\") pod \"74c7564e-2945-4712-9087-f275ca61881b\" (UID: \"74c7564e-2945-4712-9087-f275ca61881b\") " Dec 06 05:56:12 crc kubenswrapper[4958]: I1206 05:56:12.351080 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8c45202-b5a4-47bd-9754-bc5ae17b1208-combined-ca-bundle\") pod \"b8c45202-b5a4-47bd-9754-bc5ae17b1208\" (UID: \"b8c45202-b5a4-47bd-9754-bc5ae17b1208\") " Dec 06 05:56:12 crc kubenswrapper[4958]: I1206 05:56:12.351105 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j9c6k\" (UniqueName: \"kubernetes.io/projected/b8c45202-b5a4-47bd-9754-bc5ae17b1208-kube-api-access-j9c6k\") pod \"b8c45202-b5a4-47bd-9754-bc5ae17b1208\" (UID: \"b8c45202-b5a4-47bd-9754-bc5ae17b1208\") " Dec 06 05:56:12 crc kubenswrapper[4958]: I1206 05:56:12.351125 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/74c7564e-2945-4712-9087-f275ca61881b-ovsdbserver-nb\") pod \"74c7564e-2945-4712-9087-f275ca61881b\" (UID: \"74c7564e-2945-4712-9087-f275ca61881b\") " Dec 06 05:56:12 crc kubenswrapper[4958]: I1206 05:56:12.351170 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/74c7564e-2945-4712-9087-f275ca61881b-dns-svc\") pod \"74c7564e-2945-4712-9087-f275ca61881b\" (UID: \"74c7564e-2945-4712-9087-f275ca61881b\") " Dec 06 05:56:12 crc kubenswrapper[4958]: I1206 05:56:12.351214 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b8c45202-b5a4-47bd-9754-bc5ae17b1208-logs\") pod 
\"b8c45202-b5a4-47bd-9754-bc5ae17b1208\" (UID: \"b8c45202-b5a4-47bd-9754-bc5ae17b1208\") " Dec 06 05:56:12 crc kubenswrapper[4958]: I1206 05:56:12.351313 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/74c7564e-2945-4712-9087-f275ca61881b-ovsdbserver-sb\") pod \"74c7564e-2945-4712-9087-f275ca61881b\" (UID: \"74c7564e-2945-4712-9087-f275ca61881b\") " Dec 06 05:56:12 crc kubenswrapper[4958]: I1206 05:56:12.351387 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/74c7564e-2945-4712-9087-f275ca61881b-dns-swift-storage-0\") pod \"74c7564e-2945-4712-9087-f275ca61881b\" (UID: \"74c7564e-2945-4712-9087-f275ca61881b\") " Dec 06 05:56:12 crc kubenswrapper[4958]: I1206 05:56:12.351406 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b8c45202-b5a4-47bd-9754-bc5ae17b1208-config-data-custom\") pod \"b8c45202-b5a4-47bd-9754-bc5ae17b1208\" (UID: \"b8c45202-b5a4-47bd-9754-bc5ae17b1208\") " Dec 06 05:56:12 crc kubenswrapper[4958]: I1206 05:56:12.351424 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8c45202-b5a4-47bd-9754-bc5ae17b1208-config-data\") pod \"b8c45202-b5a4-47bd-9754-bc5ae17b1208\" (UID: \"b8c45202-b5a4-47bd-9754-bc5ae17b1208\") " Dec 06 05:56:12 crc kubenswrapper[4958]: I1206 05:56:12.351447 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74c7564e-2945-4712-9087-f275ca61881b-config\") pod \"74c7564e-2945-4712-9087-f275ca61881b\" (UID: \"74c7564e-2945-4712-9087-f275ca61881b\") " Dec 06 05:56:12 crc kubenswrapper[4958]: I1206 05:56:12.352544 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8c45202-b5a4-47bd-9754-bc5ae17b1208-logs" (OuterVolumeSpecName: "logs") pod "b8c45202-b5a4-47bd-9754-bc5ae17b1208" (UID: "b8c45202-b5a4-47bd-9754-bc5ae17b1208"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 05:56:12 crc kubenswrapper[4958]: I1206 05:56:12.455638 4958 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b8c45202-b5a4-47bd-9754-bc5ae17b1208-logs\") on node \"crc\" DevicePath \"\"" Dec 06 05:56:12 crc kubenswrapper[4958]: I1206 05:56:12.967354 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5cd5fc4d55-7ks57" Dec 06 05:56:12 crc kubenswrapper[4958]: I1206 05:56:12.968259 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-76c7c6dc4b-9jzd4" Dec 06 05:56:12 crc kubenswrapper[4958]: I1206 05:56:12.968522 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-76c7c6dc4b-9jzd4" event={"ID":"b8c45202-b5a4-47bd-9754-bc5ae17b1208","Type":"ContainerDied","Data":"adbbea2393542cbb683d23f4fc158b140a1bfe6bf001a9a8962381473d55fc6e"} Dec 06 05:56:12 crc kubenswrapper[4958]: I1206 05:56:12.968582 4958 scope.go:117] "RemoveContainer" containerID="d4bf75dfb957bd22b663ebd59e2e2b74ee4d61ca34eff55b54c822e146631dc2" Dec 06 05:56:13 crc kubenswrapper[4958]: I1206 05:56:13.018244 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Dec 06 05:56:13 crc kubenswrapper[4958]: I1206 05:56:13.378310 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8c45202-b5a4-47bd-9754-bc5ae17b1208-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "b8c45202-b5a4-47bd-9754-bc5ae17b1208" (UID: "b8c45202-b5a4-47bd-9754-bc5ae17b1208"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:56:13 crc kubenswrapper[4958]: I1206 05:56:13.378361 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74c7564e-2945-4712-9087-f275ca61881b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "74c7564e-2945-4712-9087-f275ca61881b" (UID: "74c7564e-2945-4712-9087-f275ca61881b"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:56:13 crc kubenswrapper[4958]: I1206 05:56:13.379117 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8c45202-b5a4-47bd-9754-bc5ae17b1208-kube-api-access-j9c6k" (OuterVolumeSpecName: "kube-api-access-j9c6k") pod "b8c45202-b5a4-47bd-9754-bc5ae17b1208" (UID: "b8c45202-b5a4-47bd-9754-bc5ae17b1208"). InnerVolumeSpecName "kube-api-access-j9c6k". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:56:13 crc kubenswrapper[4958]: I1206 05:56:13.379170 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74c7564e-2945-4712-9087-f275ca61881b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "74c7564e-2945-4712-9087-f275ca61881b" (UID: "74c7564e-2945-4712-9087-f275ca61881b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:56:13 crc kubenswrapper[4958]: I1206 05:56:13.379163 4958 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/74c7564e-2945-4712-9087-f275ca61881b-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 06 05:56:13 crc kubenswrapper[4958]: I1206 05:56:13.379231 4958 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b8c45202-b5a4-47bd-9754-bc5ae17b1208-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 06 05:56:13 crc kubenswrapper[4958]: I1206 05:56:13.379192 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74c7564e-2945-4712-9087-f275ca61881b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "74c7564e-2945-4712-9087-f275ca61881b" (UID: "74c7564e-2945-4712-9087-f275ca61881b"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:56:13 crc kubenswrapper[4958]: I1206 05:56:13.379308 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74c7564e-2945-4712-9087-f275ca61881b-kube-api-access-dfd6c" (OuterVolumeSpecName: "kube-api-access-dfd6c") pod "74c7564e-2945-4712-9087-f275ca61881b" (UID: "74c7564e-2945-4712-9087-f275ca61881b"). InnerVolumeSpecName "kube-api-access-dfd6c". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:56:13 crc kubenswrapper[4958]: I1206 05:56:13.380100 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74c7564e-2945-4712-9087-f275ca61881b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "74c7564e-2945-4712-9087-f275ca61881b" (UID: "74c7564e-2945-4712-9087-f275ca61881b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:56:13 crc kubenswrapper[4958]: I1206 05:56:13.382218 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/74c7564e-2945-4712-9087-f275ca61881b-config" (OuterVolumeSpecName: "config") pod "74c7564e-2945-4712-9087-f275ca61881b" (UID: "74c7564e-2945-4712-9087-f275ca61881b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:56:13 crc kubenswrapper[4958]: I1206 05:56:13.397687 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8c45202-b5a4-47bd-9754-bc5ae17b1208-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b8c45202-b5a4-47bd-9754-bc5ae17b1208" (UID: "b8c45202-b5a4-47bd-9754-bc5ae17b1208"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:56:13 crc kubenswrapper[4958]: I1206 05:56:13.411871 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Dec 06 05:56:13 crc kubenswrapper[4958]: I1206 05:56:13.430976 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8c45202-b5a4-47bd-9754-bc5ae17b1208-config-data" (OuterVolumeSpecName: "config-data") pod "b8c45202-b5a4-47bd-9754-bc5ae17b1208" (UID: "b8c45202-b5a4-47bd-9754-bc5ae17b1208"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:56:13 crc kubenswrapper[4958]: I1206 05:56:13.481231 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8c45202-b5a4-47bd-9754-bc5ae17b1208-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 05:56:13 crc kubenswrapper[4958]: I1206 05:56:13.481269 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j9c6k\" (UniqueName: \"kubernetes.io/projected/b8c45202-b5a4-47bd-9754-bc5ae17b1208-kube-api-access-j9c6k\") on node \"crc\" DevicePath \"\"" Dec 06 05:56:13 crc kubenswrapper[4958]: I1206 05:56:13.481282 4958 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/74c7564e-2945-4712-9087-f275ca61881b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 06 05:56:13 crc kubenswrapper[4958]: I1206 05:56:13.481293 4958 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/74c7564e-2945-4712-9087-f275ca61881b-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 06 05:56:13 crc kubenswrapper[4958]: I1206 05:56:13.481304 4958 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/74c7564e-2945-4712-9087-f275ca61881b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 06 05:56:13 crc kubenswrapper[4958]: I1206 05:56:13.481315 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8c45202-b5a4-47bd-9754-bc5ae17b1208-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 05:56:13 crc kubenswrapper[4958]: I1206 05:56:13.481322 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74c7564e-2945-4712-9087-f275ca61881b-config\") on node \"crc\" DevicePath \"\"" Dec 06 05:56:13 crc kubenswrapper[4958]: I1206 05:56:13.481330 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dfd6c\" (UniqueName: \"kubernetes.io/projected/74c7564e-2945-4712-9087-f275ca61881b-kube-api-access-dfd6c\") on node \"crc\" DevicePath \"\"" Dec 06 05:56:13 crc kubenswrapper[4958]: I1206 05:56:13.584770 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 06 05:56:13 crc kubenswrapper[4958]: I1206 05:56:13.584991 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="b8f111e9-7ea4-4c7a-935a-95c9a90fea92" containerName="glance-log" containerID="cri-o://0415d967e450cffefd8ef62695f0fe73c73a3e186913a550f276d172611d8c77" gracePeriod=30 Dec 06 05:56:13 crc kubenswrapper[4958]: I1206 05:56:13.585301 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="b8f111e9-7ea4-4c7a-935a-95c9a90fea92" containerName="glance-httpd" containerID="cri-o://18311ad2f8453ee36fd33fe67edc2805b11fa1621c69a9b3a6a572fa6f26db45" gracePeriod=30 Dec 06 05:56:13 crc kubenswrapper[4958]: I1206 05:56:13.677094 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5cd5fc4d55-7ks57"] Dec 06 05:56:13 crc kubenswrapper[4958]: I1206 05:56:13.694786 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5cd5fc4d55-7ks57"] Dec 06 05:56:13 crc kubenswrapper[4958]: I1206 05:56:13.696015 4958 scope.go:117] "RemoveContainer" 
containerID="a47f761466377e6fe42c23e78bba0822d5a1f0e659b7c891dc1c3b002a6a246b" Dec 06 05:56:13 crc kubenswrapper[4958]: I1206 05:56:13.723817 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-76c7c6dc4b-9jzd4"] Dec 06 05:56:13 crc kubenswrapper[4958]: I1206 05:56:13.731324 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-76c7c6dc4b-9jzd4"] Dec 06 05:56:13 crc kubenswrapper[4958]: I1206 05:56:13.811300 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74c7564e-2945-4712-9087-f275ca61881b" path="/var/lib/kubelet/pods/74c7564e-2945-4712-9087-f275ca61881b/volumes" Dec 06 05:56:13 crc kubenswrapper[4958]: I1206 05:56:13.812145 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8c45202-b5a4-47bd-9754-bc5ae17b1208" path="/var/lib/kubelet/pods/b8c45202-b5a4-47bd-9754-bc5ae17b1208/volumes" Dec 06 05:56:14 crc kubenswrapper[4958]: I1206 05:56:14.023400 4958 generic.go:334] "Generic (PLEG): container finished" podID="b8f111e9-7ea4-4c7a-935a-95c9a90fea92" containerID="0415d967e450cffefd8ef62695f0fe73c73a3e186913a550f276d172611d8c77" exitCode=143 Dec 06 05:56:14 crc kubenswrapper[4958]: I1206 05:56:14.023453 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b8f111e9-7ea4-4c7a-935a-95c9a90fea92","Type":"ContainerDied","Data":"0415d967e450cffefd8ef62695f0fe73c73a3e186913a550f276d172611d8c77"} Dec 06 05:56:14 crc kubenswrapper[4958]: I1206 05:56:14.030312 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Dec 06 05:56:14 crc kubenswrapper[4958]: I1206 05:56:14.104039 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Dec 06 05:56:15 crc kubenswrapper[4958]: I1206 05:56:15.071823 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"6a7d1df9-4e7c-4d3c-b287-d70070167c9a","Type":"ContainerStarted","Data":"3520d3f3fe76c7bcacc678e295ab6a758048db83fc553c4346c490b7dd72f837"} Dec 06 05:56:15 crc kubenswrapper[4958]: I1206 05:56:15.082167 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-59469c77f6-x865t" event={"ID":"6f5bac58-b4db-42a1-a71b-4db14cf42c05","Type":"ContainerStarted","Data":"b636652eb63033081a6b0b9de5dd17b63ea69a1c488e5645b59eae735d116e58"} Dec 06 05:56:15 crc kubenswrapper[4958]: I1206 05:56:15.082291 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-59469c77f6-x865t" Dec 06 05:56:15 crc kubenswrapper[4958]: I1206 05:56:15.084453 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"29f48971-668e-4d31-b0a1-b3c5088c3130","Type":"ContainerStarted","Data":"9b0872b86875c20b0b0778cbc086ad8e59719fec88e482b92c841b6cfe9c3e8e"} Dec 06 05:56:15 crc kubenswrapper[4958]: I1206 05:56:15.084768 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="29f48971-668e-4d31-b0a1-b3c5088c3130" containerName="cinder-api-log" containerID="cri-o://e7aedc66776098014e0d0491e638b756b68e5cc0a44abe91d87a8290974b4c9b" gracePeriod=30 Dec 06 05:56:15 crc kubenswrapper[4958]: I1206 05:56:15.084823 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Dec 06 05:56:15 crc kubenswrapper[4958]: I1206 05:56:15.084816 4958 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openstack/cinder-api-0" podUID="29f48971-668e-4d31-b0a1-b3c5088c3130" containerName="cinder-api" containerID="cri-o://9b0872b86875c20b0b0778cbc086ad8e59719fec88e482b92c841b6cfe9c3e8e" gracePeriod=30 Dec 06 05:56:15 crc kubenswrapper[4958]: I1206 05:56:15.098228 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75958fc765-7vts5" event={"ID":"8171df7d-9d9e-4c16-ab55-8a2401a919d2","Type":"ContainerStarted","Data":"d03e5a96e80ee042d243313eb005b58c4611ecd5706c3f79710a102327986ac6"} Dec 06 05:56:15 crc kubenswrapper[4958]: I1206 05:56:15.098561 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-75958fc765-7vts5" Dec 06 05:56:15 crc kubenswrapper[4958]: I1206 05:56:15.131031 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-59469c77f6-x865t" podStartSLOduration=7.131010074 podStartE2EDuration="7.131010074s" podCreationTimestamp="2025-12-06 05:56:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:56:15.112990089 +0000 UTC m=+1685.646760852" watchObservedRunningTime="2025-12-06 05:56:15.131010074 +0000 UTC m=+1685.664780837" Dec 06 05:56:15 crc kubenswrapper[4958]: I1206 05:56:15.144717 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=9.144696182 podStartE2EDuration="9.144696182s" podCreationTimestamp="2025-12-06 05:56:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:56:15.132116084 +0000 UTC m=+1685.665886847" watchObservedRunningTime="2025-12-06 05:56:15.144696182 +0000 UTC m=+1685.678466945" Dec 06 05:56:15 crc kubenswrapper[4958]: I1206 05:56:15.153752 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"64912e89-a0ce-4858-a22a-3f873669dfd2","Type":"ContainerStarted","Data":"c9f6ded91d18a5963858bc39145d677ec33fb4944d10cf6088f71f6b685bb83b"} Dec 06 05:56:15 crc kubenswrapper[4958]: I1206 05:56:15.159225 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"64912e89-a0ce-4858-a22a-3f873669dfd2","Type":"ContainerStarted","Data":"67be8f75ccdf0af709b61db6219a2f0fd5dd5c1762edc4de5cb9e68caa20b3a5"} Dec 06 05:56:15 crc kubenswrapper[4958]: I1206 05:56:15.174449 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-75958fc765-7vts5" podStartSLOduration=7.174412421 podStartE2EDuration="7.174412421s" podCreationTimestamp="2025-12-06 05:56:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:56:15.168993755 +0000 UTC m=+1685.702764538" watchObservedRunningTime="2025-12-06 05:56:15.174412421 +0000 UTC m=+1685.708183184" Dec 06 05:56:15 crc kubenswrapper[4958]: I1206 05:56:15.281483 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-d7c696d85-2npl6"] Dec 06 05:56:15 crc kubenswrapper[4958]: E1206 05:56:15.281867 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8c45202-b5a4-47bd-9754-bc5ae17b1208" containerName="barbican-api-log" Dec 06 05:56:15 crc kubenswrapper[4958]: I1206 05:56:15.281883 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8c45202-b5a4-47bd-9754-bc5ae17b1208" containerName="barbican-api-log" 
Dec 06 05:56:15 crc kubenswrapper[4958]: E1206 05:56:15.281909 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8c45202-b5a4-47bd-9754-bc5ae17b1208" containerName="barbican-api" Dec 06 05:56:15 crc kubenswrapper[4958]: I1206 05:56:15.281916 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8c45202-b5a4-47bd-9754-bc5ae17b1208" containerName="barbican-api" Dec 06 05:56:15 crc kubenswrapper[4958]: E1206 05:56:15.281929 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74c7564e-2945-4712-9087-f275ca61881b" containerName="init" Dec 06 05:56:15 crc kubenswrapper[4958]: I1206 05:56:15.281935 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="74c7564e-2945-4712-9087-f275ca61881b" containerName="init" Dec 06 05:56:15 crc kubenswrapper[4958]: I1206 05:56:15.282109 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8c45202-b5a4-47bd-9754-bc5ae17b1208" containerName="barbican-api" Dec 06 05:56:15 crc kubenswrapper[4958]: I1206 05:56:15.282128 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8c45202-b5a4-47bd-9754-bc5ae17b1208" containerName="barbican-api-log" Dec 06 05:56:15 crc kubenswrapper[4958]: I1206 05:56:15.282142 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="74c7564e-2945-4712-9087-f275ca61881b" containerName="init" Dec 06 05:56:15 crc kubenswrapper[4958]: I1206 05:56:15.283097 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-d7c696d85-2npl6" Dec 06 05:56:15 crc kubenswrapper[4958]: I1206 05:56:15.287523 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Dec 06 05:56:15 crc kubenswrapper[4958]: I1206 05:56:15.287707 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Dec 06 05:56:15 crc kubenswrapper[4958]: I1206 05:56:15.293268 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-d7c696d85-2npl6"] Dec 06 05:56:15 crc kubenswrapper[4958]: I1206 05:56:15.437703 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/395f8723-6487-48ac-b83f-d073c550bb99-ovndb-tls-certs\") pod \"neutron-d7c696d85-2npl6\" (UID: \"395f8723-6487-48ac-b83f-d073c550bb99\") " pod="openstack/neutron-d7c696d85-2npl6" Dec 06 05:56:15 crc kubenswrapper[4958]: I1206 05:56:15.437801 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/395f8723-6487-48ac-b83f-d073c550bb99-internal-tls-certs\") pod \"neutron-d7c696d85-2npl6\" (UID: \"395f8723-6487-48ac-b83f-d073c550bb99\") " pod="openstack/neutron-d7c696d85-2npl6" Dec 06 05:56:15 crc kubenswrapper[4958]: I1206 05:56:15.437845 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8r6tq\" (UniqueName: \"kubernetes.io/projected/395f8723-6487-48ac-b83f-d073c550bb99-kube-api-access-8r6tq\") pod \"neutron-d7c696d85-2npl6\" (UID: \"395f8723-6487-48ac-b83f-d073c550bb99\") " pod="openstack/neutron-d7c696d85-2npl6" Dec 06 05:56:15 crc kubenswrapper[4958]: I1206 05:56:15.437888 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/395f8723-6487-48ac-b83f-d073c550bb99-config\") pod \"neutron-d7c696d85-2npl6\" (UID: 
\"395f8723-6487-48ac-b83f-d073c550bb99\") " pod="openstack/neutron-d7c696d85-2npl6" Dec 06 05:56:15 crc kubenswrapper[4958]: I1206 05:56:15.437951 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/395f8723-6487-48ac-b83f-d073c550bb99-combined-ca-bundle\") pod \"neutron-d7c696d85-2npl6\" (UID: \"395f8723-6487-48ac-b83f-d073c550bb99\") " pod="openstack/neutron-d7c696d85-2npl6" Dec 06 05:56:15 crc kubenswrapper[4958]: I1206 05:56:15.438009 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/395f8723-6487-48ac-b83f-d073c550bb99-httpd-config\") pod \"neutron-d7c696d85-2npl6\" (UID: \"395f8723-6487-48ac-b83f-d073c550bb99\") " pod="openstack/neutron-d7c696d85-2npl6" Dec 06 05:56:15 crc kubenswrapper[4958]: I1206 05:56:15.438074 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/395f8723-6487-48ac-b83f-d073c550bb99-public-tls-certs\") pod \"neutron-d7c696d85-2npl6\" (UID: \"395f8723-6487-48ac-b83f-d073c550bb99\") " pod="openstack/neutron-d7c696d85-2npl6" Dec 06 05:56:15 crc kubenswrapper[4958]: I1206 05:56:15.541072 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/395f8723-6487-48ac-b83f-d073c550bb99-httpd-config\") pod \"neutron-d7c696d85-2npl6\" (UID: \"395f8723-6487-48ac-b83f-d073c550bb99\") " pod="openstack/neutron-d7c696d85-2npl6" Dec 06 05:56:15 crc kubenswrapper[4958]: I1206 05:56:15.541451 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/395f8723-6487-48ac-b83f-d073c550bb99-public-tls-certs\") pod \"neutron-d7c696d85-2npl6\" (UID: \"395f8723-6487-48ac-b83f-d073c550bb99\") " pod="openstack/neutron-d7c696d85-2npl6" Dec 06 05:56:15 crc kubenswrapper[4958]: I1206 05:56:15.541498 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/395f8723-6487-48ac-b83f-d073c550bb99-ovndb-tls-certs\") pod \"neutron-d7c696d85-2npl6\" (UID: \"395f8723-6487-48ac-b83f-d073c550bb99\") " pod="openstack/neutron-d7c696d85-2npl6" Dec 06 05:56:15 crc kubenswrapper[4958]: I1206 05:56:15.541599 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/395f8723-6487-48ac-b83f-d073c550bb99-internal-tls-certs\") pod \"neutron-d7c696d85-2npl6\" (UID: \"395f8723-6487-48ac-b83f-d073c550bb99\") " pod="openstack/neutron-d7c696d85-2npl6" Dec 06 05:56:15 crc kubenswrapper[4958]: I1206 05:56:15.541644 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8r6tq\" (UniqueName: \"kubernetes.io/projected/395f8723-6487-48ac-b83f-d073c550bb99-kube-api-access-8r6tq\") pod \"neutron-d7c696d85-2npl6\" (UID: \"395f8723-6487-48ac-b83f-d073c550bb99\") " pod="openstack/neutron-d7c696d85-2npl6" Dec 06 05:56:15 crc kubenswrapper[4958]: I1206 05:56:15.541712 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/395f8723-6487-48ac-b83f-d073c550bb99-config\") pod \"neutron-d7c696d85-2npl6\" (UID: \"395f8723-6487-48ac-b83f-d073c550bb99\") " pod="openstack/neutron-d7c696d85-2npl6" Dec 06 
05:56:15 crc kubenswrapper[4958]: I1206 05:56:15.541823 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/395f8723-6487-48ac-b83f-d073c550bb99-combined-ca-bundle\") pod \"neutron-d7c696d85-2npl6\" (UID: \"395f8723-6487-48ac-b83f-d073c550bb99\") " pod="openstack/neutron-d7c696d85-2npl6" Dec 06 05:56:15 crc kubenswrapper[4958]: I1206 05:56:15.546303 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/395f8723-6487-48ac-b83f-d073c550bb99-combined-ca-bundle\") pod \"neutron-d7c696d85-2npl6\" (UID: \"395f8723-6487-48ac-b83f-d073c550bb99\") " pod="openstack/neutron-d7c696d85-2npl6" Dec 06 05:56:15 crc kubenswrapper[4958]: I1206 05:56:15.546335 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/395f8723-6487-48ac-b83f-d073c550bb99-internal-tls-certs\") pod \"neutron-d7c696d85-2npl6\" (UID: \"395f8723-6487-48ac-b83f-d073c550bb99\") " pod="openstack/neutron-d7c696d85-2npl6" Dec 06 05:56:15 crc kubenswrapper[4958]: I1206 05:56:15.546896 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/395f8723-6487-48ac-b83f-d073c550bb99-httpd-config\") pod \"neutron-d7c696d85-2npl6\" (UID: \"395f8723-6487-48ac-b83f-d073c550bb99\") " pod="openstack/neutron-d7c696d85-2npl6" Dec 06 05:56:15 crc kubenswrapper[4958]: I1206 05:56:15.548403 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/395f8723-6487-48ac-b83f-d073c550bb99-public-tls-certs\") pod \"neutron-d7c696d85-2npl6\" (UID: \"395f8723-6487-48ac-b83f-d073c550bb99\") " pod="openstack/neutron-d7c696d85-2npl6" Dec 06 05:56:15 crc kubenswrapper[4958]: I1206 05:56:15.549810 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/395f8723-6487-48ac-b83f-d073c550bb99-config\") pod \"neutron-d7c696d85-2npl6\" (UID: \"395f8723-6487-48ac-b83f-d073c550bb99\") " pod="openstack/neutron-d7c696d85-2npl6" Dec 06 05:56:15 crc kubenswrapper[4958]: I1206 05:56:15.550589 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/395f8723-6487-48ac-b83f-d073c550bb99-ovndb-tls-certs\") pod \"neutron-d7c696d85-2npl6\" (UID: \"395f8723-6487-48ac-b83f-d073c550bb99\") " pod="openstack/neutron-d7c696d85-2npl6" Dec 06 05:56:15 crc kubenswrapper[4958]: I1206 05:56:15.562517 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8r6tq\" (UniqueName: \"kubernetes.io/projected/395f8723-6487-48ac-b83f-d073c550bb99-kube-api-access-8r6tq\") pod \"neutron-d7c696d85-2npl6\" (UID: \"395f8723-6487-48ac-b83f-d073c550bb99\") " pod="openstack/neutron-d7c696d85-2npl6" Dec 06 05:56:15 crc kubenswrapper[4958]: I1206 05:56:15.714669 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-d7c696d85-2npl6" Dec 06 05:56:15 crc kubenswrapper[4958]: I1206 05:56:15.968512 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 06 05:56:16 crc kubenswrapper[4958]: I1206 05:56:16.232757 4958 generic.go:334] "Generic (PLEG): container finished" podID="29f48971-668e-4d31-b0a1-b3c5088c3130" containerID="e7aedc66776098014e0d0491e638b756b68e5cc0a44abe91d87a8290974b4c9b" exitCode=143 Dec 06 05:56:16 crc kubenswrapper[4958]: I1206 05:56:16.232819 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"29f48971-668e-4d31-b0a1-b3c5088c3130","Type":"ContainerDied","Data":"e7aedc66776098014e0d0491e638b756b68e5cc0a44abe91d87a8290974b4c9b"} Dec 06 05:56:16 crc kubenswrapper[4958]: I1206 05:56:16.257221 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 06 05:56:16 crc kubenswrapper[4958]: I1206 05:56:16.257862 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="d6c878aa-473c-4e55-bbdc-29fee05ff3a3" containerName="glance-log" containerID="cri-o://3e50a49fc0a13e1f97ab9ad64a5ac0e8effd26e04f5b8b3c08455857413e0efc" gracePeriod=30 Dec 06 05:56:16 crc kubenswrapper[4958]: I1206 05:56:16.257997 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="d6c878aa-473c-4e55-bbdc-29fee05ff3a3" containerName="glance-httpd" containerID="cri-o://40b1dc4197111e8b813842bb4fb3027f60c6121238deac95d6403eb96469739c" gracePeriod=30 Dec 06 05:56:16 crc kubenswrapper[4958]: I1206 05:56:16.259901 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"64912e89-a0ce-4858-a22a-3f873669dfd2","Type":"ContainerStarted","Data":"adf6393aeff21c1d04d6ddc97d9c2aa8523b7960ff52a46d0408fd56d47a322a"} Dec 06 05:56:16 crc kubenswrapper[4958]: I1206 05:56:16.271723 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 06 05:56:16 crc kubenswrapper[4958]: I1206 05:56:16.273923 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"6a7d1df9-4e7c-4d3c-b287-d70070167c9a","Type":"ContainerStarted","Data":"f98d3a508f2b06161c30ee1fe2a5c6da09f4b1bc65f2b8698b415427727829ee"} Dec 06 05:56:16 crc kubenswrapper[4958]: I1206 05:56:16.281624 4958 generic.go:334] "Generic (PLEG): container finished" podID="b8f111e9-7ea4-4c7a-935a-95c9a90fea92" containerID="18311ad2f8453ee36fd33fe67edc2805b11fa1621c69a9b3a6a572fa6f26db45" exitCode=0 Dec 06 05:56:16 crc kubenswrapper[4958]: I1206 05:56:16.281882 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b8f111e9-7ea4-4c7a-935a-95c9a90fea92","Type":"ContainerDied","Data":"18311ad2f8453ee36fd33fe67edc2805b11fa1621c69a9b3a6a572fa6f26db45"} Dec 06 05:56:16 crc kubenswrapper[4958]: I1206 05:56:16.281944 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b8f111e9-7ea4-4c7a-935a-95c9a90fea92","Type":"ContainerDied","Data":"159554576ed7267482742ca19d0d75d66bd9074bf28bad6d2bea25dcee7724ea"} Dec 06 05:56:16 crc kubenswrapper[4958]: I1206 05:56:16.281964 4958 scope.go:117] "RemoveContainer" containerID="18311ad2f8453ee36fd33fe67edc2805b11fa1621c69a9b3a6a572fa6f26db45" Dec 06 05:56:16 crc kubenswrapper[4958]: I1206 05:56:16.282130 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 06 05:56:16 crc kubenswrapper[4958]: I1206 05:56:16.367989 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.911608049 podStartE2EDuration="10.367968662s" podCreationTimestamp="2025-12-06 05:56:06 +0000 UTC" firstStartedPulling="2025-12-06 05:56:08.251437766 +0000 UTC m=+1678.785208529" lastFinishedPulling="2025-12-06 05:56:13.707798369 +0000 UTC m=+1684.241569142" observedRunningTime="2025-12-06 05:56:16.34929541 +0000 UTC m=+1686.883066173" watchObservedRunningTime="2025-12-06 05:56:16.367968662 +0000 UTC m=+1686.901739425" Dec 06 05:56:16 crc kubenswrapper[4958]: I1206 05:56:16.377432 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b8f111e9-7ea4-4c7a-935a-95c9a90fea92-logs\") pod \"b8f111e9-7ea4-4c7a-935a-95c9a90fea92\" (UID: \"b8f111e9-7ea4-4c7a-935a-95c9a90fea92\") " Dec 06 05:56:16 crc kubenswrapper[4958]: I1206 05:56:16.377500 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8f111e9-7ea4-4c7a-935a-95c9a90fea92-scripts\") pod \"b8f111e9-7ea4-4c7a-935a-95c9a90fea92\" (UID: \"b8f111e9-7ea4-4c7a-935a-95c9a90fea92\") " Dec 06 05:56:16 crc kubenswrapper[4958]: I1206 05:56:16.377536 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8f111e9-7ea4-4c7a-935a-95c9a90fea92-config-data\") pod \"b8f111e9-7ea4-4c7a-935a-95c9a90fea92\" (UID: \"b8f111e9-7ea4-4c7a-935a-95c9a90fea92\") " Dec 06 05:56:16 crc kubenswrapper[4958]: I1206 05:56:16.377691 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n5f2v\" (UniqueName: \"kubernetes.io/projected/b8f111e9-7ea4-4c7a-935a-95c9a90fea92-kube-api-access-n5f2v\") pod 
\"b8f111e9-7ea4-4c7a-935a-95c9a90fea92\" (UID: \"b8f111e9-7ea4-4c7a-935a-95c9a90fea92\") " Dec 06 05:56:16 crc kubenswrapper[4958]: I1206 05:56:16.377714 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"b8f111e9-7ea4-4c7a-935a-95c9a90fea92\" (UID: \"b8f111e9-7ea4-4c7a-935a-95c9a90fea92\") " Dec 06 05:56:16 crc kubenswrapper[4958]: I1206 05:56:16.377733 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8f111e9-7ea4-4c7a-935a-95c9a90fea92-combined-ca-bundle\") pod \"b8f111e9-7ea4-4c7a-935a-95c9a90fea92\" (UID: \"b8f111e9-7ea4-4c7a-935a-95c9a90fea92\") " Dec 06 05:56:16 crc kubenswrapper[4958]: I1206 05:56:16.377761 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b8f111e9-7ea4-4c7a-935a-95c9a90fea92-httpd-run\") pod \"b8f111e9-7ea4-4c7a-935a-95c9a90fea92\" (UID: \"b8f111e9-7ea4-4c7a-935a-95c9a90fea92\") " Dec 06 05:56:16 crc kubenswrapper[4958]: I1206 05:56:16.377782 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b8f111e9-7ea4-4c7a-935a-95c9a90fea92-public-tls-certs\") pod \"b8f111e9-7ea4-4c7a-935a-95c9a90fea92\" (UID: \"b8f111e9-7ea4-4c7a-935a-95c9a90fea92\") " Dec 06 05:56:16 crc kubenswrapper[4958]: I1206 05:56:16.379851 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8f111e9-7ea4-4c7a-935a-95c9a90fea92-logs" (OuterVolumeSpecName: "logs") pod "b8f111e9-7ea4-4c7a-935a-95c9a90fea92" (UID: "b8f111e9-7ea4-4c7a-935a-95c9a90fea92"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 05:56:16 crc kubenswrapper[4958]: I1206 05:56:16.382111 4958 scope.go:117] "RemoveContainer" containerID="0415d967e450cffefd8ef62695f0fe73c73a3e186913a550f276d172611d8c77" Dec 06 05:56:16 crc kubenswrapper[4958]: I1206 05:56:16.391417 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b8f111e9-7ea4-4c7a-935a-95c9a90fea92-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "b8f111e9-7ea4-4c7a-935a-95c9a90fea92" (UID: "b8f111e9-7ea4-4c7a-935a-95c9a90fea92"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 05:56:16 crc kubenswrapper[4958]: I1206 05:56:16.398010 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8f111e9-7ea4-4c7a-935a-95c9a90fea92-kube-api-access-n5f2v" (OuterVolumeSpecName: "kube-api-access-n5f2v") pod "b8f111e9-7ea4-4c7a-935a-95c9a90fea92" (UID: "b8f111e9-7ea4-4c7a-935a-95c9a90fea92"). InnerVolumeSpecName "kube-api-access-n5f2v". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:56:16 crc kubenswrapper[4958]: I1206 05:56:16.408073 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "b8f111e9-7ea4-4c7a-935a-95c9a90fea92" (UID: "b8f111e9-7ea4-4c7a-935a-95c9a90fea92"). InnerVolumeSpecName "local-storage10-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 06 05:56:16 crc kubenswrapper[4958]: I1206 05:56:16.408230 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8f111e9-7ea4-4c7a-935a-95c9a90fea92-scripts" (OuterVolumeSpecName: "scripts") pod "b8f111e9-7ea4-4c7a-935a-95c9a90fea92" (UID: "b8f111e9-7ea4-4c7a-935a-95c9a90fea92"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:56:16 crc kubenswrapper[4958]: I1206 05:56:16.467553 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-d7c696d85-2npl6"] Dec 06 05:56:16 crc kubenswrapper[4958]: I1206 05:56:16.480395 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n5f2v\" (UniqueName: \"kubernetes.io/projected/b8f111e9-7ea4-4c7a-935a-95c9a90fea92-kube-api-access-n5f2v\") on node \"crc\" DevicePath \"\"" Dec 06 05:56:16 crc kubenswrapper[4958]: I1206 05:56:16.480426 4958 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Dec 06 05:56:16 crc kubenswrapper[4958]: I1206 05:56:16.480437 4958 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b8f111e9-7ea4-4c7a-935a-95c9a90fea92-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 06 05:56:16 crc kubenswrapper[4958]: I1206 05:56:16.480446 4958 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b8f111e9-7ea4-4c7a-935a-95c9a90fea92-logs\") on node \"crc\" DevicePath \"\"" Dec 06 05:56:16 crc kubenswrapper[4958]: I1206 05:56:16.480454 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b8f111e9-7ea4-4c7a-935a-95c9a90fea92-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 05:56:16 crc kubenswrapper[4958]: I1206 05:56:16.487965 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8f111e9-7ea4-4c7a-935a-95c9a90fea92-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b8f111e9-7ea4-4c7a-935a-95c9a90fea92" (UID: "b8f111e9-7ea4-4c7a-935a-95c9a90fea92"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:56:16 crc kubenswrapper[4958]: I1206 05:56:16.544830 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8f111e9-7ea4-4c7a-935a-95c9a90fea92-config-data" (OuterVolumeSpecName: "config-data") pod "b8f111e9-7ea4-4c7a-935a-95c9a90fea92" (UID: "b8f111e9-7ea4-4c7a-935a-95c9a90fea92"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:56:16 crc kubenswrapper[4958]: I1206 05:56:16.572188 4958 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Dec 06 05:56:16 crc kubenswrapper[4958]: I1206 05:56:16.588220 4958 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Dec 06 05:56:16 crc kubenswrapper[4958]: I1206 05:56:16.588263 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8f111e9-7ea4-4c7a-935a-95c9a90fea92-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 05:56:16 crc kubenswrapper[4958]: I1206 05:56:16.588283 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b8f111e9-7ea4-4c7a-935a-95c9a90fea92-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 05:56:16 crc kubenswrapper[4958]: I1206 05:56:16.607643 4958 scope.go:117] "RemoveContainer" containerID="18311ad2f8453ee36fd33fe67edc2805b11fa1621c69a9b3a6a572fa6f26db45" Dec 06 05:56:16 crc kubenswrapper[4958]: E1206 05:56:16.608192 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18311ad2f8453ee36fd33fe67edc2805b11fa1621c69a9b3a6a572fa6f26db45\": container with ID starting with 18311ad2f8453ee36fd33fe67edc2805b11fa1621c69a9b3a6a572fa6f26db45 not found: ID does not exist" containerID="18311ad2f8453ee36fd33fe67edc2805b11fa1621c69a9b3a6a572fa6f26db45" Dec 06 05:56:16 crc kubenswrapper[4958]: I1206 05:56:16.608277 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18311ad2f8453ee36fd33fe67edc2805b11fa1621c69a9b3a6a572fa6f26db45"} err="failed to get container status \"18311ad2f8453ee36fd33fe67edc2805b11fa1621c69a9b3a6a572fa6f26db45\": rpc error: code = NotFound desc = could not find container \"18311ad2f8453ee36fd33fe67edc2805b11fa1621c69a9b3a6a572fa6f26db45\": container with ID starting with 18311ad2f8453ee36fd33fe67edc2805b11fa1621c69a9b3a6a572fa6f26db45 not found: ID does not exist" Dec 06 05:56:16 crc kubenswrapper[4958]: I1206 05:56:16.608316 4958 scope.go:117] "RemoveContainer" containerID="0415d967e450cffefd8ef62695f0fe73c73a3e186913a550f276d172611d8c77" Dec 06 05:56:16 crc kubenswrapper[4958]: E1206 05:56:16.610963 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0415d967e450cffefd8ef62695f0fe73c73a3e186913a550f276d172611d8c77\": container with ID starting with 0415d967e450cffefd8ef62695f0fe73c73a3e186913a550f276d172611d8c77 not found: ID does not exist" containerID="0415d967e450cffefd8ef62695f0fe73c73a3e186913a550f276d172611d8c77" Dec 06 05:56:16 crc kubenswrapper[4958]: I1206 05:56:16.611003 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0415d967e450cffefd8ef62695f0fe73c73a3e186913a550f276d172611d8c77"} err="failed to get container status \"0415d967e450cffefd8ef62695f0fe73c73a3e186913a550f276d172611d8c77\": rpc error: code = NotFound desc = could not find container \"0415d967e450cffefd8ef62695f0fe73c73a3e186913a550f276d172611d8c77\": container with ID starting with 0415d967e450cffefd8ef62695f0fe73c73a3e186913a550f276d172611d8c77 not found: ID does not exist" Dec 06 05:56:16 crc 
kubenswrapper[4958]: I1206 05:56:16.622831 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8f111e9-7ea4-4c7a-935a-95c9a90fea92-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "b8f111e9-7ea4-4c7a-935a-95c9a90fea92" (UID: "b8f111e9-7ea4-4c7a-935a-95c9a90fea92"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:56:16 crc kubenswrapper[4958]: I1206 05:56:16.690816 4958 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b8f111e9-7ea4-4c7a-935a-95c9a90fea92-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 06 05:56:17 crc kubenswrapper[4958]: I1206 05:56:17.028452 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Dec 06 05:56:17 crc kubenswrapper[4958]: I1206 05:56:17.210879 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 06 05:56:17 crc kubenswrapper[4958]: I1206 05:56:17.227850 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 06 05:56:17 crc kubenswrapper[4958]: I1206 05:56:17.248529 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Dec 06 05:56:17 crc kubenswrapper[4958]: E1206 05:56:17.248962 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8f111e9-7ea4-4c7a-935a-95c9a90fea92" containerName="glance-log" Dec 06 05:56:17 crc kubenswrapper[4958]: I1206 05:56:17.248976 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8f111e9-7ea4-4c7a-935a-95c9a90fea92" containerName="glance-log" Dec 06 05:56:17 crc kubenswrapper[4958]: E1206 05:56:17.249023 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b8f111e9-7ea4-4c7a-935a-95c9a90fea92" containerName="glance-httpd" Dec 06 05:56:17 crc kubenswrapper[4958]: I1206 05:56:17.249030 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8f111e9-7ea4-4c7a-935a-95c9a90fea92" containerName="glance-httpd" Dec 06 05:56:17 crc kubenswrapper[4958]: I1206 05:56:17.249193 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8f111e9-7ea4-4c7a-935a-95c9a90fea92" containerName="glance-log" Dec 06 05:56:17 crc kubenswrapper[4958]: I1206 05:56:17.249208 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="b8f111e9-7ea4-4c7a-935a-95c9a90fea92" containerName="glance-httpd" Dec 06 05:56:17 crc kubenswrapper[4958]: I1206 05:56:17.250241 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 06 05:56:17 crc kubenswrapper[4958]: I1206 05:56:17.255926 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Dec 06 05:56:17 crc kubenswrapper[4958]: I1206 05:56:17.256111 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Dec 06 05:56:17 crc kubenswrapper[4958]: I1206 05:56:17.273304 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 06 05:56:17 crc kubenswrapper[4958]: I1206 05:56:17.305521 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf2cb807-c3e4-475e-a8fe-4ad4134e383e-config-data\") pod \"glance-default-external-api-0\" (UID: \"cf2cb807-c3e4-475e-a8fe-4ad4134e383e\") " pod="openstack/glance-default-external-api-0" Dec 06 05:56:17 crc kubenswrapper[4958]: I1206 05:56:17.305586 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtjsj\" (UniqueName: \"kubernetes.io/projected/cf2cb807-c3e4-475e-a8fe-4ad4134e383e-kube-api-access-qtjsj\") pod \"glance-default-external-api-0\" (UID: \"cf2cb807-c3e4-475e-a8fe-4ad4134e383e\") " pod="openstack/glance-default-external-api-0" Dec 06 05:56:17 crc kubenswrapper[4958]: I1206 05:56:17.305631 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf2cb807-c3e4-475e-a8fe-4ad4134e383e-scripts\") pod \"glance-default-external-api-0\" (UID: \"cf2cb807-c3e4-475e-a8fe-4ad4134e383e\") " pod="openstack/glance-default-external-api-0" Dec 06 05:56:17 crc kubenswrapper[4958]: I1206 05:56:17.305796 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf2cb807-c3e4-475e-a8fe-4ad4134e383e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"cf2cb807-c3e4-475e-a8fe-4ad4134e383e\") " pod="openstack/glance-default-external-api-0" Dec 06 05:56:17 crc kubenswrapper[4958]: I1206 05:56:17.305833 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf2cb807-c3e4-475e-a8fe-4ad4134e383e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"cf2cb807-c3e4-475e-a8fe-4ad4134e383e\") " pod="openstack/glance-default-external-api-0" Dec 06 05:56:17 crc kubenswrapper[4958]: I1206 05:56:17.305890 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"cf2cb807-c3e4-475e-a8fe-4ad4134e383e\") " pod="openstack/glance-default-external-api-0" Dec 06 05:56:17 crc kubenswrapper[4958]: I1206 05:56:17.305911 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cf2cb807-c3e4-475e-a8fe-4ad4134e383e-logs\") pod \"glance-default-external-api-0\" (UID: \"cf2cb807-c3e4-475e-a8fe-4ad4134e383e\") " pod="openstack/glance-default-external-api-0" Dec 06 05:56:17 crc kubenswrapper[4958]: I1206 05:56:17.306080 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cf2cb807-c3e4-475e-a8fe-4ad4134e383e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"cf2cb807-c3e4-475e-a8fe-4ad4134e383e\") " pod="openstack/glance-default-external-api-0" Dec 06 05:56:17 crc kubenswrapper[4958]: I1206 05:56:17.311365 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"64912e89-a0ce-4858-a22a-3f873669dfd2","Type":"ContainerStarted","Data":"1dd6f0600bec840e88cea75c27f8b55c6d343e3539d1b1b3cccf17f5693efb92"} Dec 06 05:56:17 crc kubenswrapper[4958]: I1206 05:56:17.311627 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="64912e89-a0ce-4858-a22a-3f873669dfd2" containerName="ceilometer-central-agent" containerID="cri-o://c9f6ded91d18a5963858bc39145d677ec33fb4944d10cf6088f71f6b685bb83b" gracePeriod=30 Dec 06 05:56:17 crc kubenswrapper[4958]: I1206 05:56:17.311724 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 06 05:56:17 crc kubenswrapper[4958]: I1206 05:56:17.312051 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="64912e89-a0ce-4858-a22a-3f873669dfd2" containerName="proxy-httpd" containerID="cri-o://1dd6f0600bec840e88cea75c27f8b55c6d343e3539d1b1b3cccf17f5693efb92" gracePeriod=30 Dec 06 05:56:17 crc kubenswrapper[4958]: I1206 05:56:17.312095 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="64912e89-a0ce-4858-a22a-3f873669dfd2" containerName="sg-core" containerID="cri-o://adf6393aeff21c1d04d6ddc97d9c2aa8523b7960ff52a46d0408fd56d47a322a" gracePeriod=30 Dec 06 05:56:17 crc kubenswrapper[4958]: I1206 05:56:17.312150 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="64912e89-a0ce-4858-a22a-3f873669dfd2" containerName="ceilometer-notification-agent" containerID="cri-o://67be8f75ccdf0af709b61db6219a2f0fd5dd5c1762edc4de5cb9e68caa20b3a5" gracePeriod=30 Dec 06 05:56:17 crc kubenswrapper[4958]: I1206 05:56:17.335714 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d7c696d85-2npl6" event={"ID":"395f8723-6487-48ac-b83f-d073c550bb99","Type":"ContainerStarted","Data":"3c9c39753e738cbed25255f48fb4958f8dd4cb39edb08de53592fc6fc312d22d"} Dec 06 05:56:17 crc kubenswrapper[4958]: I1206 05:56:17.335761 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d7c696d85-2npl6" event={"ID":"395f8723-6487-48ac-b83f-d073c550bb99","Type":"ContainerStarted","Data":"4bf93cc28cf4961233120210718768b623da6f1d95134867814b59dd267cbcc5"} Dec 06 05:56:17 crc kubenswrapper[4958]: I1206 05:56:17.352713 4958 generic.go:334] "Generic (PLEG): container finished" podID="d6c878aa-473c-4e55-bbdc-29fee05ff3a3" containerID="3e50a49fc0a13e1f97ab9ad64a5ac0e8effd26e04f5b8b3c08455857413e0efc" exitCode=143 Dec 06 05:56:17 crc kubenswrapper[4958]: I1206 05:56:17.352783 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d6c878aa-473c-4e55-bbdc-29fee05ff3a3","Type":"ContainerDied","Data":"3e50a49fc0a13e1f97ab9ad64a5ac0e8effd26e04f5b8b3c08455857413e0efc"} Dec 06 05:56:17 crc kubenswrapper[4958]: I1206 05:56:17.407956 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cf2cb807-c3e4-475e-a8fe-4ad4134e383e-httpd-run\") 
pod \"glance-default-external-api-0\" (UID: \"cf2cb807-c3e4-475e-a8fe-4ad4134e383e\") " pod="openstack/glance-default-external-api-0" Dec 06 05:56:17 crc kubenswrapper[4958]: I1206 05:56:17.408066 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf2cb807-c3e4-475e-a8fe-4ad4134e383e-config-data\") pod \"glance-default-external-api-0\" (UID: \"cf2cb807-c3e4-475e-a8fe-4ad4134e383e\") " pod="openstack/glance-default-external-api-0" Dec 06 05:56:17 crc kubenswrapper[4958]: I1206 05:56:17.408093 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtjsj\" (UniqueName: \"kubernetes.io/projected/cf2cb807-c3e4-475e-a8fe-4ad4134e383e-kube-api-access-qtjsj\") pod \"glance-default-external-api-0\" (UID: \"cf2cb807-c3e4-475e-a8fe-4ad4134e383e\") " pod="openstack/glance-default-external-api-0" Dec 06 05:56:17 crc kubenswrapper[4958]: I1206 05:56:17.408122 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf2cb807-c3e4-475e-a8fe-4ad4134e383e-scripts\") pod \"glance-default-external-api-0\" (UID: \"cf2cb807-c3e4-475e-a8fe-4ad4134e383e\") " pod="openstack/glance-default-external-api-0" Dec 06 05:56:17 crc kubenswrapper[4958]: I1206 05:56:17.408158 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf2cb807-c3e4-475e-a8fe-4ad4134e383e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"cf2cb807-c3e4-475e-a8fe-4ad4134e383e\") " pod="openstack/glance-default-external-api-0" Dec 06 05:56:17 crc kubenswrapper[4958]: I1206 05:56:17.408171 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf2cb807-c3e4-475e-a8fe-4ad4134e383e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"cf2cb807-c3e4-475e-a8fe-4ad4134e383e\") " pod="openstack/glance-default-external-api-0" Dec 06 05:56:17 crc kubenswrapper[4958]: I1206 05:56:17.408196 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cf2cb807-c3e4-475e-a8fe-4ad4134e383e-logs\") pod \"glance-default-external-api-0\" (UID: \"cf2cb807-c3e4-475e-a8fe-4ad4134e383e\") " pod="openstack/glance-default-external-api-0" Dec 06 05:56:17 crc kubenswrapper[4958]: I1206 05:56:17.408213 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"cf2cb807-c3e4-475e-a8fe-4ad4134e383e\") " pod="openstack/glance-default-external-api-0" Dec 06 05:56:17 crc kubenswrapper[4958]: I1206 05:56:17.408636 4958 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"cf2cb807-c3e4-475e-a8fe-4ad4134e383e\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-external-api-0" Dec 06 05:56:17 crc kubenswrapper[4958]: I1206 05:56:17.415186 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/cf2cb807-c3e4-475e-a8fe-4ad4134e383e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"cf2cb807-c3e4-475e-a8fe-4ad4134e383e\") " 
pod="openstack/glance-default-external-api-0" Dec 06 05:56:17 crc kubenswrapper[4958]: I1206 05:56:17.423184 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cf2cb807-c3e4-475e-a8fe-4ad4134e383e-logs\") pod \"glance-default-external-api-0\" (UID: \"cf2cb807-c3e4-475e-a8fe-4ad4134e383e\") " pod="openstack/glance-default-external-api-0" Dec 06 05:56:17 crc kubenswrapper[4958]: I1206 05:56:17.431896 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf2cb807-c3e4-475e-a8fe-4ad4134e383e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"cf2cb807-c3e4-475e-a8fe-4ad4134e383e\") " pod="openstack/glance-default-external-api-0" Dec 06 05:56:17 crc kubenswrapper[4958]: I1206 05:56:17.432366 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf2cb807-c3e4-475e-a8fe-4ad4134e383e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"cf2cb807-c3e4-475e-a8fe-4ad4134e383e\") " pod="openstack/glance-default-external-api-0" Dec 06 05:56:17 crc kubenswrapper[4958]: I1206 05:56:17.439059 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf2cb807-c3e4-475e-a8fe-4ad4134e383e-scripts\") pod \"glance-default-external-api-0\" (UID: \"cf2cb807-c3e4-475e-a8fe-4ad4134e383e\") " pod="openstack/glance-default-external-api-0" Dec 06 05:56:17 crc kubenswrapper[4958]: I1206 05:56:17.441502 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf2cb807-c3e4-475e-a8fe-4ad4134e383e-config-data\") pod \"glance-default-external-api-0\" (UID: \"cf2cb807-c3e4-475e-a8fe-4ad4134e383e\") " pod="openstack/glance-default-external-api-0" Dec 06 05:56:17 crc kubenswrapper[4958]: I1206 05:56:17.464418 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtjsj\" (UniqueName: \"kubernetes.io/projected/cf2cb807-c3e4-475e-a8fe-4ad4134e383e-kube-api-access-qtjsj\") pod \"glance-default-external-api-0\" (UID: \"cf2cb807-c3e4-475e-a8fe-4ad4134e383e\") " pod="openstack/glance-default-external-api-0" Dec 06 05:56:17 crc kubenswrapper[4958]: I1206 05:56:17.465834 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"cf2cb807-c3e4-475e-a8fe-4ad4134e383e\") " pod="openstack/glance-default-external-api-0" Dec 06 05:56:17 crc kubenswrapper[4958]: I1206 05:56:17.615964 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Dec 06 05:56:17 crc kubenswrapper[4958]: I1206 05:56:17.797031 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8f111e9-7ea4-4c7a-935a-95c9a90fea92" path="/var/lib/kubelet/pods/b8f111e9-7ea4-4c7a-935a-95c9a90fea92/volumes" Dec 06 05:56:17 crc kubenswrapper[4958]: E1206 05:56:17.893164 4958 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod64912e89_a0ce_4858_a22a_3f873669dfd2.slice/crio-conmon-1dd6f0600bec840e88cea75c27f8b55c6d343e3539d1b1b3cccf17f5693efb92.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod64912e89_a0ce_4858_a22a_3f873669dfd2.slice/crio-1dd6f0600bec840e88cea75c27f8b55c6d343e3539d1b1b3cccf17f5693efb92.scope\": RecentStats: unable to find data in memory cache]" Dec 06 05:56:18 crc kubenswrapper[4958]: I1206 05:56:18.372840 4958 generic.go:334] "Generic (PLEG): container finished" podID="64912e89-a0ce-4858-a22a-3f873669dfd2" containerID="1dd6f0600bec840e88cea75c27f8b55c6d343e3539d1b1b3cccf17f5693efb92" exitCode=0 Dec 06 05:56:18 crc kubenswrapper[4958]: I1206 05:56:18.372878 4958 generic.go:334] "Generic (PLEG): container finished" podID="64912e89-a0ce-4858-a22a-3f873669dfd2" containerID="adf6393aeff21c1d04d6ddc97d9c2aa8523b7960ff52a46d0408fd56d47a322a" exitCode=2 Dec 06 05:56:18 crc kubenswrapper[4958]: I1206 05:56:18.372886 4958 generic.go:334] "Generic (PLEG): container finished" podID="64912e89-a0ce-4858-a22a-3f873669dfd2" containerID="67be8f75ccdf0af709b61db6219a2f0fd5dd5c1762edc4de5cb9e68caa20b3a5" exitCode=0 Dec 06 05:56:18 crc kubenswrapper[4958]: I1206 05:56:18.372894 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"64912e89-a0ce-4858-a22a-3f873669dfd2","Type":"ContainerDied","Data":"1dd6f0600bec840e88cea75c27f8b55c6d343e3539d1b1b3cccf17f5693efb92"} Dec 06 05:56:18 crc kubenswrapper[4958]: I1206 05:56:18.372946 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"64912e89-a0ce-4858-a22a-3f873669dfd2","Type":"ContainerDied","Data":"adf6393aeff21c1d04d6ddc97d9c2aa8523b7960ff52a46d0408fd56d47a322a"} Dec 06 05:56:18 crc kubenswrapper[4958]: I1206 05:56:18.372960 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"64912e89-a0ce-4858-a22a-3f873669dfd2","Type":"ContainerDied","Data":"67be8f75ccdf0af709b61db6219a2f0fd5dd5c1762edc4de5cb9e68caa20b3a5"} Dec 06 05:56:18 crc kubenswrapper[4958]: I1206 05:56:18.376045 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d7c696d85-2npl6" event={"ID":"395f8723-6487-48ac-b83f-d073c550bb99","Type":"ContainerStarted","Data":"f9071fd9ae7326ce41a2abddd3860905ad9e0ae7e9465ffbc62d61c704dbbd3d"} Dec 06 05:56:18 crc kubenswrapper[4958]: I1206 05:56:18.376212 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-d7c696d85-2npl6" Dec 06 05:56:18 crc kubenswrapper[4958]: I1206 05:56:18.380522 4958 generic.go:334] "Generic (PLEG): container finished" podID="d6c878aa-473c-4e55-bbdc-29fee05ff3a3" containerID="40b1dc4197111e8b813842bb4fb3027f60c6121238deac95d6403eb96469739c" exitCode=0 Dec 06 05:56:18 crc kubenswrapper[4958]: I1206 05:56:18.380603 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"d6c878aa-473c-4e55-bbdc-29fee05ff3a3","Type":"ContainerDied","Data":"40b1dc4197111e8b813842bb4fb3027f60c6121238deac95d6403eb96469739c"} Dec 06 05:56:18 crc kubenswrapper[4958]: I1206 05:56:18.411657 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-d7c696d85-2npl6" podStartSLOduration=3.411632506 podStartE2EDuration="3.411632506s" podCreationTimestamp="2025-12-06 05:56:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:56:18.404444381 +0000 UTC m=+1688.938215144" watchObservedRunningTime="2025-12-06 05:56:18.411632506 +0000 UTC m=+1688.945403269" Dec 06 05:56:18 crc kubenswrapper[4958]: I1206 05:56:18.417733 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.822641813 podStartE2EDuration="10.417716712s" podCreationTimestamp="2025-12-06 05:56:08 +0000 UTC" firstStartedPulling="2025-12-06 05:56:10.085617561 +0000 UTC m=+1680.619388324" lastFinishedPulling="2025-12-06 05:56:16.68069246 +0000 UTC m=+1687.214463223" observedRunningTime="2025-12-06 05:56:17.334738875 +0000 UTC m=+1687.868509638" watchObservedRunningTime="2025-12-06 05:56:18.417716712 +0000 UTC m=+1688.951487465" Dec 06 05:56:19 crc kubenswrapper[4958]: I1206 05:56:19.272359 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-75958fc765-7vts5" Dec 06 05:56:19 crc kubenswrapper[4958]: I1206 05:56:19.348797 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-549c96b4c7-gnwjx"] Dec 06 05:56:19 crc kubenswrapper[4958]: I1206 05:56:19.349070 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-549c96b4c7-gnwjx" podUID="8066f868-45e9-4e89-a9ea-e1e269f19696" containerName="dnsmasq-dns" containerID="cri-o://c875f65c6d69d8a7bff1222c5a1842c71bdc83ff81052f08b01b31fbf735550f" gracePeriod=10 Dec 06 05:56:19 crc kubenswrapper[4958]: I1206 05:56:19.519812 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-549c96b4c7-gnwjx" podUID="8066f868-45e9-4e89-a9ea-e1e269f19696" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.181:5353: connect: connection refused" Dec 06 05:56:20 crc kubenswrapper[4958]: I1206 05:56:20.410978 4958 generic.go:334] "Generic (PLEG): container finished" podID="8066f868-45e9-4e89-a9ea-e1e269f19696" containerID="c875f65c6d69d8a7bff1222c5a1842c71bdc83ff81052f08b01b31fbf735550f" exitCode=0 Dec 06 05:56:20 crc kubenswrapper[4958]: I1206 05:56:20.411030 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-549c96b4c7-gnwjx" event={"ID":"8066f868-45e9-4e89-a9ea-e1e269f19696","Type":"ContainerDied","Data":"c875f65c6d69d8a7bff1222c5a1842c71bdc83ff81052f08b01b31fbf735550f"} Dec 06 05:56:22 crc kubenswrapper[4958]: I1206 05:56:22.242999 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Dec 06 05:56:22 crc kubenswrapper[4958]: I1206 05:56:22.282004 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 06 05:56:22 crc kubenswrapper[4958]: I1206 05:56:22.433569 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="6a7d1df9-4e7c-4d3c-b287-d70070167c9a" containerName="cinder-scheduler" 
containerID="cri-o://3520d3f3fe76c7bcacc678e295ab6a758048db83fc553c4346c490b7dd72f837" gracePeriod=30 Dec 06 05:56:22 crc kubenswrapper[4958]: I1206 05:56:22.433801 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="6a7d1df9-4e7c-4d3c-b287-d70070167c9a" containerName="probe" containerID="cri-o://f98d3a508f2b06161c30ee1fe2a5c6da09f4b1bc65f2b8698b415427727829ee" gracePeriod=30 Dec 06 05:56:23 crc kubenswrapper[4958]: I1206 05:56:23.470358 4958 generic.go:334] "Generic (PLEG): container finished" podID="6a7d1df9-4e7c-4d3c-b287-d70070167c9a" containerID="f98d3a508f2b06161c30ee1fe2a5c6da09f4b1bc65f2b8698b415427727829ee" exitCode=0 Dec 06 05:56:23 crc kubenswrapper[4958]: I1206 05:56:23.470391 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"6a7d1df9-4e7c-4d3c-b287-d70070167c9a","Type":"ContainerDied","Data":"f98d3a508f2b06161c30ee1fe2a5c6da09f4b1bc65f2b8698b415427727829ee"} Dec 06 05:56:23 crc kubenswrapper[4958]: I1206 05:56:23.920711 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="d6c878aa-473c-4e55-bbdc-29fee05ff3a3" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.167:9292/healthcheck\": dial tcp 10.217.0.167:9292: connect: connection refused" Dec 06 05:56:23 crc kubenswrapper[4958]: I1206 05:56:23.920800 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="d6c878aa-473c-4e55-bbdc-29fee05ff3a3" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.167:9292/healthcheck\": dial tcp 10.217.0.167:9292: connect: connection refused" Dec 06 05:56:24 crc kubenswrapper[4958]: I1206 05:56:24.488077 4958 generic.go:334] "Generic (PLEG): container finished" podID="6a7d1df9-4e7c-4d3c-b287-d70070167c9a" containerID="3520d3f3fe76c7bcacc678e295ab6a758048db83fc553c4346c490b7dd72f837" exitCode=0 Dec 06 05:56:24 crc kubenswrapper[4958]: I1206 05:56:24.488139 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"6a7d1df9-4e7c-4d3c-b287-d70070167c9a","Type":"ContainerDied","Data":"3520d3f3fe76c7bcacc678e295ab6a758048db83fc553c4346c490b7dd72f837"} Dec 06 05:56:24 crc kubenswrapper[4958]: I1206 05:56:24.492341 4958 generic.go:334] "Generic (PLEG): container finished" podID="64912e89-a0ce-4858-a22a-3f873669dfd2" containerID="c9f6ded91d18a5963858bc39145d677ec33fb4944d10cf6088f71f6b685bb83b" exitCode=0 Dec 06 05:56:24 crc kubenswrapper[4958]: I1206 05:56:24.492378 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"64912e89-a0ce-4858-a22a-3f873669dfd2","Type":"ContainerDied","Data":"c9f6ded91d18a5963858bc39145d677ec33fb4944d10cf6088f71f6b685bb83b"} Dec 06 05:56:24 crc kubenswrapper[4958]: I1206 05:56:24.519744 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-549c96b4c7-gnwjx" podUID="8066f868-45e9-4e89-a9ea-e1e269f19696" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.181:5353: connect: connection refused" Dec 06 05:56:25 crc kubenswrapper[4958]: I1206 05:56:25.187954 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Dec 06 05:56:26 crc kubenswrapper[4958]: I1206 05:56:26.697545 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 06 05:56:26 crc kubenswrapper[4958]: I1206 05:56:26.704137 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-57br7\" (UniqueName: \"kubernetes.io/projected/6a7d1df9-4e7c-4d3c-b287-d70070167c9a-kube-api-access-57br7\") pod \"6a7d1df9-4e7c-4d3c-b287-d70070167c9a\" (UID: \"6a7d1df9-4e7c-4d3c-b287-d70070167c9a\") " Dec 06 05:56:26 crc kubenswrapper[4958]: I1206 05:56:26.704271 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a7d1df9-4e7c-4d3c-b287-d70070167c9a-config-data\") pod \"6a7d1df9-4e7c-4d3c-b287-d70070167c9a\" (UID: \"6a7d1df9-4e7c-4d3c-b287-d70070167c9a\") " Dec 06 05:56:26 crc kubenswrapper[4958]: I1206 05:56:26.704339 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6a7d1df9-4e7c-4d3c-b287-d70070167c9a-etc-machine-id\") pod \"6a7d1df9-4e7c-4d3c-b287-d70070167c9a\" (UID: \"6a7d1df9-4e7c-4d3c-b287-d70070167c9a\") " Dec 06 05:56:26 crc kubenswrapper[4958]: I1206 05:56:26.704369 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a7d1df9-4e7c-4d3c-b287-d70070167c9a-scripts\") pod \"6a7d1df9-4e7c-4d3c-b287-d70070167c9a\" (UID: \"6a7d1df9-4e7c-4d3c-b287-d70070167c9a\") " Dec 06 05:56:26 crc kubenswrapper[4958]: I1206 05:56:26.704409 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6a7d1df9-4e7c-4d3c-b287-d70070167c9a-config-data-custom\") pod \"6a7d1df9-4e7c-4d3c-b287-d70070167c9a\" (UID: \"6a7d1df9-4e7c-4d3c-b287-d70070167c9a\") " Dec 06 05:56:26 crc kubenswrapper[4958]: I1206 05:56:26.704528 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a7d1df9-4e7c-4d3c-b287-d70070167c9a-combined-ca-bundle\") pod \"6a7d1df9-4e7c-4d3c-b287-d70070167c9a\" (UID: \"6a7d1df9-4e7c-4d3c-b287-d70070167c9a\") " Dec 06 05:56:26 crc kubenswrapper[4958]: I1206 05:56:26.704589 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a7d1df9-4e7c-4d3c-b287-d70070167c9a-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "6a7d1df9-4e7c-4d3c-b287-d70070167c9a" (UID: "6a7d1df9-4e7c-4d3c-b287-d70070167c9a"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 05:56:26 crc kubenswrapper[4958]: I1206 05:56:26.706221 4958 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6a7d1df9-4e7c-4d3c-b287-d70070167c9a-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 06 05:56:26 crc kubenswrapper[4958]: I1206 05:56:26.719702 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a7d1df9-4e7c-4d3c-b287-d70070167c9a-scripts" (OuterVolumeSpecName: "scripts") pod "6a7d1df9-4e7c-4d3c-b287-d70070167c9a" (UID: "6a7d1df9-4e7c-4d3c-b287-d70070167c9a"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:56:26 crc kubenswrapper[4958]: I1206 05:56:26.720975 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a7d1df9-4e7c-4d3c-b287-d70070167c9a-kube-api-access-57br7" (OuterVolumeSpecName: "kube-api-access-57br7") pod "6a7d1df9-4e7c-4d3c-b287-d70070167c9a" (UID: "6a7d1df9-4e7c-4d3c-b287-d70070167c9a"). InnerVolumeSpecName "kube-api-access-57br7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:56:26 crc kubenswrapper[4958]: I1206 05:56:26.733622 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a7d1df9-4e7c-4d3c-b287-d70070167c9a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "6a7d1df9-4e7c-4d3c-b287-d70070167c9a" (UID: "6a7d1df9-4e7c-4d3c-b287-d70070167c9a"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:56:26 crc kubenswrapper[4958]: I1206 05:56:26.811063 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a7d1df9-4e7c-4d3c-b287-d70070167c9a-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 05:56:26 crc kubenswrapper[4958]: I1206 05:56:26.811101 4958 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6a7d1df9-4e7c-4d3c-b287-d70070167c9a-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 06 05:56:26 crc kubenswrapper[4958]: I1206 05:56:26.811116 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-57br7\" (UniqueName: \"kubernetes.io/projected/6a7d1df9-4e7c-4d3c-b287-d70070167c9a-kube-api-access-57br7\") on node \"crc\" DevicePath \"\"" Dec 06 05:56:26 crc kubenswrapper[4958]: I1206 05:56:26.872604 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a7d1df9-4e7c-4d3c-b287-d70070167c9a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6a7d1df9-4e7c-4d3c-b287-d70070167c9a" (UID: "6a7d1df9-4e7c-4d3c-b287-d70070167c9a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:56:26 crc kubenswrapper[4958]: I1206 05:56:26.876326 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a7d1df9-4e7c-4d3c-b287-d70070167c9a-config-data" (OuterVolumeSpecName: "config-data") pod "6a7d1df9-4e7c-4d3c-b287-d70070167c9a" (UID: "6a7d1df9-4e7c-4d3c-b287-d70070167c9a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:56:26 crc kubenswrapper[4958]: I1206 05:56:26.887383 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 06 05:56:26 crc kubenswrapper[4958]: I1206 05:56:26.895395 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-549c96b4c7-gnwjx" Dec 06 05:56:26 crc kubenswrapper[4958]: I1206 05:56:26.907801 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 06 05:56:26 crc kubenswrapper[4958]: I1206 05:56:26.921531 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-69hnl\" (UniqueName: \"kubernetes.io/projected/64912e89-a0ce-4858-a22a-3f873669dfd2-kube-api-access-69hnl\") pod \"64912e89-a0ce-4858-a22a-3f873669dfd2\" (UID: \"64912e89-a0ce-4858-a22a-3f873669dfd2\") " Dec 06 05:56:26 crc kubenswrapper[4958]: I1206 05:56:26.921576 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6c878aa-473c-4e55-bbdc-29fee05ff3a3-combined-ca-bundle\") pod \"d6c878aa-473c-4e55-bbdc-29fee05ff3a3\" (UID: \"d6c878aa-473c-4e55-bbdc-29fee05ff3a3\") " Dec 06 05:56:26 crc kubenswrapper[4958]: I1206 05:56:26.921641 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6c878aa-473c-4e55-bbdc-29fee05ff3a3-logs\") pod \"d6c878aa-473c-4e55-bbdc-29fee05ff3a3\" (UID: \"d6c878aa-473c-4e55-bbdc-29fee05ff3a3\") " Dec 06 05:56:26 crc kubenswrapper[4958]: I1206 05:56:26.921663 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64912e89-a0ce-4858-a22a-3f873669dfd2-config-data\") pod \"64912e89-a0ce-4858-a22a-3f873669dfd2\" (UID: \"64912e89-a0ce-4858-a22a-3f873669dfd2\") " Dec 06 05:56:26 crc kubenswrapper[4958]: I1206 05:56:26.921723 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d6c878aa-473c-4e55-bbdc-29fee05ff3a3-scripts\") pod \"d6c878aa-473c-4e55-bbdc-29fee05ff3a3\" (UID: \"d6c878aa-473c-4e55-bbdc-29fee05ff3a3\") " Dec 06 05:56:26 crc kubenswrapper[4958]: I1206 05:56:26.921742 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/64912e89-a0ce-4858-a22a-3f873669dfd2-sg-core-conf-yaml\") pod \"64912e89-a0ce-4858-a22a-3f873669dfd2\" (UID: \"64912e89-a0ce-4858-a22a-3f873669dfd2\") " Dec 06 05:56:26 crc kubenswrapper[4958]: I1206 05:56:26.921766 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"d6c878aa-473c-4e55-bbdc-29fee05ff3a3\" (UID: \"d6c878aa-473c-4e55-bbdc-29fee05ff3a3\") " Dec 06 05:56:26 crc kubenswrapper[4958]: I1206 05:56:26.921801 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d6c878aa-473c-4e55-bbdc-29fee05ff3a3-httpd-run\") pod \"d6c878aa-473c-4e55-bbdc-29fee05ff3a3\" (UID: \"d6c878aa-473c-4e55-bbdc-29fee05ff3a3\") " Dec 06 05:56:26 crc kubenswrapper[4958]: I1206 05:56:26.921818 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qwhm2\" (UniqueName: \"kubernetes.io/projected/8066f868-45e9-4e89-a9ea-e1e269f19696-kube-api-access-qwhm2\") pod \"8066f868-45e9-4e89-a9ea-e1e269f19696\" (UID: \"8066f868-45e9-4e89-a9ea-e1e269f19696\") " Dec 06 05:56:26 crc kubenswrapper[4958]: I1206 05:56:26.921840 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6c878aa-473c-4e55-bbdc-29fee05ff3a3-config-data\") pod \"d6c878aa-473c-4e55-bbdc-29fee05ff3a3\" (UID: \"d6c878aa-473c-4e55-bbdc-29fee05ff3a3\") " Dec 06 05:56:26 crc 
Dec 06 05:56:26 crc kubenswrapper[4958]: I1206 05:56:26.921856 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8066f868-45e9-4e89-a9ea-e1e269f19696-ovsdbserver-nb\") pod \"8066f868-45e9-4e89-a9ea-e1e269f19696\" (UID: \"8066f868-45e9-4e89-a9ea-e1e269f19696\") "
Dec 06 05:56:26 crc kubenswrapper[4958]: I1206 05:56:26.921894 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64912e89-a0ce-4858-a22a-3f873669dfd2-combined-ca-bundle\") pod \"64912e89-a0ce-4858-a22a-3f873669dfd2\" (UID: \"64912e89-a0ce-4858-a22a-3f873669dfd2\") "
Dec 06 05:56:26 crc kubenswrapper[4958]: I1206 05:56:26.921914 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8066f868-45e9-4e89-a9ea-e1e269f19696-ovsdbserver-sb\") pod \"8066f868-45e9-4e89-a9ea-e1e269f19696\" (UID: \"8066f868-45e9-4e89-a9ea-e1e269f19696\") "
Dec 06 05:56:26 crc kubenswrapper[4958]: I1206 05:56:26.921944 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6c878aa-473c-4e55-bbdc-29fee05ff3a3-internal-tls-certs\") pod \"d6c878aa-473c-4e55-bbdc-29fee05ff3a3\" (UID: \"d6c878aa-473c-4e55-bbdc-29fee05ff3a3\") "
Dec 06 05:56:26 crc kubenswrapper[4958]: I1206 05:56:26.921973 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8066f868-45e9-4e89-a9ea-e1e269f19696-config\") pod \"8066f868-45e9-4e89-a9ea-e1e269f19696\" (UID: \"8066f868-45e9-4e89-a9ea-e1e269f19696\") "
Dec 06 05:56:26 crc kubenswrapper[4958]: I1206 05:56:26.921995 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/64912e89-a0ce-4858-a22a-3f873669dfd2-scripts\") pod \"64912e89-a0ce-4858-a22a-3f873669dfd2\" (UID: \"64912e89-a0ce-4858-a22a-3f873669dfd2\") "
Dec 06 05:56:26 crc kubenswrapper[4958]: I1206 05:56:26.922021 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kxhb4\" (UniqueName: \"kubernetes.io/projected/d6c878aa-473c-4e55-bbdc-29fee05ff3a3-kube-api-access-kxhb4\") pod \"d6c878aa-473c-4e55-bbdc-29fee05ff3a3\" (UID: \"d6c878aa-473c-4e55-bbdc-29fee05ff3a3\") "
Dec 06 05:56:26 crc kubenswrapper[4958]: I1206 05:56:26.922039 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/64912e89-a0ce-4858-a22a-3f873669dfd2-run-httpd\") pod \"64912e89-a0ce-4858-a22a-3f873669dfd2\" (UID: \"64912e89-a0ce-4858-a22a-3f873669dfd2\") "
Dec 06 05:56:26 crc kubenswrapper[4958]: I1206 05:56:26.922081 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8066f868-45e9-4e89-a9ea-e1e269f19696-dns-svc\") pod \"8066f868-45e9-4e89-a9ea-e1e269f19696\" (UID: \"8066f868-45e9-4e89-a9ea-e1e269f19696\") "
Dec 06 05:56:26 crc kubenswrapper[4958]: I1206 05:56:26.922170 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/64912e89-a0ce-4858-a22a-3f873669dfd2-log-httpd\") pod \"64912e89-a0ce-4858-a22a-3f873669dfd2\" (UID: \"64912e89-a0ce-4858-a22a-3f873669dfd2\") "
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8066f868-45e9-4e89-a9ea-e1e269f19696-dns-swift-storage-0\") pod \"8066f868-45e9-4e89-a9ea-e1e269f19696\" (UID: \"8066f868-45e9-4e89-a9ea-e1e269f19696\") " Dec 06 05:56:26 crc kubenswrapper[4958]: I1206 05:56:26.924139 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a7d1df9-4e7c-4d3c-b287-d70070167c9a-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 05:56:26 crc kubenswrapper[4958]: I1206 05:56:26.924245 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a7d1df9-4e7c-4d3c-b287-d70070167c9a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 05:56:26 crc kubenswrapper[4958]: I1206 05:56:26.928431 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d6c878aa-473c-4e55-bbdc-29fee05ff3a3-logs" (OuterVolumeSpecName: "logs") pod "d6c878aa-473c-4e55-bbdc-29fee05ff3a3" (UID: "d6c878aa-473c-4e55-bbdc-29fee05ff3a3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 05:56:26 crc kubenswrapper[4958]: I1206 05:56:26.941838 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64912e89-a0ce-4858-a22a-3f873669dfd2-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "64912e89-a0ce-4858-a22a-3f873669dfd2" (UID: "64912e89-a0ce-4858-a22a-3f873669dfd2"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 05:56:26 crc kubenswrapper[4958]: I1206 05:56:26.946665 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/64912e89-a0ce-4858-a22a-3f873669dfd2-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "64912e89-a0ce-4858-a22a-3f873669dfd2" (UID: "64912e89-a0ce-4858-a22a-3f873669dfd2"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 05:56:26 crc kubenswrapper[4958]: I1206 05:56:26.955638 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d6c878aa-473c-4e55-bbdc-29fee05ff3a3-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "d6c878aa-473c-4e55-bbdc-29fee05ff3a3" (UID: "d6c878aa-473c-4e55-bbdc-29fee05ff3a3"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 05:56:26 crc kubenswrapper[4958]: I1206 05:56:26.963012 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6c878aa-473c-4e55-bbdc-29fee05ff3a3-scripts" (OuterVolumeSpecName: "scripts") pod "d6c878aa-473c-4e55-bbdc-29fee05ff3a3" (UID: "d6c878aa-473c-4e55-bbdc-29fee05ff3a3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:56:26 crc kubenswrapper[4958]: I1206 05:56:26.964753 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64912e89-a0ce-4858-a22a-3f873669dfd2-scripts" (OuterVolumeSpecName: "scripts") pod "64912e89-a0ce-4858-a22a-3f873669dfd2" (UID: "64912e89-a0ce-4858-a22a-3f873669dfd2"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:56:26 crc kubenswrapper[4958]: I1206 05:56:26.966431 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6c878aa-473c-4e55-bbdc-29fee05ff3a3-kube-api-access-kxhb4" (OuterVolumeSpecName: "kube-api-access-kxhb4") pod "d6c878aa-473c-4e55-bbdc-29fee05ff3a3" (UID: "d6c878aa-473c-4e55-bbdc-29fee05ff3a3"). InnerVolumeSpecName "kube-api-access-kxhb4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:56:26 crc kubenswrapper[4958]: I1206 05:56:26.972733 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64912e89-a0ce-4858-a22a-3f873669dfd2-kube-api-access-69hnl" (OuterVolumeSpecName: "kube-api-access-69hnl") pod "64912e89-a0ce-4858-a22a-3f873669dfd2" (UID: "64912e89-a0ce-4858-a22a-3f873669dfd2"). InnerVolumeSpecName "kube-api-access-69hnl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:56:26 crc kubenswrapper[4958]: I1206 05:56:26.981726 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance") pod "d6c878aa-473c-4e55-bbdc-29fee05ff3a3" (UID: "d6c878aa-473c-4e55-bbdc-29fee05ff3a3"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 06 05:56:27 crc kubenswrapper[4958]: I1206 05:56:27.046523 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8066f868-45e9-4e89-a9ea-e1e269f19696-kube-api-access-qwhm2" (OuterVolumeSpecName: "kube-api-access-qwhm2") pod "8066f868-45e9-4e89-a9ea-e1e269f19696" (UID: "8066f868-45e9-4e89-a9ea-e1e269f19696"). InnerVolumeSpecName "kube-api-access-qwhm2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:56:27 crc kubenswrapper[4958]: I1206 05:56:27.051025 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/64912e89-a0ce-4858-a22a-3f873669dfd2-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 05:56:27 crc kubenswrapper[4958]: I1206 05:56:27.051074 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kxhb4\" (UniqueName: \"kubernetes.io/projected/d6c878aa-473c-4e55-bbdc-29fee05ff3a3-kube-api-access-kxhb4\") on node \"crc\" DevicePath \"\"" Dec 06 05:56:27 crc kubenswrapper[4958]: I1206 05:56:27.051089 4958 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/64912e89-a0ce-4858-a22a-3f873669dfd2-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 06 05:56:27 crc kubenswrapper[4958]: I1206 05:56:27.051103 4958 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/64912e89-a0ce-4858-a22a-3f873669dfd2-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 06 05:56:27 crc kubenswrapper[4958]: I1206 05:56:27.051123 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-69hnl\" (UniqueName: \"kubernetes.io/projected/64912e89-a0ce-4858-a22a-3f873669dfd2-kube-api-access-69hnl\") on node \"crc\" DevicePath \"\"" Dec 06 05:56:27 crc kubenswrapper[4958]: I1206 05:56:27.051136 4958 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6c878aa-473c-4e55-bbdc-29fee05ff3a3-logs\") on node \"crc\" DevicePath \"\"" Dec 06 05:56:27 crc kubenswrapper[4958]: I1206 05:56:27.051148 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d6c878aa-473c-4e55-bbdc-29fee05ff3a3-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 05:56:27 crc kubenswrapper[4958]: I1206 05:56:27.051187 4958 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Dec 06 05:56:27 crc kubenswrapper[4958]: I1206 05:56:27.051200 4958 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d6c878aa-473c-4e55-bbdc-29fee05ff3a3-httpd-run\") on node \"crc\" DevicePath \"\"" Dec 06 05:56:27 crc kubenswrapper[4958]: I1206 05:56:27.051213 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qwhm2\" (UniqueName: \"kubernetes.io/projected/8066f868-45e9-4e89-a9ea-e1e269f19696-kube-api-access-qwhm2\") on node \"crc\" DevicePath \"\"" Dec 06 05:56:27 crc kubenswrapper[4958]: I1206 05:56:27.138905 4958 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Dec 06 05:56:27 crc kubenswrapper[4958]: I1206 05:56:27.146191 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Dec 06 05:56:27 crc kubenswrapper[4958]: I1206 05:56:27.146624 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64912e89-a0ce-4858-a22a-3f873669dfd2-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "64912e89-a0ce-4858-a22a-3f873669dfd2" (UID: "64912e89-a0ce-4858-a22a-3f873669dfd2"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:56:27 crc kubenswrapper[4958]: W1206 05:56:27.146727 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcf2cb807_c3e4_475e_a8fe_4ad4134e383e.slice/crio-581f0bb67137b792c3c4e45ac5c359b12a71d4c7f88b477705657cd08d782322 WatchSource:0}: Error finding container 581f0bb67137b792c3c4e45ac5c359b12a71d4c7f88b477705657cd08d782322: Status 404 returned error can't find the container with id 581f0bb67137b792c3c4e45ac5c359b12a71d4c7f88b477705657cd08d782322 Dec 06 05:56:27 crc kubenswrapper[4958]: I1206 05:56:27.152866 4958 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/64912e89-a0ce-4858-a22a-3f873669dfd2-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 06 05:56:27 crc kubenswrapper[4958]: I1206 05:56:27.152888 4958 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Dec 06 05:56:27 crc kubenswrapper[4958]: I1206 05:56:27.162336 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8066f868-45e9-4e89-a9ea-e1e269f19696-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8066f868-45e9-4e89-a9ea-e1e269f19696" (UID: "8066f868-45e9-4e89-a9ea-e1e269f19696"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:56:27 crc kubenswrapper[4958]: I1206 05:56:27.169051 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6c878aa-473c-4e55-bbdc-29fee05ff3a3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d6c878aa-473c-4e55-bbdc-29fee05ff3a3" (UID: "d6c878aa-473c-4e55-bbdc-29fee05ff3a3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:56:27 crc kubenswrapper[4958]: I1206 05:56:27.182795 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8066f868-45e9-4e89-a9ea-e1e269f19696-config" (OuterVolumeSpecName: "config") pod "8066f868-45e9-4e89-a9ea-e1e269f19696" (UID: "8066f868-45e9-4e89-a9ea-e1e269f19696"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:56:27 crc kubenswrapper[4958]: I1206 05:56:27.188158 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8066f868-45e9-4e89-a9ea-e1e269f19696-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "8066f868-45e9-4e89-a9ea-e1e269f19696" (UID: "8066f868-45e9-4e89-a9ea-e1e269f19696"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:56:27 crc kubenswrapper[4958]: I1206 05:56:27.189828 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8066f868-45e9-4e89-a9ea-e1e269f19696-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8066f868-45e9-4e89-a9ea-e1e269f19696" (UID: "8066f868-45e9-4e89-a9ea-e1e269f19696"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:56:27 crc kubenswrapper[4958]: I1206 05:56:27.198276 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6c878aa-473c-4e55-bbdc-29fee05ff3a3-config-data" (OuterVolumeSpecName: "config-data") pod "d6c878aa-473c-4e55-bbdc-29fee05ff3a3" (UID: "d6c878aa-473c-4e55-bbdc-29fee05ff3a3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:56:27 crc kubenswrapper[4958]: I1206 05:56:27.225209 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8066f868-45e9-4e89-a9ea-e1e269f19696-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8066f868-45e9-4e89-a9ea-e1e269f19696" (UID: "8066f868-45e9-4e89-a9ea-e1e269f19696"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:56:27 crc kubenswrapper[4958]: I1206 05:56:27.226028 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64912e89-a0ce-4858-a22a-3f873669dfd2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "64912e89-a0ce-4858-a22a-3f873669dfd2" (UID: "64912e89-a0ce-4858-a22a-3f873669dfd2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:56:27 crc kubenswrapper[4958]: I1206 05:56:27.227563 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6c878aa-473c-4e55-bbdc-29fee05ff3a3-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "d6c878aa-473c-4e55-bbdc-29fee05ff3a3" (UID: "d6c878aa-473c-4e55-bbdc-29fee05ff3a3"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:56:27 crc kubenswrapper[4958]: I1206 05:56:27.254409 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6c878aa-473c-4e55-bbdc-29fee05ff3a3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 05:56:27 crc kubenswrapper[4958]: I1206 05:56:27.254450 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6c878aa-473c-4e55-bbdc-29fee05ff3a3-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 05:56:27 crc kubenswrapper[4958]: I1206 05:56:27.254465 4958 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8066f868-45e9-4e89-a9ea-e1e269f19696-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 06 05:56:27 crc kubenswrapper[4958]: I1206 05:56:27.254491 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64912e89-a0ce-4858-a22a-3f873669dfd2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 05:56:27 crc kubenswrapper[4958]: I1206 05:56:27.254502 4958 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8066f868-45e9-4e89-a9ea-e1e269f19696-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 06 05:56:27 crc kubenswrapper[4958]: I1206 05:56:27.254512 4958 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6c878aa-473c-4e55-bbdc-29fee05ff3a3-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 06 05:56:27 crc kubenswrapper[4958]: I1206 05:56:27.254526 4958 reconciler_common.go:293] "Volume detached for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/8066f868-45e9-4e89-a9ea-e1e269f19696-config\") on node \"crc\" DevicePath \"\"" Dec 06 05:56:27 crc kubenswrapper[4958]: I1206 05:56:27.254536 4958 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8066f868-45e9-4e89-a9ea-e1e269f19696-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 06 05:56:27 crc kubenswrapper[4958]: I1206 05:56:27.254547 4958 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8066f868-45e9-4e89-a9ea-e1e269f19696-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 06 05:56:27 crc kubenswrapper[4958]: I1206 05:56:27.285634 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64912e89-a0ce-4858-a22a-3f873669dfd2-config-data" (OuterVolumeSpecName: "config-data") pod "64912e89-a0ce-4858-a22a-3f873669dfd2" (UID: "64912e89-a0ce-4858-a22a-3f873669dfd2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:56:27 crc kubenswrapper[4958]: I1206 05:56:27.356682 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/64912e89-a0ce-4858-a22a-3f873669dfd2-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 05:56:27 crc kubenswrapper[4958]: I1206 05:56:27.528334 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"cf2cb807-c3e4-475e-a8fe-4ad4134e383e","Type":"ContainerStarted","Data":"581f0bb67137b792c3c4e45ac5c359b12a71d4c7f88b477705657cd08d782322"} Dec 06 05:56:27 crc kubenswrapper[4958]: I1206 05:56:27.533870 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"64912e89-a0ce-4858-a22a-3f873669dfd2","Type":"ContainerDied","Data":"56b2a841f574c73e5612e30058171f1c1fad37e927a3a18ebb5f5f4a0cbbd4cb"} Dec 06 05:56:27 crc kubenswrapper[4958]: I1206 05:56:27.533932 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 06 05:56:27 crc kubenswrapper[4958]: I1206 05:56:27.533979 4958 scope.go:117] "RemoveContainer" containerID="1dd6f0600bec840e88cea75c27f8b55c6d343e3539d1b1b3cccf17f5693efb92" Dec 06 05:56:27 crc kubenswrapper[4958]: I1206 05:56:27.540814 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"6a7d1df9-4e7c-4d3c-b287-d70070167c9a","Type":"ContainerDied","Data":"5bacc62e1882e0ed61c9f105af7af97931a9456267066c5afbed2c0282549a66"} Dec 06 05:56:27 crc kubenswrapper[4958]: I1206 05:56:27.540914 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 06 05:56:27 crc kubenswrapper[4958]: I1206 05:56:27.545615 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-549c96b4c7-gnwjx" event={"ID":"8066f868-45e9-4e89-a9ea-e1e269f19696","Type":"ContainerDied","Data":"8114dd39a3da3922363b48d88365b53841cb02137fa1efb152f999c67f0e45b4"} Dec 06 05:56:27 crc kubenswrapper[4958]: I1206 05:56:27.545723 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-549c96b4c7-gnwjx" Dec 06 05:56:27 crc kubenswrapper[4958]: I1206 05:56:27.552879 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d6c878aa-473c-4e55-bbdc-29fee05ff3a3","Type":"ContainerDied","Data":"b1edae5e9378f0ac18317b54923c67715fa6e979502af17034cc57a41e24db75"} Dec 06 05:56:27 crc kubenswrapper[4958]: I1206 05:56:27.552955 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 06 05:56:27 crc kubenswrapper[4958]: I1206 05:56:27.651227 4958 scope.go:117] "RemoveContainer" containerID="adf6393aeff21c1d04d6ddc97d9c2aa8523b7960ff52a46d0408fd56d47a322a" Dec 06 05:56:27 crc kubenswrapper[4958]: I1206 05:56:27.685690 4958 scope.go:117] "RemoveContainer" containerID="67be8f75ccdf0af709b61db6219a2f0fd5dd5c1762edc4de5cb9e68caa20b3a5" Dec 06 05:56:27 crc kubenswrapper[4958]: I1206 05:56:27.711315 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 06 05:56:27 crc kubenswrapper[4958]: I1206 05:56:27.741495 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 06 05:56:27 crc kubenswrapper[4958]: I1206 05:56:27.759140 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Dec 06 05:56:27 crc kubenswrapper[4958]: E1206 05:56:27.759600 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8066f868-45e9-4e89-a9ea-e1e269f19696" containerName="init" Dec 06 05:56:27 crc kubenswrapper[4958]: I1206 05:56:27.759613 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="8066f868-45e9-4e89-a9ea-e1e269f19696" containerName="init" Dec 06 05:56:27 crc kubenswrapper[4958]: E1206 05:56:27.759632 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64912e89-a0ce-4858-a22a-3f873669dfd2" containerName="sg-core" Dec 06 05:56:27 crc kubenswrapper[4958]: I1206 05:56:27.759638 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="64912e89-a0ce-4858-a22a-3f873669dfd2" containerName="sg-core" Dec 06 05:56:27 crc kubenswrapper[4958]: E1206 05:56:27.759655 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64912e89-a0ce-4858-a22a-3f873669dfd2" containerName="ceilometer-notification-agent" Dec 06 05:56:27 crc kubenswrapper[4958]: I1206 05:56:27.759661 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="64912e89-a0ce-4858-a22a-3f873669dfd2" containerName="ceilometer-notification-agent" Dec 06 05:56:27 crc kubenswrapper[4958]: E1206 05:56:27.759674 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a7d1df9-4e7c-4d3c-b287-d70070167c9a" containerName="probe" Dec 06 05:56:27 crc kubenswrapper[4958]: I1206 05:56:27.759680 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a7d1df9-4e7c-4d3c-b287-d70070167c9a" containerName="probe" Dec 06 05:56:27 crc kubenswrapper[4958]: E1206 05:56:27.759695 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64912e89-a0ce-4858-a22a-3f873669dfd2" containerName="ceilometer-central-agent" Dec 06 05:56:27 crc kubenswrapper[4958]: I1206 05:56:27.759702 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="64912e89-a0ce-4858-a22a-3f873669dfd2" containerName="ceilometer-central-agent" Dec 06 05:56:27 crc kubenswrapper[4958]: E1206 05:56:27.759717 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a7d1df9-4e7c-4d3c-b287-d70070167c9a" 
containerName="cinder-scheduler" Dec 06 05:56:27 crc kubenswrapper[4958]: I1206 05:56:27.759723 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a7d1df9-4e7c-4d3c-b287-d70070167c9a" containerName="cinder-scheduler" Dec 06 05:56:27 crc kubenswrapper[4958]: E1206 05:56:27.759738 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6c878aa-473c-4e55-bbdc-29fee05ff3a3" containerName="glance-httpd" Dec 06 05:56:27 crc kubenswrapper[4958]: I1206 05:56:27.759745 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6c878aa-473c-4e55-bbdc-29fee05ff3a3" containerName="glance-httpd" Dec 06 05:56:27 crc kubenswrapper[4958]: E1206 05:56:27.759760 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6c878aa-473c-4e55-bbdc-29fee05ff3a3" containerName="glance-log" Dec 06 05:56:27 crc kubenswrapper[4958]: I1206 05:56:27.759769 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6c878aa-473c-4e55-bbdc-29fee05ff3a3" containerName="glance-log" Dec 06 05:56:27 crc kubenswrapper[4958]: E1206 05:56:27.759779 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8066f868-45e9-4e89-a9ea-e1e269f19696" containerName="dnsmasq-dns" Dec 06 05:56:27 crc kubenswrapper[4958]: I1206 05:56:27.759789 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="8066f868-45e9-4e89-a9ea-e1e269f19696" containerName="dnsmasq-dns" Dec 06 05:56:27 crc kubenswrapper[4958]: E1206 05:56:27.759799 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64912e89-a0ce-4858-a22a-3f873669dfd2" containerName="proxy-httpd" Dec 06 05:56:27 crc kubenswrapper[4958]: I1206 05:56:27.759804 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="64912e89-a0ce-4858-a22a-3f873669dfd2" containerName="proxy-httpd" Dec 06 05:56:27 crc kubenswrapper[4958]: I1206 05:56:27.759996 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="64912e89-a0ce-4858-a22a-3f873669dfd2" containerName="sg-core" Dec 06 05:56:27 crc kubenswrapper[4958]: I1206 05:56:27.760010 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6c878aa-473c-4e55-bbdc-29fee05ff3a3" containerName="glance-log" Dec 06 05:56:27 crc kubenswrapper[4958]: I1206 05:56:27.760024 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="64912e89-a0ce-4858-a22a-3f873669dfd2" containerName="ceilometer-notification-agent" Dec 06 05:56:27 crc kubenswrapper[4958]: I1206 05:56:27.760036 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="64912e89-a0ce-4858-a22a-3f873669dfd2" containerName="proxy-httpd" Dec 06 05:56:27 crc kubenswrapper[4958]: I1206 05:56:27.760042 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6c878aa-473c-4e55-bbdc-29fee05ff3a3" containerName="glance-httpd" Dec 06 05:56:27 crc kubenswrapper[4958]: I1206 05:56:27.760056 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="8066f868-45e9-4e89-a9ea-e1e269f19696" containerName="dnsmasq-dns" Dec 06 05:56:27 crc kubenswrapper[4958]: I1206 05:56:27.760062 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="64912e89-a0ce-4858-a22a-3f873669dfd2" containerName="ceilometer-central-agent" Dec 06 05:56:27 crc kubenswrapper[4958]: I1206 05:56:27.760086 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a7d1df9-4e7c-4d3c-b287-d70070167c9a" containerName="cinder-scheduler" Dec 06 05:56:27 crc kubenswrapper[4958]: I1206 05:56:27.760101 4958 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="6a7d1df9-4e7c-4d3c-b287-d70070167c9a" containerName="probe" Dec 06 05:56:27 crc kubenswrapper[4958]: I1206 05:56:27.761146 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 06 05:56:27 crc kubenswrapper[4958]: I1206 05:56:27.765959 4958 scope.go:117] "RemoveContainer" containerID="c9f6ded91d18a5963858bc39145d677ec33fb4944d10cf6088f71f6b685bb83b" Dec 06 05:56:27 crc kubenswrapper[4958]: I1206 05:56:27.766414 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Dec 06 05:56:27 crc kubenswrapper[4958]: I1206 05:56:27.812314 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a7d1df9-4e7c-4d3c-b287-d70070167c9a" path="/var/lib/kubelet/pods/6a7d1df9-4e7c-4d3c-b287-d70070167c9a/volumes" Dec 06 05:56:27 crc kubenswrapper[4958]: I1206 05:56:27.813021 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 06 05:56:27 crc kubenswrapper[4958]: I1206 05:56:27.813049 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 06 05:56:27 crc kubenswrapper[4958]: I1206 05:56:27.830321 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 06 05:56:27 crc kubenswrapper[4958]: I1206 05:56:27.843296 4958 scope.go:117] "RemoveContainer" containerID="f98d3a508f2b06161c30ee1fe2a5c6da09f4b1bc65f2b8698b415427727829ee" Dec 06 05:56:27 crc kubenswrapper[4958]: I1206 05:56:27.861936 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 06 05:56:27 crc kubenswrapper[4958]: I1206 05:56:27.870351 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfsd8\" (UniqueName: \"kubernetes.io/projected/73c40f99-3a46-43d5-bab4-475cd389ea2c-kube-api-access-nfsd8\") pod \"cinder-scheduler-0\" (UID: \"73c40f99-3a46-43d5-bab4-475cd389ea2c\") " pod="openstack/cinder-scheduler-0" Dec 06 05:56:27 crc kubenswrapper[4958]: I1206 05:56:27.870416 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/73c40f99-3a46-43d5-bab4-475cd389ea2c-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"73c40f99-3a46-43d5-bab4-475cd389ea2c\") " pod="openstack/cinder-scheduler-0" Dec 06 05:56:27 crc kubenswrapper[4958]: I1206 05:56:27.870952 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 06 05:56:27 crc kubenswrapper[4958]: I1206 05:56:27.871229 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/73c40f99-3a46-43d5-bab4-475cd389ea2c-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"73c40f99-3a46-43d5-bab4-475cd389ea2c\") " pod="openstack/cinder-scheduler-0" Dec 06 05:56:27 crc kubenswrapper[4958]: I1206 05:56:27.871291 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73c40f99-3a46-43d5-bab4-475cd389ea2c-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"73c40f99-3a46-43d5-bab4-475cd389ea2c\") " pod="openstack/cinder-scheduler-0" Dec 06 05:56:27 crc kubenswrapper[4958]: I1206 05:56:27.871402 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/73c40f99-3a46-43d5-bab4-475cd389ea2c-scripts\") pod \"cinder-scheduler-0\" (UID: \"73c40f99-3a46-43d5-bab4-475cd389ea2c\") " pod="openstack/cinder-scheduler-0" Dec 06 05:56:27 crc kubenswrapper[4958]: I1206 05:56:27.871582 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73c40f99-3a46-43d5-bab4-475cd389ea2c-config-data\") pod \"cinder-scheduler-0\" (UID: \"73c40f99-3a46-43d5-bab4-475cd389ea2c\") " pod="openstack/cinder-scheduler-0" Dec 06 05:56:27 crc kubenswrapper[4958]: I1206 05:56:27.872340 4958 scope.go:117] "RemoveContainer" containerID="3520d3f3fe76c7bcacc678e295ab6a758048db83fc553c4346c490b7dd72f837" Dec 06 05:56:27 crc kubenswrapper[4958]: I1206 05:56:27.880897 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 06 05:56:27 crc kubenswrapper[4958]: I1206 05:56:27.884230 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 06 05:56:27 crc kubenswrapper[4958]: I1206 05:56:27.886845 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 06 05:56:27 crc kubenswrapper[4958]: I1206 05:56:27.886847 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 06 05:56:27 crc kubenswrapper[4958]: I1206 05:56:27.889879 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-549c96b4c7-gnwjx"] Dec 06 05:56:27 crc kubenswrapper[4958]: I1206 05:56:27.900901 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 06 05:56:27 crc kubenswrapper[4958]: I1206 05:56:27.908613 4958 scope.go:117] "RemoveContainer" containerID="c875f65c6d69d8a7bff1222c5a1842c71bdc83ff81052f08b01b31fbf735550f" Dec 06 05:56:27 crc kubenswrapper[4958]: I1206 05:56:27.920523 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-549c96b4c7-gnwjx"] Dec 06 05:56:27 crc kubenswrapper[4958]: I1206 05:56:27.928188 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 06 05:56:27 crc kubenswrapper[4958]: I1206 05:56:27.929020 4958 scope.go:117] "RemoveContainer" containerID="4ff3ee395fd1958f055bea3af91132905b3b93c06945682d2406d1b2d6b71bd9" Dec 06 05:56:27 crc kubenswrapper[4958]: I1206 05:56:27.929830 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 06 05:56:27 crc kubenswrapper[4958]: I1206 05:56:27.933928 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Dec 06 05:56:27 crc kubenswrapper[4958]: I1206 05:56:27.934307 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Dec 06 05:56:27 crc kubenswrapper[4958]: I1206 05:56:27.940073 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 06 05:56:27 crc kubenswrapper[4958]: I1206 05:56:27.959704 4958 scope.go:117] "RemoveContainer" containerID="40b1dc4197111e8b813842bb4fb3027f60c6121238deac95d6403eb96469739c" Dec 06 05:56:27 crc kubenswrapper[4958]: I1206 05:56:27.972893 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73c40f99-3a46-43d5-bab4-475cd389ea2c-config-data\") pod \"cinder-scheduler-0\" (UID: \"73c40f99-3a46-43d5-bab4-475cd389ea2c\") " pod="openstack/cinder-scheduler-0" Dec 06 05:56:27 crc kubenswrapper[4958]: I1206 05:56:27.972972 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfsd8\" (UniqueName: \"kubernetes.io/projected/73c40f99-3a46-43d5-bab4-475cd389ea2c-kube-api-access-nfsd8\") pod \"cinder-scheduler-0\" (UID: \"73c40f99-3a46-43d5-bab4-475cd389ea2c\") " pod="openstack/cinder-scheduler-0" Dec 06 05:56:27 crc kubenswrapper[4958]: I1206 05:56:27.973004 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/73c40f99-3a46-43d5-bab4-475cd389ea2c-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"73c40f99-3a46-43d5-bab4-475cd389ea2c\") " pod="openstack/cinder-scheduler-0" Dec 06 05:56:27 crc kubenswrapper[4958]: I1206 05:56:27.973041 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/73c40f99-3a46-43d5-bab4-475cd389ea2c-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"73c40f99-3a46-43d5-bab4-475cd389ea2c\") " pod="openstack/cinder-scheduler-0" Dec 06 05:56:27 crc kubenswrapper[4958]: I1206 05:56:27.973063 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73c40f99-3a46-43d5-bab4-475cd389ea2c-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"73c40f99-3a46-43d5-bab4-475cd389ea2c\") " pod="openstack/cinder-scheduler-0" Dec 06 05:56:27 crc kubenswrapper[4958]: I1206 05:56:27.973104 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73c40f99-3a46-43d5-bab4-475cd389ea2c-scripts\") pod \"cinder-scheduler-0\" (UID: \"73c40f99-3a46-43d5-bab4-475cd389ea2c\") " pod="openstack/cinder-scheduler-0" Dec 06 05:56:27 crc kubenswrapper[4958]: I1206 05:56:27.973313 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/73c40f99-3a46-43d5-bab4-475cd389ea2c-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"73c40f99-3a46-43d5-bab4-475cd389ea2c\") " pod="openstack/cinder-scheduler-0" Dec 06 05:56:27 crc kubenswrapper[4958]: I1206 05:56:27.979835 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/73c40f99-3a46-43d5-bab4-475cd389ea2c-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"73c40f99-3a46-43d5-bab4-475cd389ea2c\") " pod="openstack/cinder-scheduler-0" Dec 06 05:56:27 crc kubenswrapper[4958]: I1206 05:56:27.979881 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/73c40f99-3a46-43d5-bab4-475cd389ea2c-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"73c40f99-3a46-43d5-bab4-475cd389ea2c\") " pod="openstack/cinder-scheduler-0" Dec 06 05:56:27 crc kubenswrapper[4958]: I1206 05:56:27.980164 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/73c40f99-3a46-43d5-bab4-475cd389ea2c-config-data\") pod \"cinder-scheduler-0\" (UID: \"73c40f99-3a46-43d5-bab4-475cd389ea2c\") " pod="openstack/cinder-scheduler-0" Dec 06 05:56:27 crc kubenswrapper[4958]: I1206 05:56:27.981109 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/73c40f99-3a46-43d5-bab4-475cd389ea2c-scripts\") pod \"cinder-scheduler-0\" (UID: \"73c40f99-3a46-43d5-bab4-475cd389ea2c\") " pod="openstack/cinder-scheduler-0" Dec 06 05:56:27 crc kubenswrapper[4958]: I1206 05:56:27.990701 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfsd8\" (UniqueName: \"kubernetes.io/projected/73c40f99-3a46-43d5-bab4-475cd389ea2c-kube-api-access-nfsd8\") pod \"cinder-scheduler-0\" (UID: \"73c40f99-3a46-43d5-bab4-475cd389ea2c\") " pod="openstack/cinder-scheduler-0" Dec 06 05:56:28 crc kubenswrapper[4958]: I1206 05:56:28.075580 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a8913b52-0985-485a-a086-f99bfca2ffb9-run-httpd\") pod \"ceilometer-0\" (UID: \"a8913b52-0985-485a-a086-f99bfca2ffb9\") " pod="openstack/ceilometer-0" Dec 06 05:56:28 crc kubenswrapper[4958]: I1206 05:56:28.075639 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c121c3e-8e75-4122-bb0a-077bb6f305e3-scripts\") pod \"glance-default-internal-api-0\" (UID: \"8c121c3e-8e75-4122-bb0a-077bb6f305e3\") " pod="openstack/glance-default-internal-api-0" Dec 06 05:56:28 crc kubenswrapper[4958]: I1206 05:56:28.075691 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4jtj\" (UniqueName: \"kubernetes.io/projected/a8913b52-0985-485a-a086-f99bfca2ffb9-kube-api-access-g4jtj\") pod \"ceilometer-0\" (UID: \"a8913b52-0985-485a-a086-f99bfca2ffb9\") " pod="openstack/ceilometer-0" Dec 06 05:56:28 crc kubenswrapper[4958]: I1206 05:56:28.075731 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a8913b52-0985-485a-a086-f99bfca2ffb9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a8913b52-0985-485a-a086-f99bfca2ffb9\") " pod="openstack/ceilometer-0" Dec 06 05:56:28 crc kubenswrapper[4958]: I1206 05:56:28.075774 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a8913b52-0985-485a-a086-f99bfca2ffb9-scripts\") pod \"ceilometer-0\" (UID: \"a8913b52-0985-485a-a086-f99bfca2ffb9\") " pod="openstack/ceilometer-0" Dec 06 05:56:28 crc 
kubenswrapper[4958]: I1206 05:56:28.075797 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"8c121c3e-8e75-4122-bb0a-077bb6f305e3\") " pod="openstack/glance-default-internal-api-0" Dec 06 05:56:28 crc kubenswrapper[4958]: I1206 05:56:28.075811 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c121c3e-8e75-4122-bb0a-077bb6f305e3-logs\") pod \"glance-default-internal-api-0\" (UID: \"8c121c3e-8e75-4122-bb0a-077bb6f305e3\") " pod="openstack/glance-default-internal-api-0" Dec 06 05:56:28 crc kubenswrapper[4958]: I1206 05:56:28.075845 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8c121c3e-8e75-4122-bb0a-077bb6f305e3-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"8c121c3e-8e75-4122-bb0a-077bb6f305e3\") " pod="openstack/glance-default-internal-api-0" Dec 06 05:56:28 crc kubenswrapper[4958]: I1206 05:56:28.075868 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8913b52-0985-485a-a086-f99bfca2ffb9-config-data\") pod \"ceilometer-0\" (UID: \"a8913b52-0985-485a-a086-f99bfca2ffb9\") " pod="openstack/ceilometer-0" Dec 06 05:56:28 crc kubenswrapper[4958]: I1206 05:56:28.075891 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c121c3e-8e75-4122-bb0a-077bb6f305e3-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"8c121c3e-8e75-4122-bb0a-077bb6f305e3\") " pod="openstack/glance-default-internal-api-0" Dec 06 05:56:28 crc kubenswrapper[4958]: I1206 05:56:28.075910 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c121c3e-8e75-4122-bb0a-077bb6f305e3-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"8c121c3e-8e75-4122-bb0a-077bb6f305e3\") " pod="openstack/glance-default-internal-api-0" Dec 06 05:56:28 crc kubenswrapper[4958]: I1206 05:56:28.075927 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwnsr\" (UniqueName: \"kubernetes.io/projected/8c121c3e-8e75-4122-bb0a-077bb6f305e3-kube-api-access-fwnsr\") pod \"glance-default-internal-api-0\" (UID: \"8c121c3e-8e75-4122-bb0a-077bb6f305e3\") " pod="openstack/glance-default-internal-api-0" Dec 06 05:56:28 crc kubenswrapper[4958]: I1206 05:56:28.075944 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8913b52-0985-485a-a086-f99bfca2ffb9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a8913b52-0985-485a-a086-f99bfca2ffb9\") " pod="openstack/ceilometer-0" Dec 06 05:56:28 crc kubenswrapper[4958]: I1206 05:56:28.076047 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a8913b52-0985-485a-a086-f99bfca2ffb9-log-httpd\") pod \"ceilometer-0\" (UID: \"a8913b52-0985-485a-a086-f99bfca2ffb9\") " pod="openstack/ceilometer-0" Dec 06 05:56:28 crc 
kubenswrapper[4958]: I1206 05:56:28.076105 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c121c3e-8e75-4122-bb0a-077bb6f305e3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"8c121c3e-8e75-4122-bb0a-077bb6f305e3\") " pod="openstack/glance-default-internal-api-0" Dec 06 05:56:28 crc kubenswrapper[4958]: I1206 05:56:28.092558 4958 scope.go:117] "RemoveContainer" containerID="3e50a49fc0a13e1f97ab9ad64a5ac0e8effd26e04f5b8b3c08455857413e0efc" Dec 06 05:56:28 crc kubenswrapper[4958]: I1206 05:56:28.124899 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 06 05:56:28 crc kubenswrapper[4958]: I1206 05:56:28.177170 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g4jtj\" (UniqueName: \"kubernetes.io/projected/a8913b52-0985-485a-a086-f99bfca2ffb9-kube-api-access-g4jtj\") pod \"ceilometer-0\" (UID: \"a8913b52-0985-485a-a086-f99bfca2ffb9\") " pod="openstack/ceilometer-0" Dec 06 05:56:28 crc kubenswrapper[4958]: I1206 05:56:28.177522 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a8913b52-0985-485a-a086-f99bfca2ffb9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a8913b52-0985-485a-a086-f99bfca2ffb9\") " pod="openstack/ceilometer-0" Dec 06 05:56:28 crc kubenswrapper[4958]: I1206 05:56:28.177551 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a8913b52-0985-485a-a086-f99bfca2ffb9-scripts\") pod \"ceilometer-0\" (UID: \"a8913b52-0985-485a-a086-f99bfca2ffb9\") " pod="openstack/ceilometer-0" Dec 06 05:56:28 crc kubenswrapper[4958]: I1206 05:56:28.177570 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"8c121c3e-8e75-4122-bb0a-077bb6f305e3\") " pod="openstack/glance-default-internal-api-0" Dec 06 05:56:28 crc kubenswrapper[4958]: I1206 05:56:28.177586 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c121c3e-8e75-4122-bb0a-077bb6f305e3-logs\") pod \"glance-default-internal-api-0\" (UID: \"8c121c3e-8e75-4122-bb0a-077bb6f305e3\") " pod="openstack/glance-default-internal-api-0" Dec 06 05:56:28 crc kubenswrapper[4958]: I1206 05:56:28.177618 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8c121c3e-8e75-4122-bb0a-077bb6f305e3-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"8c121c3e-8e75-4122-bb0a-077bb6f305e3\") " pod="openstack/glance-default-internal-api-0" Dec 06 05:56:28 crc kubenswrapper[4958]: I1206 05:56:28.177639 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8913b52-0985-485a-a086-f99bfca2ffb9-config-data\") pod \"ceilometer-0\" (UID: \"a8913b52-0985-485a-a086-f99bfca2ffb9\") " pod="openstack/ceilometer-0" Dec 06 05:56:28 crc kubenswrapper[4958]: I1206 05:56:28.177661 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c121c3e-8e75-4122-bb0a-077bb6f305e3-combined-ca-bundle\") pod 
\"glance-default-internal-api-0\" (UID: \"8c121c3e-8e75-4122-bb0a-077bb6f305e3\") " pod="openstack/glance-default-internal-api-0" Dec 06 05:56:28 crc kubenswrapper[4958]: I1206 05:56:28.177678 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c121c3e-8e75-4122-bb0a-077bb6f305e3-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"8c121c3e-8e75-4122-bb0a-077bb6f305e3\") " pod="openstack/glance-default-internal-api-0" Dec 06 05:56:28 crc kubenswrapper[4958]: I1206 05:56:28.177693 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwnsr\" (UniqueName: \"kubernetes.io/projected/8c121c3e-8e75-4122-bb0a-077bb6f305e3-kube-api-access-fwnsr\") pod \"glance-default-internal-api-0\" (UID: \"8c121c3e-8e75-4122-bb0a-077bb6f305e3\") " pod="openstack/glance-default-internal-api-0" Dec 06 05:56:28 crc kubenswrapper[4958]: I1206 05:56:28.177707 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8913b52-0985-485a-a086-f99bfca2ffb9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a8913b52-0985-485a-a086-f99bfca2ffb9\") " pod="openstack/ceilometer-0" Dec 06 05:56:28 crc kubenswrapper[4958]: I1206 05:56:28.177741 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a8913b52-0985-485a-a086-f99bfca2ffb9-log-httpd\") pod \"ceilometer-0\" (UID: \"a8913b52-0985-485a-a086-f99bfca2ffb9\") " pod="openstack/ceilometer-0" Dec 06 05:56:28 crc kubenswrapper[4958]: I1206 05:56:28.177778 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c121c3e-8e75-4122-bb0a-077bb6f305e3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"8c121c3e-8e75-4122-bb0a-077bb6f305e3\") " pod="openstack/glance-default-internal-api-0" Dec 06 05:56:28 crc kubenswrapper[4958]: I1206 05:56:28.177820 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a8913b52-0985-485a-a086-f99bfca2ffb9-run-httpd\") pod \"ceilometer-0\" (UID: \"a8913b52-0985-485a-a086-f99bfca2ffb9\") " pod="openstack/ceilometer-0" Dec 06 05:56:28 crc kubenswrapper[4958]: I1206 05:56:28.177842 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c121c3e-8e75-4122-bb0a-077bb6f305e3-scripts\") pod \"glance-default-internal-api-0\" (UID: \"8c121c3e-8e75-4122-bb0a-077bb6f305e3\") " pod="openstack/glance-default-internal-api-0" Dec 06 05:56:28 crc kubenswrapper[4958]: I1206 05:56:28.181757 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c121c3e-8e75-4122-bb0a-077bb6f305e3-scripts\") pod \"glance-default-internal-api-0\" (UID: \"8c121c3e-8e75-4122-bb0a-077bb6f305e3\") " pod="openstack/glance-default-internal-api-0" Dec 06 05:56:28 crc kubenswrapper[4958]: I1206 05:56:28.181781 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a8913b52-0985-485a-a086-f99bfca2ffb9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a8913b52-0985-485a-a086-f99bfca2ffb9\") " pod="openstack/ceilometer-0" Dec 06 05:56:28 crc kubenswrapper[4958]: I1206 05:56:28.182101 4958 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a8913b52-0985-485a-a086-f99bfca2ffb9-log-httpd\") pod \"ceilometer-0\" (UID: \"a8913b52-0985-485a-a086-f99bfca2ffb9\") " pod="openstack/ceilometer-0" Dec 06 05:56:28 crc kubenswrapper[4958]: I1206 05:56:28.182100 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a8913b52-0985-485a-a086-f99bfca2ffb9-run-httpd\") pod \"ceilometer-0\" (UID: \"a8913b52-0985-485a-a086-f99bfca2ffb9\") " pod="openstack/ceilometer-0" Dec 06 05:56:28 crc kubenswrapper[4958]: I1206 05:56:28.182497 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c121c3e-8e75-4122-bb0a-077bb6f305e3-logs\") pod \"glance-default-internal-api-0\" (UID: \"8c121c3e-8e75-4122-bb0a-077bb6f305e3\") " pod="openstack/glance-default-internal-api-0" Dec 06 05:56:28 crc kubenswrapper[4958]: I1206 05:56:28.182615 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8c121c3e-8e75-4122-bb0a-077bb6f305e3-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"8c121c3e-8e75-4122-bb0a-077bb6f305e3\") " pod="openstack/glance-default-internal-api-0" Dec 06 05:56:28 crc kubenswrapper[4958]: I1206 05:56:28.185343 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8913b52-0985-485a-a086-f99bfca2ffb9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a8913b52-0985-485a-a086-f99bfca2ffb9\") " pod="openstack/ceilometer-0" Dec 06 05:56:28 crc kubenswrapper[4958]: I1206 05:56:28.185768 4958 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"8c121c3e-8e75-4122-bb0a-077bb6f305e3\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-internal-api-0" Dec 06 05:56:28 crc kubenswrapper[4958]: I1206 05:56:28.185932 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a8913b52-0985-485a-a086-f99bfca2ffb9-scripts\") pod \"ceilometer-0\" (UID: \"a8913b52-0985-485a-a086-f99bfca2ffb9\") " pod="openstack/ceilometer-0" Dec 06 05:56:28 crc kubenswrapper[4958]: I1206 05:56:28.192726 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c121c3e-8e75-4122-bb0a-077bb6f305e3-config-data\") pod \"glance-default-internal-api-0\" (UID: \"8c121c3e-8e75-4122-bb0a-077bb6f305e3\") " pod="openstack/glance-default-internal-api-0" Dec 06 05:56:28 crc kubenswrapper[4958]: I1206 05:56:28.193440 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c121c3e-8e75-4122-bb0a-077bb6f305e3-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"8c121c3e-8e75-4122-bb0a-077bb6f305e3\") " pod="openstack/glance-default-internal-api-0" Dec 06 05:56:28 crc kubenswrapper[4958]: I1206 05:56:28.195987 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8913b52-0985-485a-a086-f99bfca2ffb9-config-data\") pod \"ceilometer-0\" (UID: \"a8913b52-0985-485a-a086-f99bfca2ffb9\") " pod="openstack/ceilometer-0" Dec 06 
05:56:28 crc kubenswrapper[4958]: I1206 05:56:28.196653 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c121c3e-8e75-4122-bb0a-077bb6f305e3-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"8c121c3e-8e75-4122-bb0a-077bb6f305e3\") " pod="openstack/glance-default-internal-api-0" Dec 06 05:56:28 crc kubenswrapper[4958]: I1206 05:56:28.200419 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwnsr\" (UniqueName: \"kubernetes.io/projected/8c121c3e-8e75-4122-bb0a-077bb6f305e3-kube-api-access-fwnsr\") pod \"glance-default-internal-api-0\" (UID: \"8c121c3e-8e75-4122-bb0a-077bb6f305e3\") " pod="openstack/glance-default-internal-api-0" Dec 06 05:56:28 crc kubenswrapper[4958]: I1206 05:56:28.201350 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g4jtj\" (UniqueName: \"kubernetes.io/projected/a8913b52-0985-485a-a086-f99bfca2ffb9-kube-api-access-g4jtj\") pod \"ceilometer-0\" (UID: \"a8913b52-0985-485a-a086-f99bfca2ffb9\") " pod="openstack/ceilometer-0" Dec 06 05:56:28 crc kubenswrapper[4958]: I1206 05:56:28.207992 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 06 05:56:28 crc kubenswrapper[4958]: I1206 05:56:28.252231 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"8c121c3e-8e75-4122-bb0a-077bb6f305e3\") " pod="openstack/glance-default-internal-api-0" Dec 06 05:56:28 crc kubenswrapper[4958]: I1206 05:56:28.384972 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Dec 06 05:56:28 crc kubenswrapper[4958]: I1206 05:56:28.579963 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"cf2cb807-c3e4-475e-a8fe-4ad4134e383e","Type":"ContainerStarted","Data":"248952c6e0e30f51d0ac0dbffc9026ab3feda4fc868ece6736e443612c89361b"} Dec 06 05:56:28 crc kubenswrapper[4958]: I1206 05:56:28.585497 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-tw6sr" event={"ID":"d874f46b-0e0f-4304-8b7d-43a68d87dd5d","Type":"ContainerStarted","Data":"26d5a7814d44a883dcb479f1d33f7413a1cb6ffc6d9050d139f049035eec8ee5"} Dec 06 05:56:28 crc kubenswrapper[4958]: I1206 05:56:28.655663 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 06 05:56:28 crc kubenswrapper[4958]: I1206 05:56:28.807924 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 06 05:56:29 crc kubenswrapper[4958]: I1206 05:56:29.151311 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Dec 06 05:56:29 crc kubenswrapper[4958]: I1206 05:56:29.600359 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"73c40f99-3a46-43d5-bab4-475cd389ea2c","Type":"ContainerStarted","Data":"82d505d691d1aa1e26263db0873d6ae9d4b4be2d8755fd5beabae3249efd2ab7"} Dec 06 05:56:29 crc kubenswrapper[4958]: I1206 05:56:29.602151 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a8913b52-0985-485a-a086-f99bfca2ffb9","Type":"ContainerStarted","Data":"c6f20f569b3c4be33c487f4d58ab8ca8e43bc4c65fc072ca6925349757cd7241"} Dec 06 05:56:29 crc kubenswrapper[4958]: I1206 05:56:29.606906 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8c121c3e-8e75-4122-bb0a-077bb6f305e3","Type":"ContainerStarted","Data":"82110473235b4e4d67cfb43a8a61aa479a0add4b253c7dd00d3918db9780c2a4"} Dec 06 05:56:29 crc kubenswrapper[4958]: I1206 05:56:29.624854 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-tw6sr" podStartSLOduration=4.826243075 podStartE2EDuration="27.624831239s" podCreationTimestamp="2025-12-06 05:56:02 +0000 UTC" firstStartedPulling="2025-12-06 05:56:03.20857273 +0000 UTC m=+1673.742343493" lastFinishedPulling="2025-12-06 05:56:26.007160884 +0000 UTC m=+1696.540931657" observedRunningTime="2025-12-06 05:56:29.621721985 +0000 UTC m=+1700.155492758" watchObservedRunningTime="2025-12-06 05:56:29.624831239 +0000 UTC m=+1700.158602002" Dec 06 05:56:29 crc kubenswrapper[4958]: I1206 05:56:29.824044 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="64912e89-a0ce-4858-a22a-3f873669dfd2" path="/var/lib/kubelet/pods/64912e89-a0ce-4858-a22a-3f873669dfd2/volumes" Dec 06 05:56:29 crc kubenswrapper[4958]: I1206 05:56:29.824900 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8066f868-45e9-4e89-a9ea-e1e269f19696" path="/var/lib/kubelet/pods/8066f868-45e9-4e89-a9ea-e1e269f19696/volumes" Dec 06 05:56:29 crc kubenswrapper[4958]: I1206 05:56:29.826172 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6c878aa-473c-4e55-bbdc-29fee05ff3a3" path="/var/lib/kubelet/pods/d6c878aa-473c-4e55-bbdc-29fee05ff3a3/volumes" Dec 06 05:56:30 crc kubenswrapper[4958]: I1206 05:56:30.621522 4958 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"cf2cb807-c3e4-475e-a8fe-4ad4134e383e","Type":"ContainerStarted","Data":"e31a2dd1d6ef12abc94d0e2976f363c583d276a2e5dec72e1131c2d719f1d9de"} Dec 06 05:56:30 crc kubenswrapper[4958]: I1206 05:56:30.624429 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"73c40f99-3a46-43d5-bab4-475cd389ea2c","Type":"ContainerStarted","Data":"70a3df793d9c194100e9e82ca15561f4448f19238d3542e2b3fa0aafc1db7042"} Dec 06 05:56:30 crc kubenswrapper[4958]: I1206 05:56:30.626879 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8c121c3e-8e75-4122-bb0a-077bb6f305e3","Type":"ContainerStarted","Data":"eba7d05309126b39196d2046cedfd96483bdf202eddc8d5bcf32515e347962c4"} Dec 06 05:56:30 crc kubenswrapper[4958]: I1206 05:56:30.651610 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=13.65159116 podStartE2EDuration="13.65159116s" podCreationTimestamp="2025-12-06 05:56:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:56:30.640226601 +0000 UTC m=+1701.173997364" watchObservedRunningTime="2025-12-06 05:56:30.65159116 +0000 UTC m=+1701.185361923" Dec 06 05:56:31 crc kubenswrapper[4958]: I1206 05:56:31.640184 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"73c40f99-3a46-43d5-bab4-475cd389ea2c","Type":"ContainerStarted","Data":"75de41a0dd7ea4f90e5913adde3da10f352f3765e33531bd575b74f462ce2175"} Dec 06 05:56:31 crc kubenswrapper[4958]: I1206 05:56:31.644453 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a8913b52-0985-485a-a086-f99bfca2ffb9","Type":"ContainerStarted","Data":"59d8c8e17fe5cd2187902fdb0bd7558fc9d8cb0057f3cffd01fc7c96d3e66734"} Dec 06 05:56:31 crc kubenswrapper[4958]: I1206 05:56:31.646944 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"8c121c3e-8e75-4122-bb0a-077bb6f305e3","Type":"ContainerStarted","Data":"21a2a7ae51a561c044a1a2ca3ba0353566a8f5f59d79d3d7720e3ae5209f9c99"} Dec 06 05:56:31 crc kubenswrapper[4958]: I1206 05:56:31.674278 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.67425405 podStartE2EDuration="4.67425405s" podCreationTimestamp="2025-12-06 05:56:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:56:31.661653347 +0000 UTC m=+1702.195424140" watchObservedRunningTime="2025-12-06 05:56:31.67425405 +0000 UTC m=+1702.208024813" Dec 06 05:56:31 crc kubenswrapper[4958]: I1206 05:56:31.692729 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.6927093410000005 podStartE2EDuration="4.692709341s" podCreationTimestamp="2025-12-06 05:56:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:56:31.683532891 +0000 UTC m=+1702.217303654" watchObservedRunningTime="2025-12-06 05:56:31.692709341 +0000 UTC m=+1702.226480104" Dec 06 05:56:32 crc kubenswrapper[4958]: I1206 
05:56:32.341630 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="29f48971-668e-4d31-b0a1-b3c5088c3130" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.194:8776/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 06 05:56:32 crc kubenswrapper[4958]: I1206 05:56:32.660353 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a8913b52-0985-485a-a086-f99bfca2ffb9","Type":"ContainerStarted","Data":"2f96d1da43346a4fceb3ecd0833090c549fce3630a355ff57b3d9833d9027f07"} Dec 06 05:56:32 crc kubenswrapper[4958]: I1206 05:56:32.661742 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a8913b52-0985-485a-a086-f99bfca2ffb9","Type":"ContainerStarted","Data":"6fea77e35e957eae4d86bac025660d351535bd1b64b24bbd7265e6554961846a"} Dec 06 05:56:33 crc kubenswrapper[4958]: I1206 05:56:33.125918 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Dec 06 05:56:34 crc kubenswrapper[4958]: I1206 05:56:34.003885 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Dec 06 05:56:34 crc kubenswrapper[4958]: I1206 05:56:34.004437 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/watcher-decision-engine-0" podUID="dcbe2099-3d41-4f69-be20-47d96498cb25" containerName="watcher-decision-engine" containerID="cri-o://05c6f6d2100232ce1e36fba508c20bbc75ae020e55452b39deecbdc47a8242b3" gracePeriod=30 Dec 06 05:56:34 crc kubenswrapper[4958]: I1206 05:56:34.687588 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a8913b52-0985-485a-a086-f99bfca2ffb9","Type":"ContainerStarted","Data":"c89499519e8dfae60ada3ba15e4afd200af0a2d3afaf05e8cb4cc06fba7c44de"} Dec 06 05:56:34 crc kubenswrapper[4958]: I1206 05:56:34.688155 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 06 05:56:34 crc kubenswrapper[4958]: I1206 05:56:34.715799 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.894496026 podStartE2EDuration="7.715776258s" podCreationTimestamp="2025-12-06 05:56:27 +0000 UTC" firstStartedPulling="2025-12-06 05:56:28.826112665 +0000 UTC m=+1699.359883428" lastFinishedPulling="2025-12-06 05:56:33.647392887 +0000 UTC m=+1704.181163660" observedRunningTime="2025-12-06 05:56:34.707681558 +0000 UTC m=+1705.241452361" watchObservedRunningTime="2025-12-06 05:56:34.715776258 +0000 UTC m=+1705.249547031" Dec 06 05:56:35 crc kubenswrapper[4958]: I1206 05:56:35.210011 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Dec 06 05:56:35 crc kubenswrapper[4958]: I1206 05:56:35.379908 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/dcbe2099-3d41-4f69-be20-47d96498cb25-custom-prometheus-ca\") pod \"dcbe2099-3d41-4f69-be20-47d96498cb25\" (UID: \"dcbe2099-3d41-4f69-be20-47d96498cb25\") " Dec 06 05:56:35 crc kubenswrapper[4958]: I1206 05:56:35.379955 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jz2tw\" (UniqueName: \"kubernetes.io/projected/dcbe2099-3d41-4f69-be20-47d96498cb25-kube-api-access-jz2tw\") pod \"dcbe2099-3d41-4f69-be20-47d96498cb25\" (UID: \"dcbe2099-3d41-4f69-be20-47d96498cb25\") " Dec 06 05:56:35 crc kubenswrapper[4958]: I1206 05:56:35.380133 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcbe2099-3d41-4f69-be20-47d96498cb25-combined-ca-bundle\") pod \"dcbe2099-3d41-4f69-be20-47d96498cb25\" (UID: \"dcbe2099-3d41-4f69-be20-47d96498cb25\") " Dec 06 05:56:35 crc kubenswrapper[4958]: I1206 05:56:35.380184 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dcbe2099-3d41-4f69-be20-47d96498cb25-logs\") pod \"dcbe2099-3d41-4f69-be20-47d96498cb25\" (UID: \"dcbe2099-3d41-4f69-be20-47d96498cb25\") " Dec 06 05:56:35 crc kubenswrapper[4958]: I1206 05:56:35.380201 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcbe2099-3d41-4f69-be20-47d96498cb25-config-data\") pod \"dcbe2099-3d41-4f69-be20-47d96498cb25\" (UID: \"dcbe2099-3d41-4f69-be20-47d96498cb25\") " Dec 06 05:56:35 crc kubenswrapper[4958]: I1206 05:56:35.381003 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dcbe2099-3d41-4f69-be20-47d96498cb25-logs" (OuterVolumeSpecName: "logs") pod "dcbe2099-3d41-4f69-be20-47d96498cb25" (UID: "dcbe2099-3d41-4f69-be20-47d96498cb25"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 05:56:35 crc kubenswrapper[4958]: I1206 05:56:35.388102 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcbe2099-3d41-4f69-be20-47d96498cb25-kube-api-access-jz2tw" (OuterVolumeSpecName: "kube-api-access-jz2tw") pod "dcbe2099-3d41-4f69-be20-47d96498cb25" (UID: "dcbe2099-3d41-4f69-be20-47d96498cb25"). InnerVolumeSpecName "kube-api-access-jz2tw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:56:35 crc kubenswrapper[4958]: I1206 05:56:35.461633 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcbe2099-3d41-4f69-be20-47d96498cb25-custom-prometheus-ca" (OuterVolumeSpecName: "custom-prometheus-ca") pod "dcbe2099-3d41-4f69-be20-47d96498cb25" (UID: "dcbe2099-3d41-4f69-be20-47d96498cb25"). InnerVolumeSpecName "custom-prometheus-ca". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:56:35 crc kubenswrapper[4958]: I1206 05:56:35.482587 4958 reconciler_common.go:293] "Volume detached for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/dcbe2099-3d41-4f69-be20-47d96498cb25-custom-prometheus-ca\") on node \"crc\" DevicePath \"\"" Dec 06 05:56:35 crc kubenswrapper[4958]: I1206 05:56:35.482621 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jz2tw\" (UniqueName: \"kubernetes.io/projected/dcbe2099-3d41-4f69-be20-47d96498cb25-kube-api-access-jz2tw\") on node \"crc\" DevicePath \"\"" Dec 06 05:56:35 crc kubenswrapper[4958]: I1206 05:56:35.482630 4958 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dcbe2099-3d41-4f69-be20-47d96498cb25-logs\") on node \"crc\" DevicePath \"\"" Dec 06 05:56:35 crc kubenswrapper[4958]: I1206 05:56:35.490627 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcbe2099-3d41-4f69-be20-47d96498cb25-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dcbe2099-3d41-4f69-be20-47d96498cb25" (UID: "dcbe2099-3d41-4f69-be20-47d96498cb25"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:56:35 crc kubenswrapper[4958]: I1206 05:56:35.559575 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcbe2099-3d41-4f69-be20-47d96498cb25-config-data" (OuterVolumeSpecName: "config-data") pod "dcbe2099-3d41-4f69-be20-47d96498cb25" (UID: "dcbe2099-3d41-4f69-be20-47d96498cb25"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:56:35 crc kubenswrapper[4958]: I1206 05:56:35.587628 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcbe2099-3d41-4f69-be20-47d96498cb25-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 05:56:35 crc kubenswrapper[4958]: I1206 05:56:35.587660 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcbe2099-3d41-4f69-be20-47d96498cb25-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 05:56:35 crc kubenswrapper[4958]: I1206 05:56:35.699613 4958 generic.go:334] "Generic (PLEG): container finished" podID="dcbe2099-3d41-4f69-be20-47d96498cb25" containerID="05c6f6d2100232ce1e36fba508c20bbc75ae020e55452b39deecbdc47a8242b3" exitCode=0 Dec 06 05:56:35 crc kubenswrapper[4958]: I1206 05:56:35.699678 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Dec 06 05:56:35 crc kubenswrapper[4958]: I1206 05:56:35.699692 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"dcbe2099-3d41-4f69-be20-47d96498cb25","Type":"ContainerDied","Data":"05c6f6d2100232ce1e36fba508c20bbc75ae020e55452b39deecbdc47a8242b3"} Dec 06 05:56:35 crc kubenswrapper[4958]: I1206 05:56:35.699734 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"dcbe2099-3d41-4f69-be20-47d96498cb25","Type":"ContainerDied","Data":"7dcfd1cc1f7161dda42df8d8cfd542af228bef3e784b6a767cb4cdbb7097976d"} Dec 06 05:56:35 crc kubenswrapper[4958]: I1206 05:56:35.699753 4958 scope.go:117] "RemoveContainer" containerID="05c6f6d2100232ce1e36fba508c20bbc75ae020e55452b39deecbdc47a8242b3" Dec 06 05:56:35 crc kubenswrapper[4958]: I1206 05:56:35.722391 4958 scope.go:117] "RemoveContainer" containerID="13d78c5d489ba01cdb45efd404a73638f5106ef6ecd2a0166cc67db5a8883d7f" Dec 06 05:56:35 crc kubenswrapper[4958]: I1206 05:56:35.742455 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-decision-engine-0"] Dec 06 05:56:35 crc kubenswrapper[4958]: I1206 05:56:35.758253 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-decision-engine-0"] Dec 06 05:56:35 crc kubenswrapper[4958]: I1206 05:56:35.777620 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcbe2099-3d41-4f69-be20-47d96498cb25" path="/var/lib/kubelet/pods/dcbe2099-3d41-4f69-be20-47d96498cb25/volumes" Dec 06 05:56:35 crc kubenswrapper[4958]: I1206 05:56:35.778438 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/watcher-decision-engine-0"] Dec 06 05:56:35 crc kubenswrapper[4958]: E1206 05:56:35.778826 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcbe2099-3d41-4f69-be20-47d96498cb25" containerName="watcher-decision-engine" Dec 06 05:56:35 crc kubenswrapper[4958]: I1206 05:56:35.778917 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcbe2099-3d41-4f69-be20-47d96498cb25" containerName="watcher-decision-engine" Dec 06 05:56:35 crc kubenswrapper[4958]: E1206 05:56:35.778999 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcbe2099-3d41-4f69-be20-47d96498cb25" containerName="watcher-decision-engine" Dec 06 05:56:35 crc kubenswrapper[4958]: I1206 05:56:35.779069 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcbe2099-3d41-4f69-be20-47d96498cb25" containerName="watcher-decision-engine" Dec 06 05:56:35 crc kubenswrapper[4958]: E1206 05:56:35.779129 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcbe2099-3d41-4f69-be20-47d96498cb25" containerName="watcher-decision-engine" Dec 06 05:56:35 crc kubenswrapper[4958]: I1206 05:56:35.779180 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcbe2099-3d41-4f69-be20-47d96498cb25" containerName="watcher-decision-engine" Dec 06 05:56:35 crc kubenswrapper[4958]: E1206 05:56:35.779243 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcbe2099-3d41-4f69-be20-47d96498cb25" containerName="watcher-decision-engine" Dec 06 05:56:35 crc kubenswrapper[4958]: I1206 05:56:35.779307 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcbe2099-3d41-4f69-be20-47d96498cb25" containerName="watcher-decision-engine" Dec 06 05:56:35 crc kubenswrapper[4958]: I1206 05:56:35.779773 4958 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="dcbe2099-3d41-4f69-be20-47d96498cb25" containerName="watcher-decision-engine" Dec 06 05:56:35 crc kubenswrapper[4958]: I1206 05:56:35.779854 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="dcbe2099-3d41-4f69-be20-47d96498cb25" containerName="watcher-decision-engine" Dec 06 05:56:35 crc kubenswrapper[4958]: I1206 05:56:35.779928 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="dcbe2099-3d41-4f69-be20-47d96498cb25" containerName="watcher-decision-engine" Dec 06 05:56:35 crc kubenswrapper[4958]: I1206 05:56:35.780682 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/watcher-decision-engine-0" Dec 06 05:56:35 crc kubenswrapper[4958]: I1206 05:56:35.783585 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"watcher-decision-engine-config-data" Dec 06 05:56:35 crc kubenswrapper[4958]: I1206 05:56:35.787613 4958 scope.go:117] "RemoveContainer" containerID="05c6f6d2100232ce1e36fba508c20bbc75ae020e55452b39deecbdc47a8242b3" Dec 06 05:56:35 crc kubenswrapper[4958]: E1206 05:56:35.789912 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"05c6f6d2100232ce1e36fba508c20bbc75ae020e55452b39deecbdc47a8242b3\": container with ID starting with 05c6f6d2100232ce1e36fba508c20bbc75ae020e55452b39deecbdc47a8242b3 not found: ID does not exist" containerID="05c6f6d2100232ce1e36fba508c20bbc75ae020e55452b39deecbdc47a8242b3" Dec 06 05:56:35 crc kubenswrapper[4958]: I1206 05:56:35.789945 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05c6f6d2100232ce1e36fba508c20bbc75ae020e55452b39deecbdc47a8242b3"} err="failed to get container status \"05c6f6d2100232ce1e36fba508c20bbc75ae020e55452b39deecbdc47a8242b3\": rpc error: code = NotFound desc = could not find container \"05c6f6d2100232ce1e36fba508c20bbc75ae020e55452b39deecbdc47a8242b3\": container with ID starting with 05c6f6d2100232ce1e36fba508c20bbc75ae020e55452b39deecbdc47a8242b3 not found: ID does not exist" Dec 06 05:56:35 crc kubenswrapper[4958]: I1206 05:56:35.789969 4958 scope.go:117] "RemoveContainer" containerID="13d78c5d489ba01cdb45efd404a73638f5106ef6ecd2a0166cc67db5a8883d7f" Dec 06 05:56:35 crc kubenswrapper[4958]: E1206 05:56:35.793995 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13d78c5d489ba01cdb45efd404a73638f5106ef6ecd2a0166cc67db5a8883d7f\": container with ID starting with 13d78c5d489ba01cdb45efd404a73638f5106ef6ecd2a0166cc67db5a8883d7f not found: ID does not exist" containerID="13d78c5d489ba01cdb45efd404a73638f5106ef6ecd2a0166cc67db5a8883d7f" Dec 06 05:56:35 crc kubenswrapper[4958]: I1206 05:56:35.794049 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13d78c5d489ba01cdb45efd404a73638f5106ef6ecd2a0166cc67db5a8883d7f"} err="failed to get container status \"13d78c5d489ba01cdb45efd404a73638f5106ef6ecd2a0166cc67db5a8883d7f\": rpc error: code = NotFound desc = could not find container \"13d78c5d489ba01cdb45efd404a73638f5106ef6ecd2a0166cc67db5a8883d7f\": container with ID starting with 13d78c5d489ba01cdb45efd404a73638f5106ef6ecd2a0166cc67db5a8883d7f not found: ID does not exist" Dec 06 05:56:35 crc kubenswrapper[4958]: I1206 05:56:35.808782 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Dec 06 05:56:35 crc kubenswrapper[4958]: I1206 
05:56:35.892742 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f65760d6-0cb4-4d02-8db2-9c989cb42dc2-logs\") pod \"watcher-decision-engine-0\" (UID: \"f65760d6-0cb4-4d02-8db2-9c989cb42dc2\") " pod="openstack/watcher-decision-engine-0" Dec 06 05:56:35 crc kubenswrapper[4958]: I1206 05:56:35.893220 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqv7j\" (UniqueName: \"kubernetes.io/projected/f65760d6-0cb4-4d02-8db2-9c989cb42dc2-kube-api-access-bqv7j\") pod \"watcher-decision-engine-0\" (UID: \"f65760d6-0cb4-4d02-8db2-9c989cb42dc2\") " pod="openstack/watcher-decision-engine-0" Dec 06 05:56:35 crc kubenswrapper[4958]: I1206 05:56:35.893268 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/f65760d6-0cb4-4d02-8db2-9c989cb42dc2-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"f65760d6-0cb4-4d02-8db2-9c989cb42dc2\") " pod="openstack/watcher-decision-engine-0" Dec 06 05:56:35 crc kubenswrapper[4958]: I1206 05:56:35.893395 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f65760d6-0cb4-4d02-8db2-9c989cb42dc2-config-data\") pod \"watcher-decision-engine-0\" (UID: \"f65760d6-0cb4-4d02-8db2-9c989cb42dc2\") " pod="openstack/watcher-decision-engine-0" Dec 06 05:56:35 crc kubenswrapper[4958]: I1206 05:56:35.893515 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f65760d6-0cb4-4d02-8db2-9c989cb42dc2-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"f65760d6-0cb4-4d02-8db2-9c989cb42dc2\") " pod="openstack/watcher-decision-engine-0" Dec 06 05:56:35 crc kubenswrapper[4958]: I1206 05:56:35.995359 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bqv7j\" (UniqueName: \"kubernetes.io/projected/f65760d6-0cb4-4d02-8db2-9c989cb42dc2-kube-api-access-bqv7j\") pod \"watcher-decision-engine-0\" (UID: \"f65760d6-0cb4-4d02-8db2-9c989cb42dc2\") " pod="openstack/watcher-decision-engine-0" Dec 06 05:56:35 crc kubenswrapper[4958]: I1206 05:56:35.995421 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/f65760d6-0cb4-4d02-8db2-9c989cb42dc2-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"f65760d6-0cb4-4d02-8db2-9c989cb42dc2\") " pod="openstack/watcher-decision-engine-0" Dec 06 05:56:35 crc kubenswrapper[4958]: I1206 05:56:35.995500 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f65760d6-0cb4-4d02-8db2-9c989cb42dc2-config-data\") pod \"watcher-decision-engine-0\" (UID: \"f65760d6-0cb4-4d02-8db2-9c989cb42dc2\") " pod="openstack/watcher-decision-engine-0" Dec 06 05:56:35 crc kubenswrapper[4958]: I1206 05:56:35.995549 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f65760d6-0cb4-4d02-8db2-9c989cb42dc2-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"f65760d6-0cb4-4d02-8db2-9c989cb42dc2\") " pod="openstack/watcher-decision-engine-0" Dec 06 05:56:35 crc kubenswrapper[4958]: 
I1206 05:56:35.995593 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f65760d6-0cb4-4d02-8db2-9c989cb42dc2-logs\") pod \"watcher-decision-engine-0\" (UID: \"f65760d6-0cb4-4d02-8db2-9c989cb42dc2\") " pod="openstack/watcher-decision-engine-0" Dec 06 05:56:35 crc kubenswrapper[4958]: I1206 05:56:35.996131 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f65760d6-0cb4-4d02-8db2-9c989cb42dc2-logs\") pod \"watcher-decision-engine-0\" (UID: \"f65760d6-0cb4-4d02-8db2-9c989cb42dc2\") " pod="openstack/watcher-decision-engine-0" Dec 06 05:56:35 crc kubenswrapper[4958]: I1206 05:56:35.999289 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f65760d6-0cb4-4d02-8db2-9c989cb42dc2-combined-ca-bundle\") pod \"watcher-decision-engine-0\" (UID: \"f65760d6-0cb4-4d02-8db2-9c989cb42dc2\") " pod="openstack/watcher-decision-engine-0" Dec 06 05:56:35 crc kubenswrapper[4958]: I1206 05:56:35.999602 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f65760d6-0cb4-4d02-8db2-9c989cb42dc2-config-data\") pod \"watcher-decision-engine-0\" (UID: \"f65760d6-0cb4-4d02-8db2-9c989cb42dc2\") " pod="openstack/watcher-decision-engine-0" Dec 06 05:56:36 crc kubenswrapper[4958]: I1206 05:56:36.014031 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"custom-prometheus-ca\" (UniqueName: \"kubernetes.io/secret/f65760d6-0cb4-4d02-8db2-9c989cb42dc2-custom-prometheus-ca\") pod \"watcher-decision-engine-0\" (UID: \"f65760d6-0cb4-4d02-8db2-9c989cb42dc2\") " pod="openstack/watcher-decision-engine-0" Dec 06 05:56:36 crc kubenswrapper[4958]: I1206 05:56:36.019324 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bqv7j\" (UniqueName: \"kubernetes.io/projected/f65760d6-0cb4-4d02-8db2-9c989cb42dc2-kube-api-access-bqv7j\") pod \"watcher-decision-engine-0\" (UID: \"f65760d6-0cb4-4d02-8db2-9c989cb42dc2\") " pod="openstack/watcher-decision-engine-0" Dec 06 05:56:36 crc kubenswrapper[4958]: I1206 05:56:36.098774 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/watcher-decision-engine-0" Dec 06 05:56:36 crc kubenswrapper[4958]: I1206 05:56:36.388133 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 06 05:56:36 crc kubenswrapper[4958]: I1206 05:56:36.602966 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/watcher-decision-engine-0"] Dec 06 05:56:36 crc kubenswrapper[4958]: W1206 05:56:36.617104 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf65760d6_0cb4_4d02_8db2_9c989cb42dc2.slice/crio-a34cd074533247b01f81132d1ec346e5dd98e05229eca5ae378acd16cc3b5764 WatchSource:0}: Error finding container a34cd074533247b01f81132d1ec346e5dd98e05229eca5ae378acd16cc3b5764: Status 404 returned error can't find the container with id a34cd074533247b01f81132d1ec346e5dd98e05229eca5ae378acd16cc3b5764 Dec 06 05:56:36 crc kubenswrapper[4958]: I1206 05:56:36.723129 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" event={"ID":"f65760d6-0cb4-4d02-8db2-9c989cb42dc2","Type":"ContainerStarted","Data":"a34cd074533247b01f81132d1ec346e5dd98e05229eca5ae378acd16cc3b5764"} Dec 06 05:56:36 crc kubenswrapper[4958]: I1206 05:56:36.725869 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a8913b52-0985-485a-a086-f99bfca2ffb9" containerName="ceilometer-central-agent" containerID="cri-o://59d8c8e17fe5cd2187902fdb0bd7558fc9d8cb0057f3cffd01fc7c96d3e66734" gracePeriod=30 Dec 06 05:56:36 crc kubenswrapper[4958]: I1206 05:56:36.726372 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a8913b52-0985-485a-a086-f99bfca2ffb9" containerName="sg-core" containerID="cri-o://2f96d1da43346a4fceb3ecd0833090c549fce3630a355ff57b3d9833d9027f07" gracePeriod=30 Dec 06 05:56:36 crc kubenswrapper[4958]: I1206 05:56:36.726429 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a8913b52-0985-485a-a086-f99bfca2ffb9" containerName="ceilometer-notification-agent" containerID="cri-o://6fea77e35e957eae4d86bac025660d351535bd1b64b24bbd7265e6554961846a" gracePeriod=30 Dec 06 05:56:36 crc kubenswrapper[4958]: I1206 05:56:36.726379 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a8913b52-0985-485a-a086-f99bfca2ffb9" containerName="proxy-httpd" containerID="cri-o://c89499519e8dfae60ada3ba15e4afd200af0a2d3afaf05e8cb4cc06fba7c44de" gracePeriod=30 Dec 06 05:56:37 crc kubenswrapper[4958]: I1206 05:56:37.617259 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 06 05:56:37 crc kubenswrapper[4958]: I1206 05:56:37.617881 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Dec 06 05:56:37 crc kubenswrapper[4958]: I1206 05:56:37.674926 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 06 05:56:37 crc kubenswrapper[4958]: I1206 05:56:37.674999 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Dec 06 05:56:37 crc kubenswrapper[4958]: I1206 05:56:37.735502 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/watcher-decision-engine-0" 
event={"ID":"f65760d6-0cb4-4d02-8db2-9c989cb42dc2","Type":"ContainerStarted","Data":"3d72200890141b170da2bf34d866fb830eb6c6ab7b4d3a086105331b346295de"} Dec 06 05:56:37 crc kubenswrapper[4958]: I1206 05:56:37.746186 4958 generic.go:334] "Generic (PLEG): container finished" podID="a8913b52-0985-485a-a086-f99bfca2ffb9" containerID="c89499519e8dfae60ada3ba15e4afd200af0a2d3afaf05e8cb4cc06fba7c44de" exitCode=0 Dec 06 05:56:37 crc kubenswrapper[4958]: I1206 05:56:37.746217 4958 generic.go:334] "Generic (PLEG): container finished" podID="a8913b52-0985-485a-a086-f99bfca2ffb9" containerID="2f96d1da43346a4fceb3ecd0833090c549fce3630a355ff57b3d9833d9027f07" exitCode=2 Dec 06 05:56:37 crc kubenswrapper[4958]: I1206 05:56:37.746226 4958 generic.go:334] "Generic (PLEG): container finished" podID="a8913b52-0985-485a-a086-f99bfca2ffb9" containerID="6fea77e35e957eae4d86bac025660d351535bd1b64b24bbd7265e6554961846a" exitCode=0 Dec 06 05:56:37 crc kubenswrapper[4958]: I1206 05:56:37.746410 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a8913b52-0985-485a-a086-f99bfca2ffb9","Type":"ContainerDied","Data":"c89499519e8dfae60ada3ba15e4afd200af0a2d3afaf05e8cb4cc06fba7c44de"} Dec 06 05:56:37 crc kubenswrapper[4958]: I1206 05:56:37.746463 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a8913b52-0985-485a-a086-f99bfca2ffb9","Type":"ContainerDied","Data":"2f96d1da43346a4fceb3ecd0833090c549fce3630a355ff57b3d9833d9027f07"} Dec 06 05:56:37 crc kubenswrapper[4958]: I1206 05:56:37.746495 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a8913b52-0985-485a-a086-f99bfca2ffb9","Type":"ContainerDied","Data":"6fea77e35e957eae4d86bac025660d351535bd1b64b24bbd7265e6554961846a"} Dec 06 05:56:37 crc kubenswrapper[4958]: I1206 05:56:37.746831 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 06 05:56:37 crc kubenswrapper[4958]: I1206 05:56:37.746928 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Dec 06 05:56:37 crc kubenswrapper[4958]: I1206 05:56:37.759452 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/watcher-decision-engine-0" podStartSLOduration=2.7594304960000002 podStartE2EDuration="2.759430496s" podCreationTimestamp="2025-12-06 05:56:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:56:37.758941662 +0000 UTC m=+1708.292712435" watchObservedRunningTime="2025-12-06 05:56:37.759430496 +0000 UTC m=+1708.293201269" Dec 06 05:56:37 crc kubenswrapper[4958]: I1206 05:56:37.843409 4958 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","besteffort","podfee2c3d7-24fe-4966-878b-90147b8f5cfb"] err="unable to destroy cgroup paths for cgroup [kubepods besteffort podfee2c3d7-24fe-4966-878b-90147b8f5cfb] : Timed out while waiting for systemd to remove kubepods-besteffort-podfee2c3d7_24fe_4966_878b_90147b8f5cfb.slice" Dec 06 05:56:38 crc kubenswrapper[4958]: I1206 05:56:38.360292 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Dec 06 05:56:38 crc kubenswrapper[4958]: I1206 05:56:38.389484 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 06 05:56:38 crc 
kubenswrapper[4958]: I1206 05:56:38.390445 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Dec 06 05:56:38 crc kubenswrapper[4958]: I1206 05:56:38.471140 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 06 05:56:38 crc kubenswrapper[4958]: I1206 05:56:38.474065 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Dec 06 05:56:38 crc kubenswrapper[4958]: E1206 05:56:38.667293 4958 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda8913b52_0985_485a_a086_f99bfca2ffb9.slice/crio-conmon-59d8c8e17fe5cd2187902fdb0bd7558fc9d8cb0057f3cffd01fc7c96d3e66734.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda8913b52_0985_485a_a086_f99bfca2ffb9.slice/crio-59d8c8e17fe5cd2187902fdb0bd7558fc9d8cb0057f3cffd01fc7c96d3e66734.scope\": RecentStats: unable to find data in memory cache]" Dec 06 05:56:38 crc kubenswrapper[4958]: I1206 05:56:38.761855 4958 generic.go:334] "Generic (PLEG): container finished" podID="a8913b52-0985-485a-a086-f99bfca2ffb9" containerID="59d8c8e17fe5cd2187902fdb0bd7558fc9d8cb0057f3cffd01fc7c96d3e66734" exitCode=0 Dec 06 05:56:38 crc kubenswrapper[4958]: I1206 05:56:38.762243 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a8913b52-0985-485a-a086-f99bfca2ffb9","Type":"ContainerDied","Data":"59d8c8e17fe5cd2187902fdb0bd7558fc9d8cb0057f3cffd01fc7c96d3e66734"} Dec 06 05:56:38 crc kubenswrapper[4958]: I1206 05:56:38.762829 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 06 05:56:38 crc kubenswrapper[4958]: I1206 05:56:38.763130 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Dec 06 05:56:39 crc kubenswrapper[4958]: I1206 05:56:39.019225 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 06 05:56:39 crc kubenswrapper[4958]: I1206 05:56:39.088108 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8913b52-0985-485a-a086-f99bfca2ffb9-combined-ca-bundle\") pod \"a8913b52-0985-485a-a086-f99bfca2ffb9\" (UID: \"a8913b52-0985-485a-a086-f99bfca2ffb9\") " Dec 06 05:56:39 crc kubenswrapper[4958]: I1206 05:56:39.088181 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a8913b52-0985-485a-a086-f99bfca2ffb9-sg-core-conf-yaml\") pod \"a8913b52-0985-485a-a086-f99bfca2ffb9\" (UID: \"a8913b52-0985-485a-a086-f99bfca2ffb9\") " Dec 06 05:56:39 crc kubenswrapper[4958]: I1206 05:56:39.088274 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a8913b52-0985-485a-a086-f99bfca2ffb9-run-httpd\") pod \"a8913b52-0985-485a-a086-f99bfca2ffb9\" (UID: \"a8913b52-0985-485a-a086-f99bfca2ffb9\") " Dec 06 05:56:39 crc kubenswrapper[4958]: I1206 05:56:39.088359 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a8913b52-0985-485a-a086-f99bfca2ffb9-log-httpd\") pod \"a8913b52-0985-485a-a086-f99bfca2ffb9\" (UID: \"a8913b52-0985-485a-a086-f99bfca2ffb9\") " Dec 06 05:56:39 crc kubenswrapper[4958]: I1206 05:56:39.088388 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g4jtj\" (UniqueName: \"kubernetes.io/projected/a8913b52-0985-485a-a086-f99bfca2ffb9-kube-api-access-g4jtj\") pod \"a8913b52-0985-485a-a086-f99bfca2ffb9\" (UID: \"a8913b52-0985-485a-a086-f99bfca2ffb9\") " Dec 06 05:56:39 crc kubenswrapper[4958]: I1206 05:56:39.088439 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8913b52-0985-485a-a086-f99bfca2ffb9-config-data\") pod \"a8913b52-0985-485a-a086-f99bfca2ffb9\" (UID: \"a8913b52-0985-485a-a086-f99bfca2ffb9\") " Dec 06 05:56:39 crc kubenswrapper[4958]: I1206 05:56:39.088533 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a8913b52-0985-485a-a086-f99bfca2ffb9-scripts\") pod \"a8913b52-0985-485a-a086-f99bfca2ffb9\" (UID: \"a8913b52-0985-485a-a086-f99bfca2ffb9\") " Dec 06 05:56:39 crc kubenswrapper[4958]: I1206 05:56:39.097276 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8913b52-0985-485a-a086-f99bfca2ffb9-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a8913b52-0985-485a-a086-f99bfca2ffb9" (UID: "a8913b52-0985-485a-a086-f99bfca2ffb9"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 05:56:39 crc kubenswrapper[4958]: I1206 05:56:39.107664 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8913b52-0985-485a-a086-f99bfca2ffb9-kube-api-access-g4jtj" (OuterVolumeSpecName: "kube-api-access-g4jtj") pod "a8913b52-0985-485a-a086-f99bfca2ffb9" (UID: "a8913b52-0985-485a-a086-f99bfca2ffb9"). InnerVolumeSpecName "kube-api-access-g4jtj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:56:39 crc kubenswrapper[4958]: I1206 05:56:39.115730 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a8913b52-0985-485a-a086-f99bfca2ffb9-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a8913b52-0985-485a-a086-f99bfca2ffb9" (UID: "a8913b52-0985-485a-a086-f99bfca2ffb9"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 05:56:39 crc kubenswrapper[4958]: I1206 05:56:39.119092 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8913b52-0985-485a-a086-f99bfca2ffb9-scripts" (OuterVolumeSpecName: "scripts") pod "a8913b52-0985-485a-a086-f99bfca2ffb9" (UID: "a8913b52-0985-485a-a086-f99bfca2ffb9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:56:39 crc kubenswrapper[4958]: I1206 05:56:39.151635 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8913b52-0985-485a-a086-f99bfca2ffb9-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a8913b52-0985-485a-a086-f99bfca2ffb9" (UID: "a8913b52-0985-485a-a086-f99bfca2ffb9"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:56:39 crc kubenswrapper[4958]: I1206 05:56:39.190218 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a8913b52-0985-485a-a086-f99bfca2ffb9-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 05:56:39 crc kubenswrapper[4958]: I1206 05:56:39.190247 4958 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a8913b52-0985-485a-a086-f99bfca2ffb9-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 06 05:56:39 crc kubenswrapper[4958]: I1206 05:56:39.190258 4958 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a8913b52-0985-485a-a086-f99bfca2ffb9-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 06 05:56:39 crc kubenswrapper[4958]: I1206 05:56:39.190268 4958 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a8913b52-0985-485a-a086-f99bfca2ffb9-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 06 05:56:39 crc kubenswrapper[4958]: I1206 05:56:39.190275 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g4jtj\" (UniqueName: \"kubernetes.io/projected/a8913b52-0985-485a-a086-f99bfca2ffb9-kube-api-access-g4jtj\") on node \"crc\" DevicePath \"\"" Dec 06 05:56:39 crc kubenswrapper[4958]: I1206 05:56:39.234625 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8913b52-0985-485a-a086-f99bfca2ffb9-config-data" (OuterVolumeSpecName: "config-data") pod "a8913b52-0985-485a-a086-f99bfca2ffb9" (UID: "a8913b52-0985-485a-a086-f99bfca2ffb9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:56:39 crc kubenswrapper[4958]: I1206 05:56:39.269943 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8913b52-0985-485a-a086-f99bfca2ffb9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a8913b52-0985-485a-a086-f99bfca2ffb9" (UID: "a8913b52-0985-485a-a086-f99bfca2ffb9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:56:39 crc kubenswrapper[4958]: I1206 05:56:39.291361 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8913b52-0985-485a-a086-f99bfca2ffb9-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 05:56:39 crc kubenswrapper[4958]: I1206 05:56:39.291425 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8913b52-0985-485a-a086-f99bfca2ffb9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 05:56:39 crc kubenswrapper[4958]: I1206 05:56:39.482417 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-59469c77f6-x865t" Dec 06 05:56:39 crc kubenswrapper[4958]: I1206 05:56:39.779309 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a8913b52-0985-485a-a086-f99bfca2ffb9","Type":"ContainerDied","Data":"c6f20f569b3c4be33c487f4d58ab8ca8e43bc4c65fc072ca6925349757cd7241"} Dec 06 05:56:39 crc kubenswrapper[4958]: I1206 05:56:39.779336 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 06 05:56:39 crc kubenswrapper[4958]: I1206 05:56:39.779366 4958 scope.go:117] "RemoveContainer" containerID="c89499519e8dfae60ada3ba15e4afd200af0a2d3afaf05e8cb4cc06fba7c44de" Dec 06 05:56:39 crc kubenswrapper[4958]: I1206 05:56:39.831770 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 06 05:56:39 crc kubenswrapper[4958]: I1206 05:56:39.845694 4958 scope.go:117] "RemoveContainer" containerID="2f96d1da43346a4fceb3ecd0833090c549fce3630a355ff57b3d9833d9027f07" Dec 06 05:56:39 crc kubenswrapper[4958]: I1206 05:56:39.861198 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 06 05:56:39 crc kubenswrapper[4958]: I1206 05:56:39.872580 4958 patch_prober.go:28] interesting pod/machine-config-daemon-5ktnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 05:56:39 crc kubenswrapper[4958]: I1206 05:56:39.872650 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 05:56:39 crc kubenswrapper[4958]: I1206 05:56:39.872952 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 06 05:56:39 crc kubenswrapper[4958]: E1206 05:56:39.873412 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8913b52-0985-485a-a086-f99bfca2ffb9" containerName="sg-core" Dec 06 05:56:39 crc kubenswrapper[4958]: I1206 05:56:39.873430 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8913b52-0985-485a-a086-f99bfca2ffb9" containerName="sg-core" Dec 06 05:56:39 crc kubenswrapper[4958]: E1206 05:56:39.873448 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8913b52-0985-485a-a086-f99bfca2ffb9" containerName="ceilometer-notification-agent" Dec 06 05:56:39 crc kubenswrapper[4958]: I1206 05:56:39.873458 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8913b52-0985-485a-a086-f99bfca2ffb9" 
containerName="ceilometer-notification-agent" Dec 06 05:56:39 crc kubenswrapper[4958]: E1206 05:56:39.873533 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8913b52-0985-485a-a086-f99bfca2ffb9" containerName="ceilometer-central-agent" Dec 06 05:56:39 crc kubenswrapper[4958]: I1206 05:56:39.873544 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8913b52-0985-485a-a086-f99bfca2ffb9" containerName="ceilometer-central-agent" Dec 06 05:56:39 crc kubenswrapper[4958]: E1206 05:56:39.873563 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8913b52-0985-485a-a086-f99bfca2ffb9" containerName="proxy-httpd" Dec 06 05:56:39 crc kubenswrapper[4958]: I1206 05:56:39.873571 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8913b52-0985-485a-a086-f99bfca2ffb9" containerName="proxy-httpd" Dec 06 05:56:39 crc kubenswrapper[4958]: I1206 05:56:39.873778 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8913b52-0985-485a-a086-f99bfca2ffb9" containerName="ceilometer-notification-agent" Dec 06 05:56:39 crc kubenswrapper[4958]: I1206 05:56:39.873795 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8913b52-0985-485a-a086-f99bfca2ffb9" containerName="proxy-httpd" Dec 06 05:56:39 crc kubenswrapper[4958]: I1206 05:56:39.873810 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8913b52-0985-485a-a086-f99bfca2ffb9" containerName="ceilometer-central-agent" Dec 06 05:56:39 crc kubenswrapper[4958]: I1206 05:56:39.873830 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="dcbe2099-3d41-4f69-be20-47d96498cb25" containerName="watcher-decision-engine" Dec 06 05:56:39 crc kubenswrapper[4958]: I1206 05:56:39.873844 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8913b52-0985-485a-a086-f99bfca2ffb9" containerName="sg-core" Dec 06 05:56:39 crc kubenswrapper[4958]: I1206 05:56:39.875883 4958 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" Dec 06 05:56:39 crc kubenswrapper[4958]: I1206 05:56:39.878714 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 06 05:56:39 crc kubenswrapper[4958]: I1206 05:56:39.882891 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 06 05:56:39 crc kubenswrapper[4958]: I1206 05:56:39.883083 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 06 05:56:39 crc kubenswrapper[4958]: I1206 05:56:39.890741 4958 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3316d358950549dbdd244c3eebc5b98b6095cbbbc7ab60159a9410ea95b5b950"} pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 06 05:56:39 crc kubenswrapper[4958]: I1206 05:56:39.890824 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" containerName="machine-config-daemon" containerID="cri-o://3316d358950549dbdd244c3eebc5b98b6095cbbbc7ab60159a9410ea95b5b950" gracePeriod=600 Dec 06 05:56:39 crc kubenswrapper[4958]: I1206 05:56:39.895892 4958 scope.go:117] "RemoveContainer" containerID="6fea77e35e957eae4d86bac025660d351535bd1b64b24bbd7265e6554961846a" Dec 06 05:56:39 crc kubenswrapper[4958]: I1206 05:56:39.903019 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1867571b-2567-4a8b-bf87-8f7ccb648791-log-httpd\") pod \"ceilometer-0\" (UID: \"1867571b-2567-4a8b-bf87-8f7ccb648791\") " pod="openstack/ceilometer-0" Dec 06 05:56:39 crc kubenswrapper[4958]: I1206 05:56:39.903071 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dwdr\" (UniqueName: \"kubernetes.io/projected/1867571b-2567-4a8b-bf87-8f7ccb648791-kube-api-access-9dwdr\") pod \"ceilometer-0\" (UID: \"1867571b-2567-4a8b-bf87-8f7ccb648791\") " pod="openstack/ceilometer-0" Dec 06 05:56:39 crc kubenswrapper[4958]: I1206 05:56:39.903098 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1867571b-2567-4a8b-bf87-8f7ccb648791-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1867571b-2567-4a8b-bf87-8f7ccb648791\") " pod="openstack/ceilometer-0" Dec 06 05:56:39 crc kubenswrapper[4958]: I1206 05:56:39.903140 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1867571b-2567-4a8b-bf87-8f7ccb648791-run-httpd\") pod \"ceilometer-0\" (UID: \"1867571b-2567-4a8b-bf87-8f7ccb648791\") " pod="openstack/ceilometer-0" Dec 06 05:56:39 crc kubenswrapper[4958]: I1206 05:56:39.903165 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1867571b-2567-4a8b-bf87-8f7ccb648791-scripts\") pod \"ceilometer-0\" (UID: \"1867571b-2567-4a8b-bf87-8f7ccb648791\") " pod="openstack/ceilometer-0" Dec 06 05:56:39 crc kubenswrapper[4958]: I1206 05:56:39.903196 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1867571b-2567-4a8b-bf87-8f7ccb648791-config-data\") pod \"ceilometer-0\" (UID: 
\"1867571b-2567-4a8b-bf87-8f7ccb648791\") " pod="openstack/ceilometer-0" Dec 06 05:56:39 crc kubenswrapper[4958]: I1206 05:56:39.903249 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1867571b-2567-4a8b-bf87-8f7ccb648791-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1867571b-2567-4a8b-bf87-8f7ccb648791\") " pod="openstack/ceilometer-0" Dec 06 05:56:39 crc kubenswrapper[4958]: I1206 05:56:39.903593 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 06 05:56:39 crc kubenswrapper[4958]: I1206 05:56:39.931335 4958 scope.go:117] "RemoveContainer" containerID="59d8c8e17fe5cd2187902fdb0bd7558fc9d8cb0057f3cffd01fc7c96d3e66734" Dec 06 05:56:40 crc kubenswrapper[4958]: I1206 05:56:40.004649 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1867571b-2567-4a8b-bf87-8f7ccb648791-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1867571b-2567-4a8b-bf87-8f7ccb648791\") " pod="openstack/ceilometer-0" Dec 06 05:56:40 crc kubenswrapper[4958]: I1206 05:56:40.004967 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1867571b-2567-4a8b-bf87-8f7ccb648791-log-httpd\") pod \"ceilometer-0\" (UID: \"1867571b-2567-4a8b-bf87-8f7ccb648791\") " pod="openstack/ceilometer-0" Dec 06 05:56:40 crc kubenswrapper[4958]: I1206 05:56:40.004994 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dwdr\" (UniqueName: \"kubernetes.io/projected/1867571b-2567-4a8b-bf87-8f7ccb648791-kube-api-access-9dwdr\") pod \"ceilometer-0\" (UID: \"1867571b-2567-4a8b-bf87-8f7ccb648791\") " pod="openstack/ceilometer-0" Dec 06 05:56:40 crc kubenswrapper[4958]: I1206 05:56:40.005017 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1867571b-2567-4a8b-bf87-8f7ccb648791-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1867571b-2567-4a8b-bf87-8f7ccb648791\") " pod="openstack/ceilometer-0" Dec 06 05:56:40 crc kubenswrapper[4958]: I1206 05:56:40.005056 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1867571b-2567-4a8b-bf87-8f7ccb648791-run-httpd\") pod \"ceilometer-0\" (UID: \"1867571b-2567-4a8b-bf87-8f7ccb648791\") " pod="openstack/ceilometer-0" Dec 06 05:56:40 crc kubenswrapper[4958]: I1206 05:56:40.005082 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1867571b-2567-4a8b-bf87-8f7ccb648791-scripts\") pod \"ceilometer-0\" (UID: \"1867571b-2567-4a8b-bf87-8f7ccb648791\") " pod="openstack/ceilometer-0" Dec 06 05:56:40 crc kubenswrapper[4958]: I1206 05:56:40.005099 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1867571b-2567-4a8b-bf87-8f7ccb648791-config-data\") pod \"ceilometer-0\" (UID: \"1867571b-2567-4a8b-bf87-8f7ccb648791\") " pod="openstack/ceilometer-0" Dec 06 05:56:40 crc kubenswrapper[4958]: I1206 05:56:40.009600 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1867571b-2567-4a8b-bf87-8f7ccb648791-run-httpd\") pod \"ceilometer-0\" (UID: 
\"1867571b-2567-4a8b-bf87-8f7ccb648791\") " pod="openstack/ceilometer-0" Dec 06 05:56:40 crc kubenswrapper[4958]: I1206 05:56:40.010370 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1867571b-2567-4a8b-bf87-8f7ccb648791-log-httpd\") pod \"ceilometer-0\" (UID: \"1867571b-2567-4a8b-bf87-8f7ccb648791\") " pod="openstack/ceilometer-0" Dec 06 05:56:40 crc kubenswrapper[4958]: I1206 05:56:40.012032 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1867571b-2567-4a8b-bf87-8f7ccb648791-config-data\") pod \"ceilometer-0\" (UID: \"1867571b-2567-4a8b-bf87-8f7ccb648791\") " pod="openstack/ceilometer-0" Dec 06 05:56:40 crc kubenswrapper[4958]: I1206 05:56:40.014236 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1867571b-2567-4a8b-bf87-8f7ccb648791-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"1867571b-2567-4a8b-bf87-8f7ccb648791\") " pod="openstack/ceilometer-0" Dec 06 05:56:40 crc kubenswrapper[4958]: I1206 05:56:40.016374 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1867571b-2567-4a8b-bf87-8f7ccb648791-scripts\") pod \"ceilometer-0\" (UID: \"1867571b-2567-4a8b-bf87-8f7ccb648791\") " pod="openstack/ceilometer-0" Dec 06 05:56:40 crc kubenswrapper[4958]: I1206 05:56:40.026253 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1867571b-2567-4a8b-bf87-8f7ccb648791-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"1867571b-2567-4a8b-bf87-8f7ccb648791\") " pod="openstack/ceilometer-0" Dec 06 05:56:40 crc kubenswrapper[4958]: I1206 05:56:40.026962 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dwdr\" (UniqueName: \"kubernetes.io/projected/1867571b-2567-4a8b-bf87-8f7ccb648791-kube-api-access-9dwdr\") pod \"ceilometer-0\" (UID: \"1867571b-2567-4a8b-bf87-8f7ccb648791\") " pod="openstack/ceilometer-0" Dec 06 05:56:40 crc kubenswrapper[4958]: E1206 05:56:40.030940 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 05:56:40 crc kubenswrapper[4958]: I1206 05:56:40.206854 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 06 05:56:40 crc kubenswrapper[4958]: I1206 05:56:40.419982 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 06 05:56:40 crc kubenswrapper[4958]: I1206 05:56:40.420300 4958 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 06 05:56:40 crc kubenswrapper[4958]: I1206 05:56:40.421757 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Dec 06 05:56:40 crc kubenswrapper[4958]: I1206 05:56:40.789634 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 06 05:56:40 crc kubenswrapper[4958]: I1206 05:56:40.801775 4958 generic.go:334] "Generic (PLEG): container finished" podID="c13528c0-da5d-4d55-9155-2c29c33edfc4" containerID="3316d358950549dbdd244c3eebc5b98b6095cbbbc7ab60159a9410ea95b5b950" exitCode=0 Dec 06 05:56:40 crc kubenswrapper[4958]: I1206 05:56:40.801805 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" event={"ID":"c13528c0-da5d-4d55-9155-2c29c33edfc4","Type":"ContainerDied","Data":"3316d358950549dbdd244c3eebc5b98b6095cbbbc7ab60159a9410ea95b5b950"} Dec 06 05:56:40 crc kubenswrapper[4958]: I1206 05:56:40.802557 4958 scope.go:117] "RemoveContainer" containerID="5ef9857418407037240c969f5ea76d6cce28ae131bd31b325e367615bc600d5d" Dec 06 05:56:40 crc kubenswrapper[4958]: I1206 05:56:40.803596 4958 scope.go:117] "RemoveContainer" containerID="3316d358950549dbdd244c3eebc5b98b6095cbbbc7ab60159a9410ea95b5b950" Dec 06 05:56:40 crc kubenswrapper[4958]: E1206 05:56:40.804265 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 05:56:40 crc kubenswrapper[4958]: I1206 05:56:40.805592 4958 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 06 05:56:40 crc kubenswrapper[4958]: I1206 05:56:40.805691 4958 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 06 05:56:41 crc kubenswrapper[4958]: I1206 05:56:41.097848 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 06 05:56:41 crc kubenswrapper[4958]: I1206 05:56:41.278768 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Dec 06 05:56:41 crc kubenswrapper[4958]: I1206 05:56:41.773737 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8913b52-0985-485a-a086-f99bfca2ffb9" path="/var/lib/kubelet/pods/a8913b52-0985-485a-a086-f99bfca2ffb9/volumes" Dec 06 05:56:41 crc kubenswrapper[4958]: I1206 05:56:41.815009 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1867571b-2567-4a8b-bf87-8f7ccb648791","Type":"ContainerStarted","Data":"2b977679bc930a8bacc932ec0dbcfe380dab2eb6561b6445f6046e454e2d011b"} Dec 06 05:56:41 crc kubenswrapper[4958]: I1206 05:56:41.815056 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"1867571b-2567-4a8b-bf87-8f7ccb648791","Type":"ContainerStarted","Data":"6b8270de8e9ea66876126cc731adff7d94b36bc6a16ff1f2523adbb7cec6491d"} Dec 06 05:56:42 crc kubenswrapper[4958]: I1206 05:56:42.835368 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1867571b-2567-4a8b-bf87-8f7ccb648791","Type":"ContainerStarted","Data":"bf4f252089d17f58aa1adf4aacb59927707ceddf69df1131120086600b32b22e"} Dec 06 05:56:42 crc kubenswrapper[4958]: I1206 05:56:42.836047 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1867571b-2567-4a8b-bf87-8f7ccb648791","Type":"ContainerStarted","Data":"b7d1e655b3353feced625dba6e89598762eaeefe75694ccd74f6bd2385801bd9"} Dec 06 05:56:44 crc kubenswrapper[4958]: I1206 05:56:44.857969 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1867571b-2567-4a8b-bf87-8f7ccb648791","Type":"ContainerStarted","Data":"6c53ac49fb441311238f8fe7ed89927f29f0ad4bcd100d3fdc2a87cbf2aa2f50"} Dec 06 05:56:44 crc kubenswrapper[4958]: I1206 05:56:44.858589 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 06 05:56:44 crc kubenswrapper[4958]: I1206 05:56:44.887986 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.891360755 podStartE2EDuration="5.887958713s" podCreationTimestamp="2025-12-06 05:56:39 +0000 UTC" firstStartedPulling="2025-12-06 05:56:40.789187295 +0000 UTC m=+1711.322958048" lastFinishedPulling="2025-12-06 05:56:43.785785243 +0000 UTC m=+1714.319556006" observedRunningTime="2025-12-06 05:56:44.879904704 +0000 UTC m=+1715.413675467" watchObservedRunningTime="2025-12-06 05:56:44.887958713 +0000 UTC m=+1715.421729476" Dec 06 05:56:45 crc kubenswrapper[4958]: I1206 05:56:45.481256 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 06 05:56:45 crc kubenswrapper[4958]: I1206 05:56:45.621570 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 06 05:56:45 crc kubenswrapper[4958]: I1206 05:56:45.728530 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29f48971-668e-4d31-b0a1-b3c5088c3130-config-data\") pod \"29f48971-668e-4d31-b0a1-b3c5088c3130\" (UID: \"29f48971-668e-4d31-b0a1-b3c5088c3130\") " Dec 06 05:56:45 crc kubenswrapper[4958]: I1206 05:56:45.728616 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29f48971-668e-4d31-b0a1-b3c5088c3130-logs\") pod \"29f48971-668e-4d31-b0a1-b3c5088c3130\" (UID: \"29f48971-668e-4d31-b0a1-b3c5088c3130\") " Dec 06 05:56:45 crc kubenswrapper[4958]: I1206 05:56:45.728706 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29f48971-668e-4d31-b0a1-b3c5088c3130-combined-ca-bundle\") pod \"29f48971-668e-4d31-b0a1-b3c5088c3130\" (UID: \"29f48971-668e-4d31-b0a1-b3c5088c3130\") " Dec 06 05:56:45 crc kubenswrapper[4958]: I1206 05:56:45.728843 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/29f48971-668e-4d31-b0a1-b3c5088c3130-etc-machine-id\") pod \"29f48971-668e-4d31-b0a1-b3c5088c3130\" (UID: \"29f48971-668e-4d31-b0a1-b3c5088c3130\") " Dec 06 05:56:45 crc kubenswrapper[4958]: I1206 05:56:45.728915 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29f48971-668e-4d31-b0a1-b3c5088c3130-scripts\") pod \"29f48971-668e-4d31-b0a1-b3c5088c3130\" (UID: \"29f48971-668e-4d31-b0a1-b3c5088c3130\") " Dec 06 05:56:45 crc kubenswrapper[4958]: I1206 05:56:45.728956 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4c7gq\" (UniqueName: \"kubernetes.io/projected/29f48971-668e-4d31-b0a1-b3c5088c3130-kube-api-access-4c7gq\") pod \"29f48971-668e-4d31-b0a1-b3c5088c3130\" (UID: \"29f48971-668e-4d31-b0a1-b3c5088c3130\") " Dec 06 05:56:45 crc kubenswrapper[4958]: I1206 05:56:45.728958 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29f48971-668e-4d31-b0a1-b3c5088c3130-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "29f48971-668e-4d31-b0a1-b3c5088c3130" (UID: "29f48971-668e-4d31-b0a1-b3c5088c3130"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 05:56:45 crc kubenswrapper[4958]: I1206 05:56:45.729001 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/29f48971-668e-4d31-b0a1-b3c5088c3130-config-data-custom\") pod \"29f48971-668e-4d31-b0a1-b3c5088c3130\" (UID: \"29f48971-668e-4d31-b0a1-b3c5088c3130\") " Dec 06 05:56:45 crc kubenswrapper[4958]: I1206 05:56:45.729550 4958 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/29f48971-668e-4d31-b0a1-b3c5088c3130-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 06 05:56:45 crc kubenswrapper[4958]: I1206 05:56:45.729781 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29f48971-668e-4d31-b0a1-b3c5088c3130-logs" (OuterVolumeSpecName: "logs") pod "29f48971-668e-4d31-b0a1-b3c5088c3130" (UID: "29f48971-668e-4d31-b0a1-b3c5088c3130"). 
InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 05:56:45 crc kubenswrapper[4958]: I1206 05:56:45.738531 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29f48971-668e-4d31-b0a1-b3c5088c3130-scripts" (OuterVolumeSpecName: "scripts") pod "29f48971-668e-4d31-b0a1-b3c5088c3130" (UID: "29f48971-668e-4d31-b0a1-b3c5088c3130"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:56:45 crc kubenswrapper[4958]: I1206 05:56:45.746308 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-d7c696d85-2npl6" Dec 06 05:56:45 crc kubenswrapper[4958]: I1206 05:56:45.778254 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29f48971-668e-4d31-b0a1-b3c5088c3130-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "29f48971-668e-4d31-b0a1-b3c5088c3130" (UID: "29f48971-668e-4d31-b0a1-b3c5088c3130"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:56:45 crc kubenswrapper[4958]: I1206 05:56:45.778256 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29f48971-668e-4d31-b0a1-b3c5088c3130-kube-api-access-4c7gq" (OuterVolumeSpecName: "kube-api-access-4c7gq") pod "29f48971-668e-4d31-b0a1-b3c5088c3130" (UID: "29f48971-668e-4d31-b0a1-b3c5088c3130"). InnerVolumeSpecName "kube-api-access-4c7gq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:56:45 crc kubenswrapper[4958]: I1206 05:56:45.831343 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29f48971-668e-4d31-b0a1-b3c5088c3130-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 05:56:45 crc kubenswrapper[4958]: I1206 05:56:45.831402 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4c7gq\" (UniqueName: \"kubernetes.io/projected/29f48971-668e-4d31-b0a1-b3c5088c3130-kube-api-access-4c7gq\") on node \"crc\" DevicePath \"\"" Dec 06 05:56:45 crc kubenswrapper[4958]: I1206 05:56:45.831415 4958 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/29f48971-668e-4d31-b0a1-b3c5088c3130-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 06 05:56:45 crc kubenswrapper[4958]: I1206 05:56:45.831427 4958 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29f48971-668e-4d31-b0a1-b3c5088c3130-logs\") on node \"crc\" DevicePath \"\"" Dec 06 05:56:45 crc kubenswrapper[4958]: I1206 05:56:45.837024 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29f48971-668e-4d31-b0a1-b3c5088c3130-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "29f48971-668e-4d31-b0a1-b3c5088c3130" (UID: "29f48971-668e-4d31-b0a1-b3c5088c3130"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:56:45 crc kubenswrapper[4958]: I1206 05:56:45.856591 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29f48971-668e-4d31-b0a1-b3c5088c3130-config-data" (OuterVolumeSpecName: "config-data") pod "29f48971-668e-4d31-b0a1-b3c5088c3130" (UID: "29f48971-668e-4d31-b0a1-b3c5088c3130"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:56:45 crc kubenswrapper[4958]: I1206 05:56:45.866688 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-59469c77f6-x865t"] Dec 06 05:56:45 crc kubenswrapper[4958]: I1206 05:56:45.869913 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-59469c77f6-x865t" podUID="6f5bac58-b4db-42a1-a71b-4db14cf42c05" containerName="neutron-api" containerID="cri-o://bb57c12fca33085e8168ef5101b4917a18cb1aeadb3a4944e8a9b34282a7edbf" gracePeriod=30 Dec 06 05:56:45 crc kubenswrapper[4958]: I1206 05:56:45.870585 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-59469c77f6-x865t" podUID="6f5bac58-b4db-42a1-a71b-4db14cf42c05" containerName="neutron-httpd" containerID="cri-o://b636652eb63033081a6b0b9de5dd17b63ea69a1c488e5645b59eae735d116e58" gracePeriod=30 Dec 06 05:56:45 crc kubenswrapper[4958]: I1206 05:56:45.876849 4958 generic.go:334] "Generic (PLEG): container finished" podID="29f48971-668e-4d31-b0a1-b3c5088c3130" containerID="9b0872b86875c20b0b0778cbc086ad8e59719fec88e482b92c841b6cfe9c3e8e" exitCode=137 Dec 06 05:56:45 crc kubenswrapper[4958]: I1206 05:56:45.876886 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 06 05:56:45 crc kubenswrapper[4958]: I1206 05:56:45.876972 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"29f48971-668e-4d31-b0a1-b3c5088c3130","Type":"ContainerDied","Data":"9b0872b86875c20b0b0778cbc086ad8e59719fec88e482b92c841b6cfe9c3e8e"} Dec 06 05:56:45 crc kubenswrapper[4958]: I1206 05:56:45.877001 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"29f48971-668e-4d31-b0a1-b3c5088c3130","Type":"ContainerDied","Data":"e77883bc11d4c9c23ffe73203c7c2684695b69ae76d5afaa38f376f387726f57"} Dec 06 05:56:45 crc kubenswrapper[4958]: I1206 05:56:45.877016 4958 scope.go:117] "RemoveContainer" containerID="9b0872b86875c20b0b0778cbc086ad8e59719fec88e482b92c841b6cfe9c3e8e" Dec 06 05:56:45 crc kubenswrapper[4958]: I1206 05:56:45.911135 4958 scope.go:117] "RemoveContainer" containerID="e7aedc66776098014e0d0491e638b756b68e5cc0a44abe91d87a8290974b4c9b" Dec 06 05:56:45 crc kubenswrapper[4958]: I1206 05:56:45.933094 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29f48971-668e-4d31-b0a1-b3c5088c3130-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 05:56:45 crc kubenswrapper[4958]: I1206 05:56:45.933135 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29f48971-668e-4d31-b0a1-b3c5088c3130-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 05:56:45 crc kubenswrapper[4958]: I1206 05:56:45.952539 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Dec 06 05:56:45 crc kubenswrapper[4958]: I1206 05:56:45.959617 4958 scope.go:117] "RemoveContainer" containerID="9b0872b86875c20b0b0778cbc086ad8e59719fec88e482b92c841b6cfe9c3e8e" Dec 06 05:56:45 crc kubenswrapper[4958]: I1206 05:56:45.962798 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Dec 06 05:56:45 crc kubenswrapper[4958]: E1206 05:56:45.969740 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"9b0872b86875c20b0b0778cbc086ad8e59719fec88e482b92c841b6cfe9c3e8e\": container with ID starting with 9b0872b86875c20b0b0778cbc086ad8e59719fec88e482b92c841b6cfe9c3e8e not found: ID does not exist" containerID="9b0872b86875c20b0b0778cbc086ad8e59719fec88e482b92c841b6cfe9c3e8e" Dec 06 05:56:45 crc kubenswrapper[4958]: I1206 05:56:45.969784 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b0872b86875c20b0b0778cbc086ad8e59719fec88e482b92c841b6cfe9c3e8e"} err="failed to get container status \"9b0872b86875c20b0b0778cbc086ad8e59719fec88e482b92c841b6cfe9c3e8e\": rpc error: code = NotFound desc = could not find container \"9b0872b86875c20b0b0778cbc086ad8e59719fec88e482b92c841b6cfe9c3e8e\": container with ID starting with 9b0872b86875c20b0b0778cbc086ad8e59719fec88e482b92c841b6cfe9c3e8e not found: ID does not exist" Dec 06 05:56:45 crc kubenswrapper[4958]: I1206 05:56:45.969811 4958 scope.go:117] "RemoveContainer" containerID="e7aedc66776098014e0d0491e638b756b68e5cc0a44abe91d87a8290974b4c9b" Dec 06 05:56:45 crc kubenswrapper[4958]: E1206 05:56:45.970235 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7aedc66776098014e0d0491e638b756b68e5cc0a44abe91d87a8290974b4c9b\": container with ID starting with e7aedc66776098014e0d0491e638b756b68e5cc0a44abe91d87a8290974b4c9b not found: ID does not exist" containerID="e7aedc66776098014e0d0491e638b756b68e5cc0a44abe91d87a8290974b4c9b" Dec 06 05:56:45 crc kubenswrapper[4958]: I1206 05:56:45.970285 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7aedc66776098014e0d0491e638b756b68e5cc0a44abe91d87a8290974b4c9b"} err="failed to get container status \"e7aedc66776098014e0d0491e638b756b68e5cc0a44abe91d87a8290974b4c9b\": rpc error: code = NotFound desc = could not find container \"e7aedc66776098014e0d0491e638b756b68e5cc0a44abe91d87a8290974b4c9b\": container with ID starting with e7aedc66776098014e0d0491e638b756b68e5cc0a44abe91d87a8290974b4c9b not found: ID does not exist" Dec 06 05:56:45 crc kubenswrapper[4958]: I1206 05:56:45.973162 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Dec 06 05:56:45 crc kubenswrapper[4958]: E1206 05:56:45.974862 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29f48971-668e-4d31-b0a1-b3c5088c3130" containerName="cinder-api" Dec 06 05:56:45 crc kubenswrapper[4958]: I1206 05:56:45.974886 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="29f48971-668e-4d31-b0a1-b3c5088c3130" containerName="cinder-api" Dec 06 05:56:45 crc kubenswrapper[4958]: E1206 05:56:45.974918 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29f48971-668e-4d31-b0a1-b3c5088c3130" containerName="cinder-api-log" Dec 06 05:56:45 crc kubenswrapper[4958]: I1206 05:56:45.974925 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="29f48971-668e-4d31-b0a1-b3c5088c3130" containerName="cinder-api-log" Dec 06 05:56:45 crc kubenswrapper[4958]: I1206 05:56:45.975116 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="29f48971-668e-4d31-b0a1-b3c5088c3130" containerName="cinder-api-log" Dec 06 05:56:45 crc kubenswrapper[4958]: I1206 05:56:45.975135 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="29f48971-668e-4d31-b0a1-b3c5088c3130" containerName="cinder-api" Dec 06 05:56:45 crc kubenswrapper[4958]: I1206 05:56:45.976087 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Dec 06 05:56:45 crc kubenswrapper[4958]: I1206 05:56:45.979813 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Dec 06 05:56:45 crc kubenswrapper[4958]: I1206 05:56:45.979984 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Dec 06 05:56:45 crc kubenswrapper[4958]: I1206 05:56:45.980121 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Dec 06 05:56:45 crc kubenswrapper[4958]: I1206 05:56:45.989532 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 06 05:56:46 crc kubenswrapper[4958]: I1206 05:56:46.035646 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/820bd80f-831b-4a53-bada-7fb73f7c08ab-public-tls-certs\") pod \"cinder-api-0\" (UID: \"820bd80f-831b-4a53-bada-7fb73f7c08ab\") " pod="openstack/cinder-api-0" Dec 06 05:56:46 crc kubenswrapper[4958]: I1206 05:56:46.035685 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/820bd80f-831b-4a53-bada-7fb73f7c08ab-logs\") pod \"cinder-api-0\" (UID: \"820bd80f-831b-4a53-bada-7fb73f7c08ab\") " pod="openstack/cinder-api-0" Dec 06 05:56:46 crc kubenswrapper[4958]: I1206 05:56:46.035709 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/820bd80f-831b-4a53-bada-7fb73f7c08ab-etc-machine-id\") pod \"cinder-api-0\" (UID: \"820bd80f-831b-4a53-bada-7fb73f7c08ab\") " pod="openstack/cinder-api-0" Dec 06 05:56:46 crc kubenswrapper[4958]: I1206 05:56:46.035800 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljf7q\" (UniqueName: \"kubernetes.io/projected/820bd80f-831b-4a53-bada-7fb73f7c08ab-kube-api-access-ljf7q\") pod \"cinder-api-0\" (UID: \"820bd80f-831b-4a53-bada-7fb73f7c08ab\") " pod="openstack/cinder-api-0" Dec 06 05:56:46 crc kubenswrapper[4958]: I1206 05:56:46.035827 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/820bd80f-831b-4a53-bada-7fb73f7c08ab-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"820bd80f-831b-4a53-bada-7fb73f7c08ab\") " pod="openstack/cinder-api-0" Dec 06 05:56:46 crc kubenswrapper[4958]: I1206 05:56:46.035944 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/820bd80f-831b-4a53-bada-7fb73f7c08ab-scripts\") pod \"cinder-api-0\" (UID: \"820bd80f-831b-4a53-bada-7fb73f7c08ab\") " pod="openstack/cinder-api-0" Dec 06 05:56:46 crc kubenswrapper[4958]: I1206 05:56:46.035959 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/820bd80f-831b-4a53-bada-7fb73f7c08ab-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"820bd80f-831b-4a53-bada-7fb73f7c08ab\") " pod="openstack/cinder-api-0" Dec 06 05:56:46 crc kubenswrapper[4958]: I1206 05:56:46.035987 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/820bd80f-831b-4a53-bada-7fb73f7c08ab-config-data\") pod \"cinder-api-0\" (UID: \"820bd80f-831b-4a53-bada-7fb73f7c08ab\") " pod="openstack/cinder-api-0" Dec 06 05:56:46 crc kubenswrapper[4958]: I1206 05:56:46.036046 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/820bd80f-831b-4a53-bada-7fb73f7c08ab-config-data-custom\") pod \"cinder-api-0\" (UID: \"820bd80f-831b-4a53-bada-7fb73f7c08ab\") " pod="openstack/cinder-api-0" Dec 06 05:56:46 crc kubenswrapper[4958]: I1206 05:56:46.099974 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/watcher-decision-engine-0" Dec 06 05:56:46 crc kubenswrapper[4958]: I1206 05:56:46.132210 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/watcher-decision-engine-0" Dec 06 05:56:46 crc kubenswrapper[4958]: I1206 05:56:46.137272 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/820bd80f-831b-4a53-bada-7fb73f7c08ab-config-data-custom\") pod \"cinder-api-0\" (UID: \"820bd80f-831b-4a53-bada-7fb73f7c08ab\") " pod="openstack/cinder-api-0" Dec 06 05:56:46 crc kubenswrapper[4958]: I1206 05:56:46.137357 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/820bd80f-831b-4a53-bada-7fb73f7c08ab-public-tls-certs\") pod \"cinder-api-0\" (UID: \"820bd80f-831b-4a53-bada-7fb73f7c08ab\") " pod="openstack/cinder-api-0" Dec 06 05:56:46 crc kubenswrapper[4958]: I1206 05:56:46.137377 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/820bd80f-831b-4a53-bada-7fb73f7c08ab-logs\") pod \"cinder-api-0\" (UID: \"820bd80f-831b-4a53-bada-7fb73f7c08ab\") " pod="openstack/cinder-api-0" Dec 06 05:56:46 crc kubenswrapper[4958]: I1206 05:56:46.137400 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/820bd80f-831b-4a53-bada-7fb73f7c08ab-etc-machine-id\") pod \"cinder-api-0\" (UID: \"820bd80f-831b-4a53-bada-7fb73f7c08ab\") " pod="openstack/cinder-api-0" Dec 06 05:56:46 crc kubenswrapper[4958]: I1206 05:56:46.137446 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljf7q\" (UniqueName: \"kubernetes.io/projected/820bd80f-831b-4a53-bada-7fb73f7c08ab-kube-api-access-ljf7q\") pod \"cinder-api-0\" (UID: \"820bd80f-831b-4a53-bada-7fb73f7c08ab\") " pod="openstack/cinder-api-0" Dec 06 05:56:46 crc kubenswrapper[4958]: I1206 05:56:46.137484 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/820bd80f-831b-4a53-bada-7fb73f7c08ab-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"820bd80f-831b-4a53-bada-7fb73f7c08ab\") " pod="openstack/cinder-api-0" Dec 06 05:56:46 crc kubenswrapper[4958]: I1206 05:56:46.137504 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/820bd80f-831b-4a53-bada-7fb73f7c08ab-scripts\") pod \"cinder-api-0\" (UID: \"820bd80f-831b-4a53-bada-7fb73f7c08ab\") " pod="openstack/cinder-api-0" Dec 06 05:56:46 crc kubenswrapper[4958]: I1206 05:56:46.137520 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/820bd80f-831b-4a53-bada-7fb73f7c08ab-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"820bd80f-831b-4a53-bada-7fb73f7c08ab\") " pod="openstack/cinder-api-0" Dec 06 05:56:46 crc kubenswrapper[4958]: I1206 05:56:46.137543 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/820bd80f-831b-4a53-bada-7fb73f7c08ab-config-data\") pod \"cinder-api-0\" (UID: \"820bd80f-831b-4a53-bada-7fb73f7c08ab\") " pod="openstack/cinder-api-0" Dec 06 05:56:46 crc kubenswrapper[4958]: I1206 05:56:46.137662 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/820bd80f-831b-4a53-bada-7fb73f7c08ab-etc-machine-id\") pod \"cinder-api-0\" (UID: \"820bd80f-831b-4a53-bada-7fb73f7c08ab\") " pod="openstack/cinder-api-0" Dec 06 05:56:46 crc kubenswrapper[4958]: I1206 05:56:46.138722 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/820bd80f-831b-4a53-bada-7fb73f7c08ab-logs\") pod \"cinder-api-0\" (UID: \"820bd80f-831b-4a53-bada-7fb73f7c08ab\") " pod="openstack/cinder-api-0" Dec 06 05:56:46 crc kubenswrapper[4958]: I1206 05:56:46.142442 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/820bd80f-831b-4a53-bada-7fb73f7c08ab-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"820bd80f-831b-4a53-bada-7fb73f7c08ab\") " pod="openstack/cinder-api-0" Dec 06 05:56:46 crc kubenswrapper[4958]: I1206 05:56:46.142833 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/820bd80f-831b-4a53-bada-7fb73f7c08ab-scripts\") pod \"cinder-api-0\" (UID: \"820bd80f-831b-4a53-bada-7fb73f7c08ab\") " pod="openstack/cinder-api-0" Dec 06 05:56:46 crc kubenswrapper[4958]: I1206 05:56:46.143490 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/820bd80f-831b-4a53-bada-7fb73f7c08ab-config-data-custom\") pod \"cinder-api-0\" (UID: \"820bd80f-831b-4a53-bada-7fb73f7c08ab\") " pod="openstack/cinder-api-0" Dec 06 05:56:46 crc kubenswrapper[4958]: I1206 05:56:46.143578 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/820bd80f-831b-4a53-bada-7fb73f7c08ab-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"820bd80f-831b-4a53-bada-7fb73f7c08ab\") " pod="openstack/cinder-api-0" Dec 06 05:56:46 crc kubenswrapper[4958]: I1206 05:56:46.145509 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/820bd80f-831b-4a53-bada-7fb73f7c08ab-config-data\") pod \"cinder-api-0\" (UID: \"820bd80f-831b-4a53-bada-7fb73f7c08ab\") " pod="openstack/cinder-api-0" Dec 06 05:56:46 crc kubenswrapper[4958]: I1206 05:56:46.146404 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/820bd80f-831b-4a53-bada-7fb73f7c08ab-public-tls-certs\") pod \"cinder-api-0\" (UID: \"820bd80f-831b-4a53-bada-7fb73f7c08ab\") " pod="openstack/cinder-api-0" Dec 06 05:56:46 crc kubenswrapper[4958]: I1206 05:56:46.160888 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljf7q\" (UniqueName: 
\"kubernetes.io/projected/820bd80f-831b-4a53-bada-7fb73f7c08ab-kube-api-access-ljf7q\") pod \"cinder-api-0\" (UID: \"820bd80f-831b-4a53-bada-7fb73f7c08ab\") " pod="openstack/cinder-api-0" Dec 06 05:56:46 crc kubenswrapper[4958]: I1206 05:56:46.307013 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 06 05:56:46 crc kubenswrapper[4958]: I1206 05:56:46.795452 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 06 05:56:46 crc kubenswrapper[4958]: I1206 05:56:46.903253 4958 generic.go:334] "Generic (PLEG): container finished" podID="6f5bac58-b4db-42a1-a71b-4db14cf42c05" containerID="b636652eb63033081a6b0b9de5dd17b63ea69a1c488e5645b59eae735d116e58" exitCode=0 Dec 06 05:56:46 crc kubenswrapper[4958]: I1206 05:56:46.903317 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-59469c77f6-x865t" event={"ID":"6f5bac58-b4db-42a1-a71b-4db14cf42c05","Type":"ContainerDied","Data":"b636652eb63033081a6b0b9de5dd17b63ea69a1c488e5645b59eae735d116e58"} Dec 06 05:56:46 crc kubenswrapper[4958]: I1206 05:56:46.907879 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"820bd80f-831b-4a53-bada-7fb73f7c08ab","Type":"ContainerStarted","Data":"f6cb1c3caddd53c4ed598f9418af3ae73dad7917b6f90f75573ad157480ec9d0"} Dec 06 05:56:46 crc kubenswrapper[4958]: I1206 05:56:46.908635 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/watcher-decision-engine-0" Dec 06 05:56:46 crc kubenswrapper[4958]: I1206 05:56:46.908813 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1867571b-2567-4a8b-bf87-8f7ccb648791" containerName="ceilometer-central-agent" containerID="cri-o://2b977679bc930a8bacc932ec0dbcfe380dab2eb6561b6445f6046e454e2d011b" gracePeriod=30 Dec 06 05:56:46 crc kubenswrapper[4958]: I1206 05:56:46.908896 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1867571b-2567-4a8b-bf87-8f7ccb648791" containerName="proxy-httpd" containerID="cri-o://6c53ac49fb441311238f8fe7ed89927f29f0ad4bcd100d3fdc2a87cbf2aa2f50" gracePeriod=30 Dec 06 05:56:46 crc kubenswrapper[4958]: I1206 05:56:46.908950 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1867571b-2567-4a8b-bf87-8f7ccb648791" containerName="sg-core" containerID="cri-o://bf4f252089d17f58aa1adf4aacb59927707ceddf69df1131120086600b32b22e" gracePeriod=30 Dec 06 05:56:46 crc kubenswrapper[4958]: I1206 05:56:46.908996 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="1867571b-2567-4a8b-bf87-8f7ccb648791" containerName="ceilometer-notification-agent" containerID="cri-o://b7d1e655b3353feced625dba6e89598762eaeefe75694ccd74f6bd2385801bd9" gracePeriod=30 Dec 06 05:56:46 crc kubenswrapper[4958]: I1206 05:56:46.952729 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/watcher-decision-engine-0" Dec 06 05:56:47 crc kubenswrapper[4958]: I1206 05:56:47.796637 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29f48971-668e-4d31-b0a1-b3c5088c3130" path="/var/lib/kubelet/pods/29f48971-668e-4d31-b0a1-b3c5088c3130/volumes" Dec 06 05:56:47 crc kubenswrapper[4958]: I1206 05:56:47.923771 4958 generic.go:334] "Generic (PLEG): container finished" podID="1867571b-2567-4a8b-bf87-8f7ccb648791" 
containerID="6c53ac49fb441311238f8fe7ed89927f29f0ad4bcd100d3fdc2a87cbf2aa2f50" exitCode=0 Dec 06 05:56:47 crc kubenswrapper[4958]: I1206 05:56:47.924143 4958 generic.go:334] "Generic (PLEG): container finished" podID="1867571b-2567-4a8b-bf87-8f7ccb648791" containerID="bf4f252089d17f58aa1adf4aacb59927707ceddf69df1131120086600b32b22e" exitCode=2 Dec 06 05:56:47 crc kubenswrapper[4958]: I1206 05:56:47.924153 4958 generic.go:334] "Generic (PLEG): container finished" podID="1867571b-2567-4a8b-bf87-8f7ccb648791" containerID="b7d1e655b3353feced625dba6e89598762eaeefe75694ccd74f6bd2385801bd9" exitCode=0 Dec 06 05:56:47 crc kubenswrapper[4958]: I1206 05:56:47.923861 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1867571b-2567-4a8b-bf87-8f7ccb648791","Type":"ContainerDied","Data":"6c53ac49fb441311238f8fe7ed89927f29f0ad4bcd100d3fdc2a87cbf2aa2f50"} Dec 06 05:56:47 crc kubenswrapper[4958]: I1206 05:56:47.924238 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1867571b-2567-4a8b-bf87-8f7ccb648791","Type":"ContainerDied","Data":"bf4f252089d17f58aa1adf4aacb59927707ceddf69df1131120086600b32b22e"} Dec 06 05:56:47 crc kubenswrapper[4958]: I1206 05:56:47.924255 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1867571b-2567-4a8b-bf87-8f7ccb648791","Type":"ContainerDied","Data":"b7d1e655b3353feced625dba6e89598762eaeefe75694ccd74f6bd2385801bd9"} Dec 06 05:56:47 crc kubenswrapper[4958]: I1206 05:56:47.928795 4958 generic.go:334] "Generic (PLEG): container finished" podID="d874f46b-0e0f-4304-8b7d-43a68d87dd5d" containerID="26d5a7814d44a883dcb479f1d33f7413a1cb6ffc6d9050d139f049035eec8ee5" exitCode=0 Dec 06 05:56:47 crc kubenswrapper[4958]: I1206 05:56:47.928837 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-tw6sr" event={"ID":"d874f46b-0e0f-4304-8b7d-43a68d87dd5d","Type":"ContainerDied","Data":"26d5a7814d44a883dcb479f1d33f7413a1cb6ffc6d9050d139f049035eec8ee5"} Dec 06 05:56:47 crc kubenswrapper[4958]: I1206 05:56:47.931059 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"820bd80f-831b-4a53-bada-7fb73f7c08ab","Type":"ContainerStarted","Data":"54e56b8cd72ab273fd0c6f84ff008ff8f0ac6f6b0153881b8962a1483aa771fd"} Dec 06 05:56:48 crc kubenswrapper[4958]: I1206 05:56:48.941688 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"820bd80f-831b-4a53-bada-7fb73f7c08ab","Type":"ContainerStarted","Data":"4661b2a73fc6d5f794df830f93acd400c457d688dcbcf3f0175a2da0d0e431cd"} Dec 06 05:56:49 crc kubenswrapper[4958]: I1206 05:56:49.328482 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-tw6sr" Dec 06 05:56:49 crc kubenswrapper[4958]: I1206 05:56:49.351970 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.351947286 podStartE2EDuration="4.351947286s" podCreationTimestamp="2025-12-06 05:56:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:56:48.970118021 +0000 UTC m=+1719.503888784" watchObservedRunningTime="2025-12-06 05:56:49.351947286 +0000 UTC m=+1719.885718049" Dec 06 05:56:49 crc kubenswrapper[4958]: I1206 05:56:49.428075 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d874f46b-0e0f-4304-8b7d-43a68d87dd5d-combined-ca-bundle\") pod \"d874f46b-0e0f-4304-8b7d-43a68d87dd5d\" (UID: \"d874f46b-0e0f-4304-8b7d-43a68d87dd5d\") " Dec 06 05:56:49 crc kubenswrapper[4958]: I1206 05:56:49.428144 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d874f46b-0e0f-4304-8b7d-43a68d87dd5d-config-data\") pod \"d874f46b-0e0f-4304-8b7d-43a68d87dd5d\" (UID: \"d874f46b-0e0f-4304-8b7d-43a68d87dd5d\") " Dec 06 05:56:49 crc kubenswrapper[4958]: I1206 05:56:49.428262 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w5vb4\" (UniqueName: \"kubernetes.io/projected/d874f46b-0e0f-4304-8b7d-43a68d87dd5d-kube-api-access-w5vb4\") pod \"d874f46b-0e0f-4304-8b7d-43a68d87dd5d\" (UID: \"d874f46b-0e0f-4304-8b7d-43a68d87dd5d\") " Dec 06 05:56:49 crc kubenswrapper[4958]: I1206 05:56:49.428313 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d874f46b-0e0f-4304-8b7d-43a68d87dd5d-scripts\") pod \"d874f46b-0e0f-4304-8b7d-43a68d87dd5d\" (UID: \"d874f46b-0e0f-4304-8b7d-43a68d87dd5d\") " Dec 06 05:56:49 crc kubenswrapper[4958]: I1206 05:56:49.439453 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d874f46b-0e0f-4304-8b7d-43a68d87dd5d-kube-api-access-w5vb4" (OuterVolumeSpecName: "kube-api-access-w5vb4") pod "d874f46b-0e0f-4304-8b7d-43a68d87dd5d" (UID: "d874f46b-0e0f-4304-8b7d-43a68d87dd5d"). InnerVolumeSpecName "kube-api-access-w5vb4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:56:49 crc kubenswrapper[4958]: I1206 05:56:49.448133 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d874f46b-0e0f-4304-8b7d-43a68d87dd5d-scripts" (OuterVolumeSpecName: "scripts") pod "d874f46b-0e0f-4304-8b7d-43a68d87dd5d" (UID: "d874f46b-0e0f-4304-8b7d-43a68d87dd5d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:56:49 crc kubenswrapper[4958]: I1206 05:56:49.466769 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d874f46b-0e0f-4304-8b7d-43a68d87dd5d-config-data" (OuterVolumeSpecName: "config-data") pod "d874f46b-0e0f-4304-8b7d-43a68d87dd5d" (UID: "d874f46b-0e0f-4304-8b7d-43a68d87dd5d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:56:49 crc kubenswrapper[4958]: I1206 05:56:49.468623 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d874f46b-0e0f-4304-8b7d-43a68d87dd5d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d874f46b-0e0f-4304-8b7d-43a68d87dd5d" (UID: "d874f46b-0e0f-4304-8b7d-43a68d87dd5d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:56:49 crc kubenswrapper[4958]: I1206 05:56:49.530697 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d874f46b-0e0f-4304-8b7d-43a68d87dd5d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 05:56:49 crc kubenswrapper[4958]: I1206 05:56:49.530742 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d874f46b-0e0f-4304-8b7d-43a68d87dd5d-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 05:56:49 crc kubenswrapper[4958]: I1206 05:56:49.530757 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w5vb4\" (UniqueName: \"kubernetes.io/projected/d874f46b-0e0f-4304-8b7d-43a68d87dd5d-kube-api-access-w5vb4\") on node \"crc\" DevicePath \"\"" Dec 06 05:56:49 crc kubenswrapper[4958]: I1206 05:56:49.530772 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d874f46b-0e0f-4304-8b7d-43a68d87dd5d-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 05:56:49 crc kubenswrapper[4958]: I1206 05:56:49.951869 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-tw6sr" Dec 06 05:56:49 crc kubenswrapper[4958]: I1206 05:56:49.951884 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-tw6sr" event={"ID":"d874f46b-0e0f-4304-8b7d-43a68d87dd5d","Type":"ContainerDied","Data":"a9ebca2656080156486c2dbb78b8e5eb616ee4983e55d4e4817462eb86eb57c7"} Dec 06 05:56:49 crc kubenswrapper[4958]: I1206 05:56:49.952304 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9ebca2656080156486c2dbb78b8e5eb616ee4983e55d4e4817462eb86eb57c7" Dec 06 05:56:49 crc kubenswrapper[4958]: I1206 05:56:49.952332 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Dec 06 05:56:50 crc kubenswrapper[4958]: I1206 05:56:50.067534 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 06 05:56:50 crc kubenswrapper[4958]: E1206 05:56:50.068459 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d874f46b-0e0f-4304-8b7d-43a68d87dd5d" containerName="nova-cell0-conductor-db-sync" Dec 06 05:56:50 crc kubenswrapper[4958]: I1206 05:56:50.068515 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="d874f46b-0e0f-4304-8b7d-43a68d87dd5d" containerName="nova-cell0-conductor-db-sync" Dec 06 05:56:50 crc kubenswrapper[4958]: I1206 05:56:50.068975 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="d874f46b-0e0f-4304-8b7d-43a68d87dd5d" containerName="nova-cell0-conductor-db-sync" Dec 06 05:56:50 crc kubenswrapper[4958]: I1206 05:56:50.070075 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 06 05:56:50 crc kubenswrapper[4958]: I1206 05:56:50.075793 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-p6jjb" Dec 06 05:56:50 crc kubenswrapper[4958]: I1206 05:56:50.076108 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Dec 06 05:56:50 crc kubenswrapper[4958]: I1206 05:56:50.089339 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 06 05:56:50 crc kubenswrapper[4958]: I1206 05:56:50.144325 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/019f5f33-5a9c-42c6-8379-cfc4745f5be3-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"019f5f33-5a9c-42c6-8379-cfc4745f5be3\") " pod="openstack/nova-cell0-conductor-0" Dec 06 05:56:50 crc kubenswrapper[4958]: I1206 05:56:50.144381 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lx2bn\" (UniqueName: \"kubernetes.io/projected/019f5f33-5a9c-42c6-8379-cfc4745f5be3-kube-api-access-lx2bn\") pod \"nova-cell0-conductor-0\" (UID: \"019f5f33-5a9c-42c6-8379-cfc4745f5be3\") " pod="openstack/nova-cell0-conductor-0" Dec 06 05:56:50 crc kubenswrapper[4958]: I1206 05:56:50.144427 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/019f5f33-5a9c-42c6-8379-cfc4745f5be3-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"019f5f33-5a9c-42c6-8379-cfc4745f5be3\") " pod="openstack/nova-cell0-conductor-0" Dec 06 05:56:50 crc kubenswrapper[4958]: I1206 05:56:50.245914 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lx2bn\" (UniqueName: \"kubernetes.io/projected/019f5f33-5a9c-42c6-8379-cfc4745f5be3-kube-api-access-lx2bn\") pod \"nova-cell0-conductor-0\" (UID: \"019f5f33-5a9c-42c6-8379-cfc4745f5be3\") " pod="openstack/nova-cell0-conductor-0" Dec 06 05:56:50 crc kubenswrapper[4958]: I1206 05:56:50.245992 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/019f5f33-5a9c-42c6-8379-cfc4745f5be3-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"019f5f33-5a9c-42c6-8379-cfc4745f5be3\") " pod="openstack/nova-cell0-conductor-0" Dec 06 05:56:50 crc kubenswrapper[4958]: I1206 05:56:50.246156 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/019f5f33-5a9c-42c6-8379-cfc4745f5be3-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"019f5f33-5a9c-42c6-8379-cfc4745f5be3\") " pod="openstack/nova-cell0-conductor-0" Dec 06 05:56:50 crc kubenswrapper[4958]: I1206 05:56:50.253443 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/019f5f33-5a9c-42c6-8379-cfc4745f5be3-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"019f5f33-5a9c-42c6-8379-cfc4745f5be3\") " pod="openstack/nova-cell0-conductor-0" Dec 06 05:56:50 crc kubenswrapper[4958]: I1206 05:56:50.253561 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/019f5f33-5a9c-42c6-8379-cfc4745f5be3-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" 
(UID: \"019f5f33-5a9c-42c6-8379-cfc4745f5be3\") " pod="openstack/nova-cell0-conductor-0" Dec 06 05:56:50 crc kubenswrapper[4958]: I1206 05:56:50.265685 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lx2bn\" (UniqueName: \"kubernetes.io/projected/019f5f33-5a9c-42c6-8379-cfc4745f5be3-kube-api-access-lx2bn\") pod \"nova-cell0-conductor-0\" (UID: \"019f5f33-5a9c-42c6-8379-cfc4745f5be3\") " pod="openstack/nova-cell0-conductor-0" Dec 06 05:56:50 crc kubenswrapper[4958]: I1206 05:56:50.398211 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 06 05:56:50 crc kubenswrapper[4958]: I1206 05:56:50.803256 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 06 05:56:50 crc kubenswrapper[4958]: W1206 05:56:50.854577 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod019f5f33_5a9c_42c6_8379_cfc4745f5be3.slice/crio-52c22a3595f33e974594ea6ae3403d6676f7c19ee7957dcd3bfa86b153c9b651 WatchSource:0}: Error finding container 52c22a3595f33e974594ea6ae3403d6676f7c19ee7957dcd3bfa86b153c9b651: Status 404 returned error can't find the container with id 52c22a3595f33e974594ea6ae3403d6676f7c19ee7957dcd3bfa86b153c9b651 Dec 06 05:56:50 crc kubenswrapper[4958]: I1206 05:56:50.965288 4958 generic.go:334] "Generic (PLEG): container finished" podID="1867571b-2567-4a8b-bf87-8f7ccb648791" containerID="2b977679bc930a8bacc932ec0dbcfe380dab2eb6561b6445f6046e454e2d011b" exitCode=0 Dec 06 05:56:50 crc kubenswrapper[4958]: I1206 05:56:50.965341 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1867571b-2567-4a8b-bf87-8f7ccb648791","Type":"ContainerDied","Data":"2b977679bc930a8bacc932ec0dbcfe380dab2eb6561b6445f6046e454e2d011b"} Dec 06 05:56:50 crc kubenswrapper[4958]: I1206 05:56:50.966560 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"019f5f33-5a9c-42c6-8379-cfc4745f5be3","Type":"ContainerStarted","Data":"52c22a3595f33e974594ea6ae3403d6676f7c19ee7957dcd3bfa86b153c9b651"} Dec 06 05:56:51 crc kubenswrapper[4958]: I1206 05:56:51.458863 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-59469c77f6-x865t" Dec 06 05:56:51 crc kubenswrapper[4958]: I1206 05:56:51.590609 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 06 05:56:51 crc kubenswrapper[4958]: I1206 05:56:51.597855 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f5bac58-b4db-42a1-a71b-4db14cf42c05-combined-ca-bundle\") pod \"6f5bac58-b4db-42a1-a71b-4db14cf42c05\" (UID: \"6f5bac58-b4db-42a1-a71b-4db14cf42c05\") " Dec 06 05:56:51 crc kubenswrapper[4958]: I1206 05:56:51.597946 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6f5bac58-b4db-42a1-a71b-4db14cf42c05-config\") pod \"6f5bac58-b4db-42a1-a71b-4db14cf42c05\" (UID: \"6f5bac58-b4db-42a1-a71b-4db14cf42c05\") " Dec 06 05:56:51 crc kubenswrapper[4958]: I1206 05:56:51.598030 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f5bac58-b4db-42a1-a71b-4db14cf42c05-ovndb-tls-certs\") pod \"6f5bac58-b4db-42a1-a71b-4db14cf42c05\" (UID: \"6f5bac58-b4db-42a1-a71b-4db14cf42c05\") " Dec 06 05:56:51 crc kubenswrapper[4958]: I1206 05:56:51.598058 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-www5f\" (UniqueName: \"kubernetes.io/projected/6f5bac58-b4db-42a1-a71b-4db14cf42c05-kube-api-access-www5f\") pod \"6f5bac58-b4db-42a1-a71b-4db14cf42c05\" (UID: \"6f5bac58-b4db-42a1-a71b-4db14cf42c05\") " Dec 06 05:56:51 crc kubenswrapper[4958]: I1206 05:56:51.598115 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6f5bac58-b4db-42a1-a71b-4db14cf42c05-httpd-config\") pod \"6f5bac58-b4db-42a1-a71b-4db14cf42c05\" (UID: \"6f5bac58-b4db-42a1-a71b-4db14cf42c05\") " Dec 06 05:56:51 crc kubenswrapper[4958]: I1206 05:56:51.604007 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f5bac58-b4db-42a1-a71b-4db14cf42c05-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "6f5bac58-b4db-42a1-a71b-4db14cf42c05" (UID: "6f5bac58-b4db-42a1-a71b-4db14cf42c05"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:56:51 crc kubenswrapper[4958]: I1206 05:56:51.607626 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f5bac58-b4db-42a1-a71b-4db14cf42c05-kube-api-access-www5f" (OuterVolumeSpecName: "kube-api-access-www5f") pod "6f5bac58-b4db-42a1-a71b-4db14cf42c05" (UID: "6f5bac58-b4db-42a1-a71b-4db14cf42c05"). InnerVolumeSpecName "kube-api-access-www5f". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:56:51 crc kubenswrapper[4958]: I1206 05:56:51.658347 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f5bac58-b4db-42a1-a71b-4db14cf42c05-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6f5bac58-b4db-42a1-a71b-4db14cf42c05" (UID: "6f5bac58-b4db-42a1-a71b-4db14cf42c05"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:56:51 crc kubenswrapper[4958]: I1206 05:56:51.660498 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f5bac58-b4db-42a1-a71b-4db14cf42c05-config" (OuterVolumeSpecName: "config") pod "6f5bac58-b4db-42a1-a71b-4db14cf42c05" (UID: "6f5bac58-b4db-42a1-a71b-4db14cf42c05"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:56:51 crc kubenswrapper[4958]: I1206 05:56:51.682450 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f5bac58-b4db-42a1-a71b-4db14cf42c05-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "6f5bac58-b4db-42a1-a71b-4db14cf42c05" (UID: "6f5bac58-b4db-42a1-a71b-4db14cf42c05"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:56:51 crc kubenswrapper[4958]: I1206 05:56:51.699583 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1867571b-2567-4a8b-bf87-8f7ccb648791-combined-ca-bundle\") pod \"1867571b-2567-4a8b-bf87-8f7ccb648791\" (UID: \"1867571b-2567-4a8b-bf87-8f7ccb648791\") " Dec 06 05:56:51 crc kubenswrapper[4958]: I1206 05:56:51.699648 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1867571b-2567-4a8b-bf87-8f7ccb648791-sg-core-conf-yaml\") pod \"1867571b-2567-4a8b-bf87-8f7ccb648791\" (UID: \"1867571b-2567-4a8b-bf87-8f7ccb648791\") " Dec 06 05:56:51 crc kubenswrapper[4958]: I1206 05:56:51.699717 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1867571b-2567-4a8b-bf87-8f7ccb648791-log-httpd\") pod \"1867571b-2567-4a8b-bf87-8f7ccb648791\" (UID: \"1867571b-2567-4a8b-bf87-8f7ccb648791\") " Dec 06 05:56:51 crc kubenswrapper[4958]: I1206 05:56:51.699802 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9dwdr\" (UniqueName: \"kubernetes.io/projected/1867571b-2567-4a8b-bf87-8f7ccb648791-kube-api-access-9dwdr\") pod \"1867571b-2567-4a8b-bf87-8f7ccb648791\" (UID: \"1867571b-2567-4a8b-bf87-8f7ccb648791\") " Dec 06 05:56:51 crc kubenswrapper[4958]: I1206 05:56:51.699835 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1867571b-2567-4a8b-bf87-8f7ccb648791-config-data\") pod \"1867571b-2567-4a8b-bf87-8f7ccb648791\" (UID: \"1867571b-2567-4a8b-bf87-8f7ccb648791\") " Dec 06 05:56:51 crc kubenswrapper[4958]: I1206 05:56:51.699865 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1867571b-2567-4a8b-bf87-8f7ccb648791-run-httpd\") pod \"1867571b-2567-4a8b-bf87-8f7ccb648791\" (UID: \"1867571b-2567-4a8b-bf87-8f7ccb648791\") " Dec 06 05:56:51 crc kubenswrapper[4958]: I1206 05:56:51.699903 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1867571b-2567-4a8b-bf87-8f7ccb648791-scripts\") pod \"1867571b-2567-4a8b-bf87-8f7ccb648791\" (UID: \"1867571b-2567-4a8b-bf87-8f7ccb648791\") " Dec 06 05:56:51 crc kubenswrapper[4958]: I1206 05:56:51.700373 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1867571b-2567-4a8b-bf87-8f7ccb648791-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "1867571b-2567-4a8b-bf87-8f7ccb648791" (UID: "1867571b-2567-4a8b-bf87-8f7ccb648791"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 05:56:51 crc kubenswrapper[4958]: I1206 05:56:51.700413 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1867571b-2567-4a8b-bf87-8f7ccb648791-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "1867571b-2567-4a8b-bf87-8f7ccb648791" (UID: "1867571b-2567-4a8b-bf87-8f7ccb648791"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 05:56:51 crc kubenswrapper[4958]: I1206 05:56:51.700568 4958 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1867571b-2567-4a8b-bf87-8f7ccb648791-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 06 05:56:51 crc kubenswrapper[4958]: I1206 05:56:51.700585 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f5bac58-b4db-42a1-a71b-4db14cf42c05-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 05:56:51 crc kubenswrapper[4958]: I1206 05:56:51.700601 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/6f5bac58-b4db-42a1-a71b-4db14cf42c05-config\") on node \"crc\" DevicePath \"\"" Dec 06 05:56:51 crc kubenswrapper[4958]: I1206 05:56:51.700612 4958 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/1867571b-2567-4a8b-bf87-8f7ccb648791-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 06 05:56:51 crc kubenswrapper[4958]: I1206 05:56:51.700623 4958 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f5bac58-b4db-42a1-a71b-4db14cf42c05-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 06 05:56:51 crc kubenswrapper[4958]: I1206 05:56:51.700636 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-www5f\" (UniqueName: \"kubernetes.io/projected/6f5bac58-b4db-42a1-a71b-4db14cf42c05-kube-api-access-www5f\") on node \"crc\" DevicePath \"\"" Dec 06 05:56:51 crc kubenswrapper[4958]: I1206 05:56:51.700648 4958 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/6f5bac58-b4db-42a1-a71b-4db14cf42c05-httpd-config\") on node \"crc\" DevicePath \"\"" Dec 06 05:56:51 crc kubenswrapper[4958]: I1206 05:56:51.703021 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1867571b-2567-4a8b-bf87-8f7ccb648791-kube-api-access-9dwdr" (OuterVolumeSpecName: "kube-api-access-9dwdr") pod "1867571b-2567-4a8b-bf87-8f7ccb648791" (UID: "1867571b-2567-4a8b-bf87-8f7ccb648791"). InnerVolumeSpecName "kube-api-access-9dwdr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:56:51 crc kubenswrapper[4958]: I1206 05:56:51.703179 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1867571b-2567-4a8b-bf87-8f7ccb648791-scripts" (OuterVolumeSpecName: "scripts") pod "1867571b-2567-4a8b-bf87-8f7ccb648791" (UID: "1867571b-2567-4a8b-bf87-8f7ccb648791"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:56:51 crc kubenswrapper[4958]: I1206 05:56:51.726691 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1867571b-2567-4a8b-bf87-8f7ccb648791-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "1867571b-2567-4a8b-bf87-8f7ccb648791" (UID: "1867571b-2567-4a8b-bf87-8f7ccb648791"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:56:51 crc kubenswrapper[4958]: I1206 05:56:51.774317 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1867571b-2567-4a8b-bf87-8f7ccb648791-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1867571b-2567-4a8b-bf87-8f7ccb648791" (UID: "1867571b-2567-4a8b-bf87-8f7ccb648791"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:56:51 crc kubenswrapper[4958]: I1206 05:56:51.802764 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1867571b-2567-4a8b-bf87-8f7ccb648791-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 05:56:51 crc kubenswrapper[4958]: I1206 05:56:51.802801 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1867571b-2567-4a8b-bf87-8f7ccb648791-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 05:56:51 crc kubenswrapper[4958]: I1206 05:56:51.802814 4958 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/1867571b-2567-4a8b-bf87-8f7ccb648791-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 06 05:56:51 crc kubenswrapper[4958]: I1206 05:56:51.802823 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9dwdr\" (UniqueName: \"kubernetes.io/projected/1867571b-2567-4a8b-bf87-8f7ccb648791-kube-api-access-9dwdr\") on node \"crc\" DevicePath \"\"" Dec 06 05:56:51 crc kubenswrapper[4958]: I1206 05:56:51.812302 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1867571b-2567-4a8b-bf87-8f7ccb648791-config-data" (OuterVolumeSpecName: "config-data") pod "1867571b-2567-4a8b-bf87-8f7ccb648791" (UID: "1867571b-2567-4a8b-bf87-8f7ccb648791"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:56:51 crc kubenswrapper[4958]: I1206 05:56:51.904435 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1867571b-2567-4a8b-bf87-8f7ccb648791-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 05:56:51 crc kubenswrapper[4958]: I1206 05:56:51.978142 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"1867571b-2567-4a8b-bf87-8f7ccb648791","Type":"ContainerDied","Data":"6b8270de8e9ea66876126cc731adff7d94b36bc6a16ff1f2523adbb7cec6491d"} Dec 06 05:56:51 crc kubenswrapper[4958]: I1206 05:56:51.978170 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 06 05:56:51 crc kubenswrapper[4958]: I1206 05:56:51.978187 4958 scope.go:117] "RemoveContainer" containerID="6c53ac49fb441311238f8fe7ed89927f29f0ad4bcd100d3fdc2a87cbf2aa2f50" Dec 06 05:56:51 crc kubenswrapper[4958]: I1206 05:56:51.981830 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"019f5f33-5a9c-42c6-8379-cfc4745f5be3","Type":"ContainerStarted","Data":"07370563cdf07188af6ffed80840a915bda3a8a644191080c8b1b5cce8fed86e"} Dec 06 05:56:51 crc kubenswrapper[4958]: I1206 05:56:51.982054 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Dec 06 05:56:51 crc kubenswrapper[4958]: I1206 05:56:51.984316 4958 generic.go:334] "Generic (PLEG): container finished" podID="6f5bac58-b4db-42a1-a71b-4db14cf42c05" containerID="bb57c12fca33085e8168ef5101b4917a18cb1aeadb3a4944e8a9b34282a7edbf" exitCode=0 Dec 06 05:56:51 crc kubenswrapper[4958]: I1206 05:56:51.984347 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-59469c77f6-x865t" event={"ID":"6f5bac58-b4db-42a1-a71b-4db14cf42c05","Type":"ContainerDied","Data":"bb57c12fca33085e8168ef5101b4917a18cb1aeadb3a4944e8a9b34282a7edbf"} Dec 06 05:56:51 crc kubenswrapper[4958]: I1206 05:56:51.984368 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-59469c77f6-x865t" Dec 06 05:56:51 crc kubenswrapper[4958]: I1206 05:56:51.984382 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-59469c77f6-x865t" event={"ID":"6f5bac58-b4db-42a1-a71b-4db14cf42c05","Type":"ContainerDied","Data":"2d7592b243eb6af927ff0da2300082a0b1aac1011399d19b07c7101e554bb42b"} Dec 06 05:56:52 crc kubenswrapper[4958]: I1206 05:56:52.001299 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.001277447 podStartE2EDuration="2.001277447s" podCreationTimestamp="2025-12-06 05:56:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:56:51.998379208 +0000 UTC m=+1722.532149991" watchObservedRunningTime="2025-12-06 05:56:52.001277447 +0000 UTC m=+1722.535048210" Dec 06 05:56:52 crc kubenswrapper[4958]: I1206 05:56:52.017634 4958 scope.go:117] "RemoveContainer" containerID="bf4f252089d17f58aa1adf4aacb59927707ceddf69df1131120086600b32b22e" Dec 06 05:56:52 crc kubenswrapper[4958]: I1206 05:56:52.019994 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-59469c77f6-x865t"] Dec 06 05:56:52 crc kubenswrapper[4958]: I1206 05:56:52.029228 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-59469c77f6-x865t"] Dec 06 05:56:52 crc kubenswrapper[4958]: I1206 05:56:52.047384 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 06 05:56:52 crc kubenswrapper[4958]: I1206 05:56:52.048690 4958 scope.go:117] "RemoveContainer" containerID="b7d1e655b3353feced625dba6e89598762eaeefe75694ccd74f6bd2385801bd9" Dec 06 05:56:52 crc kubenswrapper[4958]: I1206 05:56:52.063314 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 06 05:56:52 crc kubenswrapper[4958]: I1206 05:56:52.078597 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 06 05:56:52 crc kubenswrapper[4958]: E1206 05:56:52.079241 4958 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="6f5bac58-b4db-42a1-a71b-4db14cf42c05" containerName="neutron-api" Dec 06 05:56:52 crc kubenswrapper[4958]: I1206 05:56:52.079358 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f5bac58-b4db-42a1-a71b-4db14cf42c05" containerName="neutron-api" Dec 06 05:56:52 crc kubenswrapper[4958]: E1206 05:56:52.079433 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1867571b-2567-4a8b-bf87-8f7ccb648791" containerName="sg-core" Dec 06 05:56:52 crc kubenswrapper[4958]: I1206 05:56:52.079534 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="1867571b-2567-4a8b-bf87-8f7ccb648791" containerName="sg-core" Dec 06 05:56:52 crc kubenswrapper[4958]: E1206 05:56:52.079623 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1867571b-2567-4a8b-bf87-8f7ccb648791" containerName="proxy-httpd" Dec 06 05:56:52 crc kubenswrapper[4958]: I1206 05:56:52.079703 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="1867571b-2567-4a8b-bf87-8f7ccb648791" containerName="proxy-httpd" Dec 06 05:56:52 crc kubenswrapper[4958]: E1206 05:56:52.079809 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6f5bac58-b4db-42a1-a71b-4db14cf42c05" containerName="neutron-httpd" Dec 06 05:56:52 crc kubenswrapper[4958]: I1206 05:56:52.079895 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="6f5bac58-b4db-42a1-a71b-4db14cf42c05" containerName="neutron-httpd" Dec 06 05:56:52 crc kubenswrapper[4958]: E1206 05:56:52.079996 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1867571b-2567-4a8b-bf87-8f7ccb648791" containerName="ceilometer-central-agent" Dec 06 05:56:52 crc kubenswrapper[4958]: I1206 05:56:52.080203 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="1867571b-2567-4a8b-bf87-8f7ccb648791" containerName="ceilometer-central-agent" Dec 06 05:56:52 crc kubenswrapper[4958]: E1206 05:56:52.080292 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1867571b-2567-4a8b-bf87-8f7ccb648791" containerName="ceilometer-notification-agent" Dec 06 05:56:52 crc kubenswrapper[4958]: I1206 05:56:52.080349 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="1867571b-2567-4a8b-bf87-8f7ccb648791" containerName="ceilometer-notification-agent" Dec 06 05:56:52 crc kubenswrapper[4958]: I1206 05:56:52.080620 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="1867571b-2567-4a8b-bf87-8f7ccb648791" containerName="ceilometer-central-agent" Dec 06 05:56:52 crc kubenswrapper[4958]: I1206 05:56:52.080689 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="1867571b-2567-4a8b-bf87-8f7ccb648791" containerName="ceilometer-notification-agent" Dec 06 05:56:52 crc kubenswrapper[4958]: I1206 05:56:52.080750 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="1867571b-2567-4a8b-bf87-8f7ccb648791" containerName="sg-core" Dec 06 05:56:52 crc kubenswrapper[4958]: I1206 05:56:52.080820 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f5bac58-b4db-42a1-a71b-4db14cf42c05" containerName="neutron-httpd" Dec 06 05:56:52 crc kubenswrapper[4958]: I1206 05:56:52.080884 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="1867571b-2567-4a8b-bf87-8f7ccb648791" containerName="proxy-httpd" Dec 06 05:56:52 crc kubenswrapper[4958]: I1206 05:56:52.080948 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="6f5bac58-b4db-42a1-a71b-4db14cf42c05" containerName="neutron-api" Dec 06 05:56:52 crc 
kubenswrapper[4958]: I1206 05:56:52.083562 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 06 05:56:52 crc kubenswrapper[4958]: I1206 05:56:52.088559 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 06 05:56:52 crc kubenswrapper[4958]: I1206 05:56:52.088748 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 06 05:56:52 crc kubenswrapper[4958]: I1206 05:56:52.098635 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 06 05:56:52 crc kubenswrapper[4958]: I1206 05:56:52.105992 4958 scope.go:117] "RemoveContainer" containerID="2b977679bc930a8bacc932ec0dbcfe380dab2eb6561b6445f6046e454e2d011b" Dec 06 05:56:52 crc kubenswrapper[4958]: I1206 05:56:52.138861 4958 scope.go:117] "RemoveContainer" containerID="b636652eb63033081a6b0b9de5dd17b63ea69a1c488e5645b59eae735d116e58" Dec 06 05:56:52 crc kubenswrapper[4958]: I1206 05:56:52.160248 4958 scope.go:117] "RemoveContainer" containerID="bb57c12fca33085e8168ef5101b4917a18cb1aeadb3a4944e8a9b34282a7edbf" Dec 06 05:56:52 crc kubenswrapper[4958]: I1206 05:56:52.182089 4958 scope.go:117] "RemoveContainer" containerID="b636652eb63033081a6b0b9de5dd17b63ea69a1c488e5645b59eae735d116e58" Dec 06 05:56:52 crc kubenswrapper[4958]: E1206 05:56:52.182523 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b636652eb63033081a6b0b9de5dd17b63ea69a1c488e5645b59eae735d116e58\": container with ID starting with b636652eb63033081a6b0b9de5dd17b63ea69a1c488e5645b59eae735d116e58 not found: ID does not exist" containerID="b636652eb63033081a6b0b9de5dd17b63ea69a1c488e5645b59eae735d116e58" Dec 06 05:56:52 crc kubenswrapper[4958]: I1206 05:56:52.182562 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b636652eb63033081a6b0b9de5dd17b63ea69a1c488e5645b59eae735d116e58"} err="failed to get container status \"b636652eb63033081a6b0b9de5dd17b63ea69a1c488e5645b59eae735d116e58\": rpc error: code = NotFound desc = could not find container \"b636652eb63033081a6b0b9de5dd17b63ea69a1c488e5645b59eae735d116e58\": container with ID starting with b636652eb63033081a6b0b9de5dd17b63ea69a1c488e5645b59eae735d116e58 not found: ID does not exist" Dec 06 05:56:52 crc kubenswrapper[4958]: I1206 05:56:52.182592 4958 scope.go:117] "RemoveContainer" containerID="bb57c12fca33085e8168ef5101b4917a18cb1aeadb3a4944e8a9b34282a7edbf" Dec 06 05:56:52 crc kubenswrapper[4958]: E1206 05:56:52.182811 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb57c12fca33085e8168ef5101b4917a18cb1aeadb3a4944e8a9b34282a7edbf\": container with ID starting with bb57c12fca33085e8168ef5101b4917a18cb1aeadb3a4944e8a9b34282a7edbf not found: ID does not exist" containerID="bb57c12fca33085e8168ef5101b4917a18cb1aeadb3a4944e8a9b34282a7edbf" Dec 06 05:56:52 crc kubenswrapper[4958]: I1206 05:56:52.182833 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb57c12fca33085e8168ef5101b4917a18cb1aeadb3a4944e8a9b34282a7edbf"} err="failed to get container status \"bb57c12fca33085e8168ef5101b4917a18cb1aeadb3a4944e8a9b34282a7edbf\": rpc error: code = NotFound desc = could not find container \"bb57c12fca33085e8168ef5101b4917a18cb1aeadb3a4944e8a9b34282a7edbf\": container with ID starting with 
bb57c12fca33085e8168ef5101b4917a18cb1aeadb3a4944e8a9b34282a7edbf not found: ID does not exist"
Dec 06 05:56:52 crc kubenswrapper[4958]: I1206 05:56:52.215236 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/907b754e-d88c-4a77-9cc3-68da290bddc9-scripts\") pod \"ceilometer-0\" (UID: \"907b754e-d88c-4a77-9cc3-68da290bddc9\") " pod="openstack/ceilometer-0"
Dec 06 05:56:52 crc kubenswrapper[4958]: I1206 05:56:52.215279 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/907b754e-d88c-4a77-9cc3-68da290bddc9-config-data\") pod \"ceilometer-0\" (UID: \"907b754e-d88c-4a77-9cc3-68da290bddc9\") " pod="openstack/ceilometer-0"
Dec 06 05:56:52 crc kubenswrapper[4958]: I1206 05:56:52.215310 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/907b754e-d88c-4a77-9cc3-68da290bddc9-run-httpd\") pod \"ceilometer-0\" (UID: \"907b754e-d88c-4a77-9cc3-68da290bddc9\") " pod="openstack/ceilometer-0"
Dec 06 05:56:52 crc kubenswrapper[4958]: I1206 05:56:52.215337 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/907b754e-d88c-4a77-9cc3-68da290bddc9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"907b754e-d88c-4a77-9cc3-68da290bddc9\") " pod="openstack/ceilometer-0"
Dec 06 05:56:52 crc kubenswrapper[4958]: I1206 05:56:52.215410 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/907b754e-d88c-4a77-9cc3-68da290bddc9-log-httpd\") pod \"ceilometer-0\" (UID: \"907b754e-d88c-4a77-9cc3-68da290bddc9\") " pod="openstack/ceilometer-0"
Dec 06 05:56:52 crc kubenswrapper[4958]: I1206 05:56:52.215463 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/907b754e-d88c-4a77-9cc3-68da290bddc9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"907b754e-d88c-4a77-9cc3-68da290bddc9\") " pod="openstack/ceilometer-0"
Dec 06 05:56:52 crc kubenswrapper[4958]: I1206 05:56:52.215511 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5vng\" (UniqueName: \"kubernetes.io/projected/907b754e-d88c-4a77-9cc3-68da290bddc9-kube-api-access-n5vng\") pod \"ceilometer-0\" (UID: \"907b754e-d88c-4a77-9cc3-68da290bddc9\") " pod="openstack/ceilometer-0"
Dec 06 05:56:52 crc kubenswrapper[4958]: I1206 05:56:52.316744 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/907b754e-d88c-4a77-9cc3-68da290bddc9-scripts\") pod \"ceilometer-0\" (UID: \"907b754e-d88c-4a77-9cc3-68da290bddc9\") " pod="openstack/ceilometer-0"
Dec 06 05:56:52 crc kubenswrapper[4958]: I1206 05:56:52.316807 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/907b754e-d88c-4a77-9cc3-68da290bddc9-config-data\") pod \"ceilometer-0\" (UID: \"907b754e-d88c-4a77-9cc3-68da290bddc9\") " pod="openstack/ceilometer-0"
Dec 06 05:56:52 crc kubenswrapper[4958]: I1206 05:56:52.316856 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/907b754e-d88c-4a77-9cc3-68da290bddc9-run-httpd\") pod \"ceilometer-0\" (UID: \"907b754e-d88c-4a77-9cc3-68da290bddc9\") " pod="openstack/ceilometer-0"
Dec 06 05:56:52 crc kubenswrapper[4958]: I1206 05:56:52.316897 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/907b754e-d88c-4a77-9cc3-68da290bddc9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"907b754e-d88c-4a77-9cc3-68da290bddc9\") " pod="openstack/ceilometer-0"
Dec 06 05:56:52 crc kubenswrapper[4958]: I1206 05:56:52.316948 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/907b754e-d88c-4a77-9cc3-68da290bddc9-log-httpd\") pod \"ceilometer-0\" (UID: \"907b754e-d88c-4a77-9cc3-68da290bddc9\") " pod="openstack/ceilometer-0"
Dec 06 05:56:52 crc kubenswrapper[4958]: I1206 05:56:52.316996 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/907b754e-d88c-4a77-9cc3-68da290bddc9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"907b754e-d88c-4a77-9cc3-68da290bddc9\") " pod="openstack/ceilometer-0"
Dec 06 05:56:52 crc kubenswrapper[4958]: I1206 05:56:52.317028 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5vng\" (UniqueName: \"kubernetes.io/projected/907b754e-d88c-4a77-9cc3-68da290bddc9-kube-api-access-n5vng\") pod \"ceilometer-0\" (UID: \"907b754e-d88c-4a77-9cc3-68da290bddc9\") " pod="openstack/ceilometer-0"
Dec 06 05:56:52 crc kubenswrapper[4958]: I1206 05:56:52.317392 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/907b754e-d88c-4a77-9cc3-68da290bddc9-run-httpd\") pod \"ceilometer-0\" (UID: \"907b754e-d88c-4a77-9cc3-68da290bddc9\") " pod="openstack/ceilometer-0"
Dec 06 05:56:52 crc kubenswrapper[4958]: I1206 05:56:52.317670 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/907b754e-d88c-4a77-9cc3-68da290bddc9-log-httpd\") pod \"ceilometer-0\" (UID: \"907b754e-d88c-4a77-9cc3-68da290bddc9\") " pod="openstack/ceilometer-0"
Dec 06 05:56:52 crc kubenswrapper[4958]: I1206 05:56:52.322856 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/907b754e-d88c-4a77-9cc3-68da290bddc9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"907b754e-d88c-4a77-9cc3-68da290bddc9\") " pod="openstack/ceilometer-0"
Dec 06 05:56:52 crc kubenswrapper[4958]: I1206 05:56:52.325005 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/907b754e-d88c-4a77-9cc3-68da290bddc9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"907b754e-d88c-4a77-9cc3-68da290bddc9\") " pod="openstack/ceilometer-0"
Dec 06 05:56:52 crc kubenswrapper[4958]: I1206 05:56:52.326034 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/907b754e-d88c-4a77-9cc3-68da290bddc9-config-data\") pod \"ceilometer-0\" (UID: \"907b754e-d88c-4a77-9cc3-68da290bddc9\") " pod="openstack/ceilometer-0"
Dec 06 05:56:52 crc kubenswrapper[4958]: I1206 05:56:52.326709 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/907b754e-d88c-4a77-9cc3-68da290bddc9-scripts\") pod \"ceilometer-0\" (UID: \"907b754e-d88c-4a77-9cc3-68da290bddc9\") " pod="openstack/ceilometer-0"
Dec 06 05:56:52 crc kubenswrapper[4958]: I1206 05:56:52.339247 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5vng\" (UniqueName: \"kubernetes.io/projected/907b754e-d88c-4a77-9cc3-68da290bddc9-kube-api-access-n5vng\") pod \"ceilometer-0\" (UID: \"907b754e-d88c-4a77-9cc3-68da290bddc9\") " pod="openstack/ceilometer-0"
Dec 06 05:56:52 crc kubenswrapper[4958]: I1206 05:56:52.423701 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 06 05:56:52 crc kubenswrapper[4958]: I1206 05:56:52.762521 4958 scope.go:117] "RemoveContainer" containerID="3316d358950549dbdd244c3eebc5b98b6095cbbbc7ab60159a9410ea95b5b950"
Dec 06 05:56:52 crc kubenswrapper[4958]: E1206 05:56:52.763095 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4"
Dec 06 05:56:52 crc kubenswrapper[4958]: I1206 05:56:52.938844 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Dec 06 05:56:52 crc kubenswrapper[4958]: W1206 05:56:52.942754 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod907b754e_d88c_4a77_9cc3_68da290bddc9.slice/crio-215fe18352fc5d85daeeb08d0b3ea5e8d3d6f7439060df0379ed9fb26da3438c WatchSource:0}: Error finding container 215fe18352fc5d85daeeb08d0b3ea5e8d3d6f7439060df0379ed9fb26da3438c: Status 404 returned error can't find the container with id 215fe18352fc5d85daeeb08d0b3ea5e8d3d6f7439060df0379ed9fb26da3438c
Dec 06 05:56:53 crc kubenswrapper[4958]: I1206 05:56:53.014205 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"907b754e-d88c-4a77-9cc3-68da290bddc9","Type":"ContainerStarted","Data":"215fe18352fc5d85daeeb08d0b3ea5e8d3d6f7439060df0379ed9fb26da3438c"}
Dec 06 05:56:53 crc kubenswrapper[4958]: I1206 05:56:53.773001 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1867571b-2567-4a8b-bf87-8f7ccb648791" path="/var/lib/kubelet/pods/1867571b-2567-4a8b-bf87-8f7ccb648791/volumes"
Dec 06 05:56:53 crc kubenswrapper[4958]: I1206 05:56:53.774089 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6f5bac58-b4db-42a1-a71b-4db14cf42c05" path="/var/lib/kubelet/pods/6f5bac58-b4db-42a1-a71b-4db14cf42c05/volumes"
Dec 06 05:56:54 crc kubenswrapper[4958]: I1206 05:56:54.030143 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"907b754e-d88c-4a77-9cc3-68da290bddc9","Type":"ContainerStarted","Data":"76482319fcc556fdf75b8fed768f89e013c5868542b946b15d39d6fd51033a84"}
Dec 06 05:56:54 crc kubenswrapper[4958]: I1206 05:56:54.030494 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"907b754e-d88c-4a77-9cc3-68da290bddc9","Type":"ContainerStarted","Data":"6bd2b1aa635382ff29e00e6e90d7471ce87ca0a4c3de524d5bbc5d5b1785bae8"}
Dec 06 05:56:55 crc kubenswrapper[4958]: I1206 05:56:55.042372 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"907b754e-d88c-4a77-9cc3-68da290bddc9","Type":"ContainerStarted","Data":"6f16abefe5365da4b552b7688c81950f078cb52b8d89033e464397d853362e3b"}
Dec 06 05:56:57 crc kubenswrapper[4958]: I1206 05:56:57.060620 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"907b754e-d88c-4a77-9cc3-68da290bddc9","Type":"ContainerStarted","Data":"3a83ff7168e8c1a5f38b663104cf2c5abf1a403e68361354fad27c020a6e2b18"}
Dec 06 05:56:57 crc kubenswrapper[4958]: I1206 05:56:57.061235 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Dec 06 05:56:57 crc kubenswrapper[4958]: I1206 05:56:57.090375 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.923360607 podStartE2EDuration="5.090355146s" podCreationTimestamp="2025-12-06 05:56:52 +0000 UTC" firstStartedPulling="2025-12-06 05:56:52.956561066 +0000 UTC m=+1723.490331829" lastFinishedPulling="2025-12-06 05:56:56.123555605 +0000 UTC m=+1726.657326368" observedRunningTime="2025-12-06 05:56:57.086738729 +0000 UTC m=+1727.620509492" watchObservedRunningTime="2025-12-06 05:56:57.090355146 +0000 UTC m=+1727.624125909"
Dec 06 05:56:58 crc kubenswrapper[4958]: I1206 05:56:58.284018 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0"
Dec 06 05:57:00 crc kubenswrapper[4958]: I1206 05:57:00.426758 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0"
Dec 06 05:57:00 crc kubenswrapper[4958]: I1206 05:57:00.990092 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-8wfh5"]
Dec 06 05:57:00 crc kubenswrapper[4958]: I1206 05:57:00.991393 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-8wfh5"
Dec 06 05:57:00 crc kubenswrapper[4958]: I1206 05:57:00.998244 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts"
Dec 06 05:57:00 crc kubenswrapper[4958]: I1206 05:57:00.998593 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data"
Dec 06 05:57:01 crc kubenswrapper[4958]: I1206 05:57:01.020572 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-8wfh5"]
Dec 06 05:57:01 crc kubenswrapper[4958]: I1206 05:57:01.109696 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35bc6ed4-e3c1-4b61-b522-3029d77819a9-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-8wfh5\" (UID: \"35bc6ed4-e3c1-4b61-b522-3029d77819a9\") " pod="openstack/nova-cell0-cell-mapping-8wfh5"
Dec 06 05:57:01 crc kubenswrapper[4958]: I1206 05:57:01.110082 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rs8g5\" (UniqueName: \"kubernetes.io/projected/35bc6ed4-e3c1-4b61-b522-3029d77819a9-kube-api-access-rs8g5\") pod \"nova-cell0-cell-mapping-8wfh5\" (UID: \"35bc6ed4-e3c1-4b61-b522-3029d77819a9\") " pod="openstack/nova-cell0-cell-mapping-8wfh5"
Dec 06 05:57:01 crc kubenswrapper[4958]: I1206 05:57:01.110122 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/35bc6ed4-e3c1-4b61-b522-3029d77819a9-scripts\") pod \"nova-cell0-cell-mapping-8wfh5\" (UID: \"35bc6ed4-e3c1-4b61-b522-3029d77819a9\") " pod="openstack/nova-cell0-cell-mapping-8wfh5"
Dec 06 05:57:01 crc kubenswrapper[4958]: I1206 05:57:01.110211 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35bc6ed4-e3c1-4b61-b522-3029d77819a9-config-data\") pod \"nova-cell0-cell-mapping-8wfh5\" (UID: \"35bc6ed4-e3c1-4b61-b522-3029d77819a9\") " pod="openstack/nova-cell0-cell-mapping-8wfh5"
Dec 06 05:57:01 crc kubenswrapper[4958]: I1206 05:57:01.153631 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Dec 06 05:57:01 crc kubenswrapper[4958]: I1206 05:57:01.155908 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Dec 06 05:57:01 crc kubenswrapper[4958]: I1206 05:57:01.160256 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Dec 06 05:57:01 crc kubenswrapper[4958]: I1206 05:57:01.185283 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Dec 06 05:57:01 crc kubenswrapper[4958]: I1206 05:57:01.211665 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35bc6ed4-e3c1-4b61-b522-3029d77819a9-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-8wfh5\" (UID: \"35bc6ed4-e3c1-4b61-b522-3029d77819a9\") " pod="openstack/nova-cell0-cell-mapping-8wfh5"
Dec 06 05:57:01 crc kubenswrapper[4958]: I1206 05:57:01.211715 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rs8g5\" (UniqueName: \"kubernetes.io/projected/35bc6ed4-e3c1-4b61-b522-3029d77819a9-kube-api-access-rs8g5\") pod \"nova-cell0-cell-mapping-8wfh5\" (UID: \"35bc6ed4-e3c1-4b61-b522-3029d77819a9\") " pod="openstack/nova-cell0-cell-mapping-8wfh5"
Dec 06 05:57:01 crc kubenswrapper[4958]: I1206 05:57:01.211738 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/35bc6ed4-e3c1-4b61-b522-3029d77819a9-scripts\") pod \"nova-cell0-cell-mapping-8wfh5\" (UID: \"35bc6ed4-e3c1-4b61-b522-3029d77819a9\") " pod="openstack/nova-cell0-cell-mapping-8wfh5"
Dec 06 05:57:01 crc kubenswrapper[4958]: I1206 05:57:01.211772 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35bc6ed4-e3c1-4b61-b522-3029d77819a9-config-data\") pod \"nova-cell0-cell-mapping-8wfh5\" (UID: \"35bc6ed4-e3c1-4b61-b522-3029d77819a9\") " pod="openstack/nova-cell0-cell-mapping-8wfh5"
Dec 06 05:57:01 crc kubenswrapper[4958]: I1206 05:57:01.224289 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35bc6ed4-e3c1-4b61-b522-3029d77819a9-config-data\") pod \"nova-cell0-cell-mapping-8wfh5\" (UID: \"35bc6ed4-e3c1-4b61-b522-3029d77819a9\") " pod="openstack/nova-cell0-cell-mapping-8wfh5"
Dec 06 05:57:01 crc kubenswrapper[4958]: I1206 05:57:01.229214 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35bc6ed4-e3c1-4b61-b522-3029d77819a9-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-8wfh5\" (UID: \"35bc6ed4-e3c1-4b61-b522-3029d77819a9\") " pod="openstack/nova-cell0-cell-mapping-8wfh5"
Dec 06 05:57:01 crc kubenswrapper[4958]: I1206 05:57:01.231090 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/35bc6ed4-e3c1-4b61-b522-3029d77819a9-scripts\") pod \"nova-cell0-cell-mapping-8wfh5\" (UID: \"35bc6ed4-e3c1-4b61-b522-3029d77819a9\") " pod="openstack/nova-cell0-cell-mapping-8wfh5"
Dec 06 05:57:01 crc kubenswrapper[4958]: I1206 05:57:01.236658 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Dec 06 05:57:01 crc kubenswrapper[4958]: I1206 05:57:01.244821 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Dec 06 05:57:01 crc kubenswrapper[4958]: I1206 05:57:01.248548 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Dec 06 05:57:01 crc kubenswrapper[4958]: I1206 05:57:01.250360 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Dec 06 05:57:01 crc kubenswrapper[4958]: I1206 05:57:01.250622 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rs8g5\" (UniqueName: \"kubernetes.io/projected/35bc6ed4-e3c1-4b61-b522-3029d77819a9-kube-api-access-rs8g5\") pod \"nova-cell0-cell-mapping-8wfh5\" (UID: \"35bc6ed4-e3c1-4b61-b522-3029d77819a9\") " pod="openstack/nova-cell0-cell-mapping-8wfh5"
Dec 06 05:57:01 crc kubenswrapper[4958]: I1206 05:57:01.314261 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2501d279-0840-4643-b041-17b5f8a9620b-config-data\") pod \"nova-api-0\" (UID: \"2501d279-0840-4643-b041-17b5f8a9620b\") " pod="openstack/nova-api-0"
Dec 06 05:57:01 crc kubenswrapper[4958]: I1206 05:57:01.314299 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2501d279-0840-4643-b041-17b5f8a9620b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2501d279-0840-4643-b041-17b5f8a9620b\") " pod="openstack/nova-api-0"
Dec 06 05:57:01 crc kubenswrapper[4958]: I1206 05:57:01.314349 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2501d279-0840-4643-b041-17b5f8a9620b-logs\") pod \"nova-api-0\" (UID: \"2501d279-0840-4643-b041-17b5f8a9620b\") " pod="openstack/nova-api-0"
Dec 06 05:57:01 crc kubenswrapper[4958]: I1206 05:57:01.314387 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7k72\" (UniqueName: \"kubernetes.io/projected/2501d279-0840-4643-b041-17b5f8a9620b-kube-api-access-d7k72\") pod \"nova-api-0\" (UID: \"2501d279-0840-4643-b041-17b5f8a9620b\") " pod="openstack/nova-api-0"
Dec 06 05:57:01 crc kubenswrapper[4958]: I1206 05:57:01.323542 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-8wfh5"
Dec 06 05:57:01 crc kubenswrapper[4958]: I1206 05:57:01.367001 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-844fc57f6f-vfb6w"]
Dec 06 05:57:01 crc kubenswrapper[4958]: I1206 05:57:01.368669 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-844fc57f6f-vfb6w"
Dec 06 05:57:01 crc kubenswrapper[4958]: I1206 05:57:01.416054 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Dec 06 05:57:01 crc kubenswrapper[4958]: I1206 05:57:01.417413 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Dec 06 05:57:01 crc kubenswrapper[4958]: I1206 05:57:01.420649 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2501d279-0840-4643-b041-17b5f8a9620b-logs\") pod \"nova-api-0\" (UID: \"2501d279-0840-4643-b041-17b5f8a9620b\") " pod="openstack/nova-api-0"
Dec 06 05:57:01 crc kubenswrapper[4958]: I1206 05:57:01.420753 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d7k72\" (UniqueName: \"kubernetes.io/projected/2501d279-0840-4643-b041-17b5f8a9620b-kube-api-access-d7k72\") pod \"nova-api-0\" (UID: \"2501d279-0840-4643-b041-17b5f8a9620b\") " pod="openstack/nova-api-0"
Dec 06 05:57:01 crc kubenswrapper[4958]: I1206 05:57:01.420781 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7c5ea63-6267-4f3a-9ce3-70ea211f8645-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e7c5ea63-6267-4f3a-9ce3-70ea211f8645\") " pod="openstack/nova-metadata-0"
Dec 06 05:57:01 crc kubenswrapper[4958]: I1206 05:57:01.420916 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92ntn\" (UniqueName: \"kubernetes.io/projected/e7c5ea63-6267-4f3a-9ce3-70ea211f8645-kube-api-access-92ntn\") pod \"nova-metadata-0\" (UID: \"e7c5ea63-6267-4f3a-9ce3-70ea211f8645\") " pod="openstack/nova-metadata-0"
Dec 06 05:57:01 crc kubenswrapper[4958]: I1206 05:57:01.420946 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7c5ea63-6267-4f3a-9ce3-70ea211f8645-config-data\") pod \"nova-metadata-0\" (UID: \"e7c5ea63-6267-4f3a-9ce3-70ea211f8645\") " pod="openstack/nova-metadata-0"
Dec 06 05:57:01 crc kubenswrapper[4958]: I1206 05:57:01.420970 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e7c5ea63-6267-4f3a-9ce3-70ea211f8645-logs\") pod \"nova-metadata-0\" (UID: \"e7c5ea63-6267-4f3a-9ce3-70ea211f8645\") " pod="openstack/nova-metadata-0"
Dec 06 05:57:01 crc kubenswrapper[4958]: I1206 05:57:01.421070 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2501d279-0840-4643-b041-17b5f8a9620b-config-data\") pod \"nova-api-0\" (UID: \"2501d279-0840-4643-b041-17b5f8a9620b\") " pod="openstack/nova-api-0"
Dec 06 05:57:01 crc kubenswrapper[4958]: I1206 05:57:01.421090 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2501d279-0840-4643-b041-17b5f8a9620b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2501d279-0840-4643-b041-17b5f8a9620b\") " pod="openstack/nova-api-0"
Dec 06 05:57:01 crc kubenswrapper[4958]: I1206 05:57:01.421819 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Dec 06 05:57:01 crc kubenswrapper[4958]: I1206 05:57:01.422296 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2501d279-0840-4643-b041-17b5f8a9620b-logs\") pod \"nova-api-0\" (UID: \"2501d279-0840-4643-b041-17b5f8a9620b\") " pod="openstack/nova-api-0"
Dec 06 05:57:01 crc kubenswrapper[4958]: I1206 05:57:01.426439 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2501d279-0840-4643-b041-17b5f8a9620b-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2501d279-0840-4643-b041-17b5f8a9620b\") " pod="openstack/nova-api-0"
Dec 06 05:57:01 crc kubenswrapper[4958]: I1206 05:57:01.429370 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2501d279-0840-4643-b041-17b5f8a9620b-config-data\") pod \"nova-api-0\" (UID: \"2501d279-0840-4643-b041-17b5f8a9620b\") " pod="openstack/nova-api-0"
Dec 06 05:57:01 crc kubenswrapper[4958]: I1206 05:57:01.434967 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-844fc57f6f-vfb6w"]
Dec 06 05:57:01 crc kubenswrapper[4958]: I1206 05:57:01.453990 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Dec 06 05:57:01 crc kubenswrapper[4958]: I1206 05:57:01.466744 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7k72\" (UniqueName: \"kubernetes.io/projected/2501d279-0840-4643-b041-17b5f8a9620b-kube-api-access-d7k72\") pod \"nova-api-0\" (UID: \"2501d279-0840-4643-b041-17b5f8a9620b\") " pod="openstack/nova-api-0"
Dec 06 05:57:01 crc kubenswrapper[4958]: I1206 05:57:01.486179 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Dec 06 05:57:01 crc kubenswrapper[4958]: I1206 05:57:01.522923 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70849ca2-0c84-4cb0-8366-0d24a78af2db-config-data\") pod \"nova-scheduler-0\" (UID: \"70849ca2-0c84-4cb0-8366-0d24a78af2db\") " pod="openstack/nova-scheduler-0"
Dec 06 05:57:01 crc kubenswrapper[4958]: I1206 05:57:01.522972 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7216f9bf-c51a-419f-860d-4be494b44376-dns-swift-storage-0\") pod \"dnsmasq-dns-844fc57f6f-vfb6w\" (UID: \"7216f9bf-c51a-419f-860d-4be494b44376\") " pod="openstack/dnsmasq-dns-844fc57f6f-vfb6w"
Dec 06 05:57:01 crc kubenswrapper[4958]: I1206 05:57:01.523019 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7216f9bf-c51a-419f-860d-4be494b44376-config\") pod \"dnsmasq-dns-844fc57f6f-vfb6w\" (UID: \"7216f9bf-c51a-419f-860d-4be494b44376\") " pod="openstack/dnsmasq-dns-844fc57f6f-vfb6w"
Dec 06 05:57:01 crc kubenswrapper[4958]: I1206 05:57:01.523039 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7216f9bf-c51a-419f-860d-4be494b44376-ovsdbserver-sb\") pod \"dnsmasq-dns-844fc57f6f-vfb6w\" (UID: \"7216f9bf-c51a-419f-860d-4be494b44376\") " pod="openstack/dnsmasq-dns-844fc57f6f-vfb6w"
Dec 06 05:57:01 crc kubenswrapper[4958]: I1206 05:57:01.523064 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7c5ea63-6267-4f3a-9ce3-70ea211f8645-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e7c5ea63-6267-4f3a-9ce3-70ea211f8645\") " pod="openstack/nova-metadata-0"
Dec 06 05:57:01 crc kubenswrapper[4958]: I1206 05:57:01.523132 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92ntn\" (UniqueName: \"kubernetes.io/projected/e7c5ea63-6267-4f3a-9ce3-70ea211f8645-kube-api-access-92ntn\") pod \"nova-metadata-0\" (UID: \"e7c5ea63-6267-4f3a-9ce3-70ea211f8645\") " pod="openstack/nova-metadata-0"
Dec 06 05:57:01 crc kubenswrapper[4958]: I1206 05:57:01.523153 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70849ca2-0c84-4cb0-8366-0d24a78af2db-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"70849ca2-0c84-4cb0-8366-0d24a78af2db\") " pod="openstack/nova-scheduler-0"
Dec 06 05:57:01 crc kubenswrapper[4958]: I1206 05:57:01.523169 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7216f9bf-c51a-419f-860d-4be494b44376-ovsdbserver-nb\") pod \"dnsmasq-dns-844fc57f6f-vfb6w\" (UID: \"7216f9bf-c51a-419f-860d-4be494b44376\") " pod="openstack/dnsmasq-dns-844fc57f6f-vfb6w"
Dec 06 05:57:01 crc kubenswrapper[4958]: I1206 05:57:01.523185 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7c5ea63-6267-4f3a-9ce3-70ea211f8645-config-data\") pod \"nova-metadata-0\" (UID: \"e7c5ea63-6267-4f3a-9ce3-70ea211f8645\") " pod="openstack/nova-metadata-0"
Dec 06 05:57:01 crc kubenswrapper[4958]: I1206 05:57:01.523199 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7216f9bf-c51a-419f-860d-4be494b44376-dns-svc\") pod \"dnsmasq-dns-844fc57f6f-vfb6w\" (UID: \"7216f9bf-c51a-419f-860d-4be494b44376\") " pod="openstack/dnsmasq-dns-844fc57f6f-vfb6w"
Dec 06 05:57:01 crc kubenswrapper[4958]: I1206 05:57:01.523221 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e7c5ea63-6267-4f3a-9ce3-70ea211f8645-logs\") pod \"nova-metadata-0\" (UID: \"e7c5ea63-6267-4f3a-9ce3-70ea211f8645\") " pod="openstack/nova-metadata-0"
Dec 06 05:57:01 crc kubenswrapper[4958]: I1206 05:57:01.523237 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgqgb\" (UniqueName: \"kubernetes.io/projected/70849ca2-0c84-4cb0-8366-0d24a78af2db-kube-api-access-xgqgb\") pod \"nova-scheduler-0\" (UID: \"70849ca2-0c84-4cb0-8366-0d24a78af2db\") " pod="openstack/nova-scheduler-0"
Dec 06 05:57:01 crc kubenswrapper[4958]: I1206 05:57:01.523254 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7jqm\" (UniqueName: \"kubernetes.io/projected/7216f9bf-c51a-419f-860d-4be494b44376-kube-api-access-c7jqm\") pod \"dnsmasq-dns-844fc57f6f-vfb6w\" (UID: \"7216f9bf-c51a-419f-860d-4be494b44376\") " pod="openstack/dnsmasq-dns-844fc57f6f-vfb6w"
Dec 06 05:57:01 crc kubenswrapper[4958]: I1206 05:57:01.526831 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e7c5ea63-6267-4f3a-9ce3-70ea211f8645-logs\") pod \"nova-metadata-0\" (UID: \"e7c5ea63-6267-4f3a-9ce3-70ea211f8645\") " pod="openstack/nova-metadata-0"
Dec 06 05:57:01 crc kubenswrapper[4958]: I1206 05:57:01.529801 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7c5ea63-6267-4f3a-9ce3-70ea211f8645-config-data\") pod \"nova-metadata-0\" (UID: \"e7c5ea63-6267-4f3a-9ce3-70ea211f8645\") " pod="openstack/nova-metadata-0"
Dec 06 05:57:01 crc kubenswrapper[4958]: I1206 05:57:01.531219 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7c5ea63-6267-4f3a-9ce3-70ea211f8645-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"e7c5ea63-6267-4f3a-9ce3-70ea211f8645\") " pod="openstack/nova-metadata-0"
Dec 06 05:57:01 crc kubenswrapper[4958]: I1206 05:57:01.565805 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92ntn\" (UniqueName: \"kubernetes.io/projected/e7c5ea63-6267-4f3a-9ce3-70ea211f8645-kube-api-access-92ntn\") pod \"nova-metadata-0\" (UID: \"e7c5ea63-6267-4f3a-9ce3-70ea211f8645\") " pod="openstack/nova-metadata-0"
Dec 06 05:57:01 crc kubenswrapper[4958]: I1206 05:57:01.626135 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70849ca2-0c84-4cb0-8366-0d24a78af2db-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"70849ca2-0c84-4cb0-8366-0d24a78af2db\") " pod="openstack/nova-scheduler-0"
Dec 06 05:57:01 crc kubenswrapper[4958]: I1206 05:57:01.626175 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7216f9bf-c51a-419f-860d-4be494b44376-ovsdbserver-nb\") pod \"dnsmasq-dns-844fc57f6f-vfb6w\" (UID: \"7216f9bf-c51a-419f-860d-4be494b44376\") " pod="openstack/dnsmasq-dns-844fc57f6f-vfb6w"
Dec 06 05:57:01 crc kubenswrapper[4958]: I1206 05:57:01.626200 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7216f9bf-c51a-419f-860d-4be494b44376-dns-svc\") pod \"dnsmasq-dns-844fc57f6f-vfb6w\" (UID: \"7216f9bf-c51a-419f-860d-4be494b44376\") " pod="openstack/dnsmasq-dns-844fc57f6f-vfb6w"
Dec 06 05:57:01 crc kubenswrapper[4958]: I1206 05:57:01.626224 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgqgb\" (UniqueName: \"kubernetes.io/projected/70849ca2-0c84-4cb0-8366-0d24a78af2db-kube-api-access-xgqgb\") pod \"nova-scheduler-0\" (UID: \"70849ca2-0c84-4cb0-8366-0d24a78af2db\") " pod="openstack/nova-scheduler-0"
Dec 06 05:57:01 crc kubenswrapper[4958]: I1206 05:57:01.626245 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7jqm\" (UniqueName: \"kubernetes.io/projected/7216f9bf-c51a-419f-860d-4be494b44376-kube-api-access-c7jqm\") pod \"dnsmasq-dns-844fc57f6f-vfb6w\" (UID: \"7216f9bf-c51a-419f-860d-4be494b44376\") " pod="openstack/dnsmasq-dns-844fc57f6f-vfb6w"
Dec 06 05:57:01 crc kubenswrapper[4958]: I1206 05:57:01.626336 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70849ca2-0c84-4cb0-8366-0d24a78af2db-config-data\") pod \"nova-scheduler-0\" (UID: \"70849ca2-0c84-4cb0-8366-0d24a78af2db\") " pod="openstack/nova-scheduler-0"
Dec 06 05:57:01 crc kubenswrapper[4958]: I1206 05:57:01.626355 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7216f9bf-c51a-419f-860d-4be494b44376-dns-swift-storage-0\") pod \"dnsmasq-dns-844fc57f6f-vfb6w\" (UID: \"7216f9bf-c51a-419f-860d-4be494b44376\") " pod="openstack/dnsmasq-dns-844fc57f6f-vfb6w"
Dec 06 05:57:01 crc kubenswrapper[4958]: I1206 05:57:01.626400 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7216f9bf-c51a-419f-860d-4be494b44376-config\") pod \"dnsmasq-dns-844fc57f6f-vfb6w\" (UID: \"7216f9bf-c51a-419f-860d-4be494b44376\") " pod="openstack/dnsmasq-dns-844fc57f6f-vfb6w"
Dec 06 05:57:01 crc kubenswrapper[4958]: I1206 05:57:01.626420 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7216f9bf-c51a-419f-860d-4be494b44376-ovsdbserver-sb\") pod \"dnsmasq-dns-844fc57f6f-vfb6w\" (UID: \"7216f9bf-c51a-419f-860d-4be494b44376\") " pod="openstack/dnsmasq-dns-844fc57f6f-vfb6w"
Dec 06 05:57:01 crc kubenswrapper[4958]: I1206 05:57:01.627586 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7216f9bf-c51a-419f-860d-4be494b44376-ovsdbserver-nb\") pod \"dnsmasq-dns-844fc57f6f-vfb6w\" (UID: \"7216f9bf-c51a-419f-860d-4be494b44376\") " pod="openstack/dnsmasq-dns-844fc57f6f-vfb6w"
Dec 06 05:57:01 crc kubenswrapper[4958]: I1206 05:57:01.627909 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7216f9bf-c51a-419f-860d-4be494b44376-config\") pod \"dnsmasq-dns-844fc57f6f-vfb6w\" (UID: \"7216f9bf-c51a-419f-860d-4be494b44376\") " pod="openstack/dnsmasq-dns-844fc57f6f-vfb6w"
Dec 06 05:57:01 crc kubenswrapper[4958]: I1206 05:57:01.628166 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7216f9bf-c51a-419f-860d-4be494b44376-dns-svc\") pod \"dnsmasq-dns-844fc57f6f-vfb6w\" (UID: \"7216f9bf-c51a-419f-860d-4be494b44376\") " pod="openstack/dnsmasq-dns-844fc57f6f-vfb6w"
Dec 06 05:57:01 crc kubenswrapper[4958]: I1206 05:57:01.628304 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7216f9bf-c51a-419f-860d-4be494b44376-ovsdbserver-sb\") pod \"dnsmasq-dns-844fc57f6f-vfb6w\" (UID: \"7216f9bf-c51a-419f-860d-4be494b44376\") " pod="openstack/dnsmasq-dns-844fc57f6f-vfb6w"
Dec 06 05:57:01 crc kubenswrapper[4958]: I1206 05:57:01.629397 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70849ca2-0c84-4cb0-8366-0d24a78af2db-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"70849ca2-0c84-4cb0-8366-0d24a78af2db\") " pod="openstack/nova-scheduler-0"
Dec 06 05:57:01 crc kubenswrapper[4958]: I1206 05:57:01.630082 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70849ca2-0c84-4cb0-8366-0d24a78af2db-config-data\") pod \"nova-scheduler-0\" (UID: \"70849ca2-0c84-4cb0-8366-0d24a78af2db\") " pod="openstack/nova-scheduler-0"
Dec 06 05:57:01 crc kubenswrapper[4958]: I1206 05:57:01.630611 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7216f9bf-c51a-419f-860d-4be494b44376-dns-swift-storage-0\") pod \"dnsmasq-dns-844fc57f6f-vfb6w\" (UID: \"7216f9bf-c51a-419f-860d-4be494b44376\") " pod="openstack/dnsmasq-dns-844fc57f6f-vfb6w"
Dec 06 05:57:01 crc kubenswrapper[4958]: I1206 05:57:01.651128 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7jqm\" (UniqueName: \"kubernetes.io/projected/7216f9bf-c51a-419f-860d-4be494b44376-kube-api-access-c7jqm\") pod \"dnsmasq-dns-844fc57f6f-vfb6w\" (UID: \"7216f9bf-c51a-419f-860d-4be494b44376\") " pod="openstack/dnsmasq-dns-844fc57f6f-vfb6w"
Dec 06 05:57:01 crc kubenswrapper[4958]: I1206 05:57:01.666493 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgqgb\" (UniqueName: \"kubernetes.io/projected/70849ca2-0c84-4cb0-8366-0d24a78af2db-kube-api-access-xgqgb\") pod \"nova-scheduler-0\" (UID: \"70849ca2-0c84-4cb0-8366-0d24a78af2db\") " pod="openstack/nova-scheduler-0"
Dec 06 05:57:01 crc kubenswrapper[4958]: I1206 05:57:01.798178 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Dec 06 05:57:01 crc kubenswrapper[4958]: I1206 05:57:01.815075 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-844fc57f6f-vfb6w"
Dec 06 05:57:01 crc kubenswrapper[4958]: I1206 05:57:01.828265 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Dec 06 05:57:01 crc kubenswrapper[4958]: I1206 05:57:01.927828 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-8wfh5"]
Dec 06 05:57:03 crc kubenswrapper[4958]: I1206 05:57:02.058939 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Dec 06 05:57:03 crc kubenswrapper[4958]: W1206 05:57:02.094585 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2501d279_0840_4643_b041_17b5f8a9620b.slice/crio-bbd0e67d89d011c6579e2419de211729f88556b3e11402653de338edbf4dc479 WatchSource:0}: Error finding container bbd0e67d89d011c6579e2419de211729f88556b3e11402653de338edbf4dc479: Status 404 returned error can't find the container with id bbd0e67d89d011c6579e2419de211729f88556b3e11402653de338edbf4dc479
Dec 06 05:57:03 crc kubenswrapper[4958]: I1206 05:57:02.146813 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-8wfh5" event={"ID":"35bc6ed4-e3c1-4b61-b522-3029d77819a9","Type":"ContainerStarted","Data":"24d89b7b06484ec85a58ed7f024f056add4500dc74c6f0d90bd34c40f6611db6"}
Dec 06 05:57:03 crc kubenswrapper[4958]: I1206 05:57:02.148916 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2501d279-0840-4643-b041-17b5f8a9620b","Type":"ContainerStarted","Data":"bbd0e67d89d011c6579e2419de211729f88556b3e11402653de338edbf4dc479"}
Dec 06 05:57:03 crc kubenswrapper[4958]: I1206 05:57:02.205854 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Dec 06 05:57:03 crc kubenswrapper[4958]: I1206 05:57:02.207148 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Dec 06 05:57:03 crc kubenswrapper[4958]: I1206 05:57:02.210813 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Dec 06 05:57:03 crc kubenswrapper[4958]: I1206 05:57:02.259831 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Dec 06 05:57:03 crc kubenswrapper[4958]: I1206 05:57:02.343340 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2240dee5-48bc-4ce5-a802-3b3f6c38ef64-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"2240dee5-48bc-4ce5-a802-3b3f6c38ef64\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 06 05:57:03 crc kubenswrapper[4958]: I1206 05:57:02.343724 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2240dee5-48bc-4ce5-a802-3b3f6c38ef64-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"2240dee5-48bc-4ce5-a802-3b3f6c38ef64\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 06 05:57:03 crc kubenswrapper[4958]: I1206 05:57:02.343866 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfj27\" (UniqueName: \"kubernetes.io/projected/2240dee5-48bc-4ce5-a802-3b3f6c38ef64-kube-api-access-bfj27\") pod \"nova-cell1-novncproxy-0\" (UID: \"2240dee5-48bc-4ce5-a802-3b3f6c38ef64\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 06 05:57:03 crc kubenswrapper[4958]: I1206 05:57:02.445521 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2240dee5-48bc-4ce5-a802-3b3f6c38ef64-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"2240dee5-48bc-4ce5-a802-3b3f6c38ef64\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 06 05:57:03 crc kubenswrapper[4958]: I1206 05:57:02.445609 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfj27\" (UniqueName: \"kubernetes.io/projected/2240dee5-48bc-4ce5-a802-3b3f6c38ef64-kube-api-access-bfj27\") pod \"nova-cell1-novncproxy-0\" (UID: \"2240dee5-48bc-4ce5-a802-3b3f6c38ef64\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 06 05:57:03 crc kubenswrapper[4958]: I1206 05:57:02.445652 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2240dee5-48bc-4ce5-a802-3b3f6c38ef64-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"2240dee5-48bc-4ce5-a802-3b3f6c38ef64\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 06 05:57:03 crc kubenswrapper[4958]: I1206 05:57:02.450972 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2240dee5-48bc-4ce5-a802-3b3f6c38ef64-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"2240dee5-48bc-4ce5-a802-3b3f6c38ef64\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 06 05:57:03 crc kubenswrapper[4958]: I1206 05:57:02.451135 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2240dee5-48bc-4ce5-a802-3b3f6c38ef64-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"2240dee5-48bc-4ce5-a802-3b3f6c38ef64\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 06 05:57:03 crc kubenswrapper[4958]: I1206 05:57:02.468507 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfj27\" (UniqueName: \"kubernetes.io/projected/2240dee5-48bc-4ce5-a802-3b3f6c38ef64-kube-api-access-bfj27\") pod \"nova-cell1-novncproxy-0\" (UID: \"2240dee5-48bc-4ce5-a802-3b3f6c38ef64\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 06 05:57:03 crc kubenswrapper[4958]: I1206 05:57:02.593072 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Dec 06 05:57:03 crc kubenswrapper[4958]: I1206 05:57:02.848199 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-w6tsn"]
Dec 06 05:57:03 crc kubenswrapper[4958]: I1206 05:57:02.849688 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-w6tsn"
Dec 06 05:57:03 crc kubenswrapper[4958]: I1206 05:57:02.857848 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-w6tsn"]
Dec 06 05:57:03 crc kubenswrapper[4958]: I1206 05:57:02.860796 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts"
Dec 06 05:57:03 crc kubenswrapper[4958]: I1206 05:57:02.860930 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Dec 06 05:57:03 crc kubenswrapper[4958]: I1206 05:57:02.956724 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8p4kz\" (UniqueName: \"kubernetes.io/projected/b0799b23-7b1b-4c6e-a99a-da256942169c-kube-api-access-8p4kz\") pod \"nova-cell1-conductor-db-sync-w6tsn\" (UID: \"b0799b23-7b1b-4c6e-a99a-da256942169c\") " pod="openstack/nova-cell1-conductor-db-sync-w6tsn"
Dec 06 05:57:03 crc kubenswrapper[4958]: I1206 05:57:02.956784 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0799b23-7b1b-4c6e-a99a-da256942169c-scripts\") pod \"nova-cell1-conductor-db-sync-w6tsn\" (UID: \"b0799b23-7b1b-4c6e-a99a-da256942169c\") " pod="openstack/nova-cell1-conductor-db-sync-w6tsn"
Dec 06 05:57:03 crc kubenswrapper[4958]: I1206 05:57:02.956848 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0799b23-7b1b-4c6e-a99a-da256942169c-config-data\") pod \"nova-cell1-conductor-db-sync-w6tsn\" (UID: \"b0799b23-7b1b-4c6e-a99a-da256942169c\") " pod="openstack/nova-cell1-conductor-db-sync-w6tsn"
Dec 06 05:57:03 crc kubenswrapper[4958]: I1206 05:57:02.956881 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0799b23-7b1b-4c6e-a99a-da256942169c-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-w6tsn\" (UID: \"b0799b23-7b1b-4c6e-a99a-da256942169c\") " pod="openstack/nova-cell1-conductor-db-sync-w6tsn"
Dec 06 05:57:03 crc kubenswrapper[4958]: I1206 05:57:03.058617 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8p4kz\" (UniqueName: \"kubernetes.io/projected/b0799b23-7b1b-4c6e-a99a-da256942169c-kube-api-access-8p4kz\") pod \"nova-cell1-conductor-db-sync-w6tsn\" (UID: \"b0799b23-7b1b-4c6e-a99a-da256942169c\") " pod="openstack/nova-cell1-conductor-db-sync-w6tsn"
Dec 06 05:57:03 crc kubenswrapper[4958]: I1206 05:57:03.058661 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0799b23-7b1b-4c6e-a99a-da256942169c-scripts\") pod \"nova-cell1-conductor-db-sync-w6tsn\" (UID: \"b0799b23-7b1b-4c6e-a99a-da256942169c\") " pod="openstack/nova-cell1-conductor-db-sync-w6tsn"
Dec 06 05:57:03 crc kubenswrapper[4958]: I1206 05:57:03.058722 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0799b23-7b1b-4c6e-a99a-da256942169c-config-data\") pod \"nova-cell1-conductor-db-sync-w6tsn\" (UID: \"b0799b23-7b1b-4c6e-a99a-da256942169c\") " pod="openstack/nova-cell1-conductor-db-sync-w6tsn"
Dec 06 05:57:03 crc kubenswrapper[4958]: I1206 05:57:03.058754 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0799b23-7b1b-4c6e-a99a-da256942169c-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-w6tsn\" (UID: \"b0799b23-7b1b-4c6e-a99a-da256942169c\") " pod="openstack/nova-cell1-conductor-db-sync-w6tsn"
Dec 06 05:57:03 crc kubenswrapper[4958]: I1206 05:57:03.064694 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0799b23-7b1b-4c6e-a99a-da256942169c-config-data\") pod \"nova-cell1-conductor-db-sync-w6tsn\" (UID: \"b0799b23-7b1b-4c6e-a99a-da256942169c\") " pod="openstack/nova-cell1-conductor-db-sync-w6tsn"
Dec 06 05:57:03 crc kubenswrapper[4958]: I1206 05:57:03.064811 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0799b23-7b1b-4c6e-a99a-da256942169c-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-w6tsn\" (UID: \"b0799b23-7b1b-4c6e-a99a-da256942169c\") " pod="openstack/nova-cell1-conductor-db-sync-w6tsn"
Dec 06 05:57:03 crc kubenswrapper[4958]: I1206 05:57:03.064896 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0799b23-7b1b-4c6e-a99a-da256942169c-scripts\") pod \"nova-cell1-conductor-db-sync-w6tsn\" (UID: \"b0799b23-7b1b-4c6e-a99a-da256942169c\") " pod="openstack/nova-cell1-conductor-db-sync-w6tsn"
Dec 06 05:57:03 crc kubenswrapper[4958]: I1206 05:57:03.075775 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8p4kz\" (UniqueName: \"kubernetes.io/projected/b0799b23-7b1b-4c6e-a99a-da256942169c-kube-api-access-8p4kz\") pod \"nova-cell1-conductor-db-sync-w6tsn\" (UID: \"b0799b23-7b1b-4c6e-a99a-da256942169c\") " pod="openstack/nova-cell1-conductor-db-sync-w6tsn"
Dec 06 05:57:03 crc kubenswrapper[4958]: I1206 05:57:03.172303 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-8wfh5" event={"ID":"35bc6ed4-e3c1-4b61-b522-3029d77819a9","Type":"ContainerStarted","Data":"3b96a2d2d369a5a260a8c43cd4fa12019837af83e7d1a3b1c4faee246b559751"}
Dec 06 05:57:03 crc kubenswrapper[4958]: I1206 05:57:03.200878 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-8wfh5" podStartSLOduration=3.20085724 podStartE2EDuration="3.20085724s" podCreationTimestamp="2025-12-06 05:57:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:57:03.192993277 +0000 UTC m=+1733.726764040" watchObservedRunningTime="2025-12-06 05:57:03.20085724 +0000 UTC m=+1733.734628003"
Dec 06 05:57:03 crc kubenswrapper[4958]: I1206 05:57:03.221101 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-w6tsn"
Dec 06 05:57:03 crc kubenswrapper[4958]: I1206 05:57:03.952695 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Dec 06 05:57:04 crc kubenswrapper[4958]: I1206 05:57:04.115848 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-w6tsn"]
Dec 06 05:57:04 crc kubenswrapper[4958]: I1206 05:57:04.129858 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Dec 06 05:57:04 crc kubenswrapper[4958]: I1206 05:57:04.162617 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Dec 06 05:57:04 crc kubenswrapper[4958]: I1206 05:57:04.171385 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-844fc57f6f-vfb6w"]
Dec 06 05:57:04 crc kubenswrapper[4958]: W1206 05:57:04.476863 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2240dee5_48bc_4ce5_a802_3b3f6c38ef64.slice/crio-2f7b5df1e287bc135e575d87cc211e350a7a463daf42ec7fd2651d1faee63cfb WatchSource:0}: Error finding container 2f7b5df1e287bc135e575d87cc211e350a7a463daf42ec7fd2651d1faee63cfb: Status 404 returned error can't find the container with id 2f7b5df1e287bc135e575d87cc211e350a7a463daf42ec7fd2651d1faee63cfb
Dec 06 05:57:04 crc kubenswrapper[4958]: W1206 05:57:04.477843 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode7c5ea63_6267_4f3a_9ce3_70ea211f8645.slice/crio-ab7adf4a867ed419ed966e8c90116d6128bf1bca3c0cff2eeed406579150f921 WatchSource:0}: Error finding container ab7adf4a867ed419ed966e8c90116d6128bf1bca3c0cff2eeed406579150f921: Status 404 returned error can't find the container with id ab7adf4a867ed419ed966e8c90116d6128bf1bca3c0cff2eeed406579150f921
Dec 06 05:57:04 crc kubenswrapper[4958]: W1206 05:57:04.481253 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod70849ca2_0c84_4cb0_8366_0d24a78af2db.slice/crio-199d0faafb9bc8c7bc34c1276a950671fe442f333219e05b0fbfba3a30aa9193 WatchSource:0}: Error finding container 199d0faafb9bc8c7bc34c1276a950671fe442f333219e05b0fbfba3a30aa9193: Status 404 returned error can't find the container with id 199d0faafb9bc8c7bc34c1276a950671fe442f333219e05b0fbfba3a30aa9193
Dec 06 05:57:04 crc kubenswrapper[4958]: W1206 05:57:04.484205 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7216f9bf_c51a_419f_860d_4be494b44376.slice/crio-faecef8e9409116e0f6667e90aed757c95a9660ac9d6dfb3d2a5b373a0590eb7 WatchSource:0}: Error finding container faecef8e9409116e0f6667e90aed757c95a9660ac9d6dfb3d2a5b373a0590eb7: Status 404 returned error can't find the container with id faecef8e9409116e0f6667e90aed757c95a9660ac9d6dfb3d2a5b373a0590eb7
Dec 06 05:57:05 crc kubenswrapper[4958]: I1206 05:57:05.200728 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-844fc57f6f-vfb6w" event={"ID":"7216f9bf-c51a-419f-860d-4be494b44376","Type":"ContainerStarted","Data":"faecef8e9409116e0f6667e90aed757c95a9660ac9d6dfb3d2a5b373a0590eb7"}
Dec 06 05:57:05 crc kubenswrapper[4958]: I1206 05:57:05.202731 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2501d279-0840-4643-b041-17b5f8a9620b","Type":"ContainerStarted","Data":"5b40c8a1b338bfac41998484780e0a19f184bfb2330ec8ede071fc1d901c1cc9"}
Dec 06 05:57:05 crc kubenswrapper[4958]: I1206 05:57:05.208311 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"2240dee5-48bc-4ce5-a802-3b3f6c38ef64","Type":"ContainerStarted","Data":"2f7b5df1e287bc135e575d87cc211e350a7a463daf42ec7fd2651d1faee63cfb"}
Dec 06 05:57:05 crc kubenswrapper[4958]: I1206 05:57:05.210458 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-w6tsn" event={"ID":"b0799b23-7b1b-4c6e-a99a-da256942169c","Type":"ContainerStarted","Data":"5362d9f0f928ae5abf6f072c84daaf13363c1bfa2572675bdb7721924d4b68a8"}
Dec 06 05:57:05 crc kubenswrapper[4958]: I1206 05:57:05.215126 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e7c5ea63-6267-4f3a-9ce3-70ea211f8645","Type":"ContainerStarted","Data":"ab7adf4a867ed419ed966e8c90116d6128bf1bca3c0cff2eeed406579150f921"}
Dec 06 05:57:05 crc kubenswrapper[4958]: I1206 05:57:05.218194 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"70849ca2-0c84-4cb0-8366-0d24a78af2db","Type":"ContainerStarted","Data":"199d0faafb9bc8c7bc34c1276a950671fe442f333219e05b0fbfba3a30aa9193"}
Dec 06 05:57:05 crc kubenswrapper[4958]: I1206 05:57:05.306449 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Dec 06 05:57:05 crc kubenswrapper[4958]: I1206 05:57:05.373780 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Dec 06 05:57:05 crc kubenswrapper[4958]: I1206 05:57:05.762827 4958 scope.go:117] "RemoveContainer" containerID="3316d358950549dbdd244c3eebc5b98b6095cbbbc7ab60159a9410ea95b5b950"
Dec 06 05:57:05 crc kubenswrapper[4958]: E1206 05:57:05.763360 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4"
Dec 06 05:57:06 crc kubenswrapper[4958]: I1206 05:57:06.232513 4958 generic.go:334] "Generic (PLEG): container finished" podID="7216f9bf-c51a-419f-860d-4be494b44376" containerID="ab4ecf7e585c878cb41d8444f64c5c14d89c58d842bff4238153abdfdbb78394" exitCode=0
Dec 06 05:57:06 crc kubenswrapper[4958]: I1206 05:57:06.232603 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-844fc57f6f-vfb6w" event={"ID":"7216f9bf-c51a-419f-860d-4be494b44376","Type":"ContainerDied","Data":"ab4ecf7e585c878cb41d8444f64c5c14d89c58d842bff4238153abdfdbb78394"}
Dec 06 05:57:06 crc kubenswrapper[4958]: I1206 05:57:06.237157 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2501d279-0840-4643-b041-17b5f8a9620b","Type":"ContainerStarted","Data":"929090c8353c2fa1093c3e1a5f303204acdba3e4081c5effa49b5e9ba73509c1"}
Dec 06 05:57:06 crc kubenswrapper[4958]: I1206 05:57:06.240532 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-w6tsn" event={"ID":"b0799b23-7b1b-4c6e-a99a-da256942169c","Type":"ContainerStarted","Data":"e5ad34d9784c2aa5962bb866d7483252462c0ee0b5993c203c97b6587e79b2b2"}
Dec 06 05:57:06 crc kubenswrapper[4958]: I1206 05:57:06.248600 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e7c5ea63-6267-4f3a-9ce3-70ea211f8645","Type":"ContainerStarted","Data":"da3cf7c39c616ffcb489f1e31ba63aa99b305229c9c2ae6b8914621e9104912c"}
Dec 06 05:57:06 crc kubenswrapper[4958]: I1206 05:57:06.248747 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e7c5ea63-6267-4f3a-9ce3-70ea211f8645","Type":"ContainerStarted","Data":"6d1f96f043dac650a1879872dd3d8140f8554ef818b7fd1e52da935db3d09401"}
Dec 06 05:57:06 crc kubenswrapper[4958]: I1206 05:57:06.248707 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="e7c5ea63-6267-4f3a-9ce3-70ea211f8645" containerName="nova-metadata-log" containerID="cri-o://6d1f96f043dac650a1879872dd3d8140f8554ef818b7fd1e52da935db3d09401" gracePeriod=30
Dec 06 05:57:06 crc kubenswrapper[4958]: I1206 05:57:06.248911 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="e7c5ea63-6267-4f3a-9ce3-70ea211f8645" containerName="nova-metadata-metadata" containerID="cri-o://da3cf7c39c616ffcb489f1e31ba63aa99b305229c9c2ae6b8914621e9104912c" gracePeriod=30
Dec 06 05:57:06 crc kubenswrapper[4958]: I1206 05:57:06.275132 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.853929237 podStartE2EDuration="5.275111419s" podCreationTimestamp="2025-12-06 05:57:01 +0000 UTC" firstStartedPulling="2025-12-06 05:57:02.113704479 +0000 UTC m=+1732.647475232" lastFinishedPulling="2025-12-06 05:57:04.534886651 +0000 UTC m=+1735.068657414" observedRunningTime="2025-12-06 05:57:06.265733464 +0000 UTC m=+1736.799504227" watchObservedRunningTime="2025-12-06 05:57:06.275111419 +0000 UTC m=+1736.808882182"
Dec 06 05:57:06 crc kubenswrapper[4958]: I1206 05:57:06.290248 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-w6tsn" podStartSLOduration=4.29022994 podStartE2EDuration="4.29022994s" podCreationTimestamp="2025-12-06 05:57:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:57:06.282060098 +0000 UTC m=+1736.815830881" watchObservedRunningTime="2025-12-06 05:57:06.29022994 +0000 UTC m=+1736.824000703"
Dec 06 05:57:06 crc kubenswrapper[4958]: I1206 05:57:06.337372 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=4.913609806 podStartE2EDuration="5.33735096s" podCreationTimestamp="2025-12-06 05:57:01 +0000 UTC" firstStartedPulling="2025-12-06 05:57:04.480183285 +0000 UTC m=+1735.013954048" lastFinishedPulling="2025-12-06 05:57:04.903924439 +0000 UTC m=+1735.437695202" observedRunningTime="2025-12-06 05:57:06.308193279 +0000 UTC m=+1736.841964042" watchObservedRunningTime="2025-12-06 05:57:06.33735096 +0000 UTC m=+1736.871121723"
Dec 06 05:57:06 crc kubenswrapper[4958]: I1206 05:57:06.801720 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Dec 06 05:57:06 crc kubenswrapper[4958]: I1206 05:57:06.802179 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Dec 06 05:57:07 crc kubenswrapper[4958]: I1206 05:57:07.265544 4958 generic.go:334] "Generic (PLEG): container finished" podID="e7c5ea63-6267-4f3a-9ce3-70ea211f8645" containerID="6d1f96f043dac650a1879872dd3d8140f8554ef818b7fd1e52da935db3d09401" exitCode=143
Dec 06 05:57:07 crc kubenswrapper[4958]: I1206 05:57:07.266376 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e7c5ea63-6267-4f3a-9ce3-70ea211f8645","Type":"ContainerDied","Data":"6d1f96f043dac650a1879872dd3d8140f8554ef818b7fd1e52da935db3d09401"}
Dec 06 05:57:08 crc kubenswrapper[4958]: I1206 05:57:08.276025 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"70849ca2-0c84-4cb0-8366-0d24a78af2db","Type":"ContainerStarted","Data":"6776fc1f8523c0ca01af1785fabe69df8cda84add073707e44897f800accfbf1"}
Dec 06 05:57:08 crc kubenswrapper[4958]: I1206 05:57:08.278439 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-844fc57f6f-vfb6w" event={"ID":"7216f9bf-c51a-419f-860d-4be494b44376","Type":"ContainerStarted","Data":"febec00c597f9534f573d3e92881a43531eb3fcef933c048620082cb07a3a5fc"}
Dec 06 05:57:08 crc kubenswrapper[4958]: I1206 05:57:08.280680 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"2240dee5-48bc-4ce5-a802-3b3f6c38ef64","Type":"ContainerStarted","Data":"a6b435cfda03e3570c567a290686c52a7009d59aa5dc422c879c49c4c188f9cd"}
Dec 06 05:57:08 crc kubenswrapper[4958]: I1206 05:57:08.280790 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="2240dee5-48bc-4ce5-a802-3b3f6c38ef64" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://a6b435cfda03e3570c567a290686c52a7009d59aa5dc422c879c49c4c188f9cd" gracePeriod=30
Dec 06 05:57:08 crc kubenswrapper[4958]: I1206 05:57:08.299882 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=4.36799287 podStartE2EDuration="7.299857779s" podCreationTimestamp="2025-12-06 05:57:01 +0000 UTC" firstStartedPulling="2025-12-06 05:57:04.484983025 +0000 UTC m=+1735.018753788" lastFinishedPulling="2025-12-06 05:57:07.416847934 +0000 UTC m=+1737.950618697" observedRunningTime="2025-12-06 05:57:08.294528754 +0000 UTC m=+1738.828299517" watchObservedRunningTime="2025-12-06 05:57:08.299857779 +0000 UTC m=+1738.833628552"
Dec 06 05:57:08 crc kubenswrapper[4958]: I1206 05:57:08.342599 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-844fc57f6f-vfb6w" podStartSLOduration=7.3425799099999995 podStartE2EDuration="7.34257991s" podCreationTimestamp="2025-12-06 05:57:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:57:08.338360235 +0000 UTC m=+1738.872131008" watchObservedRunningTime="2025-12-06 05:57:08.34257991 +0000 UTC m=+1738.876350673"
Dec 06 05:57:08 crc kubenswrapper[4958]: I1206 05:57:08.345133 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.408149621 podStartE2EDuration="6.345122928s" podCreationTimestamp="2025-12-06 05:57:02 +0000 UTC" firstStartedPulling="2025-12-06 05:57:04.479589969 +0000 UTC m=+1735.013360732" lastFinishedPulling="2025-12-06 05:57:07.416563266 +0000 UTC m=+1737.950334039" observedRunningTime="2025-12-06 05:57:08.315055772 +0000 UTC m=+1738.848826565" watchObservedRunningTime="2025-12-06 05:57:08.345122928 +0000 UTC m=+1738.878893691"
Dec 06 05:57:09 crc kubenswrapper[4958]: I1206 05:57:09.298670 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-844fc57f6f-vfb6w"
Dec 06 05:57:11 crc kubenswrapper[4958]: I1206 05:57:11.486722 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Dec 06 05:57:11 crc kubenswrapper[4958]: I1206 05:57:11.487244 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Dec 06 05:57:11 crc kubenswrapper[4958]: I1206 05:57:11.828817 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Dec 06 05:57:11 crc kubenswrapper[4958]: I1206 05:57:11.830133 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Dec 06 05:57:11 crc kubenswrapper[4958]: I1206 05:57:11.863726 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Dec 06 05:57:12 crc kubenswrapper[4958]: I1206 05:57:12.420991 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Dec 06 05:57:12 crc kubenswrapper[4958]: I1206 05:57:12.570746 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="2501d279-0840-4643-b041-17b5f8a9620b" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.209:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Dec 06 05:57:12 crc kubenswrapper[4958]: I1206 05:57:12.570910 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="2501d279-0840-4643-b041-17b5f8a9620b" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.209:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Dec 06 05:57:12 crc kubenswrapper[4958]: I1206 05:57:12.593888 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Dec 06 05:57:13 crc kubenswrapper[4958]: I1206 05:57:13.335609 4958 generic.go:334] "Generic (PLEG): container finished" podID="35bc6ed4-e3c1-4b61-b522-3029d77819a9" containerID="3b96a2d2d369a5a260a8c43cd4fa12019837af83e7d1a3b1c4faee246b559751" exitCode=0
Dec 06 05:57:13 crc kubenswrapper[4958]: I1206 05:57:13.335704 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-8wfh5" event={"ID":"35bc6ed4-e3c1-4b61-b522-3029d77819a9","Type":"ContainerDied","Data":"3b96a2d2d369a5a260a8c43cd4fa12019837af83e7d1a3b1c4faee246b559751"}
Dec 06 05:57:14 crc kubenswrapper[4958]: I1206 05:57:14.724944 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-8wfh5"
Dec 06 05:57:14 crc kubenswrapper[4958]: I1206 05:57:14.888082 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/35bc6ed4-e3c1-4b61-b522-3029d77819a9-scripts\") pod \"35bc6ed4-e3c1-4b61-b522-3029d77819a9\" (UID: \"35bc6ed4-e3c1-4b61-b522-3029d77819a9\") "
Dec 06 05:57:14 crc kubenswrapper[4958]: I1206 05:57:14.888267 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35bc6ed4-e3c1-4b61-b522-3029d77819a9-combined-ca-bundle\") pod \"35bc6ed4-e3c1-4b61-b522-3029d77819a9\" (UID: \"35bc6ed4-e3c1-4b61-b522-3029d77819a9\") "
Dec 06 05:57:14 crc kubenswrapper[4958]: I1206 05:57:14.888289 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35bc6ed4-e3c1-4b61-b522-3029d77819a9-config-data\") pod \"35bc6ed4-e3c1-4b61-b522-3029d77819a9\" (UID: \"35bc6ed4-e3c1-4b61-b522-3029d77819a9\") "
Dec 06 05:57:14 crc kubenswrapper[4958]: I1206 05:57:14.888382 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rs8g5\" (UniqueName: \"kubernetes.io/projected/35bc6ed4-e3c1-4b61-b522-3029d77819a9-kube-api-access-rs8g5\") pod \"35bc6ed4-e3c1-4b61-b522-3029d77819a9\" (UID: \"35bc6ed4-e3c1-4b61-b522-3029d77819a9\") "
Dec 06 05:57:14 crc kubenswrapper[4958]: I1206 05:57:14.893340 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35bc6ed4-e3c1-4b61-b522-3029d77819a9-scripts" (OuterVolumeSpecName: "scripts") pod "35bc6ed4-e3c1-4b61-b522-3029d77819a9" (UID: "35bc6ed4-e3c1-4b61-b522-3029d77819a9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 05:57:14 crc kubenswrapper[4958]: I1206 05:57:14.893714 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/35bc6ed4-e3c1-4b61-b522-3029d77819a9-kube-api-access-rs8g5" (OuterVolumeSpecName: "kube-api-access-rs8g5") pod "35bc6ed4-e3c1-4b61-b522-3029d77819a9" (UID: "35bc6ed4-e3c1-4b61-b522-3029d77819a9"). InnerVolumeSpecName "kube-api-access-rs8g5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 05:57:14 crc kubenswrapper[4958]: I1206 05:57:14.915111 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35bc6ed4-e3c1-4b61-b522-3029d77819a9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "35bc6ed4-e3c1-4b61-b522-3029d77819a9" (UID: "35bc6ed4-e3c1-4b61-b522-3029d77819a9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 05:57:14 crc kubenswrapper[4958]: I1206 05:57:14.925635 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/35bc6ed4-e3c1-4b61-b522-3029d77819a9-config-data" (OuterVolumeSpecName: "config-data") pod "35bc6ed4-e3c1-4b61-b522-3029d77819a9" (UID: "35bc6ed4-e3c1-4b61-b522-3029d77819a9"). InnerVolumeSpecName "config-data".
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:57:14 crc kubenswrapper[4958]: I1206 05:57:14.990799 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/35bc6ed4-e3c1-4b61-b522-3029d77819a9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 05:57:14 crc kubenswrapper[4958]: I1206 05:57:14.990840 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/35bc6ed4-e3c1-4b61-b522-3029d77819a9-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 05:57:14 crc kubenswrapper[4958]: I1206 05:57:14.990854 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rs8g5\" (UniqueName: \"kubernetes.io/projected/35bc6ed4-e3c1-4b61-b522-3029d77819a9-kube-api-access-rs8g5\") on node \"crc\" DevicePath \"\"" Dec 06 05:57:14 crc kubenswrapper[4958]: I1206 05:57:14.990866 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/35bc6ed4-e3c1-4b61-b522-3029d77819a9-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 05:57:15 crc kubenswrapper[4958]: I1206 05:57:15.358739 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-8wfh5" event={"ID":"35bc6ed4-e3c1-4b61-b522-3029d77819a9","Type":"ContainerDied","Data":"24d89b7b06484ec85a58ed7f024f056add4500dc74c6f0d90bd34c40f6611db6"} Dec 06 05:57:15 crc kubenswrapper[4958]: I1206 05:57:15.358780 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="24d89b7b06484ec85a58ed7f024f056add4500dc74c6f0d90bd34c40f6611db6" Dec 06 05:57:15 crc kubenswrapper[4958]: I1206 05:57:15.358871 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-8wfh5" Dec 06 05:57:15 crc kubenswrapper[4958]: I1206 05:57:15.535494 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 06 05:57:15 crc kubenswrapper[4958]: I1206 05:57:15.536079 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="2501d279-0840-4643-b041-17b5f8a9620b" containerName="nova-api-log" containerID="cri-o://5b40c8a1b338bfac41998484780e0a19f184bfb2330ec8ede071fc1d901c1cc9" gracePeriod=30 Dec 06 05:57:15 crc kubenswrapper[4958]: I1206 05:57:15.536136 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="2501d279-0840-4643-b041-17b5f8a9620b" containerName="nova-api-api" containerID="cri-o://929090c8353c2fa1093c3e1a5f303204acdba3e4081c5effa49b5e9ba73509c1" gracePeriod=30 Dec 06 05:57:15 crc kubenswrapper[4958]: I1206 05:57:15.552591 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 06 05:57:15 crc kubenswrapper[4958]: I1206 05:57:15.552795 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="70849ca2-0c84-4cb0-8366-0d24a78af2db" containerName="nova-scheduler-scheduler" containerID="cri-o://6776fc1f8523c0ca01af1785fabe69df8cda84add073707e44897f800accfbf1" gracePeriod=30 Dec 06 05:57:16 crc kubenswrapper[4958]: I1206 05:57:16.370350 4958 generic.go:334] "Generic (PLEG): container finished" podID="2501d279-0840-4643-b041-17b5f8a9620b" containerID="5b40c8a1b338bfac41998484780e0a19f184bfb2330ec8ede071fc1d901c1cc9" exitCode=143 Dec 06 05:57:16 crc kubenswrapper[4958]: I1206 05:57:16.370398 4958 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/nova-api-0" event={"ID":"2501d279-0840-4643-b041-17b5f8a9620b","Type":"ContainerDied","Data":"5b40c8a1b338bfac41998484780e0a19f184bfb2330ec8ede071fc1d901c1cc9"} Dec 06 05:57:16 crc kubenswrapper[4958]: I1206 05:57:16.816638 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-844fc57f6f-vfb6w" Dec 06 05:57:16 crc kubenswrapper[4958]: E1206 05:57:16.830857 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6776fc1f8523c0ca01af1785fabe69df8cda84add073707e44897f800accfbf1" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 06 05:57:16 crc kubenswrapper[4958]: E1206 05:57:16.835569 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6776fc1f8523c0ca01af1785fabe69df8cda84add073707e44897f800accfbf1" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 06 05:57:16 crc kubenswrapper[4958]: E1206 05:57:16.841289 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6776fc1f8523c0ca01af1785fabe69df8cda84add073707e44897f800accfbf1" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 06 05:57:16 crc kubenswrapper[4958]: E1206 05:57:16.841353 4958 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="70849ca2-0c84-4cb0-8366-0d24a78af2db" containerName="nova-scheduler-scheduler" Dec 06 05:57:16 crc kubenswrapper[4958]: I1206 05:57:16.877068 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75958fc765-7vts5"] Dec 06 05:57:16 crc kubenswrapper[4958]: I1206 05:57:16.877283 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-75958fc765-7vts5" podUID="8171df7d-9d9e-4c16-ab55-8a2401a919d2" containerName="dnsmasq-dns" containerID="cri-o://d03e5a96e80ee042d243313eb005b58c4611ecd5706c3f79710a102327986ac6" gracePeriod=10 Dec 06 05:57:17 crc kubenswrapper[4958]: I1206 05:57:17.387250 4958 generic.go:334] "Generic (PLEG): container finished" podID="b0799b23-7b1b-4c6e-a99a-da256942169c" containerID="e5ad34d9784c2aa5962bb866d7483252462c0ee0b5993c203c97b6587e79b2b2" exitCode=0 Dec 06 05:57:17 crc kubenswrapper[4958]: I1206 05:57:17.387325 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-w6tsn" event={"ID":"b0799b23-7b1b-4c6e-a99a-da256942169c","Type":"ContainerDied","Data":"e5ad34d9784c2aa5962bb866d7483252462c0ee0b5993c203c97b6587e79b2b2"} Dec 06 05:57:17 crc kubenswrapper[4958]: I1206 05:57:17.400492 4958 generic.go:334] "Generic (PLEG): container finished" podID="2501d279-0840-4643-b041-17b5f8a9620b" containerID="929090c8353c2fa1093c3e1a5f303204acdba3e4081c5effa49b5e9ba73509c1" exitCode=0 Dec 06 05:57:17 crc kubenswrapper[4958]: I1206 05:57:17.400571 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"2501d279-0840-4643-b041-17b5f8a9620b","Type":"ContainerDied","Data":"929090c8353c2fa1093c3e1a5f303204acdba3e4081c5effa49b5e9ba73509c1"} Dec 06 05:57:17 crc kubenswrapper[4958]: I1206 05:57:17.674637 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 06 05:57:17 crc kubenswrapper[4958]: I1206 05:57:17.747479 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2501d279-0840-4643-b041-17b5f8a9620b-combined-ca-bundle\") pod \"2501d279-0840-4643-b041-17b5f8a9620b\" (UID: \"2501d279-0840-4643-b041-17b5f8a9620b\") " Dec 06 05:57:17 crc kubenswrapper[4958]: I1206 05:57:17.747577 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2501d279-0840-4643-b041-17b5f8a9620b-config-data\") pod \"2501d279-0840-4643-b041-17b5f8a9620b\" (UID: \"2501d279-0840-4643-b041-17b5f8a9620b\") " Dec 06 05:57:17 crc kubenswrapper[4958]: I1206 05:57:17.747608 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2501d279-0840-4643-b041-17b5f8a9620b-logs\") pod \"2501d279-0840-4643-b041-17b5f8a9620b\" (UID: \"2501d279-0840-4643-b041-17b5f8a9620b\") " Dec 06 05:57:17 crc kubenswrapper[4958]: I1206 05:57:17.747655 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7k72\" (UniqueName: \"kubernetes.io/projected/2501d279-0840-4643-b041-17b5f8a9620b-kube-api-access-d7k72\") pod \"2501d279-0840-4643-b041-17b5f8a9620b\" (UID: \"2501d279-0840-4643-b041-17b5f8a9620b\") " Dec 06 05:57:17 crc kubenswrapper[4958]: I1206 05:57:17.749195 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2501d279-0840-4643-b041-17b5f8a9620b-logs" (OuterVolumeSpecName: "logs") pod "2501d279-0840-4643-b041-17b5f8a9620b" (UID: "2501d279-0840-4643-b041-17b5f8a9620b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 05:57:17 crc kubenswrapper[4958]: I1206 05:57:17.753745 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2501d279-0840-4643-b041-17b5f8a9620b-kube-api-access-d7k72" (OuterVolumeSpecName: "kube-api-access-d7k72") pod "2501d279-0840-4643-b041-17b5f8a9620b" (UID: "2501d279-0840-4643-b041-17b5f8a9620b"). InnerVolumeSpecName "kube-api-access-d7k72". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:57:17 crc kubenswrapper[4958]: I1206 05:57:17.771530 4958 scope.go:117] "RemoveContainer" containerID="3316d358950549dbdd244c3eebc5b98b6095cbbbc7ab60159a9410ea95b5b950" Dec 06 05:57:17 crc kubenswrapper[4958]: E1206 05:57:17.771763 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 05:57:17 crc kubenswrapper[4958]: I1206 05:57:17.783513 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2501d279-0840-4643-b041-17b5f8a9620b-config-data" (OuterVolumeSpecName: "config-data") pod "2501d279-0840-4643-b041-17b5f8a9620b" (UID: "2501d279-0840-4643-b041-17b5f8a9620b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:57:17 crc kubenswrapper[4958]: I1206 05:57:17.783797 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2501d279-0840-4643-b041-17b5f8a9620b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2501d279-0840-4643-b041-17b5f8a9620b" (UID: "2501d279-0840-4643-b041-17b5f8a9620b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:57:17 crc kubenswrapper[4958]: I1206 05:57:17.863200 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2501d279-0840-4643-b041-17b5f8a9620b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 05:57:17 crc kubenswrapper[4958]: I1206 05:57:17.863332 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2501d279-0840-4643-b041-17b5f8a9620b-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 05:57:17 crc kubenswrapper[4958]: I1206 05:57:17.863417 4958 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2501d279-0840-4643-b041-17b5f8a9620b-logs\") on node \"crc\" DevicePath \"\"" Dec 06 05:57:17 crc kubenswrapper[4958]: I1206 05:57:17.863502 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d7k72\" (UniqueName: \"kubernetes.io/projected/2501d279-0840-4643-b041-17b5f8a9620b-kube-api-access-d7k72\") on node \"crc\" DevicePath \"\"" Dec 06 05:57:18 crc kubenswrapper[4958]: I1206 05:57:18.203668 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75958fc765-7vts5" Dec 06 05:57:18 crc kubenswrapper[4958]: I1206 05:57:18.370806 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8171df7d-9d9e-4c16-ab55-8a2401a919d2-dns-svc\") pod \"8171df7d-9d9e-4c16-ab55-8a2401a919d2\" (UID: \"8171df7d-9d9e-4c16-ab55-8a2401a919d2\") " Dec 06 05:57:18 crc kubenswrapper[4958]: I1206 05:57:18.370848 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8171df7d-9d9e-4c16-ab55-8a2401a919d2-config\") pod \"8171df7d-9d9e-4c16-ab55-8a2401a919d2\" (UID: \"8171df7d-9d9e-4c16-ab55-8a2401a919d2\") " Dec 06 05:57:18 crc kubenswrapper[4958]: I1206 05:57:18.370926 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8171df7d-9d9e-4c16-ab55-8a2401a919d2-dns-swift-storage-0\") pod \"8171df7d-9d9e-4c16-ab55-8a2401a919d2\" (UID: \"8171df7d-9d9e-4c16-ab55-8a2401a919d2\") " Dec 06 05:57:18 crc kubenswrapper[4958]: I1206 05:57:18.370953 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-swm2w\" (UniqueName: \"kubernetes.io/projected/8171df7d-9d9e-4c16-ab55-8a2401a919d2-kube-api-access-swm2w\") pod \"8171df7d-9d9e-4c16-ab55-8a2401a919d2\" (UID: \"8171df7d-9d9e-4c16-ab55-8a2401a919d2\") " Dec 06 05:57:18 crc kubenswrapper[4958]: I1206 05:57:18.371021 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8171df7d-9d9e-4c16-ab55-8a2401a919d2-ovsdbserver-nb\") pod \"8171df7d-9d9e-4c16-ab55-8a2401a919d2\" (UID: \"8171df7d-9d9e-4c16-ab55-8a2401a919d2\") " Dec 06 05:57:18 crc kubenswrapper[4958]: I1206 05:57:18.371041 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8171df7d-9d9e-4c16-ab55-8a2401a919d2-ovsdbserver-sb\") pod \"8171df7d-9d9e-4c16-ab55-8a2401a919d2\" (UID: \"8171df7d-9d9e-4c16-ab55-8a2401a919d2\") " Dec 06 05:57:18 crc kubenswrapper[4958]: I1206 05:57:18.381948 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8171df7d-9d9e-4c16-ab55-8a2401a919d2-kube-api-access-swm2w" (OuterVolumeSpecName: "kube-api-access-swm2w") pod "8171df7d-9d9e-4c16-ab55-8a2401a919d2" (UID: "8171df7d-9d9e-4c16-ab55-8a2401a919d2"). InnerVolumeSpecName "kube-api-access-swm2w". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:57:18 crc kubenswrapper[4958]: I1206 05:57:18.420975 4958 generic.go:334] "Generic (PLEG): container finished" podID="8171df7d-9d9e-4c16-ab55-8a2401a919d2" containerID="d03e5a96e80ee042d243313eb005b58c4611ecd5706c3f79710a102327986ac6" exitCode=0 Dec 06 05:57:18 crc kubenswrapper[4958]: I1206 05:57:18.421083 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-75958fc765-7vts5" Dec 06 05:57:18 crc kubenswrapper[4958]: I1206 05:57:18.421749 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75958fc765-7vts5" event={"ID":"8171df7d-9d9e-4c16-ab55-8a2401a919d2","Type":"ContainerDied","Data":"d03e5a96e80ee042d243313eb005b58c4611ecd5706c3f79710a102327986ac6"} Dec 06 05:57:18 crc kubenswrapper[4958]: I1206 05:57:18.421773 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-75958fc765-7vts5" event={"ID":"8171df7d-9d9e-4c16-ab55-8a2401a919d2","Type":"ContainerDied","Data":"6ded9dd6dd594d817a5941b1e32bb94e6526962adaa9fb5a8a323714c211fa1a"} Dec 06 05:57:18 crc kubenswrapper[4958]: I1206 05:57:18.421787 4958 scope.go:117] "RemoveContainer" containerID="d03e5a96e80ee042d243313eb005b58c4611ecd5706c3f79710a102327986ac6" Dec 06 05:57:18 crc kubenswrapper[4958]: I1206 05:57:18.424034 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 06 05:57:18 crc kubenswrapper[4958]: I1206 05:57:18.424158 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2501d279-0840-4643-b041-17b5f8a9620b","Type":"ContainerDied","Data":"bbd0e67d89d011c6579e2419de211729f88556b3e11402653de338edbf4dc479"} Dec 06 05:57:18 crc kubenswrapper[4958]: I1206 05:57:18.437627 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8171df7d-9d9e-4c16-ab55-8a2401a919d2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8171df7d-9d9e-4c16-ab55-8a2401a919d2" (UID: "8171df7d-9d9e-4c16-ab55-8a2401a919d2"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:57:18 crc kubenswrapper[4958]: I1206 05:57:18.443859 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8171df7d-9d9e-4c16-ab55-8a2401a919d2-config" (OuterVolumeSpecName: "config") pod "8171df7d-9d9e-4c16-ab55-8a2401a919d2" (UID: "8171df7d-9d9e-4c16-ab55-8a2401a919d2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:57:18 crc kubenswrapper[4958]: I1206 05:57:18.448151 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8171df7d-9d9e-4c16-ab55-8a2401a919d2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8171df7d-9d9e-4c16-ab55-8a2401a919d2" (UID: "8171df7d-9d9e-4c16-ab55-8a2401a919d2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:57:18 crc kubenswrapper[4958]: I1206 05:57:18.452091 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8171df7d-9d9e-4c16-ab55-8a2401a919d2-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "8171df7d-9d9e-4c16-ab55-8a2401a919d2" (UID: "8171df7d-9d9e-4c16-ab55-8a2401a919d2"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:57:18 crc kubenswrapper[4958]: I1206 05:57:18.462170 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8171df7d-9d9e-4c16-ab55-8a2401a919d2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8171df7d-9d9e-4c16-ab55-8a2401a919d2" (UID: "8171df7d-9d9e-4c16-ab55-8a2401a919d2"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:57:18 crc kubenswrapper[4958]: I1206 05:57:18.473844 4958 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8171df7d-9d9e-4c16-ab55-8a2401a919d2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 06 05:57:18 crc kubenswrapper[4958]: I1206 05:57:18.473886 4958 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8171df7d-9d9e-4c16-ab55-8a2401a919d2-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 06 05:57:18 crc kubenswrapper[4958]: I1206 05:57:18.473899 4958 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8171df7d-9d9e-4c16-ab55-8a2401a919d2-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 06 05:57:18 crc kubenswrapper[4958]: I1206 05:57:18.473911 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8171df7d-9d9e-4c16-ab55-8a2401a919d2-config\") on node \"crc\" DevicePath \"\"" Dec 06 05:57:18 crc kubenswrapper[4958]: I1206 05:57:18.473923 4958 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8171df7d-9d9e-4c16-ab55-8a2401a919d2-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 06 05:57:18 crc kubenswrapper[4958]: I1206 05:57:18.473936 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-swm2w\" (UniqueName: \"kubernetes.io/projected/8171df7d-9d9e-4c16-ab55-8a2401a919d2-kube-api-access-swm2w\") on node \"crc\" DevicePath \"\"" Dec 06 05:57:18 crc kubenswrapper[4958]: I1206 05:57:18.566398 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 06 05:57:18 crc kubenswrapper[4958]: I1206 05:57:18.579659 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 06 05:57:18 crc kubenswrapper[4958]: I1206 05:57:18.587115 4958 scope.go:117] "RemoveContainer" containerID="f1e006dcf41270637f9f6e507be537e7df9e49d14e9cfc4f908af852edc1144c" Dec 06 05:57:18 crc kubenswrapper[4958]: I1206 05:57:18.601925 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 06 05:57:18 crc kubenswrapper[4958]: E1206 05:57:18.602478 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2501d279-0840-4643-b041-17b5f8a9620b" containerName="nova-api-log" Dec 06 05:57:18 crc kubenswrapper[4958]: I1206 05:57:18.602508 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="2501d279-0840-4643-b041-17b5f8a9620b" containerName="nova-api-log" Dec 06 05:57:18 crc kubenswrapper[4958]: E1206 05:57:18.602535 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="35bc6ed4-e3c1-4b61-b522-3029d77819a9" containerName="nova-manage" Dec 06 05:57:18 crc kubenswrapper[4958]: I1206 05:57:18.602544 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="35bc6ed4-e3c1-4b61-b522-3029d77819a9" containerName="nova-manage" Dec 06 05:57:18 crc kubenswrapper[4958]: E1206 05:57:18.602558 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8171df7d-9d9e-4c16-ab55-8a2401a919d2" containerName="init" Dec 06 05:57:18 crc kubenswrapper[4958]: I1206 05:57:18.602565 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="8171df7d-9d9e-4c16-ab55-8a2401a919d2" containerName="init" Dec 06 05:57:18 crc kubenswrapper[4958]: E1206 05:57:18.602579 4958 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="8171df7d-9d9e-4c16-ab55-8a2401a919d2" containerName="dnsmasq-dns" Dec 06 05:57:18 crc kubenswrapper[4958]: I1206 05:57:18.602586 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="8171df7d-9d9e-4c16-ab55-8a2401a919d2" containerName="dnsmasq-dns" Dec 06 05:57:18 crc kubenswrapper[4958]: E1206 05:57:18.602602 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2501d279-0840-4643-b041-17b5f8a9620b" containerName="nova-api-api" Dec 06 05:57:18 crc kubenswrapper[4958]: I1206 05:57:18.602609 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="2501d279-0840-4643-b041-17b5f8a9620b" containerName="nova-api-api" Dec 06 05:57:18 crc kubenswrapper[4958]: I1206 05:57:18.602816 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="8171df7d-9d9e-4c16-ab55-8a2401a919d2" containerName="dnsmasq-dns" Dec 06 05:57:18 crc kubenswrapper[4958]: I1206 05:57:18.602832 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="35bc6ed4-e3c1-4b61-b522-3029d77819a9" containerName="nova-manage" Dec 06 05:57:18 crc kubenswrapper[4958]: I1206 05:57:18.602847 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="2501d279-0840-4643-b041-17b5f8a9620b" containerName="nova-api-log" Dec 06 05:57:18 crc kubenswrapper[4958]: I1206 05:57:18.602873 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="2501d279-0840-4643-b041-17b5f8a9620b" containerName="nova-api-api" Dec 06 05:57:18 crc kubenswrapper[4958]: I1206 05:57:18.604105 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 06 05:57:18 crc kubenswrapper[4958]: I1206 05:57:18.606436 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 06 05:57:18 crc kubenswrapper[4958]: I1206 05:57:18.610211 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 06 05:57:18 crc kubenswrapper[4958]: I1206 05:57:18.627224 4958 scope.go:117] "RemoveContainer" containerID="d03e5a96e80ee042d243313eb005b58c4611ecd5706c3f79710a102327986ac6" Dec 06 05:57:18 crc kubenswrapper[4958]: E1206 05:57:18.627771 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d03e5a96e80ee042d243313eb005b58c4611ecd5706c3f79710a102327986ac6\": container with ID starting with d03e5a96e80ee042d243313eb005b58c4611ecd5706c3f79710a102327986ac6 not found: ID does not exist" containerID="d03e5a96e80ee042d243313eb005b58c4611ecd5706c3f79710a102327986ac6" Dec 06 05:57:18 crc kubenswrapper[4958]: I1206 05:57:18.627858 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d03e5a96e80ee042d243313eb005b58c4611ecd5706c3f79710a102327986ac6"} err="failed to get container status \"d03e5a96e80ee042d243313eb005b58c4611ecd5706c3f79710a102327986ac6\": rpc error: code = NotFound desc = could not find container \"d03e5a96e80ee042d243313eb005b58c4611ecd5706c3f79710a102327986ac6\": container with ID starting with d03e5a96e80ee042d243313eb005b58c4611ecd5706c3f79710a102327986ac6 not found: ID does not exist" Dec 06 05:57:18 crc kubenswrapper[4958]: I1206 05:57:18.627933 4958 scope.go:117] "RemoveContainer" containerID="f1e006dcf41270637f9f6e507be537e7df9e49d14e9cfc4f908af852edc1144c" Dec 06 05:57:18 crc kubenswrapper[4958]: E1206 05:57:18.628250 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"f1e006dcf41270637f9f6e507be537e7df9e49d14e9cfc4f908af852edc1144c\": container with ID starting with f1e006dcf41270637f9f6e507be537e7df9e49d14e9cfc4f908af852edc1144c not found: ID does not exist" containerID="f1e006dcf41270637f9f6e507be537e7df9e49d14e9cfc4f908af852edc1144c" Dec 06 05:57:18 crc kubenswrapper[4958]: I1206 05:57:18.628333 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1e006dcf41270637f9f6e507be537e7df9e49d14e9cfc4f908af852edc1144c"} err="failed to get container status \"f1e006dcf41270637f9f6e507be537e7df9e49d14e9cfc4f908af852edc1144c\": rpc error: code = NotFound desc = could not find container \"f1e006dcf41270637f9f6e507be537e7df9e49d14e9cfc4f908af852edc1144c\": container with ID starting with f1e006dcf41270637f9f6e507be537e7df9e49d14e9cfc4f908af852edc1144c not found: ID does not exist" Dec 06 05:57:18 crc kubenswrapper[4958]: I1206 05:57:18.628399 4958 scope.go:117] "RemoveContainer" containerID="929090c8353c2fa1093c3e1a5f303204acdba3e4081c5effa49b5e9ba73509c1" Dec 06 05:57:18 crc kubenswrapper[4958]: I1206 05:57:18.677249 4958 scope.go:117] "RemoveContainer" containerID="5b40c8a1b338bfac41998484780e0a19f184bfb2330ec8ede071fc1d901c1cc9" Dec 06 05:57:18 crc kubenswrapper[4958]: I1206 05:57:18.696938 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-w6tsn" Dec 06 05:57:18 crc kubenswrapper[4958]: I1206 05:57:18.768216 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-75958fc765-7vts5"] Dec 06 05:57:18 crc kubenswrapper[4958]: I1206 05:57:18.778069 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-75958fc765-7vts5"] Dec 06 05:57:18 crc kubenswrapper[4958]: I1206 05:57:18.778400 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0799b23-7b1b-4c6e-a99a-da256942169c-scripts\") pod \"b0799b23-7b1b-4c6e-a99a-da256942169c\" (UID: \"b0799b23-7b1b-4c6e-a99a-da256942169c\") " Dec 06 05:57:18 crc kubenswrapper[4958]: I1206 05:57:18.778680 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8p4kz\" (UniqueName: \"kubernetes.io/projected/b0799b23-7b1b-4c6e-a99a-da256942169c-kube-api-access-8p4kz\") pod \"b0799b23-7b1b-4c6e-a99a-da256942169c\" (UID: \"b0799b23-7b1b-4c6e-a99a-da256942169c\") " Dec 06 05:57:18 crc kubenswrapper[4958]: I1206 05:57:18.778738 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0799b23-7b1b-4c6e-a99a-da256942169c-combined-ca-bundle\") pod \"b0799b23-7b1b-4c6e-a99a-da256942169c\" (UID: \"b0799b23-7b1b-4c6e-a99a-da256942169c\") " Dec 06 05:57:18 crc kubenswrapper[4958]: I1206 05:57:18.778786 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0799b23-7b1b-4c6e-a99a-da256942169c-config-data\") pod \"b0799b23-7b1b-4c6e-a99a-da256942169c\" (UID: \"b0799b23-7b1b-4c6e-a99a-da256942169c\") " Dec 06 05:57:18 crc kubenswrapper[4958]: I1206 05:57:18.779181 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbefde75-00ba-4fdc-86ca-47090f17e7d6-config-data\") pod \"nova-api-0\" (UID: \"bbefde75-00ba-4fdc-86ca-47090f17e7d6\") " pod="openstack/nova-api-0" Dec 06 05:57:18 crc 
kubenswrapper[4958]: I1206 05:57:18.779217 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bbefde75-00ba-4fdc-86ca-47090f17e7d6-logs\") pod \"nova-api-0\" (UID: \"bbefde75-00ba-4fdc-86ca-47090f17e7d6\") " pod="openstack/nova-api-0" Dec 06 05:57:18 crc kubenswrapper[4958]: I1206 05:57:18.779284 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4chp2\" (UniqueName: \"kubernetes.io/projected/bbefde75-00ba-4fdc-86ca-47090f17e7d6-kube-api-access-4chp2\") pod \"nova-api-0\" (UID: \"bbefde75-00ba-4fdc-86ca-47090f17e7d6\") " pod="openstack/nova-api-0" Dec 06 05:57:18 crc kubenswrapper[4958]: I1206 05:57:18.779455 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbefde75-00ba-4fdc-86ca-47090f17e7d6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"bbefde75-00ba-4fdc-86ca-47090f17e7d6\") " pod="openstack/nova-api-0" Dec 06 05:57:18 crc kubenswrapper[4958]: I1206 05:57:18.781753 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0799b23-7b1b-4c6e-a99a-da256942169c-scripts" (OuterVolumeSpecName: "scripts") pod "b0799b23-7b1b-4c6e-a99a-da256942169c" (UID: "b0799b23-7b1b-4c6e-a99a-da256942169c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:57:18 crc kubenswrapper[4958]: I1206 05:57:18.782853 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0799b23-7b1b-4c6e-a99a-da256942169c-kube-api-access-8p4kz" (OuterVolumeSpecName: "kube-api-access-8p4kz") pod "b0799b23-7b1b-4c6e-a99a-da256942169c" (UID: "b0799b23-7b1b-4c6e-a99a-da256942169c"). InnerVolumeSpecName "kube-api-access-8p4kz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:57:18 crc kubenswrapper[4958]: I1206 05:57:18.811372 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0799b23-7b1b-4c6e-a99a-da256942169c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b0799b23-7b1b-4c6e-a99a-da256942169c" (UID: "b0799b23-7b1b-4c6e-a99a-da256942169c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:57:18 crc kubenswrapper[4958]: I1206 05:57:18.811886 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0799b23-7b1b-4c6e-a99a-da256942169c-config-data" (OuterVolumeSpecName: "config-data") pod "b0799b23-7b1b-4c6e-a99a-da256942169c" (UID: "b0799b23-7b1b-4c6e-a99a-da256942169c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:57:18 crc kubenswrapper[4958]: I1206 05:57:18.880932 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbefde75-00ba-4fdc-86ca-47090f17e7d6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"bbefde75-00ba-4fdc-86ca-47090f17e7d6\") " pod="openstack/nova-api-0" Dec 06 05:57:18 crc kubenswrapper[4958]: I1206 05:57:18.881420 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbefde75-00ba-4fdc-86ca-47090f17e7d6-config-data\") pod \"nova-api-0\" (UID: \"bbefde75-00ba-4fdc-86ca-47090f17e7d6\") " pod="openstack/nova-api-0" Dec 06 05:57:18 crc kubenswrapper[4958]: I1206 05:57:18.881444 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bbefde75-00ba-4fdc-86ca-47090f17e7d6-logs\") pod \"nova-api-0\" (UID: \"bbefde75-00ba-4fdc-86ca-47090f17e7d6\") " pod="openstack/nova-api-0" Dec 06 05:57:18 crc kubenswrapper[4958]: I1206 05:57:18.881511 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4chp2\" (UniqueName: \"kubernetes.io/projected/bbefde75-00ba-4fdc-86ca-47090f17e7d6-kube-api-access-4chp2\") pod \"nova-api-0\" (UID: \"bbefde75-00ba-4fdc-86ca-47090f17e7d6\") " pod="openstack/nova-api-0" Dec 06 05:57:18 crc kubenswrapper[4958]: I1206 05:57:18.881637 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8p4kz\" (UniqueName: \"kubernetes.io/projected/b0799b23-7b1b-4c6e-a99a-da256942169c-kube-api-access-8p4kz\") on node \"crc\" DevicePath \"\"" Dec 06 05:57:18 crc kubenswrapper[4958]: I1206 05:57:18.881656 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0799b23-7b1b-4c6e-a99a-da256942169c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 05:57:18 crc kubenswrapper[4958]: I1206 05:57:18.881668 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0799b23-7b1b-4c6e-a99a-da256942169c-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 05:57:18 crc kubenswrapper[4958]: I1206 05:57:18.881678 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0799b23-7b1b-4c6e-a99a-da256942169c-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 05:57:18 crc kubenswrapper[4958]: I1206 05:57:18.882220 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bbefde75-00ba-4fdc-86ca-47090f17e7d6-logs\") pod \"nova-api-0\" (UID: \"bbefde75-00ba-4fdc-86ca-47090f17e7d6\") " pod="openstack/nova-api-0" Dec 06 05:57:18 crc kubenswrapper[4958]: I1206 05:57:18.885547 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbefde75-00ba-4fdc-86ca-47090f17e7d6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"bbefde75-00ba-4fdc-86ca-47090f17e7d6\") " pod="openstack/nova-api-0" Dec 06 05:57:18 crc kubenswrapper[4958]: I1206 05:57:18.887097 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbefde75-00ba-4fdc-86ca-47090f17e7d6-config-data\") pod \"nova-api-0\" (UID: \"bbefde75-00ba-4fdc-86ca-47090f17e7d6\") " pod="openstack/nova-api-0" Dec 06 05:57:18 crc 
kubenswrapper[4958]: I1206 05:57:18.896699 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4chp2\" (UniqueName: \"kubernetes.io/projected/bbefde75-00ba-4fdc-86ca-47090f17e7d6-kube-api-access-4chp2\") pod \"nova-api-0\" (UID: \"bbefde75-00ba-4fdc-86ca-47090f17e7d6\") " pod="openstack/nova-api-0" Dec 06 05:57:18 crc kubenswrapper[4958]: I1206 05:57:18.928991 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 06 05:57:19 crc kubenswrapper[4958]: I1206 05:57:19.425426 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 06 05:57:19 crc kubenswrapper[4958]: I1206 05:57:19.444912 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-w6tsn" event={"ID":"b0799b23-7b1b-4c6e-a99a-da256942169c","Type":"ContainerDied","Data":"5362d9f0f928ae5abf6f072c84daaf13363c1bfa2572675bdb7721924d4b68a8"} Dec 06 05:57:19 crc kubenswrapper[4958]: I1206 05:57:19.444947 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5362d9f0f928ae5abf6f072c84daaf13363c1bfa2572675bdb7721924d4b68a8" Dec 06 05:57:19 crc kubenswrapper[4958]: I1206 05:57:19.444999 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-w6tsn" Dec 06 05:57:19 crc kubenswrapper[4958]: I1206 05:57:19.489091 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 06 05:57:19 crc kubenswrapper[4958]: E1206 05:57:19.489765 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0799b23-7b1b-4c6e-a99a-da256942169c" containerName="nova-cell1-conductor-db-sync" Dec 06 05:57:19 crc kubenswrapper[4958]: I1206 05:57:19.489790 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0799b23-7b1b-4c6e-a99a-da256942169c" containerName="nova-cell1-conductor-db-sync" Dec 06 05:57:19 crc kubenswrapper[4958]: I1206 05:57:19.490020 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0799b23-7b1b-4c6e-a99a-da256942169c" containerName="nova-cell1-conductor-db-sync" Dec 06 05:57:19 crc kubenswrapper[4958]: I1206 05:57:19.490907 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Dec 06 05:57:19 crc kubenswrapper[4958]: I1206 05:57:19.497779 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Dec 06 05:57:19 crc kubenswrapper[4958]: I1206 05:57:19.499306 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 06 05:57:19 crc kubenswrapper[4958]: I1206 05:57:19.600737 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2drx\" (UniqueName: \"kubernetes.io/projected/47fce56d-0e48-44d4-a30e-14d412fb727f-kube-api-access-s2drx\") pod \"nova-cell1-conductor-0\" (UID: \"47fce56d-0e48-44d4-a30e-14d412fb727f\") " pod="openstack/nova-cell1-conductor-0" Dec 06 05:57:19 crc kubenswrapper[4958]: I1206 05:57:19.600801 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47fce56d-0e48-44d4-a30e-14d412fb727f-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"47fce56d-0e48-44d4-a30e-14d412fb727f\") " pod="openstack/nova-cell1-conductor-0" Dec 06 05:57:19 crc kubenswrapper[4958]: I1206 05:57:19.600846 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47fce56d-0e48-44d4-a30e-14d412fb727f-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"47fce56d-0e48-44d4-a30e-14d412fb727f\") " pod="openstack/nova-cell1-conductor-0" Dec 06 05:57:19 crc kubenswrapper[4958]: I1206 05:57:19.702724 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47fce56d-0e48-44d4-a30e-14d412fb727f-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"47fce56d-0e48-44d4-a30e-14d412fb727f\") " pod="openstack/nova-cell1-conductor-0" Dec 06 05:57:19 crc kubenswrapper[4958]: I1206 05:57:19.702946 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2drx\" (UniqueName: \"kubernetes.io/projected/47fce56d-0e48-44d4-a30e-14d412fb727f-kube-api-access-s2drx\") pod \"nova-cell1-conductor-0\" (UID: \"47fce56d-0e48-44d4-a30e-14d412fb727f\") " pod="openstack/nova-cell1-conductor-0" Dec 06 05:57:19 crc kubenswrapper[4958]: I1206 05:57:19.702987 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47fce56d-0e48-44d4-a30e-14d412fb727f-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"47fce56d-0e48-44d4-a30e-14d412fb727f\") " pod="openstack/nova-cell1-conductor-0" Dec 06 05:57:19 crc kubenswrapper[4958]: I1206 05:57:19.709032 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/47fce56d-0e48-44d4-a30e-14d412fb727f-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"47fce56d-0e48-44d4-a30e-14d412fb727f\") " pod="openstack/nova-cell1-conductor-0" Dec 06 05:57:19 crc kubenswrapper[4958]: I1206 05:57:19.711777 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/47fce56d-0e48-44d4-a30e-14d412fb727f-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"47fce56d-0e48-44d4-a30e-14d412fb727f\") " pod="openstack/nova-cell1-conductor-0" Dec 06 05:57:19 crc kubenswrapper[4958]: I1206 05:57:19.719870 4958 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2drx\" (UniqueName: \"kubernetes.io/projected/47fce56d-0e48-44d4-a30e-14d412fb727f-kube-api-access-s2drx\") pod \"nova-cell1-conductor-0\" (UID: \"47fce56d-0e48-44d4-a30e-14d412fb727f\") " pod="openstack/nova-cell1-conductor-0" Dec 06 05:57:19 crc kubenswrapper[4958]: I1206 05:57:19.780223 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2501d279-0840-4643-b041-17b5f8a9620b" path="/var/lib/kubelet/pods/2501d279-0840-4643-b041-17b5f8a9620b/volumes" Dec 06 05:57:19 crc kubenswrapper[4958]: I1206 05:57:19.781075 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8171df7d-9d9e-4c16-ab55-8a2401a919d2" path="/var/lib/kubelet/pods/8171df7d-9d9e-4c16-ab55-8a2401a919d2/volumes" Dec 06 05:57:19 crc kubenswrapper[4958]: I1206 05:57:19.867155 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Dec 06 05:57:20 crc kubenswrapper[4958]: I1206 05:57:20.453769 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 06 05:57:20 crc kubenswrapper[4958]: I1206 05:57:20.463697 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bbefde75-00ba-4fdc-86ca-47090f17e7d6","Type":"ContainerStarted","Data":"fc95aaa5792096a3db35f3d6a8c186cdf56de9d75bcf4fc8a6aabfbdb826f153"} Dec 06 05:57:20 crc kubenswrapper[4958]: I1206 05:57:20.463753 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bbefde75-00ba-4fdc-86ca-47090f17e7d6","Type":"ContainerStarted","Data":"2c60679de295806793caa7a83821e8b7bca6ff46418ee79fd39650f841d2c84c"} Dec 06 05:57:20 crc kubenswrapper[4958]: I1206 05:57:20.463763 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bbefde75-00ba-4fdc-86ca-47090f17e7d6","Type":"ContainerStarted","Data":"25be768155bbab0bd017f46f20107613bb788693f982abe2cbb4d51151b11abb"} Dec 06 05:57:20 crc kubenswrapper[4958]: I1206 05:57:20.469600 4958 generic.go:334] "Generic (PLEG): container finished" podID="70849ca2-0c84-4cb0-8366-0d24a78af2db" containerID="6776fc1f8523c0ca01af1785fabe69df8cda84add073707e44897f800accfbf1" exitCode=0 Dec 06 05:57:20 crc kubenswrapper[4958]: I1206 05:57:20.469642 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"70849ca2-0c84-4cb0-8366-0d24a78af2db","Type":"ContainerDied","Data":"6776fc1f8523c0ca01af1785fabe69df8cda84add073707e44897f800accfbf1"} Dec 06 05:57:20 crc kubenswrapper[4958]: I1206 05:57:20.469670 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"70849ca2-0c84-4cb0-8366-0d24a78af2db","Type":"ContainerDied","Data":"199d0faafb9bc8c7bc34c1276a950671fe442f333219e05b0fbfba3a30aa9193"} Dec 06 05:57:20 crc kubenswrapper[4958]: I1206 05:57:20.469690 4958 scope.go:117] "RemoveContainer" containerID="6776fc1f8523c0ca01af1785fabe69df8cda84add073707e44897f800accfbf1" Dec 06 05:57:20 crc kubenswrapper[4958]: I1206 05:57:20.469818 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 06 05:57:20 crc kubenswrapper[4958]: I1206 05:57:20.503481 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.503444294 podStartE2EDuration="2.503444294s" podCreationTimestamp="2025-12-06 05:57:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:57:20.496928216 +0000 UTC m=+1751.030698989" watchObservedRunningTime="2025-12-06 05:57:20.503444294 +0000 UTC m=+1751.037215057" Dec 06 05:57:20 crc kubenswrapper[4958]: I1206 05:57:20.523966 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 06 05:57:20 crc kubenswrapper[4958]: I1206 05:57:20.530062 4958 scope.go:117] "RemoveContainer" containerID="6776fc1f8523c0ca01af1785fabe69df8cda84add073707e44897f800accfbf1" Dec 06 05:57:20 crc kubenswrapper[4958]: E1206 05:57:20.530501 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6776fc1f8523c0ca01af1785fabe69df8cda84add073707e44897f800accfbf1\": container with ID starting with 6776fc1f8523c0ca01af1785fabe69df8cda84add073707e44897f800accfbf1 not found: ID does not exist" containerID="6776fc1f8523c0ca01af1785fabe69df8cda84add073707e44897f800accfbf1" Dec 06 05:57:20 crc kubenswrapper[4958]: I1206 05:57:20.530533 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6776fc1f8523c0ca01af1785fabe69df8cda84add073707e44897f800accfbf1"} err="failed to get container status \"6776fc1f8523c0ca01af1785fabe69df8cda84add073707e44897f800accfbf1\": rpc error: code = NotFound desc = could not find container \"6776fc1f8523c0ca01af1785fabe69df8cda84add073707e44897f800accfbf1\": container with ID starting with 6776fc1f8523c0ca01af1785fabe69df8cda84add073707e44897f800accfbf1 not found: ID does not exist" Dec 06 05:57:20 crc kubenswrapper[4958]: I1206 05:57:20.628832 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70849ca2-0c84-4cb0-8366-0d24a78af2db-combined-ca-bundle\") pod \"70849ca2-0c84-4cb0-8366-0d24a78af2db\" (UID: \"70849ca2-0c84-4cb0-8366-0d24a78af2db\") " Dec 06 05:57:20 crc kubenswrapper[4958]: I1206 05:57:20.628915 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70849ca2-0c84-4cb0-8366-0d24a78af2db-config-data\") pod \"70849ca2-0c84-4cb0-8366-0d24a78af2db\" (UID: \"70849ca2-0c84-4cb0-8366-0d24a78af2db\") " Dec 06 05:57:20 crc kubenswrapper[4958]: I1206 05:57:20.629009 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xgqgb\" (UniqueName: \"kubernetes.io/projected/70849ca2-0c84-4cb0-8366-0d24a78af2db-kube-api-access-xgqgb\") pod \"70849ca2-0c84-4cb0-8366-0d24a78af2db\" (UID: \"70849ca2-0c84-4cb0-8366-0d24a78af2db\") " Dec 06 05:57:20 crc kubenswrapper[4958]: I1206 05:57:20.639244 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70849ca2-0c84-4cb0-8366-0d24a78af2db-kube-api-access-xgqgb" (OuterVolumeSpecName: "kube-api-access-xgqgb") pod "70849ca2-0c84-4cb0-8366-0d24a78af2db" (UID: "70849ca2-0c84-4cb0-8366-0d24a78af2db"). InnerVolumeSpecName "kube-api-access-xgqgb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:57:20 crc kubenswrapper[4958]: I1206 05:57:20.663594 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70849ca2-0c84-4cb0-8366-0d24a78af2db-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "70849ca2-0c84-4cb0-8366-0d24a78af2db" (UID: "70849ca2-0c84-4cb0-8366-0d24a78af2db"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:57:20 crc kubenswrapper[4958]: I1206 05:57:20.663750 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70849ca2-0c84-4cb0-8366-0d24a78af2db-config-data" (OuterVolumeSpecName: "config-data") pod "70849ca2-0c84-4cb0-8366-0d24a78af2db" (UID: "70849ca2-0c84-4cb0-8366-0d24a78af2db"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:57:20 crc kubenswrapper[4958]: I1206 05:57:20.732011 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70849ca2-0c84-4cb0-8366-0d24a78af2db-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 05:57:20 crc kubenswrapper[4958]: I1206 05:57:20.732050 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70849ca2-0c84-4cb0-8366-0d24a78af2db-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 05:57:20 crc kubenswrapper[4958]: I1206 05:57:20.732063 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xgqgb\" (UniqueName: \"kubernetes.io/projected/70849ca2-0c84-4cb0-8366-0d24a78af2db-kube-api-access-xgqgb\") on node \"crc\" DevicePath \"\"" Dec 06 05:57:20 crc kubenswrapper[4958]: I1206 05:57:20.813090 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 06 05:57:20 crc kubenswrapper[4958]: I1206 05:57:20.839630 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Dec 06 05:57:20 crc kubenswrapper[4958]: I1206 05:57:20.861983 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 06 05:57:20 crc kubenswrapper[4958]: E1206 05:57:20.862530 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70849ca2-0c84-4cb0-8366-0d24a78af2db" containerName="nova-scheduler-scheduler" Dec 06 05:57:20 crc kubenswrapper[4958]: I1206 05:57:20.862553 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="70849ca2-0c84-4cb0-8366-0d24a78af2db" containerName="nova-scheduler-scheduler" Dec 06 05:57:20 crc kubenswrapper[4958]: I1206 05:57:20.862810 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="70849ca2-0c84-4cb0-8366-0d24a78af2db" containerName="nova-scheduler-scheduler" Dec 06 05:57:20 crc kubenswrapper[4958]: I1206 05:57:20.863665 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 06 05:57:20 crc kubenswrapper[4958]: I1206 05:57:20.866240 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 06 05:57:20 crc kubenswrapper[4958]: I1206 05:57:20.883213 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 06 05:57:21 crc kubenswrapper[4958]: I1206 05:57:21.037390 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrxj4\" (UniqueName: \"kubernetes.io/projected/197d0f3c-73b7-4a7f-a0a1-6bfda5d09c08-kube-api-access-rrxj4\") pod \"nova-scheduler-0\" (UID: \"197d0f3c-73b7-4a7f-a0a1-6bfda5d09c08\") " pod="openstack/nova-scheduler-0" Dec 06 05:57:21 crc kubenswrapper[4958]: I1206 05:57:21.037503 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/197d0f3c-73b7-4a7f-a0a1-6bfda5d09c08-config-data\") pod \"nova-scheduler-0\" (UID: \"197d0f3c-73b7-4a7f-a0a1-6bfda5d09c08\") " pod="openstack/nova-scheduler-0" Dec 06 05:57:21 crc kubenswrapper[4958]: I1206 05:57:21.037770 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/197d0f3c-73b7-4a7f-a0a1-6bfda5d09c08-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"197d0f3c-73b7-4a7f-a0a1-6bfda5d09c08\") " pod="openstack/nova-scheduler-0" Dec 06 05:57:21 crc kubenswrapper[4958]: I1206 05:57:21.139623 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrxj4\" (UniqueName: \"kubernetes.io/projected/197d0f3c-73b7-4a7f-a0a1-6bfda5d09c08-kube-api-access-rrxj4\") pod \"nova-scheduler-0\" (UID: \"197d0f3c-73b7-4a7f-a0a1-6bfda5d09c08\") " pod="openstack/nova-scheduler-0" Dec 06 05:57:21 crc kubenswrapper[4958]: I1206 05:57:21.139670 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/197d0f3c-73b7-4a7f-a0a1-6bfda5d09c08-config-data\") pod \"nova-scheduler-0\" (UID: \"197d0f3c-73b7-4a7f-a0a1-6bfda5d09c08\") " pod="openstack/nova-scheduler-0" Dec 06 05:57:21 crc kubenswrapper[4958]: I1206 05:57:21.139738 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/197d0f3c-73b7-4a7f-a0a1-6bfda5d09c08-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"197d0f3c-73b7-4a7f-a0a1-6bfda5d09c08\") " pod="openstack/nova-scheduler-0" Dec 06 05:57:21 crc kubenswrapper[4958]: I1206 05:57:21.144242 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/197d0f3c-73b7-4a7f-a0a1-6bfda5d09c08-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"197d0f3c-73b7-4a7f-a0a1-6bfda5d09c08\") " pod="openstack/nova-scheduler-0" Dec 06 05:57:21 crc kubenswrapper[4958]: I1206 05:57:21.144690 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/197d0f3c-73b7-4a7f-a0a1-6bfda5d09c08-config-data\") pod \"nova-scheduler-0\" (UID: \"197d0f3c-73b7-4a7f-a0a1-6bfda5d09c08\") " pod="openstack/nova-scheduler-0" Dec 06 05:57:21 crc kubenswrapper[4958]: I1206 05:57:21.162266 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrxj4\" (UniqueName: 
\"kubernetes.io/projected/197d0f3c-73b7-4a7f-a0a1-6bfda5d09c08-kube-api-access-rrxj4\") pod \"nova-scheduler-0\" (UID: \"197d0f3c-73b7-4a7f-a0a1-6bfda5d09c08\") " pod="openstack/nova-scheduler-0" Dec 06 05:57:21 crc kubenswrapper[4958]: I1206 05:57:21.182202 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 06 05:57:21 crc kubenswrapper[4958]: I1206 05:57:21.479829 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"47fce56d-0e48-44d4-a30e-14d412fb727f","Type":"ContainerStarted","Data":"6418313ed686c6396544bb1f73dab7dfe2cdcf8bbb5182ed57bc2cc3940b4be9"} Dec 06 05:57:21 crc kubenswrapper[4958]: I1206 05:57:21.480155 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"47fce56d-0e48-44d4-a30e-14d412fb727f","Type":"ContainerStarted","Data":"ec2cef5ee98a5e7b8c19471fe637a2b69441e72b58b9ee6386a8ef33f986aad8"} Dec 06 05:57:21 crc kubenswrapper[4958]: I1206 05:57:21.480172 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Dec 06 05:57:21 crc kubenswrapper[4958]: I1206 05:57:21.497493 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.4974637250000002 podStartE2EDuration="2.497463725s" podCreationTimestamp="2025-12-06 05:57:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:57:21.491417461 +0000 UTC m=+1752.025188234" watchObservedRunningTime="2025-12-06 05:57:21.497463725 +0000 UTC m=+1752.031234488" Dec 06 05:57:21 crc kubenswrapper[4958]: I1206 05:57:21.665777 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 06 05:57:21 crc kubenswrapper[4958]: I1206 05:57:21.772885 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70849ca2-0c84-4cb0-8366-0d24a78af2db" path="/var/lib/kubelet/pods/70849ca2-0c84-4cb0-8366-0d24a78af2db/volumes" Dec 06 05:57:22 crc kubenswrapper[4958]: I1206 05:57:22.431778 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Dec 06 05:57:22 crc kubenswrapper[4958]: I1206 05:57:22.520929 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"197d0f3c-73b7-4a7f-a0a1-6bfda5d09c08","Type":"ContainerStarted","Data":"034579a02670791085fe7a3c9603bfcb0e8dcbaa4af0b10ba565f241546ba23b"} Dec 06 05:57:22 crc kubenswrapper[4958]: I1206 05:57:22.520960 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"197d0f3c-73b7-4a7f-a0a1-6bfda5d09c08","Type":"ContainerStarted","Data":"908ccb538a867259d64b6e23ae63e463b0a89da79923d2b1bf9fbd5c035eeb39"} Dec 06 05:57:22 crc kubenswrapper[4958]: I1206 05:57:22.555729 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.555705991 podStartE2EDuration="2.555705991s" podCreationTimestamp="2025-12-06 05:57:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:57:22.548845535 +0000 UTC m=+1753.082616308" watchObservedRunningTime="2025-12-06 05:57:22.555705991 +0000 UTC m=+1753.089476774" Dec 06 05:57:26 crc kubenswrapper[4958]: I1206 05:57:26.129135 4958 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 06 05:57:26 crc kubenswrapper[4958]: I1206 05:57:26.129603 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="636ba8d3-ffe7-42ba-85eb-0cd2da08036d" containerName="kube-state-metrics" containerID="cri-o://910ac913a946f9fd571fc99ac615a3001fd13d7cfec7fbc2803e8d7e4fbf42d2" gracePeriod=30 Dec 06 05:57:26 crc kubenswrapper[4958]: I1206 05:57:26.183012 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 06 05:57:26 crc kubenswrapper[4958]: I1206 05:57:26.398908 4958 scope.go:117] "RemoveContainer" containerID="f3f38f3120442d10c02b6ab44ee91bceac0d9d018ef6677cf0840a7ec37954dd" Dec 06 05:57:26 crc kubenswrapper[4958]: I1206 05:57:26.567032 4958 generic.go:334] "Generic (PLEG): container finished" podID="636ba8d3-ffe7-42ba-85eb-0cd2da08036d" containerID="910ac913a946f9fd571fc99ac615a3001fd13d7cfec7fbc2803e8d7e4fbf42d2" exitCode=2 Dec 06 05:57:26 crc kubenswrapper[4958]: I1206 05:57:26.567105 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"636ba8d3-ffe7-42ba-85eb-0cd2da08036d","Type":"ContainerDied","Data":"910ac913a946f9fd571fc99ac615a3001fd13d7cfec7fbc2803e8d7e4fbf42d2"} Dec 06 05:57:26 crc kubenswrapper[4958]: I1206 05:57:26.723429 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 06 05:57:26 crc kubenswrapper[4958]: I1206 05:57:26.877378 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mvt45\" (UniqueName: \"kubernetes.io/projected/636ba8d3-ffe7-42ba-85eb-0cd2da08036d-kube-api-access-mvt45\") pod \"636ba8d3-ffe7-42ba-85eb-0cd2da08036d\" (UID: \"636ba8d3-ffe7-42ba-85eb-0cd2da08036d\") " Dec 06 05:57:26 crc kubenswrapper[4958]: I1206 05:57:26.892858 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/636ba8d3-ffe7-42ba-85eb-0cd2da08036d-kube-api-access-mvt45" (OuterVolumeSpecName: "kube-api-access-mvt45") pod "636ba8d3-ffe7-42ba-85eb-0cd2da08036d" (UID: "636ba8d3-ffe7-42ba-85eb-0cd2da08036d"). InnerVolumeSpecName "kube-api-access-mvt45". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:57:26 crc kubenswrapper[4958]: I1206 05:57:26.980129 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mvt45\" (UniqueName: \"kubernetes.io/projected/636ba8d3-ffe7-42ba-85eb-0cd2da08036d-kube-api-access-mvt45\") on node \"crc\" DevicePath \"\"" Dec 06 05:57:27 crc kubenswrapper[4958]: I1206 05:57:27.602968 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"636ba8d3-ffe7-42ba-85eb-0cd2da08036d","Type":"ContainerDied","Data":"4a74ee9c5f5b84f5bac2c68a782f4f67f79194d4192462ca2e55b1b90883af13"} Dec 06 05:57:27 crc kubenswrapper[4958]: I1206 05:57:27.603068 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 06 05:57:27 crc kubenswrapper[4958]: I1206 05:57:27.603111 4958 scope.go:117] "RemoveContainer" containerID="910ac913a946f9fd571fc99ac615a3001fd13d7cfec7fbc2803e8d7e4fbf42d2" Dec 06 05:57:27 crc kubenswrapper[4958]: I1206 05:57:27.675192 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 06 05:57:27 crc kubenswrapper[4958]: I1206 05:57:27.688373 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 06 05:57:27 crc kubenswrapper[4958]: I1206 05:57:27.700746 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Dec 06 05:57:27 crc kubenswrapper[4958]: E1206 05:57:27.701294 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="636ba8d3-ffe7-42ba-85eb-0cd2da08036d" containerName="kube-state-metrics" Dec 06 05:57:27 crc kubenswrapper[4958]: I1206 05:57:27.701313 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="636ba8d3-ffe7-42ba-85eb-0cd2da08036d" containerName="kube-state-metrics" Dec 06 05:57:27 crc kubenswrapper[4958]: I1206 05:57:27.701529 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="636ba8d3-ffe7-42ba-85eb-0cd2da08036d" containerName="kube-state-metrics" Dec 06 05:57:27 crc kubenswrapper[4958]: I1206 05:57:27.702267 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 06 05:57:27 crc kubenswrapper[4958]: I1206 05:57:27.706423 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Dec 06 05:57:27 crc kubenswrapper[4958]: I1206 05:57:27.709517 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Dec 06 05:57:27 crc kubenswrapper[4958]: I1206 05:57:27.712928 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 06 05:57:27 crc kubenswrapper[4958]: I1206 05:57:27.783573 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="636ba8d3-ffe7-42ba-85eb-0cd2da08036d" path="/var/lib/kubelet/pods/636ba8d3-ffe7-42ba-85eb-0cd2da08036d/volumes" Dec 06 05:57:27 crc kubenswrapper[4958]: I1206 05:57:27.904356 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/8322abc0-31a3-4770-856d-e23e4d428204-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"8322abc0-31a3-4770-856d-e23e4d428204\") " pod="openstack/kube-state-metrics-0" Dec 06 05:57:27 crc kubenswrapper[4958]: I1206 05:57:27.904567 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8322abc0-31a3-4770-856d-e23e4d428204-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"8322abc0-31a3-4770-856d-e23e4d428204\") " pod="openstack/kube-state-metrics-0" Dec 06 05:57:27 crc kubenswrapper[4958]: I1206 05:57:27.904691 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/8322abc0-31a3-4770-856d-e23e4d428204-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"8322abc0-31a3-4770-856d-e23e4d428204\") " pod="openstack/kube-state-metrics-0" Dec 06 05:57:27 crc kubenswrapper[4958]: I1206 05:57:27.904744 4958 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bs2w2\" (UniqueName: \"kubernetes.io/projected/8322abc0-31a3-4770-856d-e23e4d428204-kube-api-access-bs2w2\") pod \"kube-state-metrics-0\" (UID: \"8322abc0-31a3-4770-856d-e23e4d428204\") " pod="openstack/kube-state-metrics-0" Dec 06 05:57:28 crc kubenswrapper[4958]: I1206 05:57:28.006231 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/8322abc0-31a3-4770-856d-e23e4d428204-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"8322abc0-31a3-4770-856d-e23e4d428204\") " pod="openstack/kube-state-metrics-0" Dec 06 05:57:28 crc kubenswrapper[4958]: I1206 05:57:28.006282 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bs2w2\" (UniqueName: \"kubernetes.io/projected/8322abc0-31a3-4770-856d-e23e4d428204-kube-api-access-bs2w2\") pod \"kube-state-metrics-0\" (UID: \"8322abc0-31a3-4770-856d-e23e4d428204\") " pod="openstack/kube-state-metrics-0" Dec 06 05:57:28 crc kubenswrapper[4958]: I1206 05:57:28.006347 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/8322abc0-31a3-4770-856d-e23e4d428204-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"8322abc0-31a3-4770-856d-e23e4d428204\") " pod="openstack/kube-state-metrics-0" Dec 06 05:57:28 crc kubenswrapper[4958]: I1206 05:57:28.006430 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8322abc0-31a3-4770-856d-e23e4d428204-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"8322abc0-31a3-4770-856d-e23e4d428204\") " pod="openstack/kube-state-metrics-0" Dec 06 05:57:28 crc kubenswrapper[4958]: I1206 05:57:28.012418 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/8322abc0-31a3-4770-856d-e23e4d428204-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"8322abc0-31a3-4770-856d-e23e4d428204\") " pod="openstack/kube-state-metrics-0" Dec 06 05:57:28 crc kubenswrapper[4958]: I1206 05:57:28.013249 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8322abc0-31a3-4770-856d-e23e4d428204-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"8322abc0-31a3-4770-856d-e23e4d428204\") " pod="openstack/kube-state-metrics-0" Dec 06 05:57:28 crc kubenswrapper[4958]: I1206 05:57:28.019699 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/8322abc0-31a3-4770-856d-e23e4d428204-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"8322abc0-31a3-4770-856d-e23e4d428204\") " pod="openstack/kube-state-metrics-0" Dec 06 05:57:28 crc kubenswrapper[4958]: I1206 05:57:28.032139 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bs2w2\" (UniqueName: \"kubernetes.io/projected/8322abc0-31a3-4770-856d-e23e4d428204-kube-api-access-bs2w2\") pod \"kube-state-metrics-0\" (UID: \"8322abc0-31a3-4770-856d-e23e4d428204\") " pod="openstack/kube-state-metrics-0" Dec 06 05:57:28 crc kubenswrapper[4958]: I1206 05:57:28.330860 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 06 05:57:28 crc kubenswrapper[4958]: I1206 05:57:28.547564 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 06 05:57:28 crc kubenswrapper[4958]: I1206 05:57:28.548062 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="907b754e-d88c-4a77-9cc3-68da290bddc9" containerName="ceilometer-central-agent" containerID="cri-o://6bd2b1aa635382ff29e00e6e90d7471ce87ca0a4c3de524d5bbc5d5b1785bae8" gracePeriod=30 Dec 06 05:57:28 crc kubenswrapper[4958]: I1206 05:57:28.548176 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="907b754e-d88c-4a77-9cc3-68da290bddc9" containerName="proxy-httpd" containerID="cri-o://3a83ff7168e8c1a5f38b663104cf2c5abf1a403e68361354fad27c020a6e2b18" gracePeriod=30 Dec 06 05:57:28 crc kubenswrapper[4958]: I1206 05:57:28.548212 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="907b754e-d88c-4a77-9cc3-68da290bddc9" containerName="sg-core" containerID="cri-o://6f16abefe5365da4b552b7688c81950f078cb52b8d89033e464397d853362e3b" gracePeriod=30 Dec 06 05:57:28 crc kubenswrapper[4958]: I1206 05:57:28.548243 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="907b754e-d88c-4a77-9cc3-68da290bddc9" containerName="ceilometer-notification-agent" containerID="cri-o://76482319fcc556fdf75b8fed768f89e013c5868542b946b15d39d6fd51033a84" gracePeriod=30 Dec 06 05:57:28 crc kubenswrapper[4958]: I1206 05:57:28.891736 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 06 05:57:28 crc kubenswrapper[4958]: I1206 05:57:28.929858 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 06 05:57:28 crc kubenswrapper[4958]: I1206 05:57:28.930166 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 06 05:57:29 crc kubenswrapper[4958]: I1206 05:57:29.648693 4958 generic.go:334] "Generic (PLEG): container finished" podID="907b754e-d88c-4a77-9cc3-68da290bddc9" containerID="3a83ff7168e8c1a5f38b663104cf2c5abf1a403e68361354fad27c020a6e2b18" exitCode=0 Dec 06 05:57:29 crc kubenswrapper[4958]: I1206 05:57:29.648728 4958 generic.go:334] "Generic (PLEG): container finished" podID="907b754e-d88c-4a77-9cc3-68da290bddc9" containerID="6f16abefe5365da4b552b7688c81950f078cb52b8d89033e464397d853362e3b" exitCode=2 Dec 06 05:57:29 crc kubenswrapper[4958]: I1206 05:57:29.648741 4958 generic.go:334] "Generic (PLEG): container finished" podID="907b754e-d88c-4a77-9cc3-68da290bddc9" containerID="6bd2b1aa635382ff29e00e6e90d7471ce87ca0a4c3de524d5bbc5d5b1785bae8" exitCode=0 Dec 06 05:57:29 crc kubenswrapper[4958]: I1206 05:57:29.648799 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"907b754e-d88c-4a77-9cc3-68da290bddc9","Type":"ContainerDied","Data":"3a83ff7168e8c1a5f38b663104cf2c5abf1a403e68361354fad27c020a6e2b18"} Dec 06 05:57:29 crc kubenswrapper[4958]: I1206 05:57:29.648831 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"907b754e-d88c-4a77-9cc3-68da290bddc9","Type":"ContainerDied","Data":"6f16abefe5365da4b552b7688c81950f078cb52b8d89033e464397d853362e3b"} Dec 06 05:57:29 crc kubenswrapper[4958]: I1206 05:57:29.648844 4958 kubelet.go:2453] "SyncLoop 
Dec 06 05:57:29 crc kubenswrapper[4958]: I1206 05:57:29.648844 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"907b754e-d88c-4a77-9cc3-68da290bddc9","Type":"ContainerDied","Data":"6bd2b1aa635382ff29e00e6e90d7471ce87ca0a4c3de524d5bbc5d5b1785bae8"}
Dec 06 05:57:29 crc kubenswrapper[4958]: I1206 05:57:29.650854 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"8322abc0-31a3-4770-856d-e23e4d428204","Type":"ContainerStarted","Data":"3d2fa3c21e8a3ddeb79a345e7be6a26c0f44690f042ef8732bd2600244c7ba45"}
Dec 06 05:57:29 crc kubenswrapper[4958]: I1206 05:57:29.651171 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"8322abc0-31a3-4770-856d-e23e4d428204","Type":"ContainerStarted","Data":"a043814958ab1301aa71975b36e1e8ba75ba251c2e704856b816884ac9a8a455"}
Dec 06 05:57:29 crc kubenswrapper[4958]: I1206 05:57:29.651352 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Dec 06 05:57:29 crc kubenswrapper[4958]: I1206 05:57:29.679886 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.331997327 podStartE2EDuration="2.6798661s" podCreationTimestamp="2025-12-06 05:57:27 +0000 UTC" firstStartedPulling="2025-12-06 05:57:28.890755877 +0000 UTC m=+1759.424526640" lastFinishedPulling="2025-12-06 05:57:29.23862465 +0000 UTC m=+1759.772395413" observedRunningTime="2025-12-06 05:57:29.67029882 +0000 UTC m=+1760.204069583" watchObservedRunningTime="2025-12-06 05:57:29.6798661 +0000 UTC m=+1760.213636853"
Dec 06 05:57:29 crc kubenswrapper[4958]: I1206 05:57:29.907015 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0"
Dec 06 05:57:30 crc kubenswrapper[4958]: I1206 05:57:30.012686 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="bbefde75-00ba-4fdc-86ca-47090f17e7d6" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.215:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Dec 06 05:57:30 crc kubenswrapper[4958]: I1206 05:57:30.012707 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="bbefde75-00ba-4fdc-86ca-47090f17e7d6" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.215:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Dec 06 05:57:30 crc kubenswrapper[4958]: I1206 05:57:30.762418 4958 scope.go:117] "RemoveContainer" containerID="3316d358950549dbdd244c3eebc5b98b6095cbbbc7ab60159a9410ea95b5b950"
Dec 06 05:57:30 crc kubenswrapper[4958]: E1206 05:57:30.762688 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4"
Dec 06 05:57:31 crc kubenswrapper[4958]: I1206 05:57:31.196439 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Dec 06 05:57:31 crc kubenswrapper[4958]: I1206 05:57:31.232943 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
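The two startup-probe failures above are plain HTTP GETs against the pod IP that hit the probe's client-side timeout; the "(Client.Timeout exceeded while awaiting headers)" suffix is exactly what Go's net/http appends when http.Client.Timeout expires before response headers arrive. The error shape is easy to reproduce against a deliberately slow handler (addresses and durations below are made up):

package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
	"time"
)

func main() {
	// a handler that is up but answers too slowly, like an API server
	// that has not finished initializing when its startup probe fires
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		time.Sleep(2 * time.Second)
	}))
	defer srv.Close()

	client := &http.Client{Timeout: 500 * time.Millisecond}
	_, err := client.Get(srv.URL)
	fmt.Println(err)
	// Get "http://127.0.0.1:PORT": context deadline exceeded
	// (Client.Timeout exceeded while awaiting headers)
}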
Dec 06 05:57:31 crc kubenswrapper[4958]: I1206 05:57:31.700134 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Dec 06 05:57:36 crc kubenswrapper[4958]: I1206 05:57:36.665037 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 06 05:57:36 crc kubenswrapper[4958]: I1206 05:57:36.673243 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Dec 06 05:57:36 crc kubenswrapper[4958]: I1206 05:57:36.736924 4958 generic.go:334] "Generic (PLEG): container finished" podID="907b754e-d88c-4a77-9cc3-68da290bddc9" containerID="76482319fcc556fdf75b8fed768f89e013c5868542b946b15d39d6fd51033a84" exitCode=0
Dec 06 05:57:36 crc kubenswrapper[4958]: I1206 05:57:36.736986 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"907b754e-d88c-4a77-9cc3-68da290bddc9","Type":"ContainerDied","Data":"76482319fcc556fdf75b8fed768f89e013c5868542b946b15d39d6fd51033a84"}
Dec 06 05:57:36 crc kubenswrapper[4958]: I1206 05:57:36.737017 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"907b754e-d88c-4a77-9cc3-68da290bddc9","Type":"ContainerDied","Data":"215fe18352fc5d85daeeb08d0b3ea5e8d3d6f7439060df0379ed9fb26da3438c"}
Dec 06 05:57:36 crc kubenswrapper[4958]: I1206 05:57:36.737037 4958 scope.go:117] "RemoveContainer" containerID="3a83ff7168e8c1a5f38b663104cf2c5abf1a403e68361354fad27c020a6e2b18"
Dec 06 05:57:36 crc kubenswrapper[4958]: I1206 05:57:36.737188 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 06 05:57:36 crc kubenswrapper[4958]: I1206 05:57:36.744105 4958 generic.go:334] "Generic (PLEG): container finished" podID="e7c5ea63-6267-4f3a-9ce3-70ea211f8645" containerID="da3cf7c39c616ffcb489f1e31ba63aa99b305229c9c2ae6b8914621e9104912c" exitCode=137
Dec 06 05:57:36 crc kubenswrapper[4958]: I1206 05:57:36.744175 4958 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/nova-metadata-0" Dec 06 05:57:36 crc kubenswrapper[4958]: I1206 05:57:36.744160 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e7c5ea63-6267-4f3a-9ce3-70ea211f8645","Type":"ContainerDied","Data":"da3cf7c39c616ffcb489f1e31ba63aa99b305229c9c2ae6b8914621e9104912c"} Dec 06 05:57:36 crc kubenswrapper[4958]: I1206 05:57:36.745039 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"e7c5ea63-6267-4f3a-9ce3-70ea211f8645","Type":"ContainerDied","Data":"ab7adf4a867ed419ed966e8c90116d6128bf1bca3c0cff2eeed406579150f921"} Dec 06 05:57:36 crc kubenswrapper[4958]: I1206 05:57:36.764803 4958 scope.go:117] "RemoveContainer" containerID="6f16abefe5365da4b552b7688c81950f078cb52b8d89033e464397d853362e3b" Dec 06 05:57:36 crc kubenswrapper[4958]: I1206 05:57:36.781663 4958 scope.go:117] "RemoveContainer" containerID="76482319fcc556fdf75b8fed768f89e013c5868542b946b15d39d6fd51033a84" Dec 06 05:57:36 crc kubenswrapper[4958]: I1206 05:57:36.800679 4958 scope.go:117] "RemoveContainer" containerID="6bd2b1aa635382ff29e00e6e90d7471ce87ca0a4c3de524d5bbc5d5b1785bae8" Dec 06 05:57:36 crc kubenswrapper[4958]: I1206 05:57:36.820110 4958 scope.go:117] "RemoveContainer" containerID="3a83ff7168e8c1a5f38b663104cf2c5abf1a403e68361354fad27c020a6e2b18" Dec 06 05:57:36 crc kubenswrapper[4958]: E1206 05:57:36.820449 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a83ff7168e8c1a5f38b663104cf2c5abf1a403e68361354fad27c020a6e2b18\": container with ID starting with 3a83ff7168e8c1a5f38b663104cf2c5abf1a403e68361354fad27c020a6e2b18 not found: ID does not exist" containerID="3a83ff7168e8c1a5f38b663104cf2c5abf1a403e68361354fad27c020a6e2b18" Dec 06 05:57:36 crc kubenswrapper[4958]: I1206 05:57:36.820504 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a83ff7168e8c1a5f38b663104cf2c5abf1a403e68361354fad27c020a6e2b18"} err="failed to get container status \"3a83ff7168e8c1a5f38b663104cf2c5abf1a403e68361354fad27c020a6e2b18\": rpc error: code = NotFound desc = could not find container \"3a83ff7168e8c1a5f38b663104cf2c5abf1a403e68361354fad27c020a6e2b18\": container with ID starting with 3a83ff7168e8c1a5f38b663104cf2c5abf1a403e68361354fad27c020a6e2b18 not found: ID does not exist" Dec 06 05:57:36 crc kubenswrapper[4958]: I1206 05:57:36.820533 4958 scope.go:117] "RemoveContainer" containerID="6f16abefe5365da4b552b7688c81950f078cb52b8d89033e464397d853362e3b" Dec 06 05:57:36 crc kubenswrapper[4958]: E1206 05:57:36.820887 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f16abefe5365da4b552b7688c81950f078cb52b8d89033e464397d853362e3b\": container with ID starting with 6f16abefe5365da4b552b7688c81950f078cb52b8d89033e464397d853362e3b not found: ID does not exist" containerID="6f16abefe5365da4b552b7688c81950f078cb52b8d89033e464397d853362e3b" Dec 06 05:57:36 crc kubenswrapper[4958]: I1206 05:57:36.820931 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f16abefe5365da4b552b7688c81950f078cb52b8d89033e464397d853362e3b"} err="failed to get container status \"6f16abefe5365da4b552b7688c81950f078cb52b8d89033e464397d853362e3b\": rpc error: code = NotFound desc = could not find container \"6f16abefe5365da4b552b7688c81950f078cb52b8d89033e464397d853362e3b\": container with 
ID starting with 6f16abefe5365da4b552b7688c81950f078cb52b8d89033e464397d853362e3b not found: ID does not exist" Dec 06 05:57:36 crc kubenswrapper[4958]: I1206 05:57:36.820959 4958 scope.go:117] "RemoveContainer" containerID="76482319fcc556fdf75b8fed768f89e013c5868542b946b15d39d6fd51033a84" Dec 06 05:57:36 crc kubenswrapper[4958]: E1206 05:57:36.821225 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76482319fcc556fdf75b8fed768f89e013c5868542b946b15d39d6fd51033a84\": container with ID starting with 76482319fcc556fdf75b8fed768f89e013c5868542b946b15d39d6fd51033a84 not found: ID does not exist" containerID="76482319fcc556fdf75b8fed768f89e013c5868542b946b15d39d6fd51033a84" Dec 06 05:57:36 crc kubenswrapper[4958]: I1206 05:57:36.821256 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76482319fcc556fdf75b8fed768f89e013c5868542b946b15d39d6fd51033a84"} err="failed to get container status \"76482319fcc556fdf75b8fed768f89e013c5868542b946b15d39d6fd51033a84\": rpc error: code = NotFound desc = could not find container \"76482319fcc556fdf75b8fed768f89e013c5868542b946b15d39d6fd51033a84\": container with ID starting with 76482319fcc556fdf75b8fed768f89e013c5868542b946b15d39d6fd51033a84 not found: ID does not exist" Dec 06 05:57:36 crc kubenswrapper[4958]: I1206 05:57:36.821272 4958 scope.go:117] "RemoveContainer" containerID="6bd2b1aa635382ff29e00e6e90d7471ce87ca0a4c3de524d5bbc5d5b1785bae8" Dec 06 05:57:36 crc kubenswrapper[4958]: E1206 05:57:36.821526 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6bd2b1aa635382ff29e00e6e90d7471ce87ca0a4c3de524d5bbc5d5b1785bae8\": container with ID starting with 6bd2b1aa635382ff29e00e6e90d7471ce87ca0a4c3de524d5bbc5d5b1785bae8 not found: ID does not exist" containerID="6bd2b1aa635382ff29e00e6e90d7471ce87ca0a4c3de524d5bbc5d5b1785bae8" Dec 06 05:57:36 crc kubenswrapper[4958]: I1206 05:57:36.821571 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6bd2b1aa635382ff29e00e6e90d7471ce87ca0a4c3de524d5bbc5d5b1785bae8"} err="failed to get container status \"6bd2b1aa635382ff29e00e6e90d7471ce87ca0a4c3de524d5bbc5d5b1785bae8\": rpc error: code = NotFound desc = could not find container \"6bd2b1aa635382ff29e00e6e90d7471ce87ca0a4c3de524d5bbc5d5b1785bae8\": container with ID starting with 6bd2b1aa635382ff29e00e6e90d7471ce87ca0a4c3de524d5bbc5d5b1785bae8 not found: ID does not exist" Dec 06 05:57:36 crc kubenswrapper[4958]: I1206 05:57:36.821592 4958 scope.go:117] "RemoveContainer" containerID="da3cf7c39c616ffcb489f1e31ba63aa99b305229c9c2ae6b8914621e9104912c" Dec 06 05:57:36 crc kubenswrapper[4958]: I1206 05:57:36.832559 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/907b754e-d88c-4a77-9cc3-68da290bddc9-scripts\") pod \"907b754e-d88c-4a77-9cc3-68da290bddc9\" (UID: \"907b754e-d88c-4a77-9cc3-68da290bddc9\") " Dec 06 05:57:36 crc kubenswrapper[4958]: I1206 05:57:36.832620 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/907b754e-d88c-4a77-9cc3-68da290bddc9-run-httpd\") pod \"907b754e-d88c-4a77-9cc3-68da290bddc9\" (UID: \"907b754e-d88c-4a77-9cc3-68da290bddc9\") " Dec 06 05:57:36 crc kubenswrapper[4958]: I1206 05:57:36.832694 4958 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-n5vng\" (UniqueName: \"kubernetes.io/projected/907b754e-d88c-4a77-9cc3-68da290bddc9-kube-api-access-n5vng\") pod \"907b754e-d88c-4a77-9cc3-68da290bddc9\" (UID: \"907b754e-d88c-4a77-9cc3-68da290bddc9\") " Dec 06 05:57:36 crc kubenswrapper[4958]: I1206 05:57:36.832754 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/907b754e-d88c-4a77-9cc3-68da290bddc9-config-data\") pod \"907b754e-d88c-4a77-9cc3-68da290bddc9\" (UID: \"907b754e-d88c-4a77-9cc3-68da290bddc9\") " Dec 06 05:57:36 crc kubenswrapper[4958]: I1206 05:57:36.832784 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/907b754e-d88c-4a77-9cc3-68da290bddc9-log-httpd\") pod \"907b754e-d88c-4a77-9cc3-68da290bddc9\" (UID: \"907b754e-d88c-4a77-9cc3-68da290bddc9\") " Dec 06 05:57:36 crc kubenswrapper[4958]: I1206 05:57:36.832832 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7c5ea63-6267-4f3a-9ce3-70ea211f8645-combined-ca-bundle\") pod \"e7c5ea63-6267-4f3a-9ce3-70ea211f8645\" (UID: \"e7c5ea63-6267-4f3a-9ce3-70ea211f8645\") " Dec 06 05:57:36 crc kubenswrapper[4958]: I1206 05:57:36.832891 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-92ntn\" (UniqueName: \"kubernetes.io/projected/e7c5ea63-6267-4f3a-9ce3-70ea211f8645-kube-api-access-92ntn\") pod \"e7c5ea63-6267-4f3a-9ce3-70ea211f8645\" (UID: \"e7c5ea63-6267-4f3a-9ce3-70ea211f8645\") " Dec 06 05:57:36 crc kubenswrapper[4958]: I1206 05:57:36.832982 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/907b754e-d88c-4a77-9cc3-68da290bddc9-sg-core-conf-yaml\") pod \"907b754e-d88c-4a77-9cc3-68da290bddc9\" (UID: \"907b754e-d88c-4a77-9cc3-68da290bddc9\") " Dec 06 05:57:36 crc kubenswrapper[4958]: I1206 05:57:36.833035 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e7c5ea63-6267-4f3a-9ce3-70ea211f8645-logs\") pod \"e7c5ea63-6267-4f3a-9ce3-70ea211f8645\" (UID: \"e7c5ea63-6267-4f3a-9ce3-70ea211f8645\") " Dec 06 05:57:36 crc kubenswrapper[4958]: I1206 05:57:36.833743 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7c5ea63-6267-4f3a-9ce3-70ea211f8645-config-data\") pod \"e7c5ea63-6267-4f3a-9ce3-70ea211f8645\" (UID: \"e7c5ea63-6267-4f3a-9ce3-70ea211f8645\") " Dec 06 05:57:36 crc kubenswrapper[4958]: I1206 05:57:36.833782 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/907b754e-d88c-4a77-9cc3-68da290bddc9-combined-ca-bundle\") pod \"907b754e-d88c-4a77-9cc3-68da290bddc9\" (UID: \"907b754e-d88c-4a77-9cc3-68da290bddc9\") " Dec 06 05:57:36 crc kubenswrapper[4958]: I1206 05:57:36.836790 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/907b754e-d88c-4a77-9cc3-68da290bddc9-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "907b754e-d88c-4a77-9cc3-68da290bddc9" (UID: "907b754e-d88c-4a77-9cc3-68da290bddc9"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 05:57:36 crc kubenswrapper[4958]: I1206 05:57:36.838648 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/907b754e-d88c-4a77-9cc3-68da290bddc9-scripts" (OuterVolumeSpecName: "scripts") pod "907b754e-d88c-4a77-9cc3-68da290bddc9" (UID: "907b754e-d88c-4a77-9cc3-68da290bddc9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:57:36 crc kubenswrapper[4958]: I1206 05:57:36.838885 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7c5ea63-6267-4f3a-9ce3-70ea211f8645-logs" (OuterVolumeSpecName: "logs") pod "e7c5ea63-6267-4f3a-9ce3-70ea211f8645" (UID: "e7c5ea63-6267-4f3a-9ce3-70ea211f8645"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 05:57:36 crc kubenswrapper[4958]: I1206 05:57:36.840717 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/907b754e-d88c-4a77-9cc3-68da290bddc9-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "907b754e-d88c-4a77-9cc3-68da290bddc9" (UID: "907b754e-d88c-4a77-9cc3-68da290bddc9"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 05:57:36 crc kubenswrapper[4958]: I1206 05:57:36.843069 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7c5ea63-6267-4f3a-9ce3-70ea211f8645-kube-api-access-92ntn" (OuterVolumeSpecName: "kube-api-access-92ntn") pod "e7c5ea63-6267-4f3a-9ce3-70ea211f8645" (UID: "e7c5ea63-6267-4f3a-9ce3-70ea211f8645"). InnerVolumeSpecName "kube-api-access-92ntn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:57:36 crc kubenswrapper[4958]: I1206 05:57:36.844442 4958 scope.go:117] "RemoveContainer" containerID="6d1f96f043dac650a1879872dd3d8140f8554ef818b7fd1e52da935db3d09401" Dec 06 05:57:36 crc kubenswrapper[4958]: I1206 05:57:36.844820 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/907b754e-d88c-4a77-9cc3-68da290bddc9-kube-api-access-n5vng" (OuterVolumeSpecName: "kube-api-access-n5vng") pod "907b754e-d88c-4a77-9cc3-68da290bddc9" (UID: "907b754e-d88c-4a77-9cc3-68da290bddc9"). InnerVolumeSpecName "kube-api-access-n5vng". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:57:36 crc kubenswrapper[4958]: I1206 05:57:36.868046 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7c5ea63-6267-4f3a-9ce3-70ea211f8645-config-data" (OuterVolumeSpecName: "config-data") pod "e7c5ea63-6267-4f3a-9ce3-70ea211f8645" (UID: "e7c5ea63-6267-4f3a-9ce3-70ea211f8645"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:57:36 crc kubenswrapper[4958]: I1206 05:57:36.869007 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/907b754e-d88c-4a77-9cc3-68da290bddc9-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "907b754e-d88c-4a77-9cc3-68da290bddc9" (UID: "907b754e-d88c-4a77-9cc3-68da290bddc9"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:57:36 crc kubenswrapper[4958]: I1206 05:57:36.874126 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7c5ea63-6267-4f3a-9ce3-70ea211f8645-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e7c5ea63-6267-4f3a-9ce3-70ea211f8645" (UID: "e7c5ea63-6267-4f3a-9ce3-70ea211f8645"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:57:36 crc kubenswrapper[4958]: I1206 05:57:36.875582 4958 scope.go:117] "RemoveContainer" containerID="da3cf7c39c616ffcb489f1e31ba63aa99b305229c9c2ae6b8914621e9104912c" Dec 06 05:57:36 crc kubenswrapper[4958]: E1206 05:57:36.876041 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da3cf7c39c616ffcb489f1e31ba63aa99b305229c9c2ae6b8914621e9104912c\": container with ID starting with da3cf7c39c616ffcb489f1e31ba63aa99b305229c9c2ae6b8914621e9104912c not found: ID does not exist" containerID="da3cf7c39c616ffcb489f1e31ba63aa99b305229c9c2ae6b8914621e9104912c" Dec 06 05:57:36 crc kubenswrapper[4958]: I1206 05:57:36.876077 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da3cf7c39c616ffcb489f1e31ba63aa99b305229c9c2ae6b8914621e9104912c"} err="failed to get container status \"da3cf7c39c616ffcb489f1e31ba63aa99b305229c9c2ae6b8914621e9104912c\": rpc error: code = NotFound desc = could not find container \"da3cf7c39c616ffcb489f1e31ba63aa99b305229c9c2ae6b8914621e9104912c\": container with ID starting with da3cf7c39c616ffcb489f1e31ba63aa99b305229c9c2ae6b8914621e9104912c not found: ID does not exist" Dec 06 05:57:36 crc kubenswrapper[4958]: I1206 05:57:36.876104 4958 scope.go:117] "RemoveContainer" containerID="6d1f96f043dac650a1879872dd3d8140f8554ef818b7fd1e52da935db3d09401" Dec 06 05:57:36 crc kubenswrapper[4958]: E1206 05:57:36.876449 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d1f96f043dac650a1879872dd3d8140f8554ef818b7fd1e52da935db3d09401\": container with ID starting with 6d1f96f043dac650a1879872dd3d8140f8554ef818b7fd1e52da935db3d09401 not found: ID does not exist" containerID="6d1f96f043dac650a1879872dd3d8140f8554ef818b7fd1e52da935db3d09401" Dec 06 05:57:36 crc kubenswrapper[4958]: I1206 05:57:36.876490 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d1f96f043dac650a1879872dd3d8140f8554ef818b7fd1e52da935db3d09401"} err="failed to get container status \"6d1f96f043dac650a1879872dd3d8140f8554ef818b7fd1e52da935db3d09401\": rpc error: code = NotFound desc = could not find container \"6d1f96f043dac650a1879872dd3d8140f8554ef818b7fd1e52da935db3d09401\": container with ID starting with 6d1f96f043dac650a1879872dd3d8140f8554ef818b7fd1e52da935db3d09401 not found: ID does not exist" Dec 06 05:57:36 crc kubenswrapper[4958]: I1206 05:57:36.911079 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/907b754e-d88c-4a77-9cc3-68da290bddc9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "907b754e-d88c-4a77-9cc3-68da290bddc9" (UID: "907b754e-d88c-4a77-9cc3-68da290bddc9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:57:36 crc kubenswrapper[4958]: I1206 05:57:36.936593 4958 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/907b754e-d88c-4a77-9cc3-68da290bddc9-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 06 05:57:36 crc kubenswrapper[4958]: I1206 05:57:36.936634 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e7c5ea63-6267-4f3a-9ce3-70ea211f8645-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 05:57:36 crc kubenswrapper[4958]: I1206 05:57:36.936650 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-92ntn\" (UniqueName: \"kubernetes.io/projected/e7c5ea63-6267-4f3a-9ce3-70ea211f8645-kube-api-access-92ntn\") on node \"crc\" DevicePath \"\"" Dec 06 05:57:36 crc kubenswrapper[4958]: I1206 05:57:36.936662 4958 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/907b754e-d88c-4a77-9cc3-68da290bddc9-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 06 05:57:36 crc kubenswrapper[4958]: I1206 05:57:36.936674 4958 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e7c5ea63-6267-4f3a-9ce3-70ea211f8645-logs\") on node \"crc\" DevicePath \"\"" Dec 06 05:57:36 crc kubenswrapper[4958]: I1206 05:57:36.936685 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e7c5ea63-6267-4f3a-9ce3-70ea211f8645-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 05:57:36 crc kubenswrapper[4958]: I1206 05:57:36.936698 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/907b754e-d88c-4a77-9cc3-68da290bddc9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 05:57:36 crc kubenswrapper[4958]: I1206 05:57:36.936711 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/907b754e-d88c-4a77-9cc3-68da290bddc9-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 05:57:36 crc kubenswrapper[4958]: I1206 05:57:36.936722 4958 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/907b754e-d88c-4a77-9cc3-68da290bddc9-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 06 05:57:36 crc kubenswrapper[4958]: I1206 05:57:36.936733 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n5vng\" (UniqueName: \"kubernetes.io/projected/907b754e-d88c-4a77-9cc3-68da290bddc9-kube-api-access-n5vng\") on node \"crc\" DevicePath \"\"" Dec 06 05:57:36 crc kubenswrapper[4958]: I1206 05:57:36.938338 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/907b754e-d88c-4a77-9cc3-68da290bddc9-config-data" (OuterVolumeSpecName: "config-data") pod "907b754e-d88c-4a77-9cc3-68da290bddc9" (UID: "907b754e-d88c-4a77-9cc3-68da290bddc9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:57:37 crc kubenswrapper[4958]: I1206 05:57:37.038508 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/907b754e-d88c-4a77-9cc3-68da290bddc9-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 05:57:37 crc kubenswrapper[4958]: I1206 05:57:37.080790 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 06 05:57:37 crc kubenswrapper[4958]: I1206 05:57:37.103050 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 06 05:57:37 crc kubenswrapper[4958]: I1206 05:57:37.112646 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 06 05:57:37 crc kubenswrapper[4958]: I1206 05:57:37.126588 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Dec 06 05:57:37 crc kubenswrapper[4958]: I1206 05:57:37.136797 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 06 05:57:37 crc kubenswrapper[4958]: E1206 05:57:37.137181 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7c5ea63-6267-4f3a-9ce3-70ea211f8645" containerName="nova-metadata-log" Dec 06 05:57:37 crc kubenswrapper[4958]: I1206 05:57:37.137197 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7c5ea63-6267-4f3a-9ce3-70ea211f8645" containerName="nova-metadata-log" Dec 06 05:57:37 crc kubenswrapper[4958]: E1206 05:57:37.137208 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7c5ea63-6267-4f3a-9ce3-70ea211f8645" containerName="nova-metadata-metadata" Dec 06 05:57:37 crc kubenswrapper[4958]: I1206 05:57:37.137214 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7c5ea63-6267-4f3a-9ce3-70ea211f8645" containerName="nova-metadata-metadata" Dec 06 05:57:37 crc kubenswrapper[4958]: E1206 05:57:37.137232 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="907b754e-d88c-4a77-9cc3-68da290bddc9" containerName="sg-core" Dec 06 05:57:37 crc kubenswrapper[4958]: I1206 05:57:37.137268 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="907b754e-d88c-4a77-9cc3-68da290bddc9" containerName="sg-core" Dec 06 05:57:37 crc kubenswrapper[4958]: E1206 05:57:37.137278 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="907b754e-d88c-4a77-9cc3-68da290bddc9" containerName="ceilometer-notification-agent" Dec 06 05:57:37 crc kubenswrapper[4958]: I1206 05:57:37.137284 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="907b754e-d88c-4a77-9cc3-68da290bddc9" containerName="ceilometer-notification-agent" Dec 06 05:57:37 crc kubenswrapper[4958]: E1206 05:57:37.137305 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="907b754e-d88c-4a77-9cc3-68da290bddc9" containerName="ceilometer-central-agent" Dec 06 05:57:37 crc kubenswrapper[4958]: I1206 05:57:37.137310 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="907b754e-d88c-4a77-9cc3-68da290bddc9" containerName="ceilometer-central-agent" Dec 06 05:57:37 crc kubenswrapper[4958]: E1206 05:57:37.137328 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="907b754e-d88c-4a77-9cc3-68da290bddc9" containerName="proxy-httpd" Dec 06 05:57:37 crc kubenswrapper[4958]: I1206 05:57:37.137333 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="907b754e-d88c-4a77-9cc3-68da290bddc9" containerName="proxy-httpd" Dec 06 05:57:37 crc kubenswrapper[4958]: I1206 05:57:37.137522 4958 
memory_manager.go:354] "RemoveStaleState removing state" podUID="e7c5ea63-6267-4f3a-9ce3-70ea211f8645" containerName="nova-metadata-metadata" Dec 06 05:57:37 crc kubenswrapper[4958]: I1206 05:57:37.137530 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="907b754e-d88c-4a77-9cc3-68da290bddc9" containerName="sg-core" Dec 06 05:57:37 crc kubenswrapper[4958]: I1206 05:57:37.137539 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="907b754e-d88c-4a77-9cc3-68da290bddc9" containerName="ceilometer-notification-agent" Dec 06 05:57:37 crc kubenswrapper[4958]: I1206 05:57:37.137553 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="907b754e-d88c-4a77-9cc3-68da290bddc9" containerName="ceilometer-central-agent" Dec 06 05:57:37 crc kubenswrapper[4958]: I1206 05:57:37.137565 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7c5ea63-6267-4f3a-9ce3-70ea211f8645" containerName="nova-metadata-log" Dec 06 05:57:37 crc kubenswrapper[4958]: I1206 05:57:37.137580 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="907b754e-d88c-4a77-9cc3-68da290bddc9" containerName="proxy-httpd" Dec 06 05:57:37 crc kubenswrapper[4958]: I1206 05:57:37.139335 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 06 05:57:37 crc kubenswrapper[4958]: I1206 05:57:37.142710 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Dec 06 05:57:37 crc kubenswrapper[4958]: I1206 05:57:37.142792 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 06 05:57:37 crc kubenswrapper[4958]: I1206 05:57:37.142891 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 06 05:57:37 crc kubenswrapper[4958]: I1206 05:57:37.149959 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 06 05:57:37 crc kubenswrapper[4958]: I1206 05:57:37.151585 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0"
Dec 06 05:57:37 crc kubenswrapper[4958]: I1206 05:57:37.155234 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Dec 06 05:57:37 crc kubenswrapper[4958]: I1206 05:57:37.155430 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Dec 06 05:57:37 crc kubenswrapper[4958]: I1206 05:57:37.156235 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Dec 06 05:57:37 crc kubenswrapper[4958]: I1206 05:57:37.165245 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Dec 06 05:57:37 crc kubenswrapper[4958]: I1206 05:57:37.241165 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9b53759-43fe-4c25-8787-2dce20353746-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"a9b53759-43fe-4c25-8787-2dce20353746\") " pod="openstack/nova-metadata-0"
Dec 06 05:57:37 crc kubenswrapper[4958]: I1206 05:57:37.241242 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a16e4574-65dd-4a63-8513-29e1ffdbc503-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a16e4574-65dd-4a63-8513-29e1ffdbc503\") " pod="openstack/ceilometer-0"
Dec 06 05:57:37 crc kubenswrapper[4958]: I1206 05:57:37.241267 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a16e4574-65dd-4a63-8513-29e1ffdbc503-config-data\") pod \"ceilometer-0\" (UID: \"a16e4574-65dd-4a63-8513-29e1ffdbc503\") " pod="openstack/ceilometer-0"
Dec 06 05:57:37 crc kubenswrapper[4958]: I1206 05:57:37.241284 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a16e4574-65dd-4a63-8513-29e1ffdbc503-run-httpd\") pod \"ceilometer-0\" (UID: \"a16e4574-65dd-4a63-8513-29e1ffdbc503\") " pod="openstack/ceilometer-0"
Dec 06 05:57:37 crc kubenswrapper[4958]: I1206 05:57:37.241299 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a16e4574-65dd-4a63-8513-29e1ffdbc503-log-httpd\") pod \"ceilometer-0\" (UID: \"a16e4574-65dd-4a63-8513-29e1ffdbc503\") " pod="openstack/ceilometer-0"
Dec 06 05:57:37 crc kubenswrapper[4958]: I1206 05:57:37.241510 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a16e4574-65dd-4a63-8513-29e1ffdbc503-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a16e4574-65dd-4a63-8513-29e1ffdbc503\") " pod="openstack/ceilometer-0"
Dec 06 05:57:37 crc kubenswrapper[4958]: I1206 05:57:37.241572 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fslbp\" (UniqueName: \"kubernetes.io/projected/a16e4574-65dd-4a63-8513-29e1ffdbc503-kube-api-access-fslbp\") pod \"ceilometer-0\" (UID: \"a16e4574-65dd-4a63-8513-29e1ffdbc503\") " pod="openstack/ceilometer-0"
Dec 06 05:57:37 crc kubenswrapper[4958]: I1206 05:57:37.241618 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7slw\" (UniqueName: \"kubernetes.io/projected/a9b53759-43fe-4c25-8787-2dce20353746-kube-api-access-c7slw\") pod \"nova-metadata-0\" (UID: \"a9b53759-43fe-4c25-8787-2dce20353746\") " pod="openstack/nova-metadata-0"
Dec 06 05:57:37 crc kubenswrapper[4958]: I1206 05:57:37.241675 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a16e4574-65dd-4a63-8513-29e1ffdbc503-scripts\") pod \"ceilometer-0\" (UID: \"a16e4574-65dd-4a63-8513-29e1ffdbc503\") " pod="openstack/ceilometer-0"
Dec 06 05:57:37 crc kubenswrapper[4958]: I1206 05:57:37.241818 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a9b53759-43fe-4c25-8787-2dce20353746-logs\") pod \"nova-metadata-0\" (UID: \"a9b53759-43fe-4c25-8787-2dce20353746\") " pod="openstack/nova-metadata-0"
Dec 06 05:57:37 crc kubenswrapper[4958]: I1206 05:57:37.241844 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9b53759-43fe-4c25-8787-2dce20353746-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a9b53759-43fe-4c25-8787-2dce20353746\") " pod="openstack/nova-metadata-0"
Dec 06 05:57:37 crc kubenswrapper[4958]: I1206 05:57:37.241874 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9b53759-43fe-4c25-8787-2dce20353746-config-data\") pod \"nova-metadata-0\" (UID: \"a9b53759-43fe-4c25-8787-2dce20353746\") " pod="openstack/nova-metadata-0"
Dec 06 05:57:37 crc kubenswrapper[4958]: I1206 05:57:37.241895 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a16e4574-65dd-4a63-8513-29e1ffdbc503-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a16e4574-65dd-4a63-8513-29e1ffdbc503\") " pod="openstack/ceilometer-0"
Dec 06 05:57:37 crc kubenswrapper[4958]: I1206 05:57:37.343454 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a16e4574-65dd-4a63-8513-29e1ffdbc503-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a16e4574-65dd-4a63-8513-29e1ffdbc503\") " pod="openstack/ceilometer-0"
Dec 06 05:57:37 crc kubenswrapper[4958]: I1206 05:57:37.343702 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a16e4574-65dd-4a63-8513-29e1ffdbc503-config-data\") pod \"ceilometer-0\" (UID: \"a16e4574-65dd-4a63-8513-29e1ffdbc503\") " pod="openstack/ceilometer-0"
Dec 06 05:57:37 crc kubenswrapper[4958]: I1206 05:57:37.343772 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a16e4574-65dd-4a63-8513-29e1ffdbc503-run-httpd\") pod \"ceilometer-0\" (UID: \"a16e4574-65dd-4a63-8513-29e1ffdbc503\") " pod="openstack/ceilometer-0"
Dec 06 05:57:37 crc kubenswrapper[4958]: I1206 05:57:37.343833 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a16e4574-65dd-4a63-8513-29e1ffdbc503-log-httpd\") pod \"ceilometer-0\" (UID: \"a16e4574-65dd-4a63-8513-29e1ffdbc503\") " pod="openstack/ceilometer-0"
Dec 06 05:57:37 crc kubenswrapper[4958]: I1206 05:57:37.343977 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a16e4574-65dd-4a63-8513-29e1ffdbc503-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a16e4574-65dd-4a63-8513-29e1ffdbc503\") " pod="openstack/ceilometer-0"
Dec 06 05:57:37 crc kubenswrapper[4958]: I1206 05:57:37.344075 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fslbp\" (UniqueName: \"kubernetes.io/projected/a16e4574-65dd-4a63-8513-29e1ffdbc503-kube-api-access-fslbp\") pod \"ceilometer-0\" (UID: \"a16e4574-65dd-4a63-8513-29e1ffdbc503\") " pod="openstack/ceilometer-0"
Dec 06 05:57:37 crc kubenswrapper[4958]: I1206 05:57:37.344168 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7slw\" (UniqueName: \"kubernetes.io/projected/a9b53759-43fe-4c25-8787-2dce20353746-kube-api-access-c7slw\") pod \"nova-metadata-0\" (UID: \"a9b53759-43fe-4c25-8787-2dce20353746\") " pod="openstack/nova-metadata-0"
Dec 06 05:57:37 crc kubenswrapper[4958]: I1206 05:57:37.344266 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a16e4574-65dd-4a63-8513-29e1ffdbc503-scripts\") pod \"ceilometer-0\" (UID: \"a16e4574-65dd-4a63-8513-29e1ffdbc503\") " pod="openstack/ceilometer-0"
Dec 06 05:57:37 crc kubenswrapper[4958]: I1206 05:57:37.344423 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a9b53759-43fe-4c25-8787-2dce20353746-logs\") pod \"nova-metadata-0\" (UID: \"a9b53759-43fe-4c25-8787-2dce20353746\") " pod="openstack/nova-metadata-0"
Dec 06 05:57:37 crc kubenswrapper[4958]: I1206 05:57:37.344539 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9b53759-43fe-4c25-8787-2dce20353746-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a9b53759-43fe-4c25-8787-2dce20353746\") " pod="openstack/nova-metadata-0"
Dec 06 05:57:37 crc kubenswrapper[4958]: I1206 05:57:37.344635 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9b53759-43fe-4c25-8787-2dce20353746-config-data\") pod \"nova-metadata-0\" (UID: \"a9b53759-43fe-4c25-8787-2dce20353746\") " pod="openstack/nova-metadata-0"
Dec 06 05:57:37 crc kubenswrapper[4958]: I1206 05:57:37.344717 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a16e4574-65dd-4a63-8513-29e1ffdbc503-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a16e4574-65dd-4a63-8513-29e1ffdbc503\") " pod="openstack/ceilometer-0"
Dec 06 05:57:37 crc kubenswrapper[4958]: I1206 05:57:37.344855 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9b53759-43fe-4c25-8787-2dce20353746-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"a9b53759-43fe-4c25-8787-2dce20353746\") " pod="openstack/nova-metadata-0"
Dec 06 05:57:37 crc kubenswrapper[4958]: I1206 05:57:37.344965 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a9b53759-43fe-4c25-8787-2dce20353746-logs\") pod \"nova-metadata-0\" (UID: \"a9b53759-43fe-4c25-8787-2dce20353746\") " pod="openstack/nova-metadata-0"
Dec 06 05:57:37 crc kubenswrapper[4958]: I1206 05:57:37.345783 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a16e4574-65dd-4a63-8513-29e1ffdbc503-log-httpd\") pod \"ceilometer-0\" (UID: \"a16e4574-65dd-4a63-8513-29e1ffdbc503\") " pod="openstack/ceilometer-0"
Dec 06 05:57:37 crc kubenswrapper[4958]: I1206 05:57:37.346265 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a16e4574-65dd-4a63-8513-29e1ffdbc503-run-httpd\") pod \"ceilometer-0\" (UID: \"a16e4574-65dd-4a63-8513-29e1ffdbc503\") " pod="openstack/ceilometer-0"
Dec 06 05:57:37 crc kubenswrapper[4958]: I1206 05:57:37.349418 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9b53759-43fe-4c25-8787-2dce20353746-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"a9b53759-43fe-4c25-8787-2dce20353746\") " pod="openstack/nova-metadata-0"
Dec 06 05:57:37 crc kubenswrapper[4958]: I1206 05:57:37.349641 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a16e4574-65dd-4a63-8513-29e1ffdbc503-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a16e4574-65dd-4a63-8513-29e1ffdbc503\") " pod="openstack/ceilometer-0"
Dec 06 05:57:37 crc kubenswrapper[4958]: I1206 05:57:37.350087 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a16e4574-65dd-4a63-8513-29e1ffdbc503-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a16e4574-65dd-4a63-8513-29e1ffdbc503\") " pod="openstack/ceilometer-0"
Dec 06 05:57:37 crc kubenswrapper[4958]: I1206 05:57:37.350183 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a16e4574-65dd-4a63-8513-29e1ffdbc503-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"a16e4574-65dd-4a63-8513-29e1ffdbc503\") " pod="openstack/ceilometer-0"
Dec 06 05:57:37 crc kubenswrapper[4958]: I1206 05:57:37.349444 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a16e4574-65dd-4a63-8513-29e1ffdbc503-config-data\") pod \"ceilometer-0\" (UID: \"a16e4574-65dd-4a63-8513-29e1ffdbc503\") " pod="openstack/ceilometer-0"
Dec 06 05:57:37 crc kubenswrapper[4958]: I1206 05:57:37.361416 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a16e4574-65dd-4a63-8513-29e1ffdbc503-scripts\") pod \"ceilometer-0\" (UID: \"a16e4574-65dd-4a63-8513-29e1ffdbc503\") " pod="openstack/ceilometer-0"
Dec 06 05:57:37 crc kubenswrapper[4958]: I1206 05:57:37.361710 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9b53759-43fe-4c25-8787-2dce20353746-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"a9b53759-43fe-4c25-8787-2dce20353746\") " pod="openstack/nova-metadata-0"
Dec 06 05:57:37 crc kubenswrapper[4958]: I1206 05:57:37.363665 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fslbp\" (UniqueName: \"kubernetes.io/projected/a16e4574-65dd-4a63-8513-29e1ffdbc503-kube-api-access-fslbp\") pod \"ceilometer-0\" (UID: \"a16e4574-65dd-4a63-8513-29e1ffdbc503\") " pod="openstack/ceilometer-0"
Dec 06 05:57:37 crc kubenswrapper[4958]: I1206 05:57:37.363955 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9b53759-43fe-4c25-8787-2dce20353746-config-data\") pod \"nova-metadata-0\" (UID: \"a9b53759-43fe-4c25-8787-2dce20353746\") " pod="openstack/nova-metadata-0"
Dec 06 05:57:37 crc kubenswrapper[4958]: I1206 05:57:37.364344 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7slw\" (UniqueName: \"kubernetes.io/projected/a9b53759-43fe-4c25-8787-2dce20353746-kube-api-access-c7slw\") pod \"nova-metadata-0\" (UID: \"a9b53759-43fe-4c25-8787-2dce20353746\") " pod="openstack/nova-metadata-0"
Dec 06 05:57:37 crc kubenswrapper[4958]: I1206 05:57:37.556249 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 06 05:57:37 crc kubenswrapper[4958]: I1206 05:57:37.567286 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Dec 06 05:57:37 crc kubenswrapper[4958]: I1206 05:57:37.791996 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="907b754e-d88c-4a77-9cc3-68da290bddc9" path="/var/lib/kubelet/pods/907b754e-d88c-4a77-9cc3-68da290bddc9/volumes"
Dec 06 05:57:37 crc kubenswrapper[4958]: I1206 05:57:37.793246 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7c5ea63-6267-4f3a-9ce3-70ea211f8645" path="/var/lib/kubelet/pods/e7c5ea63-6267-4f3a-9ce3-70ea211f8645/volumes"
Dec 06 05:57:38 crc kubenswrapper[4958]: I1206 05:57:38.057933 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Dec 06 05:57:38 crc kubenswrapper[4958]: I1206 05:57:38.066685 4958 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Dec 06 05:57:38 crc kubenswrapper[4958]: I1206 05:57:38.131963 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Dec 06 05:57:38 crc kubenswrapper[4958]: E1206 05:57:38.329133 4958 manager.go:1116] Failed to create existing container: /kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8171df7d_9d9e_4c16_ab55_8a2401a919d2.slice/crio-6ded9dd6dd594d817a5941b1e32bb94e6526962adaa9fb5a8a323714c211fa1a: Error finding container 6ded9dd6dd594d817a5941b1e32bb94e6526962adaa9fb5a8a323714c211fa1a: Status 404 returned error can't find the container with id 6ded9dd6dd594d817a5941b1e32bb94e6526962adaa9fb5a8a323714c211fa1a
Dec 06 05:57:38 crc kubenswrapper[4958]: E1206 05:57:38.336357 4958 manager.go:1116] Failed to create existing container: /kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod70849ca2_0c84_4cb0_8366_0d24a78af2db.slice/crio-199d0faafb9bc8c7bc34c1276a950671fe442f333219e05b0fbfba3a30aa9193: Error finding container 199d0faafb9bc8c7bc34c1276a950671fe442f333219e05b0fbfba3a30aa9193: Status 404 returned error can't find the container with id 199d0faafb9bc8c7bc34c1276a950671fe442f333219e05b0fbfba3a30aa9193
Dec 06 05:57:38 crc kubenswrapper[4958]: I1206 05:57:38.352075 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Dec 06 05:57:38 crc kubenswrapper[4958]: E1206 05:57:38.646689 4958 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod907b754e_d88c_4a77_9cc3_68da290bddc9.slice/crio-76482319fcc556fdf75b8fed768f89e013c5868542b946b15d39d6fd51033a84.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod70849ca2_0c84_4cb0_8366_0d24a78af2db.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode7c5ea63_6267_4f3a_9ce3_70ea211f8645.slice/crio-ab7adf4a867ed419ed966e8c90116d6128bf1bca3c0cff2eeed406579150f921\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod636ba8d3_ffe7_42ba_85eb_0cd2da08036d.slice/crio-4a74ee9c5f5b84f5bac2c68a782f4f67f79194d4192462ca2e55b1b90883af13\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod636ba8d3_ffe7_42ba_85eb_0cd2da08036d.slice/crio-910ac913a946f9fd571fc99ac615a3001fd13d7cfec7fbc2803e8d7e4fbf42d2.scope\": RecentStats: unable to find data in memory cache], [\"/system.slice/rpm-ostreed.service\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod907b754e_d88c_4a77_9cc3_68da290bddc9.slice/crio-6f16abefe5365da4b552b7688c81950f078cb52b8d89033e464397d853362e3b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod907b754e_d88c_4a77_9cc3_68da290bddc9.slice/crio-3a83ff7168e8c1a5f38b663104cf2c5abf1a403e68361354fad27c020a6e2b18.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2240dee5_48bc_4ce5_a802_3b3f6c38ef64.slice/crio-conmon-a6b435cfda03e3570c567a290686c52a7009d59aa5dc422c879c49c4c188f9cd.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod907b754e_d88c_4a77_9cc3_68da290bddc9.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod907b754e_d88c_4a77_9cc3_68da290bddc9.slice/crio-6bd2b1aa635382ff29e00e6e90d7471ce87ca0a4c3de524d5bbc5d5b1785bae8.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode7c5ea63_6267_4f3a_9ce3_70ea211f8645.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod636ba8d3_ffe7_42ba_85eb_0cd2da08036d.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod907b754e_d88c_4a77_9cc3_68da290bddc9.slice/crio-conmon-76482319fcc556fdf75b8fed768f89e013c5868542b946b15d39d6fd51033a84.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod636ba8d3_ffe7_42ba_85eb_0cd2da08036d.slice/crio-conmon-910ac913a946f9fd571fc99ac615a3001fd13d7cfec7fbc2803e8d7e4fbf42d2.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode7c5ea63_6267_4f3a_9ce3_70ea211f8645.slice/crio-conmon-da3cf7c39c616ffcb489f1e31ba63aa99b305229c9c2ae6b8914621e9104912c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod907b754e_d88c_4a77_9cc3_68da290bddc9.slice/crio-215fe18352fc5d85daeeb08d0b3ea5e8d3d6f7439060df0379ed9fb26da3438c\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod907b754e_d88c_4a77_9cc3_68da290bddc9.slice/crio-conmon-3a83ff7168e8c1a5f38b663104cf2c5abf1a403e68361354fad27c020a6e2b18.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod907b754e_d88c_4a77_9cc3_68da290bddc9.slice/crio-conmon-6bd2b1aa635382ff29e00e6e90d7471ce87ca0a4c3de524d5bbc5d5b1785bae8.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod907b754e_d88c_4a77_9cc3_68da290bddc9.slice/crio-conmon-6f16abefe5365da4b552b7688c81950f078cb52b8d89033e464397d853362e3b.scope\": RecentStats: unable to find data in memory cache]"
Dec 06 05:57:38 crc kubenswrapper[4958]: I1206 05:57:38.732360 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Dec 06 05:57:38 crc kubenswrapper[4958]: I1206 05:57:38.789654 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a16e4574-65dd-4a63-8513-29e1ffdbc503","Type":"ContainerStarted","Data":"031f125043a6db07731852923c6923619594222870f299729368ad56c7ba3f63"}
Dec 06 05:57:38 crc kubenswrapper[4958]: I1206 05:57:38.789751 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a16e4574-65dd-4a63-8513-29e1ffdbc503","Type":"ContainerStarted","Data":"2107986b21788d90305b9613a03e1a3c145d967ba713e3f80d7db0c855ada06d"}
Dec 06 05:57:38 crc kubenswrapper[4958]: I1206 05:57:38.794707 4958 generic.go:334] "Generic (PLEG): container finished" podID="2240dee5-48bc-4ce5-a802-3b3f6c38ef64" containerID="a6b435cfda03e3570c567a290686c52a7009d59aa5dc422c879c49c4c188f9cd" exitCode=137
Dec 06 05:57:38 crc kubenswrapper[4958]: I1206 05:57:38.794801 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Dec 06 05:57:38 crc kubenswrapper[4958]: I1206 05:57:38.794811 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"2240dee5-48bc-4ce5-a802-3b3f6c38ef64","Type":"ContainerDied","Data":"a6b435cfda03e3570c567a290686c52a7009d59aa5dc422c879c49c4c188f9cd"}
Dec 06 05:57:38 crc kubenswrapper[4958]: I1206 05:57:38.794919 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"2240dee5-48bc-4ce5-a802-3b3f6c38ef64","Type":"ContainerDied","Data":"2f7b5df1e287bc135e575d87cc211e350a7a463daf42ec7fd2651d1faee63cfb"}
Dec 06 05:57:38 crc kubenswrapper[4958]: I1206 05:57:38.794935 4958 scope.go:117] "RemoveContainer" containerID="a6b435cfda03e3570c567a290686c52a7009d59aa5dc422c879c49c4c188f9cd"
Dec 06 05:57:38 crc kubenswrapper[4958]: I1206 05:57:38.796594 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a9b53759-43fe-4c25-8787-2dce20353746","Type":"ContainerStarted","Data":"2f89376abd585fc29ab74242e3b4854bddb8c4617f780aed7a0cda5c117d7a27"}
Dec 06 05:57:38 crc kubenswrapper[4958]: I1206 05:57:38.796642 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a9b53759-43fe-4c25-8787-2dce20353746","Type":"ContainerStarted","Data":"0aa43bf19a86154c65d61c6d99de23ee4fa6c98a070b496a200bce2289763278"}
Dec 06 05:57:38 crc kubenswrapper[4958]: I1206 05:57:38.818459 4958 scope.go:117] "RemoveContainer" containerID="a6b435cfda03e3570c567a290686c52a7009d59aa5dc422c879c49c4c188f9cd"
Dec 06 05:57:38 crc kubenswrapper[4958]: E1206 05:57:38.818847 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6b435cfda03e3570c567a290686c52a7009d59aa5dc422c879c49c4c188f9cd\": container with ID starting with a6b435cfda03e3570c567a290686c52a7009d59aa5dc422c879c49c4c188f9cd not found: ID does not exist" containerID="a6b435cfda03e3570c567a290686c52a7009d59aa5dc422c879c49c4c188f9cd"
Dec 06 05:57:38 crc kubenswrapper[4958]: I1206 05:57:38.818897 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6b435cfda03e3570c567a290686c52a7009d59aa5dc422c879c49c4c188f9cd"} err="failed to get container status \"a6b435cfda03e3570c567a290686c52a7009d59aa5dc422c879c49c4c188f9cd\": rpc error: code = NotFound desc = could not find container \"a6b435cfda03e3570c567a290686c52a7009d59aa5dc422c879c49c4c188f9cd\": container with ID starting with a6b435cfda03e3570c567a290686c52a7009d59aa5dc422c879c49c4c188f9cd not found: ID does not exist"
Dec 06 05:57:38 crc kubenswrapper[4958]: I1206 05:57:38.873224 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2240dee5-48bc-4ce5-a802-3b3f6c38ef64-combined-ca-bundle\") pod \"2240dee5-48bc-4ce5-a802-3b3f6c38ef64\" (UID: \"2240dee5-48bc-4ce5-a802-3b3f6c38ef64\") "
Dec 06 05:57:38 crc kubenswrapper[4958]: I1206 05:57:38.873588 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2240dee5-48bc-4ce5-a802-3b3f6c38ef64-config-data\") pod \"2240dee5-48bc-4ce5-a802-3b3f6c38ef64\" (UID: \"2240dee5-48bc-4ce5-a802-3b3f6c38ef64\") "
Dec 06 05:57:38 crc kubenswrapper[4958]: I1206 05:57:38.873740 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bfj27\" (UniqueName: \"kubernetes.io/projected/2240dee5-48bc-4ce5-a802-3b3f6c38ef64-kube-api-access-bfj27\") pod \"2240dee5-48bc-4ce5-a802-3b3f6c38ef64\" (UID: \"2240dee5-48bc-4ce5-a802-3b3f6c38ef64\") "
Dec 06 05:57:38 crc kubenswrapper[4958]: I1206 05:57:38.883070 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2240dee5-48bc-4ce5-a802-3b3f6c38ef64-kube-api-access-bfj27" (OuterVolumeSpecName: "kube-api-access-bfj27") pod "2240dee5-48bc-4ce5-a802-3b3f6c38ef64" (UID: "2240dee5-48bc-4ce5-a802-3b3f6c38ef64"). InnerVolumeSpecName "kube-api-access-bfj27". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 05:57:38 crc kubenswrapper[4958]: I1206 05:57:38.917638 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2240dee5-48bc-4ce5-a802-3b3f6c38ef64-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2240dee5-48bc-4ce5-a802-3b3f6c38ef64" (UID: "2240dee5-48bc-4ce5-a802-3b3f6c38ef64"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 05:57:38 crc kubenswrapper[4958]: I1206 05:57:38.926521 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2240dee5-48bc-4ce5-a802-3b3f6c38ef64-config-data" (OuterVolumeSpecName: "config-data") pod "2240dee5-48bc-4ce5-a802-3b3f6c38ef64" (UID: "2240dee5-48bc-4ce5-a802-3b3f6c38ef64"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 05:57:38 crc kubenswrapper[4958]: I1206 05:57:38.941411 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Dec 06 05:57:38 crc kubenswrapper[4958]: I1206 05:57:38.942063 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Dec 06 05:57:38 crc kubenswrapper[4958]: I1206 05:57:38.943428 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Dec 06 05:57:38 crc kubenswrapper[4958]: I1206 05:57:38.950027 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Dec 06 05:57:38 crc kubenswrapper[4958]: I1206 05:57:38.977420 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bfj27\" (UniqueName: \"kubernetes.io/projected/2240dee5-48bc-4ce5-a802-3b3f6c38ef64-kube-api-access-bfj27\") on node \"crc\" DevicePath \"\""
Dec 06 05:57:38 crc kubenswrapper[4958]: I1206 05:57:38.977461 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2240dee5-48bc-4ce5-a802-3b3f6c38ef64-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 06 05:57:38 crc kubenswrapper[4958]: I1206 05:57:38.977494 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2240dee5-48bc-4ce5-a802-3b3f6c38ef64-config-data\") on node \"crc\" DevicePath \"\""
Dec 06 05:57:39 crc kubenswrapper[4958]: I1206 05:57:39.149898 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Dec 06 05:57:39 crc kubenswrapper[4958]: I1206 05:57:39.165716 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Dec 06 05:57:39 crc kubenswrapper[4958]: I1206 05:57:39.202751 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Dec 06 05:57:39 crc kubenswrapper[4958]: E1206 05:57:39.203251 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2240dee5-48bc-4ce5-a802-3b3f6c38ef64" containerName="nova-cell1-novncproxy-novncproxy"
Dec 06 05:57:39 crc kubenswrapper[4958]: I1206 05:57:39.203270 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="2240dee5-48bc-4ce5-a802-3b3f6c38ef64" containerName="nova-cell1-novncproxy-novncproxy"
Dec 06 05:57:39 crc kubenswrapper[4958]: I1206 05:57:39.203536 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="2240dee5-48bc-4ce5-a802-3b3f6c38ef64" containerName="nova-cell1-novncproxy-novncproxy"
Dec 06 05:57:39 crc kubenswrapper[4958]: I1206 05:57:39.204385 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Dec 06 05:57:39 crc kubenswrapper[4958]: I1206 05:57:39.209207 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc"
Dec 06 05:57:39 crc kubenswrapper[4958]: I1206 05:57:39.209736 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Dec 06 05:57:39 crc kubenswrapper[4958]: I1206 05:57:39.224775 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt"
Dec 06 05:57:39 crc kubenswrapper[4958]: I1206 05:57:39.231388 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Dec 06 05:57:39 crc kubenswrapper[4958]: I1206 05:57:39.289557 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8c6bp\" (UniqueName: \"kubernetes.io/projected/fb368363-98b2-4e51-a1a0-077dcb35ccb7-kube-api-access-8c6bp\") pod \"nova-cell1-novncproxy-0\" (UID: \"fb368363-98b2-4e51-a1a0-077dcb35ccb7\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 06 05:57:39 crc kubenswrapper[4958]: I1206 05:57:39.289653 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb368363-98b2-4e51-a1a0-077dcb35ccb7-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"fb368363-98b2-4e51-a1a0-077dcb35ccb7\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 06 05:57:39 crc kubenswrapper[4958]: I1206 05:57:39.289697 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb368363-98b2-4e51-a1a0-077dcb35ccb7-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"fb368363-98b2-4e51-a1a0-077dcb35ccb7\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 06 05:57:39 crc kubenswrapper[4958]: I1206 05:57:39.289724 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb368363-98b2-4e51-a1a0-077dcb35ccb7-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"fb368363-98b2-4e51-a1a0-077dcb35ccb7\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 06 05:57:39 crc kubenswrapper[4958]: I1206 05:57:39.289766 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb368363-98b2-4e51-a1a0-077dcb35ccb7-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"fb368363-98b2-4e51-a1a0-077dcb35ccb7\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 06 05:57:39 crc kubenswrapper[4958]: I1206 05:57:39.391923 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8c6bp\" (UniqueName: \"kubernetes.io/projected/fb368363-98b2-4e51-a1a0-077dcb35ccb7-kube-api-access-8c6bp\") pod \"nova-cell1-novncproxy-0\" (UID: \"fb368363-98b2-4e51-a1a0-077dcb35ccb7\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 06 05:57:39 crc kubenswrapper[4958]: I1206 05:57:39.392033 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb368363-98b2-4e51-a1a0-077dcb35ccb7-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"fb368363-98b2-4e51-a1a0-077dcb35ccb7\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 06 05:57:39 crc kubenswrapper[4958]: I1206 05:57:39.392083 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb368363-98b2-4e51-a1a0-077dcb35ccb7-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"fb368363-98b2-4e51-a1a0-077dcb35ccb7\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 06 05:57:39 crc kubenswrapper[4958]: I1206 05:57:39.392113 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb368363-98b2-4e51-a1a0-077dcb35ccb7-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"fb368363-98b2-4e51-a1a0-077dcb35ccb7\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 06 05:57:39 crc kubenswrapper[4958]: I1206 05:57:39.392158 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb368363-98b2-4e51-a1a0-077dcb35ccb7-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"fb368363-98b2-4e51-a1a0-077dcb35ccb7\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 06 05:57:39 crc kubenswrapper[4958]: I1206 05:57:39.396966 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb368363-98b2-4e51-a1a0-077dcb35ccb7-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"fb368363-98b2-4e51-a1a0-077dcb35ccb7\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 06 05:57:39 crc kubenswrapper[4958]: I1206 05:57:39.397649 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb368363-98b2-4e51-a1a0-077dcb35ccb7-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"fb368363-98b2-4e51-a1a0-077dcb35ccb7\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 06 05:57:39 crc kubenswrapper[4958]: I1206 05:57:39.397982 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb368363-98b2-4e51-a1a0-077dcb35ccb7-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"fb368363-98b2-4e51-a1a0-077dcb35ccb7\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 06 05:57:39 crc kubenswrapper[4958]: I1206 05:57:39.412272 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8c6bp\" (UniqueName: \"kubernetes.io/projected/fb368363-98b2-4e51-a1a0-077dcb35ccb7-kube-api-access-8c6bp\") pod \"nova-cell1-novncproxy-0\" (UID: \"fb368363-98b2-4e51-a1a0-077dcb35ccb7\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 06 05:57:39 crc kubenswrapper[4958]: I1206 05:57:39.412383 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb368363-98b2-4e51-a1a0-077dcb35ccb7-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"fb368363-98b2-4e51-a1a0-077dcb35ccb7\") " pod="openstack/nova-cell1-novncproxy-0"
Dec 06 05:57:39 crc kubenswrapper[4958]: I1206 05:57:39.588957 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Dec 06 05:57:39 crc kubenswrapper[4958]: I1206 05:57:39.781075 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2240dee5-48bc-4ce5-a802-3b3f6c38ef64" path="/var/lib/kubelet/pods/2240dee5-48bc-4ce5-a802-3b3f6c38ef64/volumes"
Dec 06 05:57:39 crc kubenswrapper[4958]: I1206 05:57:39.855812 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a9b53759-43fe-4c25-8787-2dce20353746","Type":"ContainerStarted","Data":"9420d409a502bafe3f8125ebf771f7e0f92d0c2124a0fad09b858548abe99f4e"}
Dec 06 05:57:39 crc kubenswrapper[4958]: I1206 05:57:39.865547 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a16e4574-65dd-4a63-8513-29e1ffdbc503","Type":"ContainerStarted","Data":"c863d2c2972d35d79def7fa09c466dd3cb11d00942570d01c084d554429577d8"}
Dec 06 05:57:39 crc kubenswrapper[4958]: I1206 05:57:39.865856 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Dec 06 05:57:39 crc kubenswrapper[4958]: I1206 05:57:39.876723 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Dec 06 05:57:39 crc kubenswrapper[4958]: I1206 05:57:39.881972 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.881951917 podStartE2EDuration="2.881951917s" podCreationTimestamp="2025-12-06 05:57:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:57:39.87833548 +0000 UTC m=+1770.412106263" watchObservedRunningTime="2025-12-06 05:57:39.881951917 +0000 UTC m=+1770.415722680"
Dec 06 05:57:40 crc kubenswrapper[4958]: I1206 05:57:40.045336 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-54599d8f7-zjb95"]
Dec 06 05:57:40 crc kubenswrapper[4958]: I1206 05:57:40.047652 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54599d8f7-zjb95"
Dec 06 05:57:40 crc kubenswrapper[4958]: I1206 05:57:40.061701 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-54599d8f7-zjb95"]
Dec 06 05:57:40 crc kubenswrapper[4958]: I1206 05:57:40.107044 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bc90cd5d-e6ef-4216-ae6f-496ff8228ccd-dns-svc\") pod \"dnsmasq-dns-54599d8f7-zjb95\" (UID: \"bc90cd5d-e6ef-4216-ae6f-496ff8228ccd\") " pod="openstack/dnsmasq-dns-54599d8f7-zjb95"
Dec 06 05:57:40 crc kubenswrapper[4958]: I1206 05:57:40.107120 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bc90cd5d-e6ef-4216-ae6f-496ff8228ccd-ovsdbserver-sb\") pod \"dnsmasq-dns-54599d8f7-zjb95\" (UID: \"bc90cd5d-e6ef-4216-ae6f-496ff8228ccd\") " pod="openstack/dnsmasq-dns-54599d8f7-zjb95"
Dec 06 05:57:40 crc kubenswrapper[4958]: I1206 05:57:40.107143 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc90cd5d-e6ef-4216-ae6f-496ff8228ccd-config\") pod \"dnsmasq-dns-54599d8f7-zjb95\" (UID: \"bc90cd5d-e6ef-4216-ae6f-496ff8228ccd\") " pod="openstack/dnsmasq-dns-54599d8f7-zjb95"
Dec 06 05:57:40 crc kubenswrapper[4958]: I1206 05:57:40.107223 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bc90cd5d-e6ef-4216-ae6f-496ff8228ccd-ovsdbserver-nb\") pod \"dnsmasq-dns-54599d8f7-zjb95\" (UID: \"bc90cd5d-e6ef-4216-ae6f-496ff8228ccd\") " pod="openstack/dnsmasq-dns-54599d8f7-zjb95"
Dec 06 05:57:40 crc kubenswrapper[4958]: I1206 05:57:40.107267 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mztnf\" (UniqueName: \"kubernetes.io/projected/bc90cd5d-e6ef-4216-ae6f-496ff8228ccd-kube-api-access-mztnf\") pod \"dnsmasq-dns-54599d8f7-zjb95\" (UID: \"bc90cd5d-e6ef-4216-ae6f-496ff8228ccd\") " pod="openstack/dnsmasq-dns-54599d8f7-zjb95"
Dec 06 05:57:40 crc kubenswrapper[4958]: I1206 05:57:40.107287 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bc90cd5d-e6ef-4216-ae6f-496ff8228ccd-dns-swift-storage-0\") pod \"dnsmasq-dns-54599d8f7-zjb95\" (UID: \"bc90cd5d-e6ef-4216-ae6f-496ff8228ccd\") " pod="openstack/dnsmasq-dns-54599d8f7-zjb95"
Dec 06 05:57:40 crc kubenswrapper[4958]: I1206 05:57:40.135676 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Dec 06 05:57:40 crc kubenswrapper[4958]: I1206 05:57:40.211228 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bc90cd5d-e6ef-4216-ae6f-496ff8228ccd-dns-svc\") pod \"dnsmasq-dns-54599d8f7-zjb95\" (UID: \"bc90cd5d-e6ef-4216-ae6f-496ff8228ccd\") " pod="openstack/dnsmasq-dns-54599d8f7-zjb95"
Dec 06 05:57:40 crc kubenswrapper[4958]: I1206 05:57:40.211537 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bc90cd5d-e6ef-4216-ae6f-496ff8228ccd-ovsdbserver-sb\") pod \"dnsmasq-dns-54599d8f7-zjb95\" (UID: \"bc90cd5d-e6ef-4216-ae6f-496ff8228ccd\") " pod="openstack/dnsmasq-dns-54599d8f7-zjb95"
Dec 06 05:57:40 crc kubenswrapper[4958]: I1206 05:57:40.211561 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc90cd5d-e6ef-4216-ae6f-496ff8228ccd-config\") pod \"dnsmasq-dns-54599d8f7-zjb95\" (UID: \"bc90cd5d-e6ef-4216-ae6f-496ff8228ccd\") " pod="openstack/dnsmasq-dns-54599d8f7-zjb95"
Dec 06 05:57:40 crc kubenswrapper[4958]: I1206 05:57:40.211642 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bc90cd5d-e6ef-4216-ae6f-496ff8228ccd-ovsdbserver-nb\") pod \"dnsmasq-dns-54599d8f7-zjb95\" (UID: \"bc90cd5d-e6ef-4216-ae6f-496ff8228ccd\") " pod="openstack/dnsmasq-dns-54599d8f7-zjb95"
Dec 06 05:57:40 crc kubenswrapper[4958]: I1206 05:57:40.211697 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mztnf\" (UniqueName: \"kubernetes.io/projected/bc90cd5d-e6ef-4216-ae6f-496ff8228ccd-kube-api-access-mztnf\") pod \"dnsmasq-dns-54599d8f7-zjb95\" (UID: \"bc90cd5d-e6ef-4216-ae6f-496ff8228ccd\") " pod="openstack/dnsmasq-dns-54599d8f7-zjb95"
Dec 06 05:57:40 crc kubenswrapper[4958]: I1206 05:57:40.211716 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bc90cd5d-e6ef-4216-ae6f-496ff8228ccd-dns-swift-storage-0\") pod \"dnsmasq-dns-54599d8f7-zjb95\" (UID: \"bc90cd5d-e6ef-4216-ae6f-496ff8228ccd\") " pod="openstack/dnsmasq-dns-54599d8f7-zjb95"
Dec 06 05:57:40 crc kubenswrapper[4958]: I1206 05:57:40.213748 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bc90cd5d-e6ef-4216-ae6f-496ff8228ccd-dns-svc\") pod \"dnsmasq-dns-54599d8f7-zjb95\" (UID: \"bc90cd5d-e6ef-4216-ae6f-496ff8228ccd\") " pod="openstack/dnsmasq-dns-54599d8f7-zjb95"
Dec 06 05:57:40 crc kubenswrapper[4958]: I1206 05:57:40.213808 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bc90cd5d-e6ef-4216-ae6f-496ff8228ccd-dns-swift-storage-0\") pod \"dnsmasq-dns-54599d8f7-zjb95\" (UID: \"bc90cd5d-e6ef-4216-ae6f-496ff8228ccd\") " pod="openstack/dnsmasq-dns-54599d8f7-zjb95"
Dec 06 05:57:40 crc kubenswrapper[4958]: I1206 05:57:40.213895 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bc90cd5d-e6ef-4216-ae6f-496ff8228ccd-ovsdbserver-sb\") pod \"dnsmasq-dns-54599d8f7-zjb95\" (UID: \"bc90cd5d-e6ef-4216-ae6f-496ff8228ccd\") " pod="openstack/dnsmasq-dns-54599d8f7-zjb95"
Dec 06 05:57:40 crc kubenswrapper[4958]: I1206 05:57:40.219657 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc90cd5d-e6ef-4216-ae6f-496ff8228ccd-config\") pod \"dnsmasq-dns-54599d8f7-zjb95\" (UID: \"bc90cd5d-e6ef-4216-ae6f-496ff8228ccd\") " pod="openstack/dnsmasq-dns-54599d8f7-zjb95"
Dec 06 05:57:40 crc kubenswrapper[4958]: I1206 05:57:40.219831 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bc90cd5d-e6ef-4216-ae6f-496ff8228ccd-ovsdbserver-nb\") pod \"dnsmasq-dns-54599d8f7-zjb95\" (UID: \"bc90cd5d-e6ef-4216-ae6f-496ff8228ccd\") " pod="openstack/dnsmasq-dns-54599d8f7-zjb95"
Dec 06 05:57:40 crc kubenswrapper[4958]: I1206 05:57:40.245808 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mztnf\" (UniqueName: \"kubernetes.io/projected/bc90cd5d-e6ef-4216-ae6f-496ff8228ccd-kube-api-access-mztnf\") pod \"dnsmasq-dns-54599d8f7-zjb95\" (UID: \"bc90cd5d-e6ef-4216-ae6f-496ff8228ccd\") " pod="openstack/dnsmasq-dns-54599d8f7-zjb95"
Dec 06 05:57:40 crc kubenswrapper[4958]: I1206 05:57:40.372295 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54599d8f7-zjb95"
Dec 06 05:57:40 crc kubenswrapper[4958]: I1206 05:57:40.875875 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"fb368363-98b2-4e51-a1a0-077dcb35ccb7","Type":"ContainerStarted","Data":"366832df4a3af936a6cb385a9d9386a2b3a70eb9a0b090c9debc0afc5f2b7bc7"}
Dec 06 05:57:40 crc kubenswrapper[4958]: I1206 05:57:40.875927 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"fb368363-98b2-4e51-a1a0-077dcb35ccb7","Type":"ContainerStarted","Data":"fb63c635413ed1918586137a4ba5da07dc6207d2d2c2e4ceba6b7888bef424f0"}
Dec 06 05:57:40 crc kubenswrapper[4958]: I1206 05:57:40.878556 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a16e4574-65dd-4a63-8513-29e1ffdbc503","Type":"ContainerStarted","Data":"ce2c45e1592621625457e014609969d280af33a783c6e10c2eb98249d61de2ae"}
Dec 06 05:57:40 crc kubenswrapper[4958]: I1206 05:57:40.906048 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-54599d8f7-zjb95"]
Dec 06 05:57:40 crc kubenswrapper[4958]: I1206 05:57:40.907743 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=1.9077195009999999 podStartE2EDuration="1.907719501s" podCreationTimestamp="2025-12-06 05:57:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:57:40.892366134 +0000 UTC m=+1771.426136897" watchObservedRunningTime="2025-12-06 05:57:40.907719501 +0000 UTC m=+1771.441490264"
Dec 06 05:57:40 crc kubenswrapper[4958]: W1206 05:57:40.908641 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc90cd5d_e6ef_4216_ae6f_496ff8228ccd.slice/crio-7fbb28149a2781d563bcba5479468d8114bafc5f4a974e17573487596fd21c45 WatchSource:0}: Error finding container 7fbb28149a2781d563bcba5479468d8114bafc5f4a974e17573487596fd21c45: Status 404 returned error can't find the container with id 7fbb28149a2781d563bcba5479468d8114bafc5f4a974e17573487596fd21c45
Dec 06 05:57:41 crc kubenswrapper[4958]: I1206 05:57:41.888608 4958 generic.go:334] "Generic (PLEG): container finished" podID="bc90cd5d-e6ef-4216-ae6f-496ff8228ccd" containerID="ff0b9233a4b84cf6c6e9b4f9488e0ae95b004fcc216f188c47e0274ed7dce742" exitCode=0
Dec 06 05:57:41 crc kubenswrapper[4958]: I1206 05:57:41.888662 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54599d8f7-zjb95" event={"ID":"bc90cd5d-e6ef-4216-ae6f-496ff8228ccd","Type":"ContainerDied","Data":"ff0b9233a4b84cf6c6e9b4f9488e0ae95b004fcc216f188c47e0274ed7dce742"}
Dec 06 05:57:41 crc kubenswrapper[4958]: I1206 05:57:41.889256 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54599d8f7-zjb95" event={"ID":"bc90cd5d-e6ef-4216-ae6f-496ff8228ccd","Type":"ContainerStarted","Data":"7fbb28149a2781d563bcba5479468d8114bafc5f4a974e17573487596fd21c45"}
Dec 06 05:57:41 crc kubenswrapper[4958]: I1206 05:57:41.897081 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a16e4574-65dd-4a63-8513-29e1ffdbc503","Type":"ContainerStarted","Data":"df0f7c0c5eb8ed780ee0eb764d48159ba7df323a0b514984a1bf26506a372791"}
Dec 06 05:57:41 crc kubenswrapper[4958]: I1206 05:57:41.964043 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.886684252 podStartE2EDuration="4.964027615s" podCreationTimestamp="2025-12-06 05:57:37 +0000 UTC" firstStartedPulling="2025-12-06 05:57:38.066422263 +0000 UTC m=+1768.600193026" lastFinishedPulling="2025-12-06 05:57:41.143765626 +0000 UTC m=+1771.677536389" observedRunningTime="2025-12-06 05:57:41.961133866 +0000 UTC m=+1772.494904629" watchObservedRunningTime="2025-12-06 05:57:41.964027615 +0000 UTC m=+1772.497798388"
Dec 06 05:57:42 crc kubenswrapper[4958]: I1206 05:57:42.568526 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Dec 06 05:57:42 crc kubenswrapper[4958]: I1206 05:57:42.569817 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Dec 06 05:57:42 crc kubenswrapper[4958]: I1206 05:57:42.747031 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Dec 06 05:57:42 crc kubenswrapper[4958]: I1206 05:57:42.762788 4958 scope.go:117] "RemoveContainer" containerID="3316d358950549dbdd244c3eebc5b98b6095cbbbc7ab60159a9410ea95b5b950"
Dec 06 05:57:42 crc kubenswrapper[4958]: E1206 05:57:42.763007 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4"
Dec 06 05:57:42 crc kubenswrapper[4958]: I1206 05:57:42.924063 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54599d8f7-zjb95" event={"ID":"bc90cd5d-e6ef-4216-ae6f-496ff8228ccd","Type":"ContainerStarted","Data":"668f9c121e25e134ae2b91f0e3093ba2188bc87ce1ff78f4b87f59ff4e0f6993"}
Dec 06 05:57:42 crc kubenswrapper[4958]: I1206 05:57:42.924167 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="bbefde75-00ba-4fdc-86ca-47090f17e7d6" containerName="nova-api-log" containerID="cri-o://2c60679de295806793caa7a83821e8b7bca6ff46418ee79fd39650f841d2c84c" gracePeriod=30
Dec 06 05:57:42 crc kubenswrapper[4958]: I1206 05:57:42.924220 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="bbefde75-00ba-4fdc-86ca-47090f17e7d6" containerName="nova-api-api" containerID="cri-o://fc95aaa5792096a3db35f3d6a8c186cdf56de9d75bcf4fc8a6aabfbdb826f153" gracePeriod=30
Dec 06 05:57:42 crc kubenswrapper[4958]: I1206 05:57:42.925498 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Dec 06 05:57:42 crc kubenswrapper[4958]: I1206 05:57:42.952795 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-54599d8f7-zjb95" podStartSLOduration=2.952760763 podStartE2EDuration="2.952760763s" podCreationTimestamp="2025-12-06 05:57:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:57:42.944361755 +0000 UTC m=+1773.478132528" watchObservedRunningTime="2025-12-06 05:57:42.952760763 +0000 UTC m=+1773.486531526"
Dec 06 05:57:43 crc kubenswrapper[4958]: I1206 05:57:43.935365 4958 generic.go:334] "Generic (PLEG): container finished" podID="bbefde75-00ba-4fdc-86ca-47090f17e7d6" containerID="2c60679de295806793caa7a83821e8b7bca6ff46418ee79fd39650f841d2c84c" exitCode=143
Dec 06 05:57:43 crc kubenswrapper[4958]: I1206 05:57:43.935453 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bbefde75-00ba-4fdc-86ca-47090f17e7d6","Type":"ContainerDied","Data":"2c60679de295806793caa7a83821e8b7bca6ff46418ee79fd39650f841d2c84c"}
Dec 06 05:57:43 crc kubenswrapper[4958]: I1206 05:57:43.935700 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-54599d8f7-zjb95"
Dec 06 05:57:44 crc kubenswrapper[4958]: I1206 05:57:44.026069 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Dec 06 05:57:44 crc kubenswrapper[4958]: I1206 05:57:44.589584 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Dec 06 05:57:44 crc kubenswrapper[4958]: I1206 05:57:44.950729 4958 generic.go:334] "Generic (PLEG): container finished" podID="bbefde75-00ba-4fdc-86ca-47090f17e7d6" containerID="fc95aaa5792096a3db35f3d6a8c186cdf56de9d75bcf4fc8a6aabfbdb826f153" exitCode=0
Dec 06 05:57:44 crc kubenswrapper[4958]: I1206 05:57:44.950815 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bbefde75-00ba-4fdc-86ca-47090f17e7d6","Type":"ContainerDied","Data":"fc95aaa5792096a3db35f3d6a8c186cdf56de9d75bcf4fc8a6aabfbdb826f153"}
Dec 06 05:57:44 crc kubenswrapper[4958]: I1206 05:57:44.951056 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a16e4574-65dd-4a63-8513-29e1ffdbc503" containerName="ceilometer-central-agent" containerID="cri-o://031f125043a6db07731852923c6923619594222870f299729368ad56c7ba3f63" gracePeriod=30
Dec 06 05:57:44 crc kubenswrapper[4958]: I1206 05:57:44.952573 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a16e4574-65dd-4a63-8513-29e1ffdbc503" containerName="proxy-httpd" containerID="cri-o://df0f7c0c5eb8ed780ee0eb764d48159ba7df323a0b514984a1bf26506a372791" gracePeriod=30
Dec 06 05:57:44 crc kubenswrapper[4958]: I1206 05:57:44.952649 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a16e4574-65dd-4a63-8513-29e1ffdbc503" containerName="sg-core" containerID="cri-o://ce2c45e1592621625457e014609969d280af33a783c6e10c2eb98249d61de2ae" gracePeriod=30
Dec 06 05:57:44 crc kubenswrapper[4958]: I1206 05:57:44.952702 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a16e4574-65dd-4a63-8513-29e1ffdbc503" containerName="ceilometer-notification-agent" containerID="cri-o://c863d2c2972d35d79def7fa09c466dd3cb11d00942570d01c084d554429577d8" gracePeriod=30
Dec 06 05:57:45 crc kubenswrapper[4958]: I1206 05:57:45.056445 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Dec 06 05:57:45 crc kubenswrapper[4958]: I1206 05:57:45.248463 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbefde75-00ba-4fdc-86ca-47090f17e7d6-config-data\") pod \"bbefde75-00ba-4fdc-86ca-47090f17e7d6\" (UID: \"bbefde75-00ba-4fdc-86ca-47090f17e7d6\") "
Dec 06 05:57:45 crc kubenswrapper[4958]: I1206 05:57:45.248637 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbefde75-00ba-4fdc-86ca-47090f17e7d6-combined-ca-bundle\") pod \"bbefde75-00ba-4fdc-86ca-47090f17e7d6\" (UID: \"bbefde75-00ba-4fdc-86ca-47090f17e7d6\") "
Dec 06 05:57:45 crc kubenswrapper[4958]: I1206 05:57:45.248725 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4chp2\" (UniqueName: \"kubernetes.io/projected/bbefde75-00ba-4fdc-86ca-47090f17e7d6-kube-api-access-4chp2\") pod \"bbefde75-00ba-4fdc-86ca-47090f17e7d6\" (UID: \"bbefde75-00ba-4fdc-86ca-47090f17e7d6\") "
Dec 06 05:57:45 crc kubenswrapper[4958]: I1206 05:57:45.248778 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bbefde75-00ba-4fdc-86ca-47090f17e7d6-logs\") pod \"bbefde75-00ba-4fdc-86ca-47090f17e7d6\" (UID: \"bbefde75-00ba-4fdc-86ca-47090f17e7d6\") "
Dec 06 05:57:45 crc kubenswrapper[4958]: I1206 05:57:45.249407 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bbefde75-00ba-4fdc-86ca-47090f17e7d6-logs" (OuterVolumeSpecName: "logs") pod "bbefde75-00ba-4fdc-86ca-47090f17e7d6" (UID: "bbefde75-00ba-4fdc-86ca-47090f17e7d6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 06 05:57:45 crc kubenswrapper[4958]: I1206 05:57:45.257856 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbefde75-00ba-4fdc-86ca-47090f17e7d6-kube-api-access-4chp2" (OuterVolumeSpecName: "kube-api-access-4chp2") pod "bbefde75-00ba-4fdc-86ca-47090f17e7d6" (UID: "bbefde75-00ba-4fdc-86ca-47090f17e7d6"). InnerVolumeSpecName "kube-api-access-4chp2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 05:57:45 crc kubenswrapper[4958]: I1206 05:57:45.280765 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbefde75-00ba-4fdc-86ca-47090f17e7d6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bbefde75-00ba-4fdc-86ca-47090f17e7d6" (UID: "bbefde75-00ba-4fdc-86ca-47090f17e7d6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 05:57:45 crc kubenswrapper[4958]: I1206 05:57:45.283946 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbefde75-00ba-4fdc-86ca-47090f17e7d6-config-data" (OuterVolumeSpecName: "config-data") pod "bbefde75-00ba-4fdc-86ca-47090f17e7d6" (UID: "bbefde75-00ba-4fdc-86ca-47090f17e7d6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 05:57:45 crc kubenswrapper[4958]: I1206 05:57:45.351993 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbefde75-00ba-4fdc-86ca-47090f17e7d6-config-data\") on node \"crc\" DevicePath \"\""
Dec 06 05:57:45 crc kubenswrapper[4958]: I1206 05:57:45.352033 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbefde75-00ba-4fdc-86ca-47090f17e7d6-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 06 05:57:45 crc kubenswrapper[4958]: I1206 05:57:45.352048 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4chp2\" (UniqueName: \"kubernetes.io/projected/bbefde75-00ba-4fdc-86ca-47090f17e7d6-kube-api-access-4chp2\") on node \"crc\" DevicePath \"\""
Dec 06 05:57:45 crc kubenswrapper[4958]: I1206 05:57:45.352061 4958 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bbefde75-00ba-4fdc-86ca-47090f17e7d6-logs\") on node \"crc\" DevicePath \"\""
Dec 06 05:57:45 crc kubenswrapper[4958]: I1206 05:57:45.962433 4958 generic.go:334] "Generic (PLEG): container finished" podID="a16e4574-65dd-4a63-8513-29e1ffdbc503" containerID="df0f7c0c5eb8ed780ee0eb764d48159ba7df323a0b514984a1bf26506a372791" exitCode=0
Dec 06 05:57:45 crc kubenswrapper[4958]: I1206 05:57:45.962500 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a16e4574-65dd-4a63-8513-29e1ffdbc503","Type":"ContainerDied","Data":"df0f7c0c5eb8ed780ee0eb764d48159ba7df323a0b514984a1bf26506a372791"}
Dec 06 05:57:45 crc kubenswrapper[4958]: I1206 05:57:45.964615 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"bbefde75-00ba-4fdc-86ca-47090f17e7d6","Type":"ContainerDied","Data":"25be768155bbab0bd017f46f20107613bb788693f982abe2cbb4d51151b11abb"}
Dec 06 05:57:45 crc kubenswrapper[4958]: I1206 05:57:45.964663 4958 scope.go:117] "RemoveContainer" containerID="fc95aaa5792096a3db35f3d6a8c186cdf56de9d75bcf4fc8a6aabfbdb826f153"
Dec 06 05:57:45 crc kubenswrapper[4958]: I1206 05:57:45.964795 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Dec 06 05:57:45 crc kubenswrapper[4958]: I1206 05:57:45.994056 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Dec 06 05:57:45 crc kubenswrapper[4958]: I1206 05:57:45.996361 4958 scope.go:117] "RemoveContainer" containerID="2c60679de295806793caa7a83821e8b7bca6ff46418ee79fd39650f841d2c84c"
Dec 06 05:57:46 crc kubenswrapper[4958]: I1206 05:57:46.020440 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Dec 06 05:57:46 crc kubenswrapper[4958]: I1206 05:57:46.032727 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Dec 06 05:57:46 crc kubenswrapper[4958]: E1206 05:57:46.033190 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbefde75-00ba-4fdc-86ca-47090f17e7d6" containerName="nova-api-api"
Dec 06 05:57:46 crc kubenswrapper[4958]: I1206 05:57:46.033205 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbefde75-00ba-4fdc-86ca-47090f17e7d6" containerName="nova-api-api"
Dec 06 05:57:46 crc kubenswrapper[4958]: E1206 05:57:46.033225 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbefde75-00ba-4fdc-86ca-47090f17e7d6" containerName="nova-api-log"
Dec 06 05:57:46 crc kubenswrapper[4958]: I1206 05:57:46.033232 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbefde75-00ba-4fdc-86ca-47090f17e7d6" containerName="nova-api-log"
Dec 06 05:57:46 crc kubenswrapper[4958]: I1206 05:57:46.033422 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbefde75-00ba-4fdc-86ca-47090f17e7d6" containerName="nova-api-api"
Dec 06 05:57:46 crc kubenswrapper[4958]: I1206 05:57:46.033453 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbefde75-00ba-4fdc-86ca-47090f17e7d6" containerName="nova-api-log"
Dec 06 05:57:46 crc kubenswrapper[4958]: I1206 05:57:46.034598 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Dec 06 05:57:46 crc kubenswrapper[4958]: I1206 05:57:46.037432 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Dec 06 05:57:46 crc kubenswrapper[4958]: I1206 05:57:46.037813 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Dec 06 05:57:46 crc kubenswrapper[4958]: I1206 05:57:46.042811 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Dec 06 05:57:46 crc kubenswrapper[4958]: I1206 05:57:46.045793 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Dec 06 05:57:46 crc kubenswrapper[4958]: I1206 05:57:46.172524 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ee658c8-2a59-4501-972e-9be75427e277-logs\") pod \"nova-api-0\" (UID: \"8ee658c8-2a59-4501-972e-9be75427e277\") " pod="openstack/nova-api-0"
Dec 06 05:57:46 crc kubenswrapper[4958]: I1206 05:57:46.172655 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ee658c8-2a59-4501-972e-9be75427e277-public-tls-certs\") pod \"nova-api-0\" (UID: \"8ee658c8-2a59-4501-972e-9be75427e277\") " pod="openstack/nova-api-0"
Dec 06 05:57:46 crc kubenswrapper[4958]: I1206 05:57:46.172687 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xdfk\" (UniqueName: \"kubernetes.io/projected/8ee658c8-2a59-4501-972e-9be75427e277-kube-api-access-6xdfk\") pod \"nova-api-0\" (UID: \"8ee658c8-2a59-4501-972e-9be75427e277\") " pod="openstack/nova-api-0"
Dec 06 05:57:46 crc kubenswrapper[4958]: I1206 05:57:46.172771 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ee658c8-2a59-4501-972e-9be75427e277-config-data\") pod \"nova-api-0\" (UID: \"8ee658c8-2a59-4501-972e-9be75427e277\") " pod="openstack/nova-api-0"
Dec 06 05:57:46 crc kubenswrapper[4958]: I1206 05:57:46.172855 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ee658c8-2a59-4501-972e-9be75427e277-internal-tls-certs\") pod \"nova-api-0\" (UID: \"8ee658c8-2a59-4501-972e-9be75427e277\") " pod="openstack/nova-api-0"
Dec 06 05:57:46 crc kubenswrapper[4958]: I1206 05:57:46.172882 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ee658c8-2a59-4501-972e-9be75427e277-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8ee658c8-2a59-4501-972e-9be75427e277\") " pod="openstack/nova-api-0"
Dec 06 05:57:46 crc kubenswrapper[4958]: I1206 05:57:46.274261 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ee658c8-2a59-4501-972e-9be75427e277-internal-tls-certs\") pod \"nova-api-0\" (UID: \"8ee658c8-2a59-4501-972e-9be75427e277\") " pod="openstack/nova-api-0"
Dec 06 05:57:46 crc kubenswrapper[4958]: I1206 05:57:46.274317 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ee658c8-2a59-4501-972e-9be75427e277-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8ee658c8-2a59-4501-972e-9be75427e277\") " pod="openstack/nova-api-0"
Dec 06 05:57:46 crc kubenswrapper[4958]: I1206 05:57:46.274388 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ee658c8-2a59-4501-972e-9be75427e277-logs\") pod \"nova-api-0\" (UID: \"8ee658c8-2a59-4501-972e-9be75427e277\") " pod="openstack/nova-api-0"
Dec 06 05:57:46 crc kubenswrapper[4958]: I1206 05:57:46.274519 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ee658c8-2a59-4501-972e-9be75427e277-public-tls-certs\") pod \"nova-api-0\" (UID: \"8ee658c8-2a59-4501-972e-9be75427e277\") " pod="openstack/nova-api-0"
Dec 06 05:57:46 crc kubenswrapper[4958]: I1206 05:57:46.274543 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xdfk\" (UniqueName: \"kubernetes.io/projected/8ee658c8-2a59-4501-972e-9be75427e277-kube-api-access-6xdfk\") pod \"nova-api-0\" (UID: \"8ee658c8-2a59-4501-972e-9be75427e277\") " pod="openstack/nova-api-0"
Dec 06 05:57:46 crc kubenswrapper[4958]: I1206 05:57:46.274618 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ee658c8-2a59-4501-972e-9be75427e277-config-data\") pod \"nova-api-0\" (UID: \"8ee658c8-2a59-4501-972e-9be75427e277\") " pod="openstack/nova-api-0"
Dec 06 05:57:46 crc kubenswrapper[4958]: I1206 05:57:46.275956 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ee658c8-2a59-4501-972e-9be75427e277-logs\") pod \"nova-api-0\" (UID: \"8ee658c8-2a59-4501-972e-9be75427e277\") " pod="openstack/nova-api-0"
Dec 06 05:57:46 crc kubenswrapper[4958]: I1206 05:57:46.278984 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ee658c8-2a59-4501-972e-9be75427e277-internal-tls-certs\") pod \"nova-api-0\" (UID: \"8ee658c8-2a59-4501-972e-9be75427e277\") " pod="openstack/nova-api-0"
Dec 06 05:57:46 crc kubenswrapper[4958]: I1206 05:57:46.282406 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ee658c8-2a59-4501-972e-9be75427e277-config-data\") pod \"nova-api-0\" (UID: \"8ee658c8-2a59-4501-972e-9be75427e277\") " pod="openstack/nova-api-0"
Dec 06 05:57:46 crc kubenswrapper[4958]: I1206 05:57:46.283340 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ee658c8-2a59-4501-972e-9be75427e277-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8ee658c8-2a59-4501-972e-9be75427e277\") " pod="openstack/nova-api-0"
Dec 06 05:57:46 crc kubenswrapper[4958]: I1206 05:57:46.292011 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xdfk\" (UniqueName: \"kubernetes.io/projected/8ee658c8-2a59-4501-972e-9be75427e277-kube-api-access-6xdfk\") pod \"nova-api-0\" (UID: \"8ee658c8-2a59-4501-972e-9be75427e277\") " pod="openstack/nova-api-0"
Dec 06 05:57:46 crc kubenswrapper[4958]: I1206 05:57:46.292273 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ee658c8-2a59-4501-972e-9be75427e277-public-tls-certs\") pod \"nova-api-0\" (UID: \"8ee658c8-2a59-4501-972e-9be75427e277\") " 
pod="openstack/nova-api-0" Dec 06 05:57:46 crc kubenswrapper[4958]: I1206 05:57:46.366514 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 06 05:57:46 crc kubenswrapper[4958]: I1206 05:57:46.889821 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 06 05:57:46 crc kubenswrapper[4958]: I1206 05:57:46.977055 4958 generic.go:334] "Generic (PLEG): container finished" podID="a16e4574-65dd-4a63-8513-29e1ffdbc503" containerID="ce2c45e1592621625457e014609969d280af33a783c6e10c2eb98249d61de2ae" exitCode=2 Dec 06 05:57:46 crc kubenswrapper[4958]: I1206 05:57:46.977114 4958 generic.go:334] "Generic (PLEG): container finished" podID="a16e4574-65dd-4a63-8513-29e1ffdbc503" containerID="c863d2c2972d35d79def7fa09c466dd3cb11d00942570d01c084d554429577d8" exitCode=0 Dec 06 05:57:46 crc kubenswrapper[4958]: I1206 05:57:46.977181 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a16e4574-65dd-4a63-8513-29e1ffdbc503","Type":"ContainerDied","Data":"ce2c45e1592621625457e014609969d280af33a783c6e10c2eb98249d61de2ae"} Dec 06 05:57:46 crc kubenswrapper[4958]: I1206 05:57:46.977209 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a16e4574-65dd-4a63-8513-29e1ffdbc503","Type":"ContainerDied","Data":"c863d2c2972d35d79def7fa09c466dd3cb11d00942570d01c084d554429577d8"} Dec 06 05:57:46 crc kubenswrapper[4958]: I1206 05:57:46.981723 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8ee658c8-2a59-4501-972e-9be75427e277","Type":"ContainerStarted","Data":"fec5c21ed61cee1d8be6f258b3ccc9af3b61efe6ffcd7de86d25c644a5963e5e"} Dec 06 05:57:47 crc kubenswrapper[4958]: I1206 05:57:47.568153 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 06 05:57:47 crc kubenswrapper[4958]: I1206 05:57:47.568512 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 06 05:57:48 crc kubenswrapper[4958]: I1206 05:57:47.776613 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbefde75-00ba-4fdc-86ca-47090f17e7d6" path="/var/lib/kubelet/pods/bbefde75-00ba-4fdc-86ca-47090f17e7d6/volumes" Dec 06 05:57:48 crc kubenswrapper[4958]: I1206 05:57:47.997082 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8ee658c8-2a59-4501-972e-9be75427e277","Type":"ContainerStarted","Data":"3bdd764d7553f62b7dae5b95fe4d0c127ceece0cfb3a8646a6792416910c90e2"} Dec 06 05:57:48 crc kubenswrapper[4958]: I1206 05:57:48.586702 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="a9b53759-43fe-4c25-8787-2dce20353746" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.220:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 06 05:57:48 crc kubenswrapper[4958]: I1206 05:57:48.586750 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="a9b53759-43fe-4c25-8787-2dce20353746" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.220:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 06 05:57:48 crc kubenswrapper[4958]: E1206 05:57:48.912412 4958 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/system.slice/rpm-ostreed.service\": RecentStats: unable to find data in memory cache]" Dec 06 05:57:49 crc kubenswrapper[4958]: I1206 05:57:49.006309 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8ee658c8-2a59-4501-972e-9be75427e277","Type":"ContainerStarted","Data":"dbbcd1ed1fede27104b9df6f4a253ded06ee3079e6cf5b4edec69c583dc7a442"} Dec 06 05:57:49 crc kubenswrapper[4958]: I1206 05:57:49.023876 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=4.023859906 podStartE2EDuration="4.023859906s" podCreationTimestamp="2025-12-06 05:57:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:57:49.021972084 +0000 UTC m=+1779.555742847" watchObservedRunningTime="2025-12-06 05:57:49.023859906 +0000 UTC m=+1779.557630659" Dec 06 05:57:49 crc kubenswrapper[4958]: I1206 05:57:49.589205 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Dec 06 05:57:49 crc kubenswrapper[4958]: I1206 05:57:49.643027 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Dec 06 05:57:50 crc kubenswrapper[4958]: I1206 05:57:50.037842 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Dec 06 05:57:50 crc kubenswrapper[4958]: I1206 05:57:50.206701 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-fdxd4"] Dec 06 05:57:50 crc kubenswrapper[4958]: I1206 05:57:50.209108 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-fdxd4" Dec 06 05:57:50 crc kubenswrapper[4958]: I1206 05:57:50.211145 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Dec 06 05:57:50 crc kubenswrapper[4958]: I1206 05:57:50.211280 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Dec 06 05:57:50 crc kubenswrapper[4958]: I1206 05:57:50.217578 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-fdxd4"] Dec 06 05:57:50 crc kubenswrapper[4958]: I1206 05:57:50.351862 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/379f6203-0ef8-471d-ba4a-d5ce940c313b-scripts\") pod \"nova-cell1-cell-mapping-fdxd4\" (UID: \"379f6203-0ef8-471d-ba4a-d5ce940c313b\") " pod="openstack/nova-cell1-cell-mapping-fdxd4" Dec 06 05:57:50 crc kubenswrapper[4958]: I1206 05:57:50.351930 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7mx9\" (UniqueName: \"kubernetes.io/projected/379f6203-0ef8-471d-ba4a-d5ce940c313b-kube-api-access-p7mx9\") pod \"nova-cell1-cell-mapping-fdxd4\" (UID: \"379f6203-0ef8-471d-ba4a-d5ce940c313b\") " pod="openstack/nova-cell1-cell-mapping-fdxd4" Dec 06 05:57:50 crc kubenswrapper[4958]: I1206 05:57:50.351998 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/379f6203-0ef8-471d-ba4a-d5ce940c313b-config-data\") pod \"nova-cell1-cell-mapping-fdxd4\" (UID: \"379f6203-0ef8-471d-ba4a-d5ce940c313b\") " pod="openstack/nova-cell1-cell-mapping-fdxd4" Dec 06 
05:57:50 crc kubenswrapper[4958]: I1206 05:57:50.352094 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/379f6203-0ef8-471d-ba4a-d5ce940c313b-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-fdxd4\" (UID: \"379f6203-0ef8-471d-ba4a-d5ce940c313b\") " pod="openstack/nova-cell1-cell-mapping-fdxd4" Dec 06 05:57:50 crc kubenswrapper[4958]: I1206 05:57:50.374143 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-54599d8f7-zjb95" Dec 06 05:57:50 crc kubenswrapper[4958]: I1206 05:57:50.453783 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/379f6203-0ef8-471d-ba4a-d5ce940c313b-config-data\") pod \"nova-cell1-cell-mapping-fdxd4\" (UID: \"379f6203-0ef8-471d-ba4a-d5ce940c313b\") " pod="openstack/nova-cell1-cell-mapping-fdxd4" Dec 06 05:57:50 crc kubenswrapper[4958]: I1206 05:57:50.453934 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/379f6203-0ef8-471d-ba4a-d5ce940c313b-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-fdxd4\" (UID: \"379f6203-0ef8-471d-ba4a-d5ce940c313b\") " pod="openstack/nova-cell1-cell-mapping-fdxd4" Dec 06 05:57:50 crc kubenswrapper[4958]: I1206 05:57:50.453977 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/379f6203-0ef8-471d-ba4a-d5ce940c313b-scripts\") pod \"nova-cell1-cell-mapping-fdxd4\" (UID: \"379f6203-0ef8-471d-ba4a-d5ce940c313b\") " pod="openstack/nova-cell1-cell-mapping-fdxd4" Dec 06 05:57:50 crc kubenswrapper[4958]: I1206 05:57:50.454011 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7mx9\" (UniqueName: \"kubernetes.io/projected/379f6203-0ef8-471d-ba4a-d5ce940c313b-kube-api-access-p7mx9\") pod \"nova-cell1-cell-mapping-fdxd4\" (UID: \"379f6203-0ef8-471d-ba4a-d5ce940c313b\") " pod="openstack/nova-cell1-cell-mapping-fdxd4" Dec 06 05:57:50 crc kubenswrapper[4958]: I1206 05:57:50.465200 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/379f6203-0ef8-471d-ba4a-d5ce940c313b-scripts\") pod \"nova-cell1-cell-mapping-fdxd4\" (UID: \"379f6203-0ef8-471d-ba4a-d5ce940c313b\") " pod="openstack/nova-cell1-cell-mapping-fdxd4" Dec 06 05:57:50 crc kubenswrapper[4958]: I1206 05:57:50.466253 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/379f6203-0ef8-471d-ba4a-d5ce940c313b-config-data\") pod \"nova-cell1-cell-mapping-fdxd4\" (UID: \"379f6203-0ef8-471d-ba4a-d5ce940c313b\") " pod="openstack/nova-cell1-cell-mapping-fdxd4" Dec 06 05:57:50 crc kubenswrapper[4958]: I1206 05:57:50.470843 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-844fc57f6f-vfb6w"] Dec 06 05:57:50 crc kubenswrapper[4958]: I1206 05:57:50.471134 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-844fc57f6f-vfb6w" podUID="7216f9bf-c51a-419f-860d-4be494b44376" containerName="dnsmasq-dns" containerID="cri-o://febec00c597f9534f573d3e92881a43531eb3fcef933c048620082cb07a3a5fc" gracePeriod=10 Dec 06 05:57:50 crc kubenswrapper[4958]: I1206 05:57:50.479245 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/379f6203-0ef8-471d-ba4a-d5ce940c313b-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-fdxd4\" (UID: \"379f6203-0ef8-471d-ba4a-d5ce940c313b\") " pod="openstack/nova-cell1-cell-mapping-fdxd4" Dec 06 05:57:50 crc kubenswrapper[4958]: I1206 05:57:50.480588 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7mx9\" (UniqueName: \"kubernetes.io/projected/379f6203-0ef8-471d-ba4a-d5ce940c313b-kube-api-access-p7mx9\") pod \"nova-cell1-cell-mapping-fdxd4\" (UID: \"379f6203-0ef8-471d-ba4a-d5ce940c313b\") " pod="openstack/nova-cell1-cell-mapping-fdxd4" Dec 06 05:57:50 crc kubenswrapper[4958]: I1206 05:57:50.524369 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-fdxd4" Dec 06 05:57:51 crc kubenswrapper[4958]: I1206 05:57:51.024890 4958 generic.go:334] "Generic (PLEG): container finished" podID="7216f9bf-c51a-419f-860d-4be494b44376" containerID="febec00c597f9534f573d3e92881a43531eb3fcef933c048620082cb07a3a5fc" exitCode=0 Dec 06 05:57:51 crc kubenswrapper[4958]: I1206 05:57:51.024932 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-844fc57f6f-vfb6w" event={"ID":"7216f9bf-c51a-419f-860d-4be494b44376","Type":"ContainerDied","Data":"febec00c597f9534f573d3e92881a43531eb3fcef933c048620082cb07a3a5fc"} Dec 06 05:57:51 crc kubenswrapper[4958]: I1206 05:57:51.059913 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-fdxd4"] Dec 06 05:57:51 crc kubenswrapper[4958]: W1206 05:57:51.064016 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod379f6203_0ef8_471d_ba4a_d5ce940c313b.slice/crio-7879dacf03f5a7b41b6b43dd8da55b0eeb8f9ed63846410443a95d330f18adf4 WatchSource:0}: Error finding container 7879dacf03f5a7b41b6b43dd8da55b0eeb8f9ed63846410443a95d330f18adf4: Status 404 returned error can't find the container with id 7879dacf03f5a7b41b6b43dd8da55b0eeb8f9ed63846410443a95d330f18adf4 Dec 06 05:57:51 crc kubenswrapper[4958]: I1206 05:57:51.728071 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-844fc57f6f-vfb6w" Dec 06 05:57:51 crc kubenswrapper[4958]: I1206 05:57:51.895024 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7216f9bf-c51a-419f-860d-4be494b44376-dns-svc\") pod \"7216f9bf-c51a-419f-860d-4be494b44376\" (UID: \"7216f9bf-c51a-419f-860d-4be494b44376\") " Dec 06 05:57:51 crc kubenswrapper[4958]: I1206 05:57:51.895108 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7216f9bf-c51a-419f-860d-4be494b44376-ovsdbserver-nb\") pod \"7216f9bf-c51a-419f-860d-4be494b44376\" (UID: \"7216f9bf-c51a-419f-860d-4be494b44376\") " Dec 06 05:57:51 crc kubenswrapper[4958]: I1206 05:57:51.895217 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7216f9bf-c51a-419f-860d-4be494b44376-ovsdbserver-sb\") pod \"7216f9bf-c51a-419f-860d-4be494b44376\" (UID: \"7216f9bf-c51a-419f-860d-4be494b44376\") " Dec 06 05:57:51 crc kubenswrapper[4958]: I1206 05:57:51.895307 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c7jqm\" (UniqueName: \"kubernetes.io/projected/7216f9bf-c51a-419f-860d-4be494b44376-kube-api-access-c7jqm\") pod \"7216f9bf-c51a-419f-860d-4be494b44376\" (UID: \"7216f9bf-c51a-419f-860d-4be494b44376\") " Dec 06 05:57:51 crc kubenswrapper[4958]: I1206 05:57:51.895393 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7216f9bf-c51a-419f-860d-4be494b44376-config\") pod \"7216f9bf-c51a-419f-860d-4be494b44376\" (UID: \"7216f9bf-c51a-419f-860d-4be494b44376\") " Dec 06 05:57:51 crc kubenswrapper[4958]: I1206 05:57:51.895505 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7216f9bf-c51a-419f-860d-4be494b44376-dns-swift-storage-0\") pod \"7216f9bf-c51a-419f-860d-4be494b44376\" (UID: \"7216f9bf-c51a-419f-860d-4be494b44376\") " Dec 06 05:57:51 crc kubenswrapper[4958]: I1206 05:57:51.900858 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7216f9bf-c51a-419f-860d-4be494b44376-kube-api-access-c7jqm" (OuterVolumeSpecName: "kube-api-access-c7jqm") pod "7216f9bf-c51a-419f-860d-4be494b44376" (UID: "7216f9bf-c51a-419f-860d-4be494b44376"). InnerVolumeSpecName "kube-api-access-c7jqm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:57:51 crc kubenswrapper[4958]: I1206 05:57:51.917067 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 06 05:57:51 crc kubenswrapper[4958]: I1206 05:57:51.959514 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7216f9bf-c51a-419f-860d-4be494b44376-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7216f9bf-c51a-419f-860d-4be494b44376" (UID: "7216f9bf-c51a-419f-860d-4be494b44376"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:57:51 crc kubenswrapper[4958]: I1206 05:57:51.980523 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7216f9bf-c51a-419f-860d-4be494b44376-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "7216f9bf-c51a-419f-860d-4be494b44376" (UID: "7216f9bf-c51a-419f-860d-4be494b44376"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:57:51 crc kubenswrapper[4958]: I1206 05:57:51.997449 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7216f9bf-c51a-419f-860d-4be494b44376-config" (OuterVolumeSpecName: "config") pod "7216f9bf-c51a-419f-860d-4be494b44376" (UID: "7216f9bf-c51a-419f-860d-4be494b44376"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:57:51 crc kubenswrapper[4958]: I1206 05:57:51.997627 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a16e4574-65dd-4a63-8513-29e1ffdbc503-scripts\") pod \"a16e4574-65dd-4a63-8513-29e1ffdbc503\" (UID: \"a16e4574-65dd-4a63-8513-29e1ffdbc503\") " Dec 06 05:57:51 crc kubenswrapper[4958]: I1206 05:57:51.997689 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a16e4574-65dd-4a63-8513-29e1ffdbc503-combined-ca-bundle\") pod \"a16e4574-65dd-4a63-8513-29e1ffdbc503\" (UID: \"a16e4574-65dd-4a63-8513-29e1ffdbc503\") " Dec 06 05:57:51 crc kubenswrapper[4958]: I1206 05:57:51.997743 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a16e4574-65dd-4a63-8513-29e1ffdbc503-sg-core-conf-yaml\") pod \"a16e4574-65dd-4a63-8513-29e1ffdbc503\" (UID: \"a16e4574-65dd-4a63-8513-29e1ffdbc503\") " Dec 06 05:57:51 crc kubenswrapper[4958]: I1206 05:57:51.997820 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fslbp\" (UniqueName: \"kubernetes.io/projected/a16e4574-65dd-4a63-8513-29e1ffdbc503-kube-api-access-fslbp\") pod \"a16e4574-65dd-4a63-8513-29e1ffdbc503\" (UID: \"a16e4574-65dd-4a63-8513-29e1ffdbc503\") " Dec 06 05:57:51 crc kubenswrapper[4958]: I1206 05:57:51.997860 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a16e4574-65dd-4a63-8513-29e1ffdbc503-run-httpd\") pod \"a16e4574-65dd-4a63-8513-29e1ffdbc503\" (UID: \"a16e4574-65dd-4a63-8513-29e1ffdbc503\") " Dec 06 05:57:51 crc kubenswrapper[4958]: I1206 05:57:51.997902 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a16e4574-65dd-4a63-8513-29e1ffdbc503-config-data\") pod \"a16e4574-65dd-4a63-8513-29e1ffdbc503\" (UID: \"a16e4574-65dd-4a63-8513-29e1ffdbc503\") " Dec 06 05:57:51 crc kubenswrapper[4958]: I1206 05:57:51.997922 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a16e4574-65dd-4a63-8513-29e1ffdbc503-ceilometer-tls-certs\") pod \"a16e4574-65dd-4a63-8513-29e1ffdbc503\" (UID: \"a16e4574-65dd-4a63-8513-29e1ffdbc503\") " Dec 06 05:57:51 crc kubenswrapper[4958]: I1206 05:57:51.998013 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/7216f9bf-c51a-419f-860d-4be494b44376-config\") pod \"7216f9bf-c51a-419f-860d-4be494b44376\" (UID: \"7216f9bf-c51a-419f-860d-4be494b44376\") " Dec 06 05:57:51 crc kubenswrapper[4958]: I1206 05:57:51.998033 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a16e4574-65dd-4a63-8513-29e1ffdbc503-log-httpd\") pod \"a16e4574-65dd-4a63-8513-29e1ffdbc503\" (UID: \"a16e4574-65dd-4a63-8513-29e1ffdbc503\") " Dec 06 05:57:51 crc kubenswrapper[4958]: I1206 05:57:51.998395 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c7jqm\" (UniqueName: \"kubernetes.io/projected/7216f9bf-c51a-419f-860d-4be494b44376-kube-api-access-c7jqm\") on node \"crc\" DevicePath \"\"" Dec 06 05:57:51 crc kubenswrapper[4958]: I1206 05:57:51.998407 4958 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7216f9bf-c51a-419f-860d-4be494b44376-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 06 05:57:51 crc kubenswrapper[4958]: I1206 05:57:51.998415 4958 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7216f9bf-c51a-419f-860d-4be494b44376-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 06 05:57:51 crc kubenswrapper[4958]: I1206 05:57:51.998683 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a16e4574-65dd-4a63-8513-29e1ffdbc503-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a16e4574-65dd-4a63-8513-29e1ffdbc503" (UID: "a16e4574-65dd-4a63-8513-29e1ffdbc503"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 05:57:52 crc kubenswrapper[4958]: I1206 05:57:52.000145 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a16e4574-65dd-4a63-8513-29e1ffdbc503-scripts" (OuterVolumeSpecName: "scripts") pod "a16e4574-65dd-4a63-8513-29e1ffdbc503" (UID: "a16e4574-65dd-4a63-8513-29e1ffdbc503"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:57:52 crc kubenswrapper[4958]: W1206 05:57:52.000340 4958 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/7216f9bf-c51a-419f-860d-4be494b44376/volumes/kubernetes.io~configmap/config Dec 06 05:57:52 crc kubenswrapper[4958]: I1206 05:57:52.000412 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7216f9bf-c51a-419f-860d-4be494b44376-config" (OuterVolumeSpecName: "config") pod "7216f9bf-c51a-419f-860d-4be494b44376" (UID: "7216f9bf-c51a-419f-860d-4be494b44376"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:57:52 crc kubenswrapper[4958]: I1206 05:57:52.000474 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a16e4574-65dd-4a63-8513-29e1ffdbc503-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a16e4574-65dd-4a63-8513-29e1ffdbc503" (UID: "a16e4574-65dd-4a63-8513-29e1ffdbc503"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 05:57:52 crc kubenswrapper[4958]: I1206 05:57:52.001251 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7216f9bf-c51a-419f-860d-4be494b44376-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7216f9bf-c51a-419f-860d-4be494b44376" (UID: "7216f9bf-c51a-419f-860d-4be494b44376"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:57:52 crc kubenswrapper[4958]: I1206 05:57:52.001794 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7216f9bf-c51a-419f-860d-4be494b44376-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7216f9bf-c51a-419f-860d-4be494b44376" (UID: "7216f9bf-c51a-419f-860d-4be494b44376"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:57:52 crc kubenswrapper[4958]: I1206 05:57:52.007105 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a16e4574-65dd-4a63-8513-29e1ffdbc503-kube-api-access-fslbp" (OuterVolumeSpecName: "kube-api-access-fslbp") pod "a16e4574-65dd-4a63-8513-29e1ffdbc503" (UID: "a16e4574-65dd-4a63-8513-29e1ffdbc503"). InnerVolumeSpecName "kube-api-access-fslbp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:57:52 crc kubenswrapper[4958]: I1206 05:57:52.025595 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a16e4574-65dd-4a63-8513-29e1ffdbc503-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a16e4574-65dd-4a63-8513-29e1ffdbc503" (UID: "a16e4574-65dd-4a63-8513-29e1ffdbc503"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:57:52 crc kubenswrapper[4958]: I1206 05:57:52.039725 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-fdxd4" event={"ID":"379f6203-0ef8-471d-ba4a-d5ce940c313b","Type":"ContainerStarted","Data":"b10fce3af1ec85dcbfb986a57988eee4168485600cd3036d6031c29e56a6c190"} Dec 06 05:57:52 crc kubenswrapper[4958]: I1206 05:57:52.039783 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-fdxd4" event={"ID":"379f6203-0ef8-471d-ba4a-d5ce940c313b","Type":"ContainerStarted","Data":"7879dacf03f5a7b41b6b43dd8da55b0eeb8f9ed63846410443a95d330f18adf4"} Dec 06 05:57:52 crc kubenswrapper[4958]: I1206 05:57:52.042983 4958 generic.go:334] "Generic (PLEG): container finished" podID="a16e4574-65dd-4a63-8513-29e1ffdbc503" containerID="031f125043a6db07731852923c6923619594222870f299729368ad56c7ba3f63" exitCode=0 Dec 06 05:57:52 crc kubenswrapper[4958]: I1206 05:57:52.043046 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a16e4574-65dd-4a63-8513-29e1ffdbc503","Type":"ContainerDied","Data":"031f125043a6db07731852923c6923619594222870f299729368ad56c7ba3f63"} Dec 06 05:57:52 crc kubenswrapper[4958]: I1206 05:57:52.043072 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a16e4574-65dd-4a63-8513-29e1ffdbc503","Type":"ContainerDied","Data":"2107986b21788d90305b9613a03e1a3c145d967ba713e3f80d7db0c855ada06d"} Dec 06 05:57:52 crc kubenswrapper[4958]: I1206 05:57:52.043087 4958 scope.go:117] "RemoveContainer" containerID="df0f7c0c5eb8ed780ee0eb764d48159ba7df323a0b514984a1bf26506a372791" Dec 06 05:57:52 crc kubenswrapper[4958]: I1206 05:57:52.043191 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 06 05:57:52 crc kubenswrapper[4958]: I1206 05:57:52.045441 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-844fc57f6f-vfb6w" event={"ID":"7216f9bf-c51a-419f-860d-4be494b44376","Type":"ContainerDied","Data":"faecef8e9409116e0f6667e90aed757c95a9660ac9d6dfb3d2a5b373a0590eb7"} Dec 06 05:57:52 crc kubenswrapper[4958]: I1206 05:57:52.045520 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-844fc57f6f-vfb6w" Dec 06 05:57:52 crc kubenswrapper[4958]: I1206 05:57:52.063266 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-fdxd4" podStartSLOduration=2.063238827 podStartE2EDuration="2.063238827s" podCreationTimestamp="2025-12-06 05:57:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:57:52.053943195 +0000 UTC m=+1782.587713958" watchObservedRunningTime="2025-12-06 05:57:52.063238827 +0000 UTC m=+1782.597009590" Dec 06 05:57:52 crc kubenswrapper[4958]: I1206 05:57:52.075198 4958 scope.go:117] "RemoveContainer" containerID="ce2c45e1592621625457e014609969d280af33a783c6e10c2eb98249d61de2ae" Dec 06 05:57:52 crc kubenswrapper[4958]: I1206 05:57:52.083985 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a16e4574-65dd-4a63-8513-29e1ffdbc503-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "a16e4574-65dd-4a63-8513-29e1ffdbc503" (UID: "a16e4574-65dd-4a63-8513-29e1ffdbc503"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:57:52 crc kubenswrapper[4958]: I1206 05:57:52.097799 4958 scope.go:117] "RemoveContainer" containerID="c863d2c2972d35d79def7fa09c466dd3cb11d00942570d01c084d554429577d8" Dec 06 05:57:52 crc kubenswrapper[4958]: I1206 05:57:52.100431 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a16e4574-65dd-4a63-8513-29e1ffdbc503-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 05:57:52 crc kubenswrapper[4958]: I1206 05:57:52.100453 4958 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a16e4574-65dd-4a63-8513-29e1ffdbc503-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 06 05:57:52 crc kubenswrapper[4958]: I1206 05:57:52.100462 4958 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7216f9bf-c51a-419f-860d-4be494b44376-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 06 05:57:52 crc kubenswrapper[4958]: I1206 05:57:52.100474 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fslbp\" (UniqueName: \"kubernetes.io/projected/a16e4574-65dd-4a63-8513-29e1ffdbc503-kube-api-access-fslbp\") on node \"crc\" DevicePath \"\"" Dec 06 05:57:52 crc kubenswrapper[4958]: I1206 05:57:52.100497 4958 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a16e4574-65dd-4a63-8513-29e1ffdbc503-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 06 05:57:52 crc kubenswrapper[4958]: I1206 05:57:52.100506 4958 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7216f9bf-c51a-419f-860d-4be494b44376-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 06 05:57:52 crc kubenswrapper[4958]: I1206 05:57:52.100514 4958 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/a16e4574-65dd-4a63-8513-29e1ffdbc503-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 06 05:57:52 crc kubenswrapper[4958]: I1206 05:57:52.100552 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7216f9bf-c51a-419f-860d-4be494b44376-config\") on node \"crc\" DevicePath \"\"" Dec 06 05:57:52 crc kubenswrapper[4958]: I1206 05:57:52.100561 4958 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a16e4574-65dd-4a63-8513-29e1ffdbc503-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 06 05:57:52 crc kubenswrapper[4958]: I1206 05:57:52.106365 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a16e4574-65dd-4a63-8513-29e1ffdbc503-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a16e4574-65dd-4a63-8513-29e1ffdbc503" (UID: "a16e4574-65dd-4a63-8513-29e1ffdbc503"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:57:52 crc kubenswrapper[4958]: I1206 05:57:52.111810 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-844fc57f6f-vfb6w"] Dec 06 05:57:52 crc kubenswrapper[4958]: I1206 05:57:52.119544 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-844fc57f6f-vfb6w"] Dec 06 05:57:52 crc kubenswrapper[4958]: I1206 05:57:52.119732 4958 scope.go:117] "RemoveContainer" containerID="031f125043a6db07731852923c6923619594222870f299729368ad56c7ba3f63" Dec 06 05:57:52 crc kubenswrapper[4958]: I1206 05:57:52.137916 4958 scope.go:117] "RemoveContainer" containerID="df0f7c0c5eb8ed780ee0eb764d48159ba7df323a0b514984a1bf26506a372791" Dec 06 05:57:52 crc kubenswrapper[4958]: E1206 05:57:52.138285 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df0f7c0c5eb8ed780ee0eb764d48159ba7df323a0b514984a1bf26506a372791\": container with ID starting with df0f7c0c5eb8ed780ee0eb764d48159ba7df323a0b514984a1bf26506a372791 not found: ID does not exist" containerID="df0f7c0c5eb8ed780ee0eb764d48159ba7df323a0b514984a1bf26506a372791" Dec 06 05:57:52 crc kubenswrapper[4958]: I1206 05:57:52.138332 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df0f7c0c5eb8ed780ee0eb764d48159ba7df323a0b514984a1bf26506a372791"} err="failed to get container status \"df0f7c0c5eb8ed780ee0eb764d48159ba7df323a0b514984a1bf26506a372791\": rpc error: code = NotFound desc = could not find container \"df0f7c0c5eb8ed780ee0eb764d48159ba7df323a0b514984a1bf26506a372791\": container with ID starting with df0f7c0c5eb8ed780ee0eb764d48159ba7df323a0b514984a1bf26506a372791 not found: ID does not exist" Dec 06 05:57:52 crc kubenswrapper[4958]: I1206 05:57:52.138361 4958 scope.go:117] "RemoveContainer" containerID="ce2c45e1592621625457e014609969d280af33a783c6e10c2eb98249d61de2ae" Dec 06 05:57:52 crc kubenswrapper[4958]: E1206 05:57:52.138718 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce2c45e1592621625457e014609969d280af33a783c6e10c2eb98249d61de2ae\": container with ID starting with ce2c45e1592621625457e014609969d280af33a783c6e10c2eb98249d61de2ae not found: ID does not exist" containerID="ce2c45e1592621625457e014609969d280af33a783c6e10c2eb98249d61de2ae" Dec 06 05:57:52 crc kubenswrapper[4958]: I1206 05:57:52.138761 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce2c45e1592621625457e014609969d280af33a783c6e10c2eb98249d61de2ae"} err="failed to get container status \"ce2c45e1592621625457e014609969d280af33a783c6e10c2eb98249d61de2ae\": rpc error: code = NotFound desc = could not find container \"ce2c45e1592621625457e014609969d280af33a783c6e10c2eb98249d61de2ae\": container with ID starting with ce2c45e1592621625457e014609969d280af33a783c6e10c2eb98249d61de2ae not found: ID does not exist" Dec 06 05:57:52 crc kubenswrapper[4958]: I1206 05:57:52.138787 4958 scope.go:117] "RemoveContainer" containerID="c863d2c2972d35d79def7fa09c466dd3cb11d00942570d01c084d554429577d8" Dec 06 05:57:52 crc kubenswrapper[4958]: E1206 05:57:52.139086 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c863d2c2972d35d79def7fa09c466dd3cb11d00942570d01c084d554429577d8\": container with ID starting with 
c863d2c2972d35d79def7fa09c466dd3cb11d00942570d01c084d554429577d8 not found: ID does not exist" containerID="c863d2c2972d35d79def7fa09c466dd3cb11d00942570d01c084d554429577d8" Dec 06 05:57:52 crc kubenswrapper[4958]: I1206 05:57:52.139116 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c863d2c2972d35d79def7fa09c466dd3cb11d00942570d01c084d554429577d8"} err="failed to get container status \"c863d2c2972d35d79def7fa09c466dd3cb11d00942570d01c084d554429577d8\": rpc error: code = NotFound desc = could not find container \"c863d2c2972d35d79def7fa09c466dd3cb11d00942570d01c084d554429577d8\": container with ID starting with c863d2c2972d35d79def7fa09c466dd3cb11d00942570d01c084d554429577d8 not found: ID does not exist" Dec 06 05:57:52 crc kubenswrapper[4958]: I1206 05:57:52.139137 4958 scope.go:117] "RemoveContainer" containerID="031f125043a6db07731852923c6923619594222870f299729368ad56c7ba3f63" Dec 06 05:57:52 crc kubenswrapper[4958]: E1206 05:57:52.139334 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"031f125043a6db07731852923c6923619594222870f299729368ad56c7ba3f63\": container with ID starting with 031f125043a6db07731852923c6923619594222870f299729368ad56c7ba3f63 not found: ID does not exist" containerID="031f125043a6db07731852923c6923619594222870f299729368ad56c7ba3f63" Dec 06 05:57:52 crc kubenswrapper[4958]: I1206 05:57:52.139358 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"031f125043a6db07731852923c6923619594222870f299729368ad56c7ba3f63"} err="failed to get container status \"031f125043a6db07731852923c6923619594222870f299729368ad56c7ba3f63\": rpc error: code = NotFound desc = could not find container \"031f125043a6db07731852923c6923619594222870f299729368ad56c7ba3f63\": container with ID starting with 031f125043a6db07731852923c6923619594222870f299729368ad56c7ba3f63 not found: ID does not exist" Dec 06 05:57:52 crc kubenswrapper[4958]: I1206 05:57:52.139372 4958 scope.go:117] "RemoveContainer" containerID="febec00c597f9534f573d3e92881a43531eb3fcef933c048620082cb07a3a5fc" Dec 06 05:57:52 crc kubenswrapper[4958]: I1206 05:57:52.150957 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a16e4574-65dd-4a63-8513-29e1ffdbc503-config-data" (OuterVolumeSpecName: "config-data") pod "a16e4574-65dd-4a63-8513-29e1ffdbc503" (UID: "a16e4574-65dd-4a63-8513-29e1ffdbc503"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:57:52 crc kubenswrapper[4958]: I1206 05:57:52.155770 4958 scope.go:117] "RemoveContainer" containerID="ab4ecf7e585c878cb41d8444f64c5c14d89c58d842bff4238153abdfdbb78394" Dec 06 05:57:52 crc kubenswrapper[4958]: I1206 05:57:52.202855 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a16e4574-65dd-4a63-8513-29e1ffdbc503-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 05:57:52 crc kubenswrapper[4958]: I1206 05:57:52.202884 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a16e4574-65dd-4a63-8513-29e1ffdbc503-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 05:57:52 crc kubenswrapper[4958]: I1206 05:57:52.380365 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 06 05:57:52 crc kubenswrapper[4958]: I1206 05:57:52.388987 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 06 05:57:52 crc kubenswrapper[4958]: I1206 05:57:52.406498 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 06 05:57:52 crc kubenswrapper[4958]: E1206 05:57:52.407198 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a16e4574-65dd-4a63-8513-29e1ffdbc503" containerName="sg-core" Dec 06 05:57:52 crc kubenswrapper[4958]: I1206 05:57:52.407301 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="a16e4574-65dd-4a63-8513-29e1ffdbc503" containerName="sg-core" Dec 06 05:57:52 crc kubenswrapper[4958]: E1206 05:57:52.407402 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a16e4574-65dd-4a63-8513-29e1ffdbc503" containerName="proxy-httpd" Dec 06 05:57:52 crc kubenswrapper[4958]: I1206 05:57:52.407571 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="a16e4574-65dd-4a63-8513-29e1ffdbc503" containerName="proxy-httpd" Dec 06 05:57:52 crc kubenswrapper[4958]: E1206 05:57:52.407655 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a16e4574-65dd-4a63-8513-29e1ffdbc503" containerName="ceilometer-notification-agent" Dec 06 05:57:52 crc kubenswrapper[4958]: I1206 05:57:52.407719 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="a16e4574-65dd-4a63-8513-29e1ffdbc503" containerName="ceilometer-notification-agent" Dec 06 05:57:52 crc kubenswrapper[4958]: E1206 05:57:52.407796 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a16e4574-65dd-4a63-8513-29e1ffdbc503" containerName="ceilometer-central-agent" Dec 06 05:57:52 crc kubenswrapper[4958]: I1206 05:57:52.407871 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="a16e4574-65dd-4a63-8513-29e1ffdbc503" containerName="ceilometer-central-agent" Dec 06 05:57:52 crc kubenswrapper[4958]: E1206 05:57:52.407955 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7216f9bf-c51a-419f-860d-4be494b44376" containerName="dnsmasq-dns" Dec 06 05:57:52 crc kubenswrapper[4958]: I1206 05:57:52.408028 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="7216f9bf-c51a-419f-860d-4be494b44376" containerName="dnsmasq-dns" Dec 06 05:57:52 crc kubenswrapper[4958]: E1206 05:57:52.408097 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7216f9bf-c51a-419f-860d-4be494b44376" containerName="init" Dec 06 05:57:52 crc kubenswrapper[4958]: I1206 05:57:52.408169 4958 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="7216f9bf-c51a-419f-860d-4be494b44376" containerName="init" Dec 06 05:57:52 crc kubenswrapper[4958]: I1206 05:57:52.408516 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="7216f9bf-c51a-419f-860d-4be494b44376" containerName="dnsmasq-dns" Dec 06 05:57:52 crc kubenswrapper[4958]: I1206 05:57:52.408620 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="a16e4574-65dd-4a63-8513-29e1ffdbc503" containerName="ceilometer-notification-agent" Dec 06 05:57:52 crc kubenswrapper[4958]: I1206 05:57:52.408690 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="a16e4574-65dd-4a63-8513-29e1ffdbc503" containerName="ceilometer-central-agent" Dec 06 05:57:52 crc kubenswrapper[4958]: I1206 05:57:52.408773 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="a16e4574-65dd-4a63-8513-29e1ffdbc503" containerName="proxy-httpd" Dec 06 05:57:52 crc kubenswrapper[4958]: I1206 05:57:52.408864 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="a16e4574-65dd-4a63-8513-29e1ffdbc503" containerName="sg-core" Dec 06 05:57:52 crc kubenswrapper[4958]: I1206 05:57:52.411086 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 06 05:57:52 crc kubenswrapper[4958]: I1206 05:57:52.414527 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Dec 06 05:57:52 crc kubenswrapper[4958]: I1206 05:57:52.419759 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 06 05:57:52 crc kubenswrapper[4958]: I1206 05:57:52.421024 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 06 05:57:52 crc kubenswrapper[4958]: I1206 05:57:52.426175 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 06 05:57:52 crc kubenswrapper[4958]: I1206 05:57:52.509054 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wztnv\" (UniqueName: \"kubernetes.io/projected/428c09d2-3c2a-4562-9295-3cf3da179f40-kube-api-access-wztnv\") pod \"ceilometer-0\" (UID: \"428c09d2-3c2a-4562-9295-3cf3da179f40\") " pod="openstack/ceilometer-0" Dec 06 05:57:52 crc kubenswrapper[4958]: I1206 05:57:52.509092 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/428c09d2-3c2a-4562-9295-3cf3da179f40-run-httpd\") pod \"ceilometer-0\" (UID: \"428c09d2-3c2a-4562-9295-3cf3da179f40\") " pod="openstack/ceilometer-0" Dec 06 05:57:52 crc kubenswrapper[4958]: I1206 05:57:52.509112 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/428c09d2-3c2a-4562-9295-3cf3da179f40-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"428c09d2-3c2a-4562-9295-3cf3da179f40\") " pod="openstack/ceilometer-0" Dec 06 05:57:52 crc kubenswrapper[4958]: I1206 05:57:52.509138 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/428c09d2-3c2a-4562-9295-3cf3da179f40-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"428c09d2-3c2a-4562-9295-3cf3da179f40\") " pod="openstack/ceilometer-0" Dec 06 05:57:52 crc kubenswrapper[4958]: I1206 05:57:52.509298 4958 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/428c09d2-3c2a-4562-9295-3cf3da179f40-scripts\") pod \"ceilometer-0\" (UID: \"428c09d2-3c2a-4562-9295-3cf3da179f40\") " pod="openstack/ceilometer-0" Dec 06 05:57:52 crc kubenswrapper[4958]: I1206 05:57:52.509335 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/428c09d2-3c2a-4562-9295-3cf3da179f40-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"428c09d2-3c2a-4562-9295-3cf3da179f40\") " pod="openstack/ceilometer-0" Dec 06 05:57:52 crc kubenswrapper[4958]: I1206 05:57:52.509376 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/428c09d2-3c2a-4562-9295-3cf3da179f40-log-httpd\") pod \"ceilometer-0\" (UID: \"428c09d2-3c2a-4562-9295-3cf3da179f40\") " pod="openstack/ceilometer-0" Dec 06 05:57:52 crc kubenswrapper[4958]: I1206 05:57:52.509394 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/428c09d2-3c2a-4562-9295-3cf3da179f40-config-data\") pod \"ceilometer-0\" (UID: \"428c09d2-3c2a-4562-9295-3cf3da179f40\") " pod="openstack/ceilometer-0" Dec 06 05:57:52 crc kubenswrapper[4958]: I1206 05:57:52.611203 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/428c09d2-3c2a-4562-9295-3cf3da179f40-scripts\") pod \"ceilometer-0\" (UID: \"428c09d2-3c2a-4562-9295-3cf3da179f40\") " pod="openstack/ceilometer-0" Dec 06 05:57:52 crc kubenswrapper[4958]: I1206 05:57:52.611389 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/428c09d2-3c2a-4562-9295-3cf3da179f40-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"428c09d2-3c2a-4562-9295-3cf3da179f40\") " pod="openstack/ceilometer-0" Dec 06 05:57:52 crc kubenswrapper[4958]: I1206 05:57:52.611608 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/428c09d2-3c2a-4562-9295-3cf3da179f40-log-httpd\") pod \"ceilometer-0\" (UID: \"428c09d2-3c2a-4562-9295-3cf3da179f40\") " pod="openstack/ceilometer-0" Dec 06 05:57:52 crc kubenswrapper[4958]: I1206 05:57:52.611656 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/428c09d2-3c2a-4562-9295-3cf3da179f40-config-data\") pod \"ceilometer-0\" (UID: \"428c09d2-3c2a-4562-9295-3cf3da179f40\") " pod="openstack/ceilometer-0" Dec 06 05:57:52 crc kubenswrapper[4958]: I1206 05:57:52.611800 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wztnv\" (UniqueName: \"kubernetes.io/projected/428c09d2-3c2a-4562-9295-3cf3da179f40-kube-api-access-wztnv\") pod \"ceilometer-0\" (UID: \"428c09d2-3c2a-4562-9295-3cf3da179f40\") " pod="openstack/ceilometer-0" Dec 06 05:57:52 crc kubenswrapper[4958]: I1206 05:57:52.611843 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/428c09d2-3c2a-4562-9295-3cf3da179f40-run-httpd\") pod \"ceilometer-0\" (UID: \"428c09d2-3c2a-4562-9295-3cf3da179f40\") " pod="openstack/ceilometer-0" Dec 06 05:57:52 crc kubenswrapper[4958]: I1206 
05:57:52.611876 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/428c09d2-3c2a-4562-9295-3cf3da179f40-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"428c09d2-3c2a-4562-9295-3cf3da179f40\") " pod="openstack/ceilometer-0" Dec 06 05:57:52 crc kubenswrapper[4958]: I1206 05:57:52.611946 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/428c09d2-3c2a-4562-9295-3cf3da179f40-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"428c09d2-3c2a-4562-9295-3cf3da179f40\") " pod="openstack/ceilometer-0" Dec 06 05:57:52 crc kubenswrapper[4958]: I1206 05:57:52.612054 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/428c09d2-3c2a-4562-9295-3cf3da179f40-log-httpd\") pod \"ceilometer-0\" (UID: \"428c09d2-3c2a-4562-9295-3cf3da179f40\") " pod="openstack/ceilometer-0" Dec 06 05:57:52 crc kubenswrapper[4958]: I1206 05:57:52.612420 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/428c09d2-3c2a-4562-9295-3cf3da179f40-run-httpd\") pod \"ceilometer-0\" (UID: \"428c09d2-3c2a-4562-9295-3cf3da179f40\") " pod="openstack/ceilometer-0" Dec 06 05:57:52 crc kubenswrapper[4958]: I1206 05:57:52.615757 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/428c09d2-3c2a-4562-9295-3cf3da179f40-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"428c09d2-3c2a-4562-9295-3cf3da179f40\") " pod="openstack/ceilometer-0" Dec 06 05:57:52 crc kubenswrapper[4958]: I1206 05:57:52.615785 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/428c09d2-3c2a-4562-9295-3cf3da179f40-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"428c09d2-3c2a-4562-9295-3cf3da179f40\") " pod="openstack/ceilometer-0" Dec 06 05:57:52 crc kubenswrapper[4958]: I1206 05:57:52.616437 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/428c09d2-3c2a-4562-9295-3cf3da179f40-config-data\") pod \"ceilometer-0\" (UID: \"428c09d2-3c2a-4562-9295-3cf3da179f40\") " pod="openstack/ceilometer-0" Dec 06 05:57:52 crc kubenswrapper[4958]: I1206 05:57:52.616680 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/428c09d2-3c2a-4562-9295-3cf3da179f40-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"428c09d2-3c2a-4562-9295-3cf3da179f40\") " pod="openstack/ceilometer-0" Dec 06 05:57:52 crc kubenswrapper[4958]: I1206 05:57:52.616895 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/428c09d2-3c2a-4562-9295-3cf3da179f40-scripts\") pod \"ceilometer-0\" (UID: \"428c09d2-3c2a-4562-9295-3cf3da179f40\") " pod="openstack/ceilometer-0" Dec 06 05:57:52 crc kubenswrapper[4958]: I1206 05:57:52.629629 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wztnv\" (UniqueName: \"kubernetes.io/projected/428c09d2-3c2a-4562-9295-3cf3da179f40-kube-api-access-wztnv\") pod \"ceilometer-0\" (UID: \"428c09d2-3c2a-4562-9295-3cf3da179f40\") " pod="openstack/ceilometer-0" Dec 06 05:57:52 crc kubenswrapper[4958]: I1206 05:57:52.738804 4958 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 06 05:57:53 crc kubenswrapper[4958]: I1206 05:57:53.170024 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 06 05:57:53 crc kubenswrapper[4958]: W1206 05:57:53.173575 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod428c09d2_3c2a_4562_9295_3cf3da179f40.slice/crio-3d36e3d8b17c7321f4bd297bef061ebc003250e9c8248c39a6eb4380a322c889 WatchSource:0}: Error finding container 3d36e3d8b17c7321f4bd297bef061ebc003250e9c8248c39a6eb4380a322c889: Status 404 returned error can't find the container with id 3d36e3d8b17c7321f4bd297bef061ebc003250e9c8248c39a6eb4380a322c889 Dec 06 05:57:53 crc kubenswrapper[4958]: I1206 05:57:53.788434 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7216f9bf-c51a-419f-860d-4be494b44376" path="/var/lib/kubelet/pods/7216f9bf-c51a-419f-860d-4be494b44376/volumes" Dec 06 05:57:53 crc kubenswrapper[4958]: I1206 05:57:53.789407 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a16e4574-65dd-4a63-8513-29e1ffdbc503" path="/var/lib/kubelet/pods/a16e4574-65dd-4a63-8513-29e1ffdbc503/volumes" Dec 06 05:57:54 crc kubenswrapper[4958]: I1206 05:57:54.073683 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"428c09d2-3c2a-4562-9295-3cf3da179f40","Type":"ContainerStarted","Data":"3d36e3d8b17c7321f4bd297bef061ebc003250e9c8248c39a6eb4380a322c889"} Dec 06 05:57:55 crc kubenswrapper[4958]: I1206 05:57:55.085592 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"428c09d2-3c2a-4562-9295-3cf3da179f40","Type":"ContainerStarted","Data":"1998659182b691acde6a9b03c06c875ccf927d44d0bd8145e06aff139abcf6c2"} Dec 06 05:57:56 crc kubenswrapper[4958]: I1206 05:57:56.099327 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"428c09d2-3c2a-4562-9295-3cf3da179f40","Type":"ContainerStarted","Data":"ace4aacd221efc4a62a7e9d512c8d0c33b74c12423a38fe5127a7a82d220a54c"} Dec 06 05:57:56 crc kubenswrapper[4958]: I1206 05:57:56.367064 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 06 05:57:56 crc kubenswrapper[4958]: I1206 05:57:56.367350 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 06 05:57:57 crc kubenswrapper[4958]: I1206 05:57:57.111138 4958 generic.go:334] "Generic (PLEG): container finished" podID="379f6203-0ef8-471d-ba4a-d5ce940c313b" containerID="b10fce3af1ec85dcbfb986a57988eee4168485600cd3036d6031c29e56a6c190" exitCode=0 Dec 06 05:57:57 crc kubenswrapper[4958]: I1206 05:57:57.111216 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-fdxd4" event={"ID":"379f6203-0ef8-471d-ba4a-d5ce940c313b","Type":"ContainerDied","Data":"b10fce3af1ec85dcbfb986a57988eee4168485600cd3036d6031c29e56a6c190"} Dec 06 05:57:57 crc kubenswrapper[4958]: I1206 05:57:57.383706 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="8ee658c8-2a59-4501-972e-9be75427e277" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.223:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 06 05:57:57 crc kubenswrapper[4958]: I1206 05:57:57.383898 4958 prober.go:107] "Probe failed" probeType="Startup" 
pod="openstack/nova-api-0" podUID="8ee658c8-2a59-4501-972e-9be75427e277" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.223:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 06 05:57:57 crc kubenswrapper[4958]: I1206 05:57:57.575133 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 06 05:57:57 crc kubenswrapper[4958]: I1206 05:57:57.577614 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 06 05:57:57 crc kubenswrapper[4958]: I1206 05:57:57.588057 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 06 05:57:57 crc kubenswrapper[4958]: I1206 05:57:57.762345 4958 scope.go:117] "RemoveContainer" containerID="3316d358950549dbdd244c3eebc5b98b6095cbbbc7ab60159a9410ea95b5b950" Dec 06 05:57:57 crc kubenswrapper[4958]: E1206 05:57:57.762674 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 05:57:58 crc kubenswrapper[4958]: I1206 05:57:58.126076 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"428c09d2-3c2a-4562-9295-3cf3da179f40","Type":"ContainerStarted","Data":"3f6a3903595925da9eb0a9368187bcd53589273af41f2c8e1380cf19225fff60"} Dec 06 05:57:58 crc kubenswrapper[4958]: I1206 05:57:58.138336 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 06 05:57:58 crc kubenswrapper[4958]: I1206 05:57:58.561455 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-fdxd4" Dec 06 05:57:58 crc kubenswrapper[4958]: I1206 05:57:58.636974 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/379f6203-0ef8-471d-ba4a-d5ce940c313b-scripts\") pod \"379f6203-0ef8-471d-ba4a-d5ce940c313b\" (UID: \"379f6203-0ef8-471d-ba4a-d5ce940c313b\") " Dec 06 05:57:58 crc kubenswrapper[4958]: I1206 05:57:58.637174 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p7mx9\" (UniqueName: \"kubernetes.io/projected/379f6203-0ef8-471d-ba4a-d5ce940c313b-kube-api-access-p7mx9\") pod \"379f6203-0ef8-471d-ba4a-d5ce940c313b\" (UID: \"379f6203-0ef8-471d-ba4a-d5ce940c313b\") " Dec 06 05:57:58 crc kubenswrapper[4958]: I1206 05:57:58.637251 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/379f6203-0ef8-471d-ba4a-d5ce940c313b-config-data\") pod \"379f6203-0ef8-471d-ba4a-d5ce940c313b\" (UID: \"379f6203-0ef8-471d-ba4a-d5ce940c313b\") " Dec 06 05:57:58 crc kubenswrapper[4958]: I1206 05:57:58.637283 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/379f6203-0ef8-471d-ba4a-d5ce940c313b-combined-ca-bundle\") pod \"379f6203-0ef8-471d-ba4a-d5ce940c313b\" (UID: \"379f6203-0ef8-471d-ba4a-d5ce940c313b\") " Dec 06 05:57:58 crc kubenswrapper[4958]: I1206 05:57:58.650194 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/379f6203-0ef8-471d-ba4a-d5ce940c313b-scripts" (OuterVolumeSpecName: "scripts") pod "379f6203-0ef8-471d-ba4a-d5ce940c313b" (UID: "379f6203-0ef8-471d-ba4a-d5ce940c313b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:57:58 crc kubenswrapper[4958]: I1206 05:57:58.653715 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/379f6203-0ef8-471d-ba4a-d5ce940c313b-kube-api-access-p7mx9" (OuterVolumeSpecName: "kube-api-access-p7mx9") pod "379f6203-0ef8-471d-ba4a-d5ce940c313b" (UID: "379f6203-0ef8-471d-ba4a-d5ce940c313b"). InnerVolumeSpecName "kube-api-access-p7mx9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:57:58 crc kubenswrapper[4958]: I1206 05:57:58.676635 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/379f6203-0ef8-471d-ba4a-d5ce940c313b-config-data" (OuterVolumeSpecName: "config-data") pod "379f6203-0ef8-471d-ba4a-d5ce940c313b" (UID: "379f6203-0ef8-471d-ba4a-d5ce940c313b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:57:58 crc kubenswrapper[4958]: I1206 05:57:58.680502 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/379f6203-0ef8-471d-ba4a-d5ce940c313b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "379f6203-0ef8-471d-ba4a-d5ce940c313b" (UID: "379f6203-0ef8-471d-ba4a-d5ce940c313b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:57:58 crc kubenswrapper[4958]: I1206 05:57:58.741280 4958 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/379f6203-0ef8-471d-ba4a-d5ce940c313b-scripts\") on node \"crc\" DevicePath \"\"" Dec 06 05:57:58 crc kubenswrapper[4958]: I1206 05:57:58.741333 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p7mx9\" (UniqueName: \"kubernetes.io/projected/379f6203-0ef8-471d-ba4a-d5ce940c313b-kube-api-access-p7mx9\") on node \"crc\" DevicePath \"\"" Dec 06 05:57:58 crc kubenswrapper[4958]: I1206 05:57:58.741349 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/379f6203-0ef8-471d-ba4a-d5ce940c313b-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 05:57:58 crc kubenswrapper[4958]: I1206 05:57:58.741363 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/379f6203-0ef8-471d-ba4a-d5ce940c313b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 05:57:59 crc kubenswrapper[4958]: I1206 05:57:59.138257 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-fdxd4" Dec 06 05:57:59 crc kubenswrapper[4958]: I1206 05:57:59.138538 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-fdxd4" event={"ID":"379f6203-0ef8-471d-ba4a-d5ce940c313b","Type":"ContainerDied","Data":"7879dacf03f5a7b41b6b43dd8da55b0eeb8f9ed63846410443a95d330f18adf4"} Dec 06 05:57:59 crc kubenswrapper[4958]: I1206 05:57:59.139725 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7879dacf03f5a7b41b6b43dd8da55b0eeb8f9ed63846410443a95d330f18adf4" Dec 06 05:57:59 crc kubenswrapper[4958]: E1206 05:57:59.197721 4958 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/system.slice/rpm-ostreed.service\": RecentStats: unable to find data in memory cache]" Dec 06 05:57:59 crc kubenswrapper[4958]: I1206 05:57:59.349545 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 06 05:57:59 crc kubenswrapper[4958]: I1206 05:57:59.349889 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="8ee658c8-2a59-4501-972e-9be75427e277" containerName="nova-api-log" containerID="cri-o://3bdd764d7553f62b7dae5b95fe4d0c127ceece0cfb3a8646a6792416910c90e2" gracePeriod=30 Dec 06 05:57:59 crc kubenswrapper[4958]: I1206 05:57:59.350523 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="8ee658c8-2a59-4501-972e-9be75427e277" containerName="nova-api-api" containerID="cri-o://dbbcd1ed1fede27104b9df6f4a253ded06ee3079e6cf5b4edec69c583dc7a442" gracePeriod=30 Dec 06 05:57:59 crc kubenswrapper[4958]: I1206 05:57:59.362536 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 06 05:57:59 crc kubenswrapper[4958]: I1206 05:57:59.362796 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="197d0f3c-73b7-4a7f-a0a1-6bfda5d09c08" containerName="nova-scheduler-scheduler" containerID="cri-o://034579a02670791085fe7a3c9603bfcb0e8dcbaa4af0b10ba565f241546ba23b" gracePeriod=30 Dec 06 05:57:59 crc kubenswrapper[4958]: I1206 05:57:59.371227 4958 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/nova-metadata-0"] Dec 06 05:58:00 crc kubenswrapper[4958]: I1206 05:58:00.150276 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"428c09d2-3c2a-4562-9295-3cf3da179f40","Type":"ContainerStarted","Data":"6d62d920a8e9d399ae0e4c0112b3060b3bf7f3a7c60709be17ce6ded6e58f85f"} Dec 06 05:58:00 crc kubenswrapper[4958]: I1206 05:58:00.152001 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 06 05:58:00 crc kubenswrapper[4958]: I1206 05:58:00.156165 4958 generic.go:334] "Generic (PLEG): container finished" podID="8ee658c8-2a59-4501-972e-9be75427e277" containerID="3bdd764d7553f62b7dae5b95fe4d0c127ceece0cfb3a8646a6792416910c90e2" exitCode=143 Dec 06 05:58:00 crc kubenswrapper[4958]: I1206 05:58:00.156304 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8ee658c8-2a59-4501-972e-9be75427e277","Type":"ContainerDied","Data":"3bdd764d7553f62b7dae5b95fe4d0c127ceece0cfb3a8646a6792416910c90e2"} Dec 06 05:58:00 crc kubenswrapper[4958]: I1206 05:58:00.188762 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.525014748 podStartE2EDuration="8.188740716s" podCreationTimestamp="2025-12-06 05:57:52 +0000 UTC" firstStartedPulling="2025-12-06 05:57:53.176435336 +0000 UTC m=+1783.710206099" lastFinishedPulling="2025-12-06 05:57:59.840161304 +0000 UTC m=+1790.373932067" observedRunningTime="2025-12-06 05:58:00.181431347 +0000 UTC m=+1790.715202110" watchObservedRunningTime="2025-12-06 05:58:00.188740716 +0000 UTC m=+1790.722511469" Dec 06 05:58:00 crc kubenswrapper[4958]: I1206 05:58:00.610671 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 06 05:58:00 crc kubenswrapper[4958]: I1206 05:58:00.689932 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/197d0f3c-73b7-4a7f-a0a1-6bfda5d09c08-config-data\") pod \"197d0f3c-73b7-4a7f-a0a1-6bfda5d09c08\" (UID: \"197d0f3c-73b7-4a7f-a0a1-6bfda5d09c08\") " Dec 06 05:58:00 crc kubenswrapper[4958]: I1206 05:58:00.689977 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rrxj4\" (UniqueName: \"kubernetes.io/projected/197d0f3c-73b7-4a7f-a0a1-6bfda5d09c08-kube-api-access-rrxj4\") pod \"197d0f3c-73b7-4a7f-a0a1-6bfda5d09c08\" (UID: \"197d0f3c-73b7-4a7f-a0a1-6bfda5d09c08\") " Dec 06 05:58:00 crc kubenswrapper[4958]: I1206 05:58:00.690009 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/197d0f3c-73b7-4a7f-a0a1-6bfda5d09c08-combined-ca-bundle\") pod \"197d0f3c-73b7-4a7f-a0a1-6bfda5d09c08\" (UID: \"197d0f3c-73b7-4a7f-a0a1-6bfda5d09c08\") " Dec 06 05:58:00 crc kubenswrapper[4958]: I1206 05:58:00.694730 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/197d0f3c-73b7-4a7f-a0a1-6bfda5d09c08-kube-api-access-rrxj4" (OuterVolumeSpecName: "kube-api-access-rrxj4") pod "197d0f3c-73b7-4a7f-a0a1-6bfda5d09c08" (UID: "197d0f3c-73b7-4a7f-a0a1-6bfda5d09c08"). InnerVolumeSpecName "kube-api-access-rrxj4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:58:00 crc kubenswrapper[4958]: I1206 05:58:00.716828 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/197d0f3c-73b7-4a7f-a0a1-6bfda5d09c08-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "197d0f3c-73b7-4a7f-a0a1-6bfda5d09c08" (UID: "197d0f3c-73b7-4a7f-a0a1-6bfda5d09c08"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:58:00 crc kubenswrapper[4958]: I1206 05:58:00.721623 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/197d0f3c-73b7-4a7f-a0a1-6bfda5d09c08-config-data" (OuterVolumeSpecName: "config-data") pod "197d0f3c-73b7-4a7f-a0a1-6bfda5d09c08" (UID: "197d0f3c-73b7-4a7f-a0a1-6bfda5d09c08"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:58:00 crc kubenswrapper[4958]: I1206 05:58:00.793489 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/197d0f3c-73b7-4a7f-a0a1-6bfda5d09c08-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 05:58:00 crc kubenswrapper[4958]: I1206 05:58:00.793522 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rrxj4\" (UniqueName: \"kubernetes.io/projected/197d0f3c-73b7-4a7f-a0a1-6bfda5d09c08-kube-api-access-rrxj4\") on node \"crc\" DevicePath \"\"" Dec 06 05:58:00 crc kubenswrapper[4958]: I1206 05:58:00.793534 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/197d0f3c-73b7-4a7f-a0a1-6bfda5d09c08-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 05:58:01 crc kubenswrapper[4958]: I1206 05:58:01.167138 4958 generic.go:334] "Generic (PLEG): container finished" podID="197d0f3c-73b7-4a7f-a0a1-6bfda5d09c08" containerID="034579a02670791085fe7a3c9603bfcb0e8dcbaa4af0b10ba565f241546ba23b" exitCode=0 Dec 06 05:58:01 crc kubenswrapper[4958]: I1206 05:58:01.167181 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 06 05:58:01 crc kubenswrapper[4958]: I1206 05:58:01.167230 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"197d0f3c-73b7-4a7f-a0a1-6bfda5d09c08","Type":"ContainerDied","Data":"034579a02670791085fe7a3c9603bfcb0e8dcbaa4af0b10ba565f241546ba23b"} Dec 06 05:58:01 crc kubenswrapper[4958]: I1206 05:58:01.167264 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"197d0f3c-73b7-4a7f-a0a1-6bfda5d09c08","Type":"ContainerDied","Data":"908ccb538a867259d64b6e23ae63e463b0a89da79923d2b1bf9fbd5c035eeb39"} Dec 06 05:58:01 crc kubenswrapper[4958]: I1206 05:58:01.167280 4958 scope.go:117] "RemoveContainer" containerID="034579a02670791085fe7a3c9603bfcb0e8dcbaa4af0b10ba565f241546ba23b" Dec 06 05:58:01 crc kubenswrapper[4958]: I1206 05:58:01.167738 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="a9b53759-43fe-4c25-8787-2dce20353746" containerName="nova-metadata-log" containerID="cri-o://2f89376abd585fc29ab74242e3b4854bddb8c4617f780aed7a0cda5c117d7a27" gracePeriod=30 Dec 06 05:58:01 crc kubenswrapper[4958]: I1206 05:58:01.167833 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="a9b53759-43fe-4c25-8787-2dce20353746" containerName="nova-metadata-metadata" containerID="cri-o://9420d409a502bafe3f8125ebf771f7e0f92d0c2124a0fad09b858548abe99f4e" gracePeriod=30 Dec 06 05:58:01 crc kubenswrapper[4958]: I1206 05:58:01.200630 4958 scope.go:117] "RemoveContainer" containerID="034579a02670791085fe7a3c9603bfcb0e8dcbaa4af0b10ba565f241546ba23b" Dec 06 05:58:01 crc kubenswrapper[4958]: E1206 05:58:01.200960 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"034579a02670791085fe7a3c9603bfcb0e8dcbaa4af0b10ba565f241546ba23b\": container with ID starting with 034579a02670791085fe7a3c9603bfcb0e8dcbaa4af0b10ba565f241546ba23b not found: ID does not exist" containerID="034579a02670791085fe7a3c9603bfcb0e8dcbaa4af0b10ba565f241546ba23b" Dec 06 05:58:01 crc kubenswrapper[4958]: I1206 05:58:01.200988 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"034579a02670791085fe7a3c9603bfcb0e8dcbaa4af0b10ba565f241546ba23b"} err="failed to get container status \"034579a02670791085fe7a3c9603bfcb0e8dcbaa4af0b10ba565f241546ba23b\": rpc error: code = NotFound desc = could not find container \"034579a02670791085fe7a3c9603bfcb0e8dcbaa4af0b10ba565f241546ba23b\": container with ID starting with 034579a02670791085fe7a3c9603bfcb0e8dcbaa4af0b10ba565f241546ba23b not found: ID does not exist" Dec 06 05:58:01 crc kubenswrapper[4958]: I1206 05:58:01.213919 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 06 05:58:01 crc kubenswrapper[4958]: I1206 05:58:01.235923 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Dec 06 05:58:01 crc kubenswrapper[4958]: I1206 05:58:01.244740 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 06 05:58:01 crc kubenswrapper[4958]: E1206 05:58:01.245219 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="379f6203-0ef8-471d-ba4a-d5ce940c313b" containerName="nova-manage" Dec 06 05:58:01 crc kubenswrapper[4958]: I1206 05:58:01.245241 4958 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="379f6203-0ef8-471d-ba4a-d5ce940c313b" containerName="nova-manage" Dec 06 05:58:01 crc kubenswrapper[4958]: E1206 05:58:01.245289 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="197d0f3c-73b7-4a7f-a0a1-6bfda5d09c08" containerName="nova-scheduler-scheduler" Dec 06 05:58:01 crc kubenswrapper[4958]: I1206 05:58:01.245299 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="197d0f3c-73b7-4a7f-a0a1-6bfda5d09c08" containerName="nova-scheduler-scheduler" Dec 06 05:58:01 crc kubenswrapper[4958]: I1206 05:58:01.245549 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="197d0f3c-73b7-4a7f-a0a1-6bfda5d09c08" containerName="nova-scheduler-scheduler" Dec 06 05:58:01 crc kubenswrapper[4958]: I1206 05:58:01.245577 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="379f6203-0ef8-471d-ba4a-d5ce940c313b" containerName="nova-manage" Dec 06 05:58:01 crc kubenswrapper[4958]: I1206 05:58:01.246297 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 06 05:58:01 crc kubenswrapper[4958]: I1206 05:58:01.251347 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 06 05:58:01 crc kubenswrapper[4958]: I1206 05:58:01.254140 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 06 05:58:01 crc kubenswrapper[4958]: I1206 05:58:01.303042 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61ffc4d3-131a-4ef1-a712-936e7c609cbc-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"61ffc4d3-131a-4ef1-a712-936e7c609cbc\") " pod="openstack/nova-scheduler-0" Dec 06 05:58:01 crc kubenswrapper[4958]: I1206 05:58:01.303345 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61ffc4d3-131a-4ef1-a712-936e7c609cbc-config-data\") pod \"nova-scheduler-0\" (UID: \"61ffc4d3-131a-4ef1-a712-936e7c609cbc\") " pod="openstack/nova-scheduler-0" Dec 06 05:58:01 crc kubenswrapper[4958]: I1206 05:58:01.303512 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8ngw\" (UniqueName: \"kubernetes.io/projected/61ffc4d3-131a-4ef1-a712-936e7c609cbc-kube-api-access-f8ngw\") pod \"nova-scheduler-0\" (UID: \"61ffc4d3-131a-4ef1-a712-936e7c609cbc\") " pod="openstack/nova-scheduler-0" Dec 06 05:58:01 crc kubenswrapper[4958]: I1206 05:58:01.406134 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61ffc4d3-131a-4ef1-a712-936e7c609cbc-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"61ffc4d3-131a-4ef1-a712-936e7c609cbc\") " pod="openstack/nova-scheduler-0" Dec 06 05:58:01 crc kubenswrapper[4958]: I1206 05:58:01.406207 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61ffc4d3-131a-4ef1-a712-936e7c609cbc-config-data\") pod \"nova-scheduler-0\" (UID: \"61ffc4d3-131a-4ef1-a712-936e7c609cbc\") " pod="openstack/nova-scheduler-0" Dec 06 05:58:01 crc kubenswrapper[4958]: I1206 05:58:01.406253 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8ngw\" (UniqueName: \"kubernetes.io/projected/61ffc4d3-131a-4ef1-a712-936e7c609cbc-kube-api-access-f8ngw\") 
pod \"nova-scheduler-0\" (UID: \"61ffc4d3-131a-4ef1-a712-936e7c609cbc\") " pod="openstack/nova-scheduler-0" Dec 06 05:58:01 crc kubenswrapper[4958]: I1206 05:58:01.410880 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61ffc4d3-131a-4ef1-a712-936e7c609cbc-config-data\") pod \"nova-scheduler-0\" (UID: \"61ffc4d3-131a-4ef1-a712-936e7c609cbc\") " pod="openstack/nova-scheduler-0" Dec 06 05:58:01 crc kubenswrapper[4958]: I1206 05:58:01.411026 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61ffc4d3-131a-4ef1-a712-936e7c609cbc-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"61ffc4d3-131a-4ef1-a712-936e7c609cbc\") " pod="openstack/nova-scheduler-0" Dec 06 05:58:01 crc kubenswrapper[4958]: I1206 05:58:01.424648 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8ngw\" (UniqueName: \"kubernetes.io/projected/61ffc4d3-131a-4ef1-a712-936e7c609cbc-kube-api-access-f8ngw\") pod \"nova-scheduler-0\" (UID: \"61ffc4d3-131a-4ef1-a712-936e7c609cbc\") " pod="openstack/nova-scheduler-0" Dec 06 05:58:01 crc kubenswrapper[4958]: I1206 05:58:01.567714 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 06 05:58:01 crc kubenswrapper[4958]: I1206 05:58:01.777176 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="197d0f3c-73b7-4a7f-a0a1-6bfda5d09c08" path="/var/lib/kubelet/pods/197d0f3c-73b7-4a7f-a0a1-6bfda5d09c08/volumes" Dec 06 05:58:02 crc kubenswrapper[4958]: I1206 05:58:02.070365 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 06 05:58:02 crc kubenswrapper[4958]: I1206 05:58:02.180997 4958 generic.go:334] "Generic (PLEG): container finished" podID="8ee658c8-2a59-4501-972e-9be75427e277" containerID="dbbcd1ed1fede27104b9df6f4a253ded06ee3079e6cf5b4edec69c583dc7a442" exitCode=0 Dec 06 05:58:02 crc kubenswrapper[4958]: I1206 05:58:02.181089 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8ee658c8-2a59-4501-972e-9be75427e277","Type":"ContainerDied","Data":"dbbcd1ed1fede27104b9df6f4a253ded06ee3079e6cf5b4edec69c583dc7a442"} Dec 06 05:58:02 crc kubenswrapper[4958]: I1206 05:58:02.185447 4958 generic.go:334] "Generic (PLEG): container finished" podID="a9b53759-43fe-4c25-8787-2dce20353746" containerID="2f89376abd585fc29ab74242e3b4854bddb8c4617f780aed7a0cda5c117d7a27" exitCode=143 Dec 06 05:58:02 crc kubenswrapper[4958]: I1206 05:58:02.185508 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a9b53759-43fe-4c25-8787-2dce20353746","Type":"ContainerDied","Data":"2f89376abd585fc29ab74242e3b4854bddb8c4617f780aed7a0cda5c117d7a27"} Dec 06 05:58:02 crc kubenswrapper[4958]: I1206 05:58:02.186845 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"61ffc4d3-131a-4ef1-a712-936e7c609cbc","Type":"ContainerStarted","Data":"142e8648d2482d9bccfee2a86547e63624d74d25d9e7b188f471c93e66e830f9"} Dec 06 05:58:02 crc kubenswrapper[4958]: I1206 05:58:02.569232 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="a9b53759-43fe-4c25-8787-2dce20353746" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.220:8775/\": dial tcp 10.217.0.220:8775: connect: connection refused" Dec 06 05:58:02 crc 
kubenswrapper[4958]: I1206 05:58:02.569247 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="a9b53759-43fe-4c25-8787-2dce20353746" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.220:8775/\": dial tcp 10.217.0.220:8775: connect: connection refused" Dec 06 05:58:03 crc kubenswrapper[4958]: I1206 05:58:03.105823 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 06 05:58:03 crc kubenswrapper[4958]: I1206 05:58:03.198123 4958 generic.go:334] "Generic (PLEG): container finished" podID="a9b53759-43fe-4c25-8787-2dce20353746" containerID="9420d409a502bafe3f8125ebf771f7e0f92d0c2124a0fad09b858548abe99f4e" exitCode=0 Dec 06 05:58:03 crc kubenswrapper[4958]: I1206 05:58:03.198282 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a9b53759-43fe-4c25-8787-2dce20353746","Type":"ContainerDied","Data":"9420d409a502bafe3f8125ebf771f7e0f92d0c2124a0fad09b858548abe99f4e"} Dec 06 05:58:03 crc kubenswrapper[4958]: I1206 05:58:03.200275 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"61ffc4d3-131a-4ef1-a712-936e7c609cbc","Type":"ContainerStarted","Data":"b5dbfa14f759e16051b6f5f694fb54935e6d15f8bb88f68731f654aafb2a97c7"} Dec 06 05:58:03 crc kubenswrapper[4958]: I1206 05:58:03.202338 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8ee658c8-2a59-4501-972e-9be75427e277","Type":"ContainerDied","Data":"fec5c21ed61cee1d8be6f258b3ccc9af3b61efe6ffcd7de86d25c644a5963e5e"} Dec 06 05:58:03 crc kubenswrapper[4958]: I1206 05:58:03.202386 4958 scope.go:117] "RemoveContainer" containerID="dbbcd1ed1fede27104b9df6f4a253ded06ee3079e6cf5b4edec69c583dc7a442" Dec 06 05:58:03 crc kubenswrapper[4958]: I1206 05:58:03.202543 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 06 05:58:03 crc kubenswrapper[4958]: I1206 05:58:03.226795 4958 scope.go:117] "RemoveContainer" containerID="3bdd764d7553f62b7dae5b95fe4d0c127ceece0cfb3a8646a6792416910c90e2" Dec 06 05:58:03 crc kubenswrapper[4958]: I1206 05:58:03.255686 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ee658c8-2a59-4501-972e-9be75427e277-public-tls-certs\") pod \"8ee658c8-2a59-4501-972e-9be75427e277\" (UID: \"8ee658c8-2a59-4501-972e-9be75427e277\") " Dec 06 05:58:03 crc kubenswrapper[4958]: I1206 05:58:03.255806 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ee658c8-2a59-4501-972e-9be75427e277-combined-ca-bundle\") pod \"8ee658c8-2a59-4501-972e-9be75427e277\" (UID: \"8ee658c8-2a59-4501-972e-9be75427e277\") " Dec 06 05:58:03 crc kubenswrapper[4958]: I1206 05:58:03.255867 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6xdfk\" (UniqueName: \"kubernetes.io/projected/8ee658c8-2a59-4501-972e-9be75427e277-kube-api-access-6xdfk\") pod \"8ee658c8-2a59-4501-972e-9be75427e277\" (UID: \"8ee658c8-2a59-4501-972e-9be75427e277\") " Dec 06 05:58:03 crc kubenswrapper[4958]: I1206 05:58:03.255970 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ee658c8-2a59-4501-972e-9be75427e277-config-data\") pod \"8ee658c8-2a59-4501-972e-9be75427e277\" (UID: \"8ee658c8-2a59-4501-972e-9be75427e277\") " Dec 06 05:58:03 crc kubenswrapper[4958]: I1206 05:58:03.256029 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ee658c8-2a59-4501-972e-9be75427e277-logs\") pod \"8ee658c8-2a59-4501-972e-9be75427e277\" (UID: \"8ee658c8-2a59-4501-972e-9be75427e277\") " Dec 06 05:58:03 crc kubenswrapper[4958]: I1206 05:58:03.256079 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ee658c8-2a59-4501-972e-9be75427e277-internal-tls-certs\") pod \"8ee658c8-2a59-4501-972e-9be75427e277\" (UID: \"8ee658c8-2a59-4501-972e-9be75427e277\") " Dec 06 05:58:03 crc kubenswrapper[4958]: I1206 05:58:03.257362 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ee658c8-2a59-4501-972e-9be75427e277-logs" (OuterVolumeSpecName: "logs") pod "8ee658c8-2a59-4501-972e-9be75427e277" (UID: "8ee658c8-2a59-4501-972e-9be75427e277"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 05:58:03 crc kubenswrapper[4958]: I1206 05:58:03.261687 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ee658c8-2a59-4501-972e-9be75427e277-kube-api-access-6xdfk" (OuterVolumeSpecName: "kube-api-access-6xdfk") pod "8ee658c8-2a59-4501-972e-9be75427e277" (UID: "8ee658c8-2a59-4501-972e-9be75427e277"). InnerVolumeSpecName "kube-api-access-6xdfk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:58:03 crc kubenswrapper[4958]: I1206 05:58:03.284715 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ee658c8-2a59-4501-972e-9be75427e277-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8ee658c8-2a59-4501-972e-9be75427e277" (UID: "8ee658c8-2a59-4501-972e-9be75427e277"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:58:03 crc kubenswrapper[4958]: I1206 05:58:03.286456 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ee658c8-2a59-4501-972e-9be75427e277-config-data" (OuterVolumeSpecName: "config-data") pod "8ee658c8-2a59-4501-972e-9be75427e277" (UID: "8ee658c8-2a59-4501-972e-9be75427e277"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:58:03 crc kubenswrapper[4958]: I1206 05:58:03.322238 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ee658c8-2a59-4501-972e-9be75427e277-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "8ee658c8-2a59-4501-972e-9be75427e277" (UID: "8ee658c8-2a59-4501-972e-9be75427e277"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:58:03 crc kubenswrapper[4958]: I1206 05:58:03.322310 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ee658c8-2a59-4501-972e-9be75427e277-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "8ee658c8-2a59-4501-972e-9be75427e277" (UID: "8ee658c8-2a59-4501-972e-9be75427e277"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:58:03 crc kubenswrapper[4958]: I1206 05:58:03.358733 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ee658c8-2a59-4501-972e-9be75427e277-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 05:58:03 crc kubenswrapper[4958]: I1206 05:58:03.358805 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6xdfk\" (UniqueName: \"kubernetes.io/projected/8ee658c8-2a59-4501-972e-9be75427e277-kube-api-access-6xdfk\") on node \"crc\" DevicePath \"\"" Dec 06 05:58:03 crc kubenswrapper[4958]: I1206 05:58:03.358827 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8ee658c8-2a59-4501-972e-9be75427e277-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 05:58:03 crc kubenswrapper[4958]: I1206 05:58:03.358846 4958 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8ee658c8-2a59-4501-972e-9be75427e277-logs\") on node \"crc\" DevicePath \"\"" Dec 06 05:58:03 crc kubenswrapper[4958]: I1206 05:58:03.358862 4958 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ee658c8-2a59-4501-972e-9be75427e277-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 06 05:58:03 crc kubenswrapper[4958]: I1206 05:58:03.358879 4958 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ee658c8-2a59-4501-972e-9be75427e277-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 06 05:58:03 crc kubenswrapper[4958]: I1206 05:58:03.561074 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-api-0"] Dec 06 05:58:03 crc kubenswrapper[4958]: I1206 05:58:03.581331 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 06 05:58:03 crc kubenswrapper[4958]: I1206 05:58:03.594908 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 06 05:58:03 crc kubenswrapper[4958]: E1206 05:58:03.595440 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ee658c8-2a59-4501-972e-9be75427e277" containerName="nova-api-api" Dec 06 05:58:03 crc kubenswrapper[4958]: I1206 05:58:03.595468 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ee658c8-2a59-4501-972e-9be75427e277" containerName="nova-api-api" Dec 06 05:58:03 crc kubenswrapper[4958]: E1206 05:58:03.595498 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ee658c8-2a59-4501-972e-9be75427e277" containerName="nova-api-log" Dec 06 05:58:03 crc kubenswrapper[4958]: I1206 05:58:03.595508 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ee658c8-2a59-4501-972e-9be75427e277" containerName="nova-api-log" Dec 06 05:58:03 crc kubenswrapper[4958]: I1206 05:58:03.595769 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ee658c8-2a59-4501-972e-9be75427e277" containerName="nova-api-api" Dec 06 05:58:03 crc kubenswrapper[4958]: I1206 05:58:03.595814 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ee658c8-2a59-4501-972e-9be75427e277" containerName="nova-api-log" Dec 06 05:58:03 crc kubenswrapper[4958]: I1206 05:58:03.597321 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 06 05:58:03 crc kubenswrapper[4958]: I1206 05:58:03.602002 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Dec 06 05:58:03 crc kubenswrapper[4958]: I1206 05:58:03.602313 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Dec 06 05:58:03 crc kubenswrapper[4958]: I1206 05:58:03.602665 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 06 05:58:03 crc kubenswrapper[4958]: I1206 05:58:03.624572 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 06 05:58:03 crc kubenswrapper[4958]: I1206 05:58:03.664355 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f867f367-20bd-467e-b102-512f96506fa3-public-tls-certs\") pod \"nova-api-0\" (UID: \"f867f367-20bd-467e-b102-512f96506fa3\") " pod="openstack/nova-api-0" Dec 06 05:58:03 crc kubenswrapper[4958]: I1206 05:58:03.664429 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f867f367-20bd-467e-b102-512f96506fa3-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f867f367-20bd-467e-b102-512f96506fa3\") " pod="openstack/nova-api-0" Dec 06 05:58:03 crc kubenswrapper[4958]: I1206 05:58:03.664493 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f867f367-20bd-467e-b102-512f96506fa3-config-data\") pod \"nova-api-0\" (UID: \"f867f367-20bd-467e-b102-512f96506fa3\") " pod="openstack/nova-api-0" Dec 06 05:58:03 crc kubenswrapper[4958]: I1206 05:58:03.664545 4958 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f867f367-20bd-467e-b102-512f96506fa3-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f867f367-20bd-467e-b102-512f96506fa3\") " pod="openstack/nova-api-0" Dec 06 05:58:03 crc kubenswrapper[4958]: I1206 05:58:03.664599 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ph7wq\" (UniqueName: \"kubernetes.io/projected/f867f367-20bd-467e-b102-512f96506fa3-kube-api-access-ph7wq\") pod \"nova-api-0\" (UID: \"f867f367-20bd-467e-b102-512f96506fa3\") " pod="openstack/nova-api-0" Dec 06 05:58:03 crc kubenswrapper[4958]: I1206 05:58:03.664647 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f867f367-20bd-467e-b102-512f96506fa3-logs\") pod \"nova-api-0\" (UID: \"f867f367-20bd-467e-b102-512f96506fa3\") " pod="openstack/nova-api-0" Dec 06 05:58:03 crc kubenswrapper[4958]: I1206 05:58:03.766419 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f867f367-20bd-467e-b102-512f96506fa3-public-tls-certs\") pod \"nova-api-0\" (UID: \"f867f367-20bd-467e-b102-512f96506fa3\") " pod="openstack/nova-api-0" Dec 06 05:58:03 crc kubenswrapper[4958]: I1206 05:58:03.766498 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f867f367-20bd-467e-b102-512f96506fa3-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f867f367-20bd-467e-b102-512f96506fa3\") " pod="openstack/nova-api-0" Dec 06 05:58:03 crc kubenswrapper[4958]: I1206 05:58:03.766527 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f867f367-20bd-467e-b102-512f96506fa3-config-data\") pod \"nova-api-0\" (UID: \"f867f367-20bd-467e-b102-512f96506fa3\") " pod="openstack/nova-api-0" Dec 06 05:58:03 crc kubenswrapper[4958]: I1206 05:58:03.766558 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f867f367-20bd-467e-b102-512f96506fa3-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f867f367-20bd-467e-b102-512f96506fa3\") " pod="openstack/nova-api-0" Dec 06 05:58:03 crc kubenswrapper[4958]: I1206 05:58:03.766594 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ph7wq\" (UniqueName: \"kubernetes.io/projected/f867f367-20bd-467e-b102-512f96506fa3-kube-api-access-ph7wq\") pod \"nova-api-0\" (UID: \"f867f367-20bd-467e-b102-512f96506fa3\") " pod="openstack/nova-api-0" Dec 06 05:58:03 crc kubenswrapper[4958]: I1206 05:58:03.766621 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f867f367-20bd-467e-b102-512f96506fa3-logs\") pod \"nova-api-0\" (UID: \"f867f367-20bd-467e-b102-512f96506fa3\") " pod="openstack/nova-api-0" Dec 06 05:58:03 crc kubenswrapper[4958]: I1206 05:58:03.766994 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f867f367-20bd-467e-b102-512f96506fa3-logs\") pod \"nova-api-0\" (UID: \"f867f367-20bd-467e-b102-512f96506fa3\") " pod="openstack/nova-api-0" Dec 06 05:58:03 crc kubenswrapper[4958]: I1206 05:58:03.771321 4958 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f867f367-20bd-467e-b102-512f96506fa3-public-tls-certs\") pod \"nova-api-0\" (UID: \"f867f367-20bd-467e-b102-512f96506fa3\") " pod="openstack/nova-api-0" Dec 06 05:58:03 crc kubenswrapper[4958]: I1206 05:58:03.772162 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f867f367-20bd-467e-b102-512f96506fa3-config-data\") pod \"nova-api-0\" (UID: \"f867f367-20bd-467e-b102-512f96506fa3\") " pod="openstack/nova-api-0" Dec 06 05:58:03 crc kubenswrapper[4958]: I1206 05:58:03.773172 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f867f367-20bd-467e-b102-512f96506fa3-internal-tls-certs\") pod \"nova-api-0\" (UID: \"f867f367-20bd-467e-b102-512f96506fa3\") " pod="openstack/nova-api-0" Dec 06 05:58:03 crc kubenswrapper[4958]: I1206 05:58:03.775412 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ee658c8-2a59-4501-972e-9be75427e277" path="/var/lib/kubelet/pods/8ee658c8-2a59-4501-972e-9be75427e277/volumes" Dec 06 05:58:03 crc kubenswrapper[4958]: I1206 05:58:03.783938 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f867f367-20bd-467e-b102-512f96506fa3-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"f867f367-20bd-467e-b102-512f96506fa3\") " pod="openstack/nova-api-0" Dec 06 05:58:03 crc kubenswrapper[4958]: I1206 05:58:03.785281 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ph7wq\" (UniqueName: \"kubernetes.io/projected/f867f367-20bd-467e-b102-512f96506fa3-kube-api-access-ph7wq\") pod \"nova-api-0\" (UID: \"f867f367-20bd-467e-b102-512f96506fa3\") " pod="openstack/nova-api-0" Dec 06 05:58:03 crc kubenswrapper[4958]: I1206 05:58:03.921963 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 06 05:58:04 crc kubenswrapper[4958]: I1206 05:58:04.038444 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 06 05:58:04 crc kubenswrapper[4958]: I1206 05:58:04.174749 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9b53759-43fe-4c25-8787-2dce20353746-combined-ca-bundle\") pod \"a9b53759-43fe-4c25-8787-2dce20353746\" (UID: \"a9b53759-43fe-4c25-8787-2dce20353746\") " Dec 06 05:58:04 crc kubenswrapper[4958]: I1206 05:58:04.174816 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a9b53759-43fe-4c25-8787-2dce20353746-logs\") pod \"a9b53759-43fe-4c25-8787-2dce20353746\" (UID: \"a9b53759-43fe-4c25-8787-2dce20353746\") " Dec 06 05:58:04 crc kubenswrapper[4958]: I1206 05:58:04.174850 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c7slw\" (UniqueName: \"kubernetes.io/projected/a9b53759-43fe-4c25-8787-2dce20353746-kube-api-access-c7slw\") pod \"a9b53759-43fe-4c25-8787-2dce20353746\" (UID: \"a9b53759-43fe-4c25-8787-2dce20353746\") " Dec 06 05:58:04 crc kubenswrapper[4958]: I1206 05:58:04.175096 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9b53759-43fe-4c25-8787-2dce20353746-config-data\") pod \"a9b53759-43fe-4c25-8787-2dce20353746\" (UID: \"a9b53759-43fe-4c25-8787-2dce20353746\") " Dec 06 05:58:04 crc kubenswrapper[4958]: I1206 05:58:04.175141 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9b53759-43fe-4c25-8787-2dce20353746-nova-metadata-tls-certs\") pod \"a9b53759-43fe-4c25-8787-2dce20353746\" (UID: \"a9b53759-43fe-4c25-8787-2dce20353746\") " Dec 06 05:58:04 crc kubenswrapper[4958]: I1206 05:58:04.177088 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a9b53759-43fe-4c25-8787-2dce20353746-logs" (OuterVolumeSpecName: "logs") pod "a9b53759-43fe-4c25-8787-2dce20353746" (UID: "a9b53759-43fe-4c25-8787-2dce20353746"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 05:58:04 crc kubenswrapper[4958]: I1206 05:58:04.181266 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9b53759-43fe-4c25-8787-2dce20353746-kube-api-access-c7slw" (OuterVolumeSpecName: "kube-api-access-c7slw") pod "a9b53759-43fe-4c25-8787-2dce20353746" (UID: "a9b53759-43fe-4c25-8787-2dce20353746"). InnerVolumeSpecName "kube-api-access-c7slw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:58:04 crc kubenswrapper[4958]: I1206 05:58:04.206586 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9b53759-43fe-4c25-8787-2dce20353746-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a9b53759-43fe-4c25-8787-2dce20353746" (UID: "a9b53759-43fe-4c25-8787-2dce20353746"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:58:04 crc kubenswrapper[4958]: I1206 05:58:04.221784 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 06 05:58:04 crc kubenswrapper[4958]: I1206 05:58:04.222018 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"a9b53759-43fe-4c25-8787-2dce20353746","Type":"ContainerDied","Data":"0aa43bf19a86154c65d61c6d99de23ee4fa6c98a070b496a200bce2289763278"} Dec 06 05:58:04 crc kubenswrapper[4958]: I1206 05:58:04.222060 4958 scope.go:117] "RemoveContainer" containerID="9420d409a502bafe3f8125ebf771f7e0f92d0c2124a0fad09b858548abe99f4e" Dec 06 05:58:04 crc kubenswrapper[4958]: I1206 05:58:04.251771 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.251752753 podStartE2EDuration="3.251752753s" podCreationTimestamp="2025-12-06 05:58:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:58:04.250250831 +0000 UTC m=+1794.784021594" watchObservedRunningTime="2025-12-06 05:58:04.251752753 +0000 UTC m=+1794.785523516" Dec 06 05:58:04 crc kubenswrapper[4958]: I1206 05:58:04.276223 4958 scope.go:117] "RemoveContainer" containerID="2f89376abd585fc29ab74242e3b4854bddb8c4617f780aed7a0cda5c117d7a27" Dec 06 05:58:04 crc kubenswrapper[4958]: I1206 05:58:04.277932 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9b53759-43fe-4c25-8787-2dce20353746-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 05:58:04 crc kubenswrapper[4958]: I1206 05:58:04.277961 4958 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a9b53759-43fe-4c25-8787-2dce20353746-logs\") on node \"crc\" DevicePath \"\"" Dec 06 05:58:04 crc kubenswrapper[4958]: I1206 05:58:04.277994 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c7slw\" (UniqueName: \"kubernetes.io/projected/a9b53759-43fe-4c25-8787-2dce20353746-kube-api-access-c7slw\") on node \"crc\" DevicePath \"\"" Dec 06 05:58:04 crc kubenswrapper[4958]: I1206 05:58:04.279671 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9b53759-43fe-4c25-8787-2dce20353746-config-data" (OuterVolumeSpecName: "config-data") pod "a9b53759-43fe-4c25-8787-2dce20353746" (UID: "a9b53759-43fe-4c25-8787-2dce20353746"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:58:04 crc kubenswrapper[4958]: I1206 05:58:04.289411 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9b53759-43fe-4c25-8787-2dce20353746-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "a9b53759-43fe-4c25-8787-2dce20353746" (UID: "a9b53759-43fe-4c25-8787-2dce20353746"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:58:04 crc kubenswrapper[4958]: I1206 05:58:04.380336 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9b53759-43fe-4c25-8787-2dce20353746-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 05:58:04 crc kubenswrapper[4958]: I1206 05:58:04.380371 4958 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/a9b53759-43fe-4c25-8787-2dce20353746-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 06 05:58:04 crc kubenswrapper[4958]: I1206 05:58:04.392590 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 06 05:58:04 crc kubenswrapper[4958]: I1206 05:58:04.559763 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 06 05:58:04 crc kubenswrapper[4958]: I1206 05:58:04.573833 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Dec 06 05:58:04 crc kubenswrapper[4958]: I1206 05:58:04.584980 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 06 05:58:04 crc kubenswrapper[4958]: E1206 05:58:04.585379 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9b53759-43fe-4c25-8787-2dce20353746" containerName="nova-metadata-metadata" Dec 06 05:58:04 crc kubenswrapper[4958]: I1206 05:58:04.585456 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9b53759-43fe-4c25-8787-2dce20353746" containerName="nova-metadata-metadata" Dec 06 05:58:04 crc kubenswrapper[4958]: E1206 05:58:04.585495 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9b53759-43fe-4c25-8787-2dce20353746" containerName="nova-metadata-log" Dec 06 05:58:04 crc kubenswrapper[4958]: I1206 05:58:04.585503 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9b53759-43fe-4c25-8787-2dce20353746" containerName="nova-metadata-log" Dec 06 05:58:04 crc kubenswrapper[4958]: I1206 05:58:04.585729 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9b53759-43fe-4c25-8787-2dce20353746" containerName="nova-metadata-metadata" Dec 06 05:58:04 crc kubenswrapper[4958]: I1206 05:58:04.585765 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9b53759-43fe-4c25-8787-2dce20353746" containerName="nova-metadata-log" Dec 06 05:58:04 crc kubenswrapper[4958]: I1206 05:58:04.586875 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 06 05:58:04 crc kubenswrapper[4958]: I1206 05:58:04.593189 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Dec 06 05:58:04 crc kubenswrapper[4958]: I1206 05:58:04.594389 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 06 05:58:04 crc kubenswrapper[4958]: I1206 05:58:04.610686 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 06 05:58:04 crc kubenswrapper[4958]: I1206 05:58:04.688174 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/630ea894-bfed-4bda-b5b1-260f314e2f22-config-data\") pod \"nova-metadata-0\" (UID: \"630ea894-bfed-4bda-b5b1-260f314e2f22\") " pod="openstack/nova-metadata-0" Dec 06 05:58:04 crc kubenswrapper[4958]: I1206 05:58:04.688233 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/630ea894-bfed-4bda-b5b1-260f314e2f22-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"630ea894-bfed-4bda-b5b1-260f314e2f22\") " pod="openstack/nova-metadata-0" Dec 06 05:58:04 crc kubenswrapper[4958]: I1206 05:58:04.688274 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/630ea894-bfed-4bda-b5b1-260f314e2f22-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"630ea894-bfed-4bda-b5b1-260f314e2f22\") " pod="openstack/nova-metadata-0" Dec 06 05:58:04 crc kubenswrapper[4958]: I1206 05:58:04.688299 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qk52\" (UniqueName: \"kubernetes.io/projected/630ea894-bfed-4bda-b5b1-260f314e2f22-kube-api-access-9qk52\") pod \"nova-metadata-0\" (UID: \"630ea894-bfed-4bda-b5b1-260f314e2f22\") " pod="openstack/nova-metadata-0" Dec 06 05:58:04 crc kubenswrapper[4958]: I1206 05:58:04.688340 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/630ea894-bfed-4bda-b5b1-260f314e2f22-logs\") pod \"nova-metadata-0\" (UID: \"630ea894-bfed-4bda-b5b1-260f314e2f22\") " pod="openstack/nova-metadata-0" Dec 06 05:58:04 crc kubenswrapper[4958]: I1206 05:58:04.790534 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/630ea894-bfed-4bda-b5b1-260f314e2f22-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"630ea894-bfed-4bda-b5b1-260f314e2f22\") " pod="openstack/nova-metadata-0" Dec 06 05:58:04 crc kubenswrapper[4958]: I1206 05:58:04.790584 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qk52\" (UniqueName: \"kubernetes.io/projected/630ea894-bfed-4bda-b5b1-260f314e2f22-kube-api-access-9qk52\") pod \"nova-metadata-0\" (UID: \"630ea894-bfed-4bda-b5b1-260f314e2f22\") " pod="openstack/nova-metadata-0" Dec 06 05:58:04 crc kubenswrapper[4958]: I1206 05:58:04.790628 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/630ea894-bfed-4bda-b5b1-260f314e2f22-logs\") pod \"nova-metadata-0\" (UID: \"630ea894-bfed-4bda-b5b1-260f314e2f22\") " 
pod="openstack/nova-metadata-0" Dec 06 05:58:04 crc kubenswrapper[4958]: I1206 05:58:04.790724 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/630ea894-bfed-4bda-b5b1-260f314e2f22-config-data\") pod \"nova-metadata-0\" (UID: \"630ea894-bfed-4bda-b5b1-260f314e2f22\") " pod="openstack/nova-metadata-0" Dec 06 05:58:04 crc kubenswrapper[4958]: I1206 05:58:04.790756 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/630ea894-bfed-4bda-b5b1-260f314e2f22-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"630ea894-bfed-4bda-b5b1-260f314e2f22\") " pod="openstack/nova-metadata-0" Dec 06 05:58:04 crc kubenswrapper[4958]: I1206 05:58:04.791757 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/630ea894-bfed-4bda-b5b1-260f314e2f22-logs\") pod \"nova-metadata-0\" (UID: \"630ea894-bfed-4bda-b5b1-260f314e2f22\") " pod="openstack/nova-metadata-0" Dec 06 05:58:04 crc kubenswrapper[4958]: I1206 05:58:04.794602 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/630ea894-bfed-4bda-b5b1-260f314e2f22-config-data\") pod \"nova-metadata-0\" (UID: \"630ea894-bfed-4bda-b5b1-260f314e2f22\") " pod="openstack/nova-metadata-0" Dec 06 05:58:04 crc kubenswrapper[4958]: I1206 05:58:04.797952 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/630ea894-bfed-4bda-b5b1-260f314e2f22-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"630ea894-bfed-4bda-b5b1-260f314e2f22\") " pod="openstack/nova-metadata-0" Dec 06 05:58:04 crc kubenswrapper[4958]: I1206 05:58:04.803731 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/630ea894-bfed-4bda-b5b1-260f314e2f22-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"630ea894-bfed-4bda-b5b1-260f314e2f22\") " pod="openstack/nova-metadata-0" Dec 06 05:58:04 crc kubenswrapper[4958]: I1206 05:58:04.817614 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qk52\" (UniqueName: \"kubernetes.io/projected/630ea894-bfed-4bda-b5b1-260f314e2f22-kube-api-access-9qk52\") pod \"nova-metadata-0\" (UID: \"630ea894-bfed-4bda-b5b1-260f314e2f22\") " pod="openstack/nova-metadata-0" Dec 06 05:58:04 crc kubenswrapper[4958]: I1206 05:58:04.924980 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 06 05:58:05 crc kubenswrapper[4958]: I1206 05:58:05.237445 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f867f367-20bd-467e-b102-512f96506fa3","Type":"ContainerStarted","Data":"f18185c58b923fef8a8020bc9c4105d7d2abc4845f2abaa84574af2b6f1c8eaa"} Dec 06 05:58:05 crc kubenswrapper[4958]: I1206 05:58:05.237835 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f867f367-20bd-467e-b102-512f96506fa3","Type":"ContainerStarted","Data":"5184548fb938311e45d37ea271b8c3018ec4b162f23a3ac38a46de97ff8bb30c"} Dec 06 05:58:05 crc kubenswrapper[4958]: I1206 05:58:05.237851 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"f867f367-20bd-467e-b102-512f96506fa3","Type":"ContainerStarted","Data":"1fa63a4ea79aca00980ab9d725f16ef60072bd5496a8a04327b4a38bc9b18825"} Dec 06 05:58:05 crc kubenswrapper[4958]: I1206 05:58:05.269385 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.269364635 podStartE2EDuration="2.269364635s" podCreationTimestamp="2025-12-06 05:58:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:58:05.2684322 +0000 UTC m=+1795.802202973" watchObservedRunningTime="2025-12-06 05:58:05.269364635 +0000 UTC m=+1795.803135398" Dec 06 05:58:05 crc kubenswrapper[4958]: I1206 05:58:05.452691 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 06 05:58:05 crc kubenswrapper[4958]: W1206 05:58:05.463620 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod630ea894_bfed_4bda_b5b1_260f314e2f22.slice/crio-3e5a7e0867d11f2576936083142b2dd60b8c4916e63f8eb439f61646eb0298ac WatchSource:0}: Error finding container 3e5a7e0867d11f2576936083142b2dd60b8c4916e63f8eb439f61646eb0298ac: Status 404 returned error can't find the container with id 3e5a7e0867d11f2576936083142b2dd60b8c4916e63f8eb439f61646eb0298ac Dec 06 05:58:05 crc kubenswrapper[4958]: I1206 05:58:05.773764 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9b53759-43fe-4c25-8787-2dce20353746" path="/var/lib/kubelet/pods/a9b53759-43fe-4c25-8787-2dce20353746/volumes" Dec 06 05:58:06 crc kubenswrapper[4958]: I1206 05:58:06.247122 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"630ea894-bfed-4bda-b5b1-260f314e2f22","Type":"ContainerStarted","Data":"8f37064599ad377e13ee7eff6a8bbc38e746e376cc9f74826b5148050a9184a9"} Dec 06 05:58:06 crc kubenswrapper[4958]: I1206 05:58:06.247163 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"630ea894-bfed-4bda-b5b1-260f314e2f22","Type":"ContainerStarted","Data":"d39624ab373f72f2144a6598e25e174a66b543b515960db7b625ba7f281fb806"} Dec 06 05:58:06 crc kubenswrapper[4958]: I1206 05:58:06.247173 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"630ea894-bfed-4bda-b5b1-260f314e2f22","Type":"ContainerStarted","Data":"3e5a7e0867d11f2576936083142b2dd60b8c4916e63f8eb439f61646eb0298ac"} Dec 06 05:58:06 crc kubenswrapper[4958]: I1206 05:58:06.271493 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.271448335 
podStartE2EDuration="2.271448335s" podCreationTimestamp="2025-12-06 05:58:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:58:06.263927611 +0000 UTC m=+1796.797698414" watchObservedRunningTime="2025-12-06 05:58:06.271448335 +0000 UTC m=+1796.805219088" Dec 06 05:58:06 crc kubenswrapper[4958]: I1206 05:58:06.568987 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 06 05:58:09 crc kubenswrapper[4958]: E1206 05:58:09.465226 4958 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/system.slice/rpm-ostreed.service\": RecentStats: unable to find data in memory cache]" Dec 06 05:58:09 crc kubenswrapper[4958]: I1206 05:58:09.925758 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 06 05:58:09 crc kubenswrapper[4958]: I1206 05:58:09.925805 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 06 05:58:11 crc kubenswrapper[4958]: I1206 05:58:11.569384 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 06 05:58:11 crc kubenswrapper[4958]: I1206 05:58:11.625395 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 06 05:58:11 crc kubenswrapper[4958]: I1206 05:58:11.762211 4958 scope.go:117] "RemoveContainer" containerID="3316d358950549dbdd244c3eebc5b98b6095cbbbc7ab60159a9410ea95b5b950" Dec 06 05:58:11 crc kubenswrapper[4958]: E1206 05:58:11.762517 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 05:58:12 crc kubenswrapper[4958]: I1206 05:58:12.361181 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 06 05:58:13 crc kubenswrapper[4958]: I1206 05:58:13.923389 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 06 05:58:13 crc kubenswrapper[4958]: I1206 05:58:13.923434 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 06 05:58:14 crc kubenswrapper[4958]: I1206 05:58:14.926046 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 06 05:58:14 crc kubenswrapper[4958]: I1206 05:58:14.926366 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 06 05:58:14 crc kubenswrapper[4958]: I1206 05:58:14.935732 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f867f367-20bd-467e-b102-512f96506fa3" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.227:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 06 05:58:14 crc kubenswrapper[4958]: I1206 05:58:14.935874 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="f867f367-20bd-467e-b102-512f96506fa3" 
containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.227:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 06 05:58:15 crc kubenswrapper[4958]: I1206 05:58:15.938628 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="630ea894-bfed-4bda-b5b1-260f314e2f22" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.228:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 06 05:58:15 crc kubenswrapper[4958]: I1206 05:58:15.938638 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="630ea894-bfed-4bda-b5b1-260f314e2f22" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.228:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 06 05:58:22 crc kubenswrapper[4958]: I1206 05:58:22.746413 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Dec 06 05:58:23 crc kubenswrapper[4958]: I1206 05:58:23.931893 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 06 05:58:23 crc kubenswrapper[4958]: I1206 05:58:23.932619 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 06 05:58:23 crc kubenswrapper[4958]: I1206 05:58:23.934771 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 06 05:58:23 crc kubenswrapper[4958]: I1206 05:58:23.941012 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 06 05:58:24 crc kubenswrapper[4958]: I1206 05:58:24.429951 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 06 05:58:24 crc kubenswrapper[4958]: I1206 05:58:24.439361 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 06 05:58:24 crc kubenswrapper[4958]: I1206 05:58:24.932497 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 06 05:58:24 crc kubenswrapper[4958]: I1206 05:58:24.937014 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 06 05:58:24 crc kubenswrapper[4958]: I1206 05:58:24.938084 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 06 05:58:25 crc kubenswrapper[4958]: I1206 05:58:25.464493 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 06 05:58:26 crc kubenswrapper[4958]: I1206 05:58:26.761583 4958 scope.go:117] "RemoveContainer" containerID="3316d358950549dbdd244c3eebc5b98b6095cbbbc7ab60159a9410ea95b5b950" Dec 06 05:58:26 crc kubenswrapper[4958]: E1206 05:58:26.762067 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 05:58:34 crc kubenswrapper[4958]: I1206 05:58:34.063501 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/rabbitmq-server-0"] Dec 06 05:58:35 crc kubenswrapper[4958]: I1206 05:58:35.017487 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 06 05:58:37 crc kubenswrapper[4958]: I1206 05:58:37.389813 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="4bc06a17-7bdb-4ee9-bad3-7996be041e54" containerName="rabbitmq" containerID="cri-o://02d3bd14fd454c12548fe7f6dc98e73b5e98f6c9092dc7e0a40fa03541503e39" gracePeriod=604797 Dec 06 05:58:38 crc kubenswrapper[4958]: I1206 05:58:38.272226 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="3141e77c-a73b-400b-b607-21be8537cca4" containerName="rabbitmq" containerID="cri-o://eed58feb9444d5d335af8a921fb42eb856339014a384f9d00fe57a71633b9423" gracePeriod=604797 Dec 06 05:58:40 crc kubenswrapper[4958]: E1206 05:58:40.248450 4958 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4bc06a17_7bdb_4ee9_bad3_7996be041e54.slice/crio-conmon-02d3bd14fd454c12548fe7f6dc98e73b5e98f6c9092dc7e0a40fa03541503e39.scope\": RecentStats: unable to find data in memory cache]" Dec 06 05:58:40 crc kubenswrapper[4958]: I1206 05:58:40.604854 4958 generic.go:334] "Generic (PLEG): container finished" podID="3141e77c-a73b-400b-b607-21be8537cca4" containerID="eed58feb9444d5d335af8a921fb42eb856339014a384f9d00fe57a71633b9423" exitCode=0 Dec 06 05:58:40 crc kubenswrapper[4958]: I1206 05:58:40.604906 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"3141e77c-a73b-400b-b607-21be8537cca4","Type":"ContainerDied","Data":"eed58feb9444d5d335af8a921fb42eb856339014a384f9d00fe57a71633b9423"} Dec 06 05:58:40 crc kubenswrapper[4958]: I1206 05:58:40.609221 4958 generic.go:334] "Generic (PLEG): container finished" podID="4bc06a17-7bdb-4ee9-bad3-7996be041e54" containerID="02d3bd14fd454c12548fe7f6dc98e73b5e98f6c9092dc7e0a40fa03541503e39" exitCode=0 Dec 06 05:58:40 crc kubenswrapper[4958]: I1206 05:58:40.609262 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"4bc06a17-7bdb-4ee9-bad3-7996be041e54","Type":"ContainerDied","Data":"02d3bd14fd454c12548fe7f6dc98e73b5e98f6c9092dc7e0a40fa03541503e39"} Dec 06 05:58:40 crc kubenswrapper[4958]: I1206 05:58:40.761441 4958 scope.go:117] "RemoveContainer" containerID="3316d358950549dbdd244c3eebc5b98b6095cbbbc7ab60159a9410ea95b5b950" Dec 06 05:58:40 crc kubenswrapper[4958]: E1206 05:58:40.761784 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 05:58:40 crc kubenswrapper[4958]: I1206 05:58:40.807241 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 06 05:58:40 crc kubenswrapper[4958]: I1206 05:58:40.866899 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4bc06a17-7bdb-4ee9-bad3-7996be041e54-server-conf\") pod \"4bc06a17-7bdb-4ee9-bad3-7996be041e54\" (UID: \"4bc06a17-7bdb-4ee9-bad3-7996be041e54\") " Dec 06 05:58:40 crc kubenswrapper[4958]: I1206 05:58:40.867919 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4bc06a17-7bdb-4ee9-bad3-7996be041e54-rabbitmq-tls\") pod \"4bc06a17-7bdb-4ee9-bad3-7996be041e54\" (UID: \"4bc06a17-7bdb-4ee9-bad3-7996be041e54\") " Dec 06 05:58:40 crc kubenswrapper[4958]: I1206 05:58:40.868024 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4bc06a17-7bdb-4ee9-bad3-7996be041e54-config-data\") pod \"4bc06a17-7bdb-4ee9-bad3-7996be041e54\" (UID: \"4bc06a17-7bdb-4ee9-bad3-7996be041e54\") " Dec 06 05:58:40 crc kubenswrapper[4958]: I1206 05:58:40.868101 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4bc06a17-7bdb-4ee9-bad3-7996be041e54-pod-info\") pod \"4bc06a17-7bdb-4ee9-bad3-7996be041e54\" (UID: \"4bc06a17-7bdb-4ee9-bad3-7996be041e54\") " Dec 06 05:58:40 crc kubenswrapper[4958]: I1206 05:58:40.868179 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"4bc06a17-7bdb-4ee9-bad3-7996be041e54\" (UID: \"4bc06a17-7bdb-4ee9-bad3-7996be041e54\") " Dec 06 05:58:40 crc kubenswrapper[4958]: I1206 05:58:40.868290 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4bc06a17-7bdb-4ee9-bad3-7996be041e54-rabbitmq-erlang-cookie\") pod \"4bc06a17-7bdb-4ee9-bad3-7996be041e54\" (UID: \"4bc06a17-7bdb-4ee9-bad3-7996be041e54\") " Dec 06 05:58:40 crc kubenswrapper[4958]: I1206 05:58:40.868454 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4bc06a17-7bdb-4ee9-bad3-7996be041e54-rabbitmq-confd\") pod \"4bc06a17-7bdb-4ee9-bad3-7996be041e54\" (UID: \"4bc06a17-7bdb-4ee9-bad3-7996be041e54\") " Dec 06 05:58:40 crc kubenswrapper[4958]: I1206 05:58:40.868565 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4bc06a17-7bdb-4ee9-bad3-7996be041e54-rabbitmq-plugins\") pod \"4bc06a17-7bdb-4ee9-bad3-7996be041e54\" (UID: \"4bc06a17-7bdb-4ee9-bad3-7996be041e54\") " Dec 06 05:58:40 crc kubenswrapper[4958]: I1206 05:58:40.868639 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4bc06a17-7bdb-4ee9-bad3-7996be041e54-plugins-conf\") pod \"4bc06a17-7bdb-4ee9-bad3-7996be041e54\" (UID: \"4bc06a17-7bdb-4ee9-bad3-7996be041e54\") " Dec 06 05:58:40 crc kubenswrapper[4958]: I1206 05:58:40.868723 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4bc06a17-7bdb-4ee9-bad3-7996be041e54-erlang-cookie-secret\") pod \"4bc06a17-7bdb-4ee9-bad3-7996be041e54\" (UID: 
\"4bc06a17-7bdb-4ee9-bad3-7996be041e54\") " Dec 06 05:58:40 crc kubenswrapper[4958]: I1206 05:58:40.868904 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mxxfp\" (UniqueName: \"kubernetes.io/projected/4bc06a17-7bdb-4ee9-bad3-7996be041e54-kube-api-access-mxxfp\") pod \"4bc06a17-7bdb-4ee9-bad3-7996be041e54\" (UID: \"4bc06a17-7bdb-4ee9-bad3-7996be041e54\") " Dec 06 05:58:40 crc kubenswrapper[4958]: I1206 05:58:40.873020 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4bc06a17-7bdb-4ee9-bad3-7996be041e54-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "4bc06a17-7bdb-4ee9-bad3-7996be041e54" (UID: "4bc06a17-7bdb-4ee9-bad3-7996be041e54"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 05:58:40 crc kubenswrapper[4958]: I1206 05:58:40.873905 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4bc06a17-7bdb-4ee9-bad3-7996be041e54-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "4bc06a17-7bdb-4ee9-bad3-7996be041e54" (UID: "4bc06a17-7bdb-4ee9-bad3-7996be041e54"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 05:58:40 crc kubenswrapper[4958]: I1206 05:58:40.874696 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bc06a17-7bdb-4ee9-bad3-7996be041e54-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "4bc06a17-7bdb-4ee9-bad3-7996be041e54" (UID: "4bc06a17-7bdb-4ee9-bad3-7996be041e54"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:58:40 crc kubenswrapper[4958]: I1206 05:58:40.875622 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bc06a17-7bdb-4ee9-bad3-7996be041e54-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "4bc06a17-7bdb-4ee9-bad3-7996be041e54" (UID: "4bc06a17-7bdb-4ee9-bad3-7996be041e54"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:58:40 crc kubenswrapper[4958]: I1206 05:58:40.876201 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "persistence") pod "4bc06a17-7bdb-4ee9-bad3-7996be041e54" (UID: "4bc06a17-7bdb-4ee9-bad3-7996be041e54"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 06 05:58:40 crc kubenswrapper[4958]: I1206 05:58:40.881949 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bc06a17-7bdb-4ee9-bad3-7996be041e54-kube-api-access-mxxfp" (OuterVolumeSpecName: "kube-api-access-mxxfp") pod "4bc06a17-7bdb-4ee9-bad3-7996be041e54" (UID: "4bc06a17-7bdb-4ee9-bad3-7996be041e54"). InnerVolumeSpecName "kube-api-access-mxxfp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:58:40 crc kubenswrapper[4958]: I1206 05:58:40.896659 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/4bc06a17-7bdb-4ee9-bad3-7996be041e54-pod-info" (OuterVolumeSpecName: "pod-info") pod "4bc06a17-7bdb-4ee9-bad3-7996be041e54" (UID: "4bc06a17-7bdb-4ee9-bad3-7996be041e54"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Dec 06 05:58:40 crc kubenswrapper[4958]: I1206 05:58:40.896659 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bc06a17-7bdb-4ee9-bad3-7996be041e54-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "4bc06a17-7bdb-4ee9-bad3-7996be041e54" (UID: "4bc06a17-7bdb-4ee9-bad3-7996be041e54"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:58:40 crc kubenswrapper[4958]: I1206 05:58:40.959599 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bc06a17-7bdb-4ee9-bad3-7996be041e54-config-data" (OuterVolumeSpecName: "config-data") pod "4bc06a17-7bdb-4ee9-bad3-7996be041e54" (UID: "4bc06a17-7bdb-4ee9-bad3-7996be041e54"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:58:40 crc kubenswrapper[4958]: I1206 05:58:40.972704 4958 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/4bc06a17-7bdb-4ee9-bad3-7996be041e54-plugins-conf\") on node \"crc\" DevicePath \"\"" Dec 06 05:58:40 crc kubenswrapper[4958]: I1206 05:58:40.972964 4958 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/4bc06a17-7bdb-4ee9-bad3-7996be041e54-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Dec 06 05:58:40 crc kubenswrapper[4958]: I1206 05:58:40.972973 4958 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/4bc06a17-7bdb-4ee9-bad3-7996be041e54-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Dec 06 05:58:40 crc kubenswrapper[4958]: I1206 05:58:40.972982 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mxxfp\" (UniqueName: \"kubernetes.io/projected/4bc06a17-7bdb-4ee9-bad3-7996be041e54-kube-api-access-mxxfp\") on node \"crc\" DevicePath \"\"" Dec 06 05:58:40 crc kubenswrapper[4958]: I1206 05:58:40.972992 4958 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/4bc06a17-7bdb-4ee9-bad3-7996be041e54-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Dec 06 05:58:40 crc kubenswrapper[4958]: I1206 05:58:40.973000 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/4bc06a17-7bdb-4ee9-bad3-7996be041e54-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 05:58:40 crc kubenswrapper[4958]: I1206 05:58:40.973008 4958 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/4bc06a17-7bdb-4ee9-bad3-7996be041e54-pod-info\") on node \"crc\" DevicePath \"\"" Dec 06 05:58:40 crc kubenswrapper[4958]: I1206 05:58:40.973036 4958 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Dec 06 05:58:40 crc kubenswrapper[4958]: I1206 05:58:40.973045 4958 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/4bc06a17-7bdb-4ee9-bad3-7996be041e54-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.024270 4958 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: 
"kubernetes.io/local-volume/local-storage06-crc") on node "crc" Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.026491 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bc06a17-7bdb-4ee9-bad3-7996be041e54-server-conf" (OuterVolumeSpecName: "server-conf") pod "4bc06a17-7bdb-4ee9-bad3-7996be041e54" (UID: "4bc06a17-7bdb-4ee9-bad3-7996be041e54"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.076382 4958 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/4bc06a17-7bdb-4ee9-bad3-7996be041e54-server-conf\") on node \"crc\" DevicePath \"\"" Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.076421 4958 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.097111 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bc06a17-7bdb-4ee9-bad3-7996be041e54-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "4bc06a17-7bdb-4ee9-bad3-7996be041e54" (UID: "4bc06a17-7bdb-4ee9-bad3-7996be041e54"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.160109 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.177629 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3141e77c-a73b-400b-b607-21be8537cca4-rabbitmq-plugins\") pod \"3141e77c-a73b-400b-b607-21be8537cca4\" (UID: \"3141e77c-a73b-400b-b607-21be8537cca4\") " Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.177741 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3141e77c-a73b-400b-b607-21be8537cca4-pod-info\") pod \"3141e77c-a73b-400b-b607-21be8537cca4\" (UID: \"3141e77c-a73b-400b-b607-21be8537cca4\") " Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.177771 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3141e77c-a73b-400b-b607-21be8537cca4-rabbitmq-tls\") pod \"3141e77c-a73b-400b-b607-21be8537cca4\" (UID: \"3141e77c-a73b-400b-b607-21be8537cca4\") " Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.177844 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3141e77c-a73b-400b-b607-21be8537cca4-rabbitmq-confd\") pod \"3141e77c-a73b-400b-b607-21be8537cca4\" (UID: \"3141e77c-a73b-400b-b607-21be8537cca4\") " Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.177879 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3141e77c-a73b-400b-b607-21be8537cca4-rabbitmq-erlang-cookie\") pod \"3141e77c-a73b-400b-b607-21be8537cca4\" (UID: \"3141e77c-a73b-400b-b607-21be8537cca4\") " Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.177947 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3141e77c-a73b-400b-b607-21be8537cca4-plugins-conf\") pod \"3141e77c-a73b-400b-b607-21be8537cca4\" (UID: \"3141e77c-a73b-400b-b607-21be8537cca4\") " Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.177999 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3141e77c-a73b-400b-b607-21be8537cca4-erlang-cookie-secret\") pod \"3141e77c-a73b-400b-b607-21be8537cca4\" (UID: \"3141e77c-a73b-400b-b607-21be8537cca4\") " Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.178039 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3141e77c-a73b-400b-b607-21be8537cca4-config-data\") pod \"3141e77c-a73b-400b-b607-21be8537cca4\" (UID: \"3141e77c-a73b-400b-b607-21be8537cca4\") " Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.178073 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3141e77c-a73b-400b-b607-21be8537cca4-server-conf\") pod \"3141e77c-a73b-400b-b607-21be8537cca4\" (UID: \"3141e77c-a73b-400b-b607-21be8537cca4\") " Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.178094 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"3141e77c-a73b-400b-b607-21be8537cca4\" (UID: \"3141e77c-a73b-400b-b607-21be8537cca4\") " Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.178137 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2b94v\" (UniqueName: \"kubernetes.io/projected/3141e77c-a73b-400b-b607-21be8537cca4-kube-api-access-2b94v\") pod \"3141e77c-a73b-400b-b607-21be8537cca4\" (UID: \"3141e77c-a73b-400b-b607-21be8537cca4\") " Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.178253 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3141e77c-a73b-400b-b607-21be8537cca4-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "3141e77c-a73b-400b-b607-21be8537cca4" (UID: "3141e77c-a73b-400b-b607-21be8537cca4"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.178782 4958 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/4bc06a17-7bdb-4ee9-bad3-7996be041e54-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.178824 4958 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3141e77c-a73b-400b-b607-21be8537cca4-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.178863 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3141e77c-a73b-400b-b607-21be8537cca4-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "3141e77c-a73b-400b-b607-21be8537cca4" (UID: "3141e77c-a73b-400b-b607-21be8537cca4"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.179747 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3141e77c-a73b-400b-b607-21be8537cca4-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "3141e77c-a73b-400b-b607-21be8537cca4" (UID: "3141e77c-a73b-400b-b607-21be8537cca4"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.182572 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "persistence") pod "3141e77c-a73b-400b-b607-21be8537cca4" (UID: "3141e77c-a73b-400b-b607-21be8537cca4"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.183180 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/3141e77c-a73b-400b-b607-21be8537cca4-pod-info" (OuterVolumeSpecName: "pod-info") pod "3141e77c-a73b-400b-b607-21be8537cca4" (UID: "3141e77c-a73b-400b-b607-21be8537cca4"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.183330 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3141e77c-a73b-400b-b607-21be8537cca4-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "3141e77c-a73b-400b-b607-21be8537cca4" (UID: "3141e77c-a73b-400b-b607-21be8537cca4"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.183516 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3141e77c-a73b-400b-b607-21be8537cca4-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "3141e77c-a73b-400b-b607-21be8537cca4" (UID: "3141e77c-a73b-400b-b607-21be8537cca4"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.184014 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3141e77c-a73b-400b-b607-21be8537cca4-kube-api-access-2b94v" (OuterVolumeSpecName: "kube-api-access-2b94v") pod "3141e77c-a73b-400b-b607-21be8537cca4" (UID: "3141e77c-a73b-400b-b607-21be8537cca4"). InnerVolumeSpecName "kube-api-access-2b94v". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.225064 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3141e77c-a73b-400b-b607-21be8537cca4-config-data" (OuterVolumeSpecName: "config-data") pod "3141e77c-a73b-400b-b607-21be8537cca4" (UID: "3141e77c-a73b-400b-b607-21be8537cca4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.280109 4958 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3141e77c-a73b-400b-b607-21be8537cca4-pod-info\") on node \"crc\" DevicePath \"\"" Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.280143 4958 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3141e77c-a73b-400b-b607-21be8537cca4-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.280157 4958 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3141e77c-a73b-400b-b607-21be8537cca4-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.280167 4958 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3141e77c-a73b-400b-b607-21be8537cca4-plugins-conf\") on node \"crc\" DevicePath \"\"" Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.280176 4958 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3141e77c-a73b-400b-b607-21be8537cca4-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.280185 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3141e77c-a73b-400b-b607-21be8537cca4-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.280213 4958 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.280222 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2b94v\" (UniqueName: \"kubernetes.io/projected/3141e77c-a73b-400b-b607-21be8537cca4-kube-api-access-2b94v\") on node \"crc\" DevicePath \"\"" Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.291228 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3141e77c-a73b-400b-b607-21be8537cca4-server-conf" (OuterVolumeSpecName: "server-conf") pod "3141e77c-a73b-400b-b607-21be8537cca4" (UID: "3141e77c-a73b-400b-b607-21be8537cca4"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.305267 4958 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.308239 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3141e77c-a73b-400b-b607-21be8537cca4-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "3141e77c-a73b-400b-b607-21be8537cca4" (UID: "3141e77c-a73b-400b-b607-21be8537cca4"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.383423 4958 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.383484 4958 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3141e77c-a73b-400b-b607-21be8537cca4-server-conf\") on node \"crc\" DevicePath \"\"" Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.383495 4958 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3141e77c-a73b-400b-b607-21be8537cca4-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.634046 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.634047 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"3141e77c-a73b-400b-b607-21be8537cca4","Type":"ContainerDied","Data":"0769ff1780c2d2933a7fd285c9ba748d3a32329376a6f34692df6bdcd911b316"} Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.634252 4958 scope.go:117] "RemoveContainer" containerID="eed58feb9444d5d335af8a921fb42eb856339014a384f9d00fe57a71633b9423" Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.637114 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"4bc06a17-7bdb-4ee9-bad3-7996be041e54","Type":"ContainerDied","Data":"dfda01df8c1a79e729ec331515447f2c5c29bcc8de1be89743c760b8d9a3d6d8"} Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.637213 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.663981 4958 scope.go:117] "RemoveContainer" containerID="5447173ef5fee4c34e757e67557ea57f1af67957b37f37d702d14e1373aa852f" Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.687441 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.702294 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.713676 4958 scope.go:117] "RemoveContainer" containerID="02d3bd14fd454c12548fe7f6dc98e73b5e98f6c9092dc7e0a40fa03541503e39" Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.713809 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.727271 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.737376 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Dec 06 05:58:41 crc kubenswrapper[4958]: E1206 05:58:41.737891 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3141e77c-a73b-400b-b607-21be8537cca4" containerName="setup-container" Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.737909 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="3141e77c-a73b-400b-b607-21be8537cca4" containerName="setup-container" Dec 06 05:58:41 crc kubenswrapper[4958]: E1206 05:58:41.737939 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bc06a17-7bdb-4ee9-bad3-7996be041e54" containerName="rabbitmq" Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.737945 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bc06a17-7bdb-4ee9-bad3-7996be041e54" containerName="rabbitmq" Dec 06 05:58:41 crc kubenswrapper[4958]: E1206 05:58:41.737965 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3141e77c-a73b-400b-b607-21be8537cca4" containerName="rabbitmq" Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.737971 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="3141e77c-a73b-400b-b607-21be8537cca4" containerName="rabbitmq" Dec 06 05:58:41 crc kubenswrapper[4958]: E1206 05:58:41.737986 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bc06a17-7bdb-4ee9-bad3-7996be041e54" containerName="setup-container" Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.737992 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bc06a17-7bdb-4ee9-bad3-7996be041e54" containerName="setup-container" Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.738190 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="3141e77c-a73b-400b-b607-21be8537cca4" containerName="rabbitmq" Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.738220 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bc06a17-7bdb-4ee9-bad3-7996be041e54" containerName="rabbitmq" Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.739292 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.744890 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.744917 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.745111 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.745228 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.745388 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.745230 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.748663 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-mtf4d" Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.752403 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.754337 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.760134 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.760255 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.760163 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.760489 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-b5ghc" Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.760579 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.760652 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.760880 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.786070 4958 scope.go:117] "RemoveContainer" containerID="76adae9e6bc33af24601db302a5fbb54e7eca5cc92840c07b0d0761eed07ff77" Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.790589 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3141e77c-a73b-400b-b607-21be8537cca4" path="/var/lib/kubelet/pods/3141e77c-a73b-400b-b607-21be8537cca4/volumes" Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.791421 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bc06a17-7bdb-4ee9-bad3-7996be041e54" path="/var/lib/kubelet/pods/4bc06a17-7bdb-4ee9-bad3-7996be041e54/volumes" Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.794019 4958 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9f0f5c93-d108-48ad-b3fd-c54d25ce982c-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"9f0f5c93-d108-48ad-b3fd-c54d25ce982c\") " pod="openstack/rabbitmq-server-0" Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.794083 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9f0f5c93-d108-48ad-b3fd-c54d25ce982c-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"9f0f5c93-d108-48ad-b3fd-c54d25ce982c\") " pod="openstack/rabbitmq-server-0" Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.794107 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9f0f5c93-d108-48ad-b3fd-c54d25ce982c-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"9f0f5c93-d108-48ad-b3fd-c54d25ce982c\") " pod="openstack/rabbitmq-server-0" Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.794151 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9f0f5c93-d108-48ad-b3fd-c54d25ce982c-pod-info\") pod \"rabbitmq-server-0\" (UID: \"9f0f5c93-d108-48ad-b3fd-c54d25ce982c\") " pod="openstack/rabbitmq-server-0" Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.794213 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shvwc\" (UniqueName: \"kubernetes.io/projected/9f0f5c93-d108-48ad-b3fd-c54d25ce982c-kube-api-access-shvwc\") pod \"rabbitmq-server-0\" (UID: \"9f0f5c93-d108-48ad-b3fd-c54d25ce982c\") " pod="openstack/rabbitmq-server-0" Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.794248 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"9f0f5c93-d108-48ad-b3fd-c54d25ce982c\") " pod="openstack/rabbitmq-server-0" Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.794273 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9f0f5c93-d108-48ad-b3fd-c54d25ce982c-server-conf\") pod \"rabbitmq-server-0\" (UID: \"9f0f5c93-d108-48ad-b3fd-c54d25ce982c\") " pod="openstack/rabbitmq-server-0" Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.794298 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9f0f5c93-d108-48ad-b3fd-c54d25ce982c-config-data\") pod \"rabbitmq-server-0\" (UID: \"9f0f5c93-d108-48ad-b3fd-c54d25ce982c\") " pod="openstack/rabbitmq-server-0" Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.794324 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9f0f5c93-d108-48ad-b3fd-c54d25ce982c-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"9f0f5c93-d108-48ad-b3fd-c54d25ce982c\") " pod="openstack/rabbitmq-server-0" Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.794385 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9f0f5c93-d108-48ad-b3fd-c54d25ce982c-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"9f0f5c93-d108-48ad-b3fd-c54d25ce982c\") " pod="openstack/rabbitmq-server-0" Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.794413 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9f0f5c93-d108-48ad-b3fd-c54d25ce982c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"9f0f5c93-d108-48ad-b3fd-c54d25ce982c\") " pod="openstack/rabbitmq-server-0" Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.796939 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.811769 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.896419 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9f0f5c93-d108-48ad-b3fd-c54d25ce982c-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"9f0f5c93-d108-48ad-b3fd-c54d25ce982c\") " pod="openstack/rabbitmq-server-0" Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.896778 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdr5n\" (UniqueName: \"kubernetes.io/projected/84d93a05-0621-49f6-ba81-ffc7b948ba5c-kube-api-access-bdr5n\") pod \"rabbitmq-cell1-server-0\" (UID: \"84d93a05-0621-49f6-ba81-ffc7b948ba5c\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.896955 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9f0f5c93-d108-48ad-b3fd-c54d25ce982c-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"9f0f5c93-d108-48ad-b3fd-c54d25ce982c\") " pod="openstack/rabbitmq-server-0" Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.896994 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9f0f5c93-d108-48ad-b3fd-c54d25ce982c-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"9f0f5c93-d108-48ad-b3fd-c54d25ce982c\") " pod="openstack/rabbitmq-server-0" Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.897025 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/84d93a05-0621-49f6-ba81-ffc7b948ba5c-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"84d93a05-0621-49f6-ba81-ffc7b948ba5c\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.897057 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/84d93a05-0621-49f6-ba81-ffc7b948ba5c-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"84d93a05-0621-49f6-ba81-ffc7b948ba5c\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.897088 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/84d93a05-0621-49f6-ba81-ffc7b948ba5c-pod-info\") pod \"rabbitmq-cell1-server-0\" 
(UID: \"84d93a05-0621-49f6-ba81-ffc7b948ba5c\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.897179 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9f0f5c93-d108-48ad-b3fd-c54d25ce982c-pod-info\") pod \"rabbitmq-server-0\" (UID: \"9f0f5c93-d108-48ad-b3fd-c54d25ce982c\") " pod="openstack/rabbitmq-server-0" Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.897289 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/84d93a05-0621-49f6-ba81-ffc7b948ba5c-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"84d93a05-0621-49f6-ba81-ffc7b948ba5c\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.897332 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/84d93a05-0621-49f6-ba81-ffc7b948ba5c-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"84d93a05-0621-49f6-ba81-ffc7b948ba5c\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.897404 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shvwc\" (UniqueName: \"kubernetes.io/projected/9f0f5c93-d108-48ad-b3fd-c54d25ce982c-kube-api-access-shvwc\") pod \"rabbitmq-server-0\" (UID: \"9f0f5c93-d108-48ad-b3fd-c54d25ce982c\") " pod="openstack/rabbitmq-server-0" Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.897502 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"9f0f5c93-d108-48ad-b3fd-c54d25ce982c\") " pod="openstack/rabbitmq-server-0" Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.897559 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9f0f5c93-d108-48ad-b3fd-c54d25ce982c-server-conf\") pod \"rabbitmq-server-0\" (UID: \"9f0f5c93-d108-48ad-b3fd-c54d25ce982c\") " pod="openstack/rabbitmq-server-0" Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.897576 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9f0f5c93-d108-48ad-b3fd-c54d25ce982c-config-data\") pod \"rabbitmq-server-0\" (UID: \"9f0f5c93-d108-48ad-b3fd-c54d25ce982c\") " pod="openstack/rabbitmq-server-0" Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.897592 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9f0f5c93-d108-48ad-b3fd-c54d25ce982c-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"9f0f5c93-d108-48ad-b3fd-c54d25ce982c\") " pod="openstack/rabbitmq-server-0" Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.897600 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9f0f5c93-d108-48ad-b3fd-c54d25ce982c-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"9f0f5c93-d108-48ad-b3fd-c54d25ce982c\") " pod="openstack/rabbitmq-server-0" Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.897638 4958 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/84d93a05-0621-49f6-ba81-ffc7b948ba5c-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"84d93a05-0621-49f6-ba81-ffc7b948ba5c\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.897742 4958 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"9f0f5c93-d108-48ad-b3fd-c54d25ce982c\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/rabbitmq-server-0" Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.897899 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9f0f5c93-d108-48ad-b3fd-c54d25ce982c-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"9f0f5c93-d108-48ad-b3fd-c54d25ce982c\") " pod="openstack/rabbitmq-server-0" Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.897933 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/84d93a05-0621-49f6-ba81-ffc7b948ba5c-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"84d93a05-0621-49f6-ba81-ffc7b948ba5c\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.897959 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9f0f5c93-d108-48ad-b3fd-c54d25ce982c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"9f0f5c93-d108-48ad-b3fd-c54d25ce982c\") " pod="openstack/rabbitmq-server-0" Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.898026 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/84d93a05-0621-49f6-ba81-ffc7b948ba5c-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"84d93a05-0621-49f6-ba81-ffc7b948ba5c\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.898056 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/84d93a05-0621-49f6-ba81-ffc7b948ba5c-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"84d93a05-0621-49f6-ba81-ffc7b948ba5c\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.898089 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"84d93a05-0621-49f6-ba81-ffc7b948ba5c\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.898393 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9f0f5c93-d108-48ad-b3fd-c54d25ce982c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"9f0f5c93-d108-48ad-b3fd-c54d25ce982c\") " pod="openstack/rabbitmq-server-0" Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.898822 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/9f0f5c93-d108-48ad-b3fd-c54d25ce982c-config-data\") pod \"rabbitmq-server-0\" (UID: \"9f0f5c93-d108-48ad-b3fd-c54d25ce982c\") " pod="openstack/rabbitmq-server-0" Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.898815 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9f0f5c93-d108-48ad-b3fd-c54d25ce982c-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"9f0f5c93-d108-48ad-b3fd-c54d25ce982c\") " pod="openstack/rabbitmq-server-0" Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.899791 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9f0f5c93-d108-48ad-b3fd-c54d25ce982c-server-conf\") pod \"rabbitmq-server-0\" (UID: \"9f0f5c93-d108-48ad-b3fd-c54d25ce982c\") " pod="openstack/rabbitmq-server-0" Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.901947 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9f0f5c93-d108-48ad-b3fd-c54d25ce982c-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"9f0f5c93-d108-48ad-b3fd-c54d25ce982c\") " pod="openstack/rabbitmq-server-0" Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.902158 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9f0f5c93-d108-48ad-b3fd-c54d25ce982c-pod-info\") pod \"rabbitmq-server-0\" (UID: \"9f0f5c93-d108-48ad-b3fd-c54d25ce982c\") " pod="openstack/rabbitmq-server-0" Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.902250 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9f0f5c93-d108-48ad-b3fd-c54d25ce982c-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"9f0f5c93-d108-48ad-b3fd-c54d25ce982c\") " pod="openstack/rabbitmq-server-0" Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.920294 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9f0f5c93-d108-48ad-b3fd-c54d25ce982c-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"9f0f5c93-d108-48ad-b3fd-c54d25ce982c\") " pod="openstack/rabbitmq-server-0" Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.929665 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shvwc\" (UniqueName: \"kubernetes.io/projected/9f0f5c93-d108-48ad-b3fd-c54d25ce982c-kube-api-access-shvwc\") pod \"rabbitmq-server-0\" (UID: \"9f0f5c93-d108-48ad-b3fd-c54d25ce982c\") " pod="openstack/rabbitmq-server-0" Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.940662 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"9f0f5c93-d108-48ad-b3fd-c54d25ce982c\") " pod="openstack/rabbitmq-server-0" Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.999592 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/84d93a05-0621-49f6-ba81-ffc7b948ba5c-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"84d93a05-0621-49f6-ba81-ffc7b948ba5c\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.999662 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/84d93a05-0621-49f6-ba81-ffc7b948ba5c-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"84d93a05-0621-49f6-ba81-ffc7b948ba5c\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.999687 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/84d93a05-0621-49f6-ba81-ffc7b948ba5c-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"84d93a05-0621-49f6-ba81-ffc7b948ba5c\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.999719 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"84d93a05-0621-49f6-ba81-ffc7b948ba5c\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.999799 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdr5n\" (UniqueName: \"kubernetes.io/projected/84d93a05-0621-49f6-ba81-ffc7b948ba5c-kube-api-access-bdr5n\") pod \"rabbitmq-cell1-server-0\" (UID: \"84d93a05-0621-49f6-ba81-ffc7b948ba5c\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.999844 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/84d93a05-0621-49f6-ba81-ffc7b948ba5c-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"84d93a05-0621-49f6-ba81-ffc7b948ba5c\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.999875 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/84d93a05-0621-49f6-ba81-ffc7b948ba5c-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"84d93a05-0621-49f6-ba81-ffc7b948ba5c\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.999905 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/84d93a05-0621-49f6-ba81-ffc7b948ba5c-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"84d93a05-0621-49f6-ba81-ffc7b948ba5c\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 05:58:41 crc kubenswrapper[4958]: I1206 05:58:41.999923 4958 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"84d93a05-0621-49f6-ba81-ffc7b948ba5c\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/rabbitmq-cell1-server-0" Dec 06 05:58:42 crc kubenswrapper[4958]: I1206 05:58:42.000730 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/84d93a05-0621-49f6-ba81-ffc7b948ba5c-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"84d93a05-0621-49f6-ba81-ffc7b948ba5c\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 05:58:42 crc kubenswrapper[4958]: I1206 05:58:42.000987 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/84d93a05-0621-49f6-ba81-ffc7b948ba5c-rabbitmq-erlang-cookie\") pod 
\"rabbitmq-cell1-server-0\" (UID: \"84d93a05-0621-49f6-ba81-ffc7b948ba5c\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 05:58:42 crc kubenswrapper[4958]: I1206 05:58:42.001044 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/84d93a05-0621-49f6-ba81-ffc7b948ba5c-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"84d93a05-0621-49f6-ba81-ffc7b948ba5c\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 05:58:42 crc kubenswrapper[4958]: I1206 05:58:42.001201 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/84d93a05-0621-49f6-ba81-ffc7b948ba5c-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"84d93a05-0621-49f6-ba81-ffc7b948ba5c\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 05:58:42 crc kubenswrapper[4958]: I1206 05:58:42.001429 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/84d93a05-0621-49f6-ba81-ffc7b948ba5c-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"84d93a05-0621-49f6-ba81-ffc7b948ba5c\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 05:58:42 crc kubenswrapper[4958]: I1206 05:58:42.001568 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/84d93a05-0621-49f6-ba81-ffc7b948ba5c-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"84d93a05-0621-49f6-ba81-ffc7b948ba5c\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 05:58:42 crc kubenswrapper[4958]: I1206 05:58:42.001674 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/84d93a05-0621-49f6-ba81-ffc7b948ba5c-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"84d93a05-0621-49f6-ba81-ffc7b948ba5c\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 05:58:42 crc kubenswrapper[4958]: I1206 05:58:42.002880 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/84d93a05-0621-49f6-ba81-ffc7b948ba5c-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"84d93a05-0621-49f6-ba81-ffc7b948ba5c\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 05:58:42 crc kubenswrapper[4958]: I1206 05:58:42.004161 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/84d93a05-0621-49f6-ba81-ffc7b948ba5c-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"84d93a05-0621-49f6-ba81-ffc7b948ba5c\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 05:58:42 crc kubenswrapper[4958]: I1206 05:58:42.004633 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/84d93a05-0621-49f6-ba81-ffc7b948ba5c-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"84d93a05-0621-49f6-ba81-ffc7b948ba5c\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 05:58:42 crc kubenswrapper[4958]: I1206 05:58:42.008023 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/84d93a05-0621-49f6-ba81-ffc7b948ba5c-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"84d93a05-0621-49f6-ba81-ffc7b948ba5c\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 05:58:42 crc kubenswrapper[4958]: I1206 05:58:42.016411 4958 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/84d93a05-0621-49f6-ba81-ffc7b948ba5c-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"84d93a05-0621-49f6-ba81-ffc7b948ba5c\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 05:58:42 crc kubenswrapper[4958]: I1206 05:58:42.020624 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdr5n\" (UniqueName: \"kubernetes.io/projected/84d93a05-0621-49f6-ba81-ffc7b948ba5c-kube-api-access-bdr5n\") pod \"rabbitmq-cell1-server-0\" (UID: \"84d93a05-0621-49f6-ba81-ffc7b948ba5c\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 05:58:42 crc kubenswrapper[4958]: I1206 05:58:42.040982 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"84d93a05-0621-49f6-ba81-ffc7b948ba5c\") " pod="openstack/rabbitmq-cell1-server-0" Dec 06 05:58:42 crc kubenswrapper[4958]: I1206 05:58:42.094006 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 06 05:58:42 crc kubenswrapper[4958]: I1206 05:58:42.113665 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 06 05:58:42 crc kubenswrapper[4958]: I1206 05:58:42.715323 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 06 05:58:42 crc kubenswrapper[4958]: I1206 05:58:42.879557 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 06 05:58:42 crc kubenswrapper[4958]: W1206 05:58:42.888948 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod84d93a05_0621_49f6_ba81_ffc7b948ba5c.slice/crio-95fcea9371fa2f49bfab9a68ae37500cce8ea2ac708fc93c4d914ab3157e43d5 WatchSource:0}: Error finding container 95fcea9371fa2f49bfab9a68ae37500cce8ea2ac708fc93c4d914ab3157e43d5: Status 404 returned error can't find the container with id 95fcea9371fa2f49bfab9a68ae37500cce8ea2ac708fc93c4d914ab3157e43d5 Dec 06 05:58:43 crc kubenswrapper[4958]: I1206 05:58:43.660974 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9f0f5c93-d108-48ad-b3fd-c54d25ce982c","Type":"ContainerStarted","Data":"e519dbcc8f82f489d066c333a1d8599b1e7c07671217a97f5047ff9a6ae5d13c"} Dec 06 05:58:43 crc kubenswrapper[4958]: I1206 05:58:43.662104 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"84d93a05-0621-49f6-ba81-ffc7b948ba5c","Type":"ContainerStarted","Data":"95fcea9371fa2f49bfab9a68ae37500cce8ea2ac708fc93c4d914ab3157e43d5"} Dec 06 05:58:44 crc kubenswrapper[4958]: I1206 05:58:44.671882 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9f0f5c93-d108-48ad-b3fd-c54d25ce982c","Type":"ContainerStarted","Data":"4550ddb6f77d3cf45848ed8408be8e13ea07772a37644d5ff06aefadf7e1ced3"} Dec 06 05:58:44 crc kubenswrapper[4958]: I1206 05:58:44.673629 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"84d93a05-0621-49f6-ba81-ffc7b948ba5c","Type":"ContainerStarted","Data":"1227f07e685bd6da7e55e380603f81c3df8f885240d6401ef6dd3fe53078ebda"} Dec 06 05:58:49 crc kubenswrapper[4958]: I1206 05:58:49.691381 4958 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bf6c7df67-l2kh6"] Dec 06 05:58:49 crc kubenswrapper[4958]: I1206 05:58:49.694268 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bf6c7df67-l2kh6" Dec 06 05:58:49 crc kubenswrapper[4958]: I1206 05:58:49.697222 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Dec 06 05:58:49 crc kubenswrapper[4958]: I1206 05:58:49.711741 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bf6c7df67-l2kh6"] Dec 06 05:58:49 crc kubenswrapper[4958]: I1206 05:58:49.789229 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/c0d06a16-58e8-4aa3-8e7d-fd4c527078ee-openstack-edpm-ipam\") pod \"dnsmasq-dns-bf6c7df67-l2kh6\" (UID: \"c0d06a16-58e8-4aa3-8e7d-fd4c527078ee\") " pod="openstack/dnsmasq-dns-bf6c7df67-l2kh6" Dec 06 05:58:49 crc kubenswrapper[4958]: I1206 05:58:49.789523 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0d06a16-58e8-4aa3-8e7d-fd4c527078ee-config\") pod \"dnsmasq-dns-bf6c7df67-l2kh6\" (UID: \"c0d06a16-58e8-4aa3-8e7d-fd4c527078ee\") " pod="openstack/dnsmasq-dns-bf6c7df67-l2kh6" Dec 06 05:58:49 crc kubenswrapper[4958]: I1206 05:58:49.789602 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c0d06a16-58e8-4aa3-8e7d-fd4c527078ee-ovsdbserver-nb\") pod \"dnsmasq-dns-bf6c7df67-l2kh6\" (UID: \"c0d06a16-58e8-4aa3-8e7d-fd4c527078ee\") " pod="openstack/dnsmasq-dns-bf6c7df67-l2kh6" Dec 06 05:58:49 crc kubenswrapper[4958]: I1206 05:58:49.789688 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fndkz\" (UniqueName: \"kubernetes.io/projected/c0d06a16-58e8-4aa3-8e7d-fd4c527078ee-kube-api-access-fndkz\") pod \"dnsmasq-dns-bf6c7df67-l2kh6\" (UID: \"c0d06a16-58e8-4aa3-8e7d-fd4c527078ee\") " pod="openstack/dnsmasq-dns-bf6c7df67-l2kh6" Dec 06 05:58:49 crc kubenswrapper[4958]: I1206 05:58:49.789756 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c0d06a16-58e8-4aa3-8e7d-fd4c527078ee-ovsdbserver-sb\") pod \"dnsmasq-dns-bf6c7df67-l2kh6\" (UID: \"c0d06a16-58e8-4aa3-8e7d-fd4c527078ee\") " pod="openstack/dnsmasq-dns-bf6c7df67-l2kh6" Dec 06 05:58:49 crc kubenswrapper[4958]: I1206 05:58:49.789933 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c0d06a16-58e8-4aa3-8e7d-fd4c527078ee-dns-swift-storage-0\") pod \"dnsmasq-dns-bf6c7df67-l2kh6\" (UID: \"c0d06a16-58e8-4aa3-8e7d-fd4c527078ee\") " pod="openstack/dnsmasq-dns-bf6c7df67-l2kh6" Dec 06 05:58:49 crc kubenswrapper[4958]: I1206 05:58:49.790012 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c0d06a16-58e8-4aa3-8e7d-fd4c527078ee-dns-svc\") pod \"dnsmasq-dns-bf6c7df67-l2kh6\" (UID: \"c0d06a16-58e8-4aa3-8e7d-fd4c527078ee\") " pod="openstack/dnsmasq-dns-bf6c7df67-l2kh6" Dec 06 05:58:49 crc kubenswrapper[4958]: I1206 05:58:49.891957 4958 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c0d06a16-58e8-4aa3-8e7d-fd4c527078ee-dns-swift-storage-0\") pod \"dnsmasq-dns-bf6c7df67-l2kh6\" (UID: \"c0d06a16-58e8-4aa3-8e7d-fd4c527078ee\") " pod="openstack/dnsmasq-dns-bf6c7df67-l2kh6" Dec 06 05:58:49 crc kubenswrapper[4958]: I1206 05:58:49.892020 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c0d06a16-58e8-4aa3-8e7d-fd4c527078ee-dns-svc\") pod \"dnsmasq-dns-bf6c7df67-l2kh6\" (UID: \"c0d06a16-58e8-4aa3-8e7d-fd4c527078ee\") " pod="openstack/dnsmasq-dns-bf6c7df67-l2kh6" Dec 06 05:58:49 crc kubenswrapper[4958]: I1206 05:58:49.892059 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/c0d06a16-58e8-4aa3-8e7d-fd4c527078ee-openstack-edpm-ipam\") pod \"dnsmasq-dns-bf6c7df67-l2kh6\" (UID: \"c0d06a16-58e8-4aa3-8e7d-fd4c527078ee\") " pod="openstack/dnsmasq-dns-bf6c7df67-l2kh6" Dec 06 05:58:49 crc kubenswrapper[4958]: I1206 05:58:49.892119 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0d06a16-58e8-4aa3-8e7d-fd4c527078ee-config\") pod \"dnsmasq-dns-bf6c7df67-l2kh6\" (UID: \"c0d06a16-58e8-4aa3-8e7d-fd4c527078ee\") " pod="openstack/dnsmasq-dns-bf6c7df67-l2kh6" Dec 06 05:58:49 crc kubenswrapper[4958]: I1206 05:58:49.892146 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c0d06a16-58e8-4aa3-8e7d-fd4c527078ee-ovsdbserver-nb\") pod \"dnsmasq-dns-bf6c7df67-l2kh6\" (UID: \"c0d06a16-58e8-4aa3-8e7d-fd4c527078ee\") " pod="openstack/dnsmasq-dns-bf6c7df67-l2kh6" Dec 06 05:58:49 crc kubenswrapper[4958]: I1206 05:58:49.892189 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fndkz\" (UniqueName: \"kubernetes.io/projected/c0d06a16-58e8-4aa3-8e7d-fd4c527078ee-kube-api-access-fndkz\") pod \"dnsmasq-dns-bf6c7df67-l2kh6\" (UID: \"c0d06a16-58e8-4aa3-8e7d-fd4c527078ee\") " pod="openstack/dnsmasq-dns-bf6c7df67-l2kh6" Dec 06 05:58:49 crc kubenswrapper[4958]: I1206 05:58:49.892217 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c0d06a16-58e8-4aa3-8e7d-fd4c527078ee-ovsdbserver-sb\") pod \"dnsmasq-dns-bf6c7df67-l2kh6\" (UID: \"c0d06a16-58e8-4aa3-8e7d-fd4c527078ee\") " pod="openstack/dnsmasq-dns-bf6c7df67-l2kh6" Dec 06 05:58:49 crc kubenswrapper[4958]: I1206 05:58:49.894076 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c0d06a16-58e8-4aa3-8e7d-fd4c527078ee-dns-swift-storage-0\") pod \"dnsmasq-dns-bf6c7df67-l2kh6\" (UID: \"c0d06a16-58e8-4aa3-8e7d-fd4c527078ee\") " pod="openstack/dnsmasq-dns-bf6c7df67-l2kh6" Dec 06 05:58:49 crc kubenswrapper[4958]: I1206 05:58:49.894767 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c0d06a16-58e8-4aa3-8e7d-fd4c527078ee-dns-svc\") pod \"dnsmasq-dns-bf6c7df67-l2kh6\" (UID: \"c0d06a16-58e8-4aa3-8e7d-fd4c527078ee\") " pod="openstack/dnsmasq-dns-bf6c7df67-l2kh6" Dec 06 05:58:49 crc kubenswrapper[4958]: I1206 05:58:49.895393 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/configmap/c0d06a16-58e8-4aa3-8e7d-fd4c527078ee-openstack-edpm-ipam\") pod \"dnsmasq-dns-bf6c7df67-l2kh6\" (UID: \"c0d06a16-58e8-4aa3-8e7d-fd4c527078ee\") " pod="openstack/dnsmasq-dns-bf6c7df67-l2kh6" Dec 06 05:58:49 crc kubenswrapper[4958]: I1206 05:58:49.895950 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0d06a16-58e8-4aa3-8e7d-fd4c527078ee-config\") pod \"dnsmasq-dns-bf6c7df67-l2kh6\" (UID: \"c0d06a16-58e8-4aa3-8e7d-fd4c527078ee\") " pod="openstack/dnsmasq-dns-bf6c7df67-l2kh6" Dec 06 05:58:49 crc kubenswrapper[4958]: I1206 05:58:49.896099 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c0d06a16-58e8-4aa3-8e7d-fd4c527078ee-ovsdbserver-sb\") pod \"dnsmasq-dns-bf6c7df67-l2kh6\" (UID: \"c0d06a16-58e8-4aa3-8e7d-fd4c527078ee\") " pod="openstack/dnsmasq-dns-bf6c7df67-l2kh6" Dec 06 05:58:49 crc kubenswrapper[4958]: I1206 05:58:49.896315 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c0d06a16-58e8-4aa3-8e7d-fd4c527078ee-ovsdbserver-nb\") pod \"dnsmasq-dns-bf6c7df67-l2kh6\" (UID: \"c0d06a16-58e8-4aa3-8e7d-fd4c527078ee\") " pod="openstack/dnsmasq-dns-bf6c7df67-l2kh6" Dec 06 05:58:49 crc kubenswrapper[4958]: I1206 05:58:49.924209 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fndkz\" (UniqueName: \"kubernetes.io/projected/c0d06a16-58e8-4aa3-8e7d-fd4c527078ee-kube-api-access-fndkz\") pod \"dnsmasq-dns-bf6c7df67-l2kh6\" (UID: \"c0d06a16-58e8-4aa3-8e7d-fd4c527078ee\") " pod="openstack/dnsmasq-dns-bf6c7df67-l2kh6" Dec 06 05:58:50 crc kubenswrapper[4958]: I1206 05:58:50.021623 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bf6c7df67-l2kh6" Dec 06 05:58:50 crc kubenswrapper[4958]: I1206 05:58:50.540813 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bf6c7df67-l2kh6"] Dec 06 05:58:50 crc kubenswrapper[4958]: I1206 05:58:50.748382 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bf6c7df67-l2kh6" event={"ID":"c0d06a16-58e8-4aa3-8e7d-fd4c527078ee","Type":"ContainerStarted","Data":"0efe253dc697adea875b70569a0dd0909331cb373ec5db6a087409cd68f21863"} Dec 06 05:58:51 crc kubenswrapper[4958]: I1206 05:58:51.761488 4958 generic.go:334] "Generic (PLEG): container finished" podID="c0d06a16-58e8-4aa3-8e7d-fd4c527078ee" containerID="e7ab7e16eb6554e74e5182baeee6f2770355003f299dc72ec0f59eb6e2229772" exitCode=0 Dec 06 05:58:51 crc kubenswrapper[4958]: I1206 05:58:51.776246 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bf6c7df67-l2kh6" event={"ID":"c0d06a16-58e8-4aa3-8e7d-fd4c527078ee","Type":"ContainerDied","Data":"e7ab7e16eb6554e74e5182baeee6f2770355003f299dc72ec0f59eb6e2229772"} Dec 06 05:58:52 crc kubenswrapper[4958]: I1206 05:58:52.776085 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bf6c7df67-l2kh6" event={"ID":"c0d06a16-58e8-4aa3-8e7d-fd4c527078ee","Type":"ContainerStarted","Data":"4826e9aad1f7545e240db2cbece4dbee114109110418d0a8f2f7a50fd64450d1"} Dec 06 05:58:52 crc kubenswrapper[4958]: I1206 05:58:52.776223 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-bf6c7df67-l2kh6" Dec 06 05:58:52 crc kubenswrapper[4958]: I1206 05:58:52.822942 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-bf6c7df67-l2kh6" podStartSLOduration=3.822923887 podStartE2EDuration="3.822923887s" podCreationTimestamp="2025-12-06 05:58:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:58:52.819735829 +0000 UTC m=+1843.353506592" watchObservedRunningTime="2025-12-06 05:58:52.822923887 +0000 UTC m=+1843.356694650" Dec 06 05:58:54 crc kubenswrapper[4958]: I1206 05:58:54.762315 4958 scope.go:117] "RemoveContainer" containerID="3316d358950549dbdd244c3eebc5b98b6095cbbbc7ab60159a9410ea95b5b950" Dec 06 05:58:54 crc kubenswrapper[4958]: E1206 05:58:54.762983 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 05:59:00 crc kubenswrapper[4958]: I1206 05:59:00.023659 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-bf6c7df67-l2kh6" Dec 06 05:59:00 crc kubenswrapper[4958]: I1206 05:59:00.090070 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54599d8f7-zjb95"] Dec 06 05:59:00 crc kubenswrapper[4958]: I1206 05:59:00.091123 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-54599d8f7-zjb95" podUID="bc90cd5d-e6ef-4216-ae6f-496ff8228ccd" containerName="dnsmasq-dns" containerID="cri-o://668f9c121e25e134ae2b91f0e3093ba2188bc87ce1ff78f4b87f59ff4e0f6993" gracePeriod=10 Dec 06 
05:59:00 crc kubenswrapper[4958]: I1206 05:59:00.264708 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-77b58f4b85-5jtpq"] Dec 06 05:59:00 crc kubenswrapper[4958]: I1206 05:59:00.266666 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77b58f4b85-5jtpq" Dec 06 05:59:00 crc kubenswrapper[4958]: I1206 05:59:00.292046 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77b58f4b85-5jtpq"] Dec 06 05:59:00 crc kubenswrapper[4958]: I1206 05:59:00.373574 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-54599d8f7-zjb95" podUID="bc90cd5d-e6ef-4216-ae6f-496ff8228ccd" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.222:5353: connect: connection refused" Dec 06 05:59:00 crc kubenswrapper[4958]: I1206 05:59:00.393370 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9cbacdd0-bf2e-4ead-bdaf-04cc0a08254e-ovsdbserver-sb\") pod \"dnsmasq-dns-77b58f4b85-5jtpq\" (UID: \"9cbacdd0-bf2e-4ead-bdaf-04cc0a08254e\") " pod="openstack/dnsmasq-dns-77b58f4b85-5jtpq" Dec 06 05:59:00 crc kubenswrapper[4958]: I1206 05:59:00.393550 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9cbacdd0-bf2e-4ead-bdaf-04cc0a08254e-ovsdbserver-nb\") pod \"dnsmasq-dns-77b58f4b85-5jtpq\" (UID: \"9cbacdd0-bf2e-4ead-bdaf-04cc0a08254e\") " pod="openstack/dnsmasq-dns-77b58f4b85-5jtpq" Dec 06 05:59:00 crc kubenswrapper[4958]: I1206 05:59:00.393603 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvp64\" (UniqueName: \"kubernetes.io/projected/9cbacdd0-bf2e-4ead-bdaf-04cc0a08254e-kube-api-access-qvp64\") pod \"dnsmasq-dns-77b58f4b85-5jtpq\" (UID: \"9cbacdd0-bf2e-4ead-bdaf-04cc0a08254e\") " pod="openstack/dnsmasq-dns-77b58f4b85-5jtpq" Dec 06 05:59:00 crc kubenswrapper[4958]: I1206 05:59:00.393728 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9cbacdd0-bf2e-4ead-bdaf-04cc0a08254e-dns-swift-storage-0\") pod \"dnsmasq-dns-77b58f4b85-5jtpq\" (UID: \"9cbacdd0-bf2e-4ead-bdaf-04cc0a08254e\") " pod="openstack/dnsmasq-dns-77b58f4b85-5jtpq" Dec 06 05:59:00 crc kubenswrapper[4958]: I1206 05:59:00.393844 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9cbacdd0-bf2e-4ead-bdaf-04cc0a08254e-dns-svc\") pod \"dnsmasq-dns-77b58f4b85-5jtpq\" (UID: \"9cbacdd0-bf2e-4ead-bdaf-04cc0a08254e\") " pod="openstack/dnsmasq-dns-77b58f4b85-5jtpq" Dec 06 05:59:00 crc kubenswrapper[4958]: I1206 05:59:00.393960 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9cbacdd0-bf2e-4ead-bdaf-04cc0a08254e-config\") pod \"dnsmasq-dns-77b58f4b85-5jtpq\" (UID: \"9cbacdd0-bf2e-4ead-bdaf-04cc0a08254e\") " pod="openstack/dnsmasq-dns-77b58f4b85-5jtpq" Dec 06 05:59:00 crc kubenswrapper[4958]: I1206 05:59:00.394144 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/9cbacdd0-bf2e-4ead-bdaf-04cc0a08254e-openstack-edpm-ipam\") 
pod \"dnsmasq-dns-77b58f4b85-5jtpq\" (UID: \"9cbacdd0-bf2e-4ead-bdaf-04cc0a08254e\") " pod="openstack/dnsmasq-dns-77b58f4b85-5jtpq" Dec 06 05:59:00 crc kubenswrapper[4958]: I1206 05:59:00.495374 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9cbacdd0-bf2e-4ead-bdaf-04cc0a08254e-ovsdbserver-sb\") pod \"dnsmasq-dns-77b58f4b85-5jtpq\" (UID: \"9cbacdd0-bf2e-4ead-bdaf-04cc0a08254e\") " pod="openstack/dnsmasq-dns-77b58f4b85-5jtpq" Dec 06 05:59:00 crc kubenswrapper[4958]: I1206 05:59:00.495444 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9cbacdd0-bf2e-4ead-bdaf-04cc0a08254e-ovsdbserver-nb\") pod \"dnsmasq-dns-77b58f4b85-5jtpq\" (UID: \"9cbacdd0-bf2e-4ead-bdaf-04cc0a08254e\") " pod="openstack/dnsmasq-dns-77b58f4b85-5jtpq" Dec 06 05:59:00 crc kubenswrapper[4958]: I1206 05:59:00.495486 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvp64\" (UniqueName: \"kubernetes.io/projected/9cbacdd0-bf2e-4ead-bdaf-04cc0a08254e-kube-api-access-qvp64\") pod \"dnsmasq-dns-77b58f4b85-5jtpq\" (UID: \"9cbacdd0-bf2e-4ead-bdaf-04cc0a08254e\") " pod="openstack/dnsmasq-dns-77b58f4b85-5jtpq" Dec 06 05:59:00 crc kubenswrapper[4958]: I1206 05:59:00.495550 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9cbacdd0-bf2e-4ead-bdaf-04cc0a08254e-dns-swift-storage-0\") pod \"dnsmasq-dns-77b58f4b85-5jtpq\" (UID: \"9cbacdd0-bf2e-4ead-bdaf-04cc0a08254e\") " pod="openstack/dnsmasq-dns-77b58f4b85-5jtpq" Dec 06 05:59:00 crc kubenswrapper[4958]: I1206 05:59:00.495599 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9cbacdd0-bf2e-4ead-bdaf-04cc0a08254e-dns-svc\") pod \"dnsmasq-dns-77b58f4b85-5jtpq\" (UID: \"9cbacdd0-bf2e-4ead-bdaf-04cc0a08254e\") " pod="openstack/dnsmasq-dns-77b58f4b85-5jtpq" Dec 06 05:59:00 crc kubenswrapper[4958]: I1206 05:59:00.495650 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9cbacdd0-bf2e-4ead-bdaf-04cc0a08254e-config\") pod \"dnsmasq-dns-77b58f4b85-5jtpq\" (UID: \"9cbacdd0-bf2e-4ead-bdaf-04cc0a08254e\") " pod="openstack/dnsmasq-dns-77b58f4b85-5jtpq" Dec 06 05:59:00 crc kubenswrapper[4958]: I1206 05:59:00.495725 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/9cbacdd0-bf2e-4ead-bdaf-04cc0a08254e-openstack-edpm-ipam\") pod \"dnsmasq-dns-77b58f4b85-5jtpq\" (UID: \"9cbacdd0-bf2e-4ead-bdaf-04cc0a08254e\") " pod="openstack/dnsmasq-dns-77b58f4b85-5jtpq" Dec 06 05:59:00 crc kubenswrapper[4958]: I1206 05:59:00.496744 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/9cbacdd0-bf2e-4ead-bdaf-04cc0a08254e-openstack-edpm-ipam\") pod \"dnsmasq-dns-77b58f4b85-5jtpq\" (UID: \"9cbacdd0-bf2e-4ead-bdaf-04cc0a08254e\") " pod="openstack/dnsmasq-dns-77b58f4b85-5jtpq" Dec 06 05:59:00 crc kubenswrapper[4958]: I1206 05:59:00.497525 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9cbacdd0-bf2e-4ead-bdaf-04cc0a08254e-dns-svc\") pod \"dnsmasq-dns-77b58f4b85-5jtpq\" (UID: 
\"9cbacdd0-bf2e-4ead-bdaf-04cc0a08254e\") " pod="openstack/dnsmasq-dns-77b58f4b85-5jtpq" Dec 06 05:59:00 crc kubenswrapper[4958]: I1206 05:59:00.497676 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9cbacdd0-bf2e-4ead-bdaf-04cc0a08254e-ovsdbserver-sb\") pod \"dnsmasq-dns-77b58f4b85-5jtpq\" (UID: \"9cbacdd0-bf2e-4ead-bdaf-04cc0a08254e\") " pod="openstack/dnsmasq-dns-77b58f4b85-5jtpq" Dec 06 05:59:00 crc kubenswrapper[4958]: I1206 05:59:00.497784 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9cbacdd0-bf2e-4ead-bdaf-04cc0a08254e-config\") pod \"dnsmasq-dns-77b58f4b85-5jtpq\" (UID: \"9cbacdd0-bf2e-4ead-bdaf-04cc0a08254e\") " pod="openstack/dnsmasq-dns-77b58f4b85-5jtpq" Dec 06 05:59:00 crc kubenswrapper[4958]: I1206 05:59:00.497857 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/9cbacdd0-bf2e-4ead-bdaf-04cc0a08254e-dns-swift-storage-0\") pod \"dnsmasq-dns-77b58f4b85-5jtpq\" (UID: \"9cbacdd0-bf2e-4ead-bdaf-04cc0a08254e\") " pod="openstack/dnsmasq-dns-77b58f4b85-5jtpq" Dec 06 05:59:00 crc kubenswrapper[4958]: I1206 05:59:00.497901 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9cbacdd0-bf2e-4ead-bdaf-04cc0a08254e-ovsdbserver-nb\") pod \"dnsmasq-dns-77b58f4b85-5jtpq\" (UID: \"9cbacdd0-bf2e-4ead-bdaf-04cc0a08254e\") " pod="openstack/dnsmasq-dns-77b58f4b85-5jtpq" Dec 06 05:59:00 crc kubenswrapper[4958]: I1206 05:59:00.521522 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvp64\" (UniqueName: \"kubernetes.io/projected/9cbacdd0-bf2e-4ead-bdaf-04cc0a08254e-kube-api-access-qvp64\") pod \"dnsmasq-dns-77b58f4b85-5jtpq\" (UID: \"9cbacdd0-bf2e-4ead-bdaf-04cc0a08254e\") " pod="openstack/dnsmasq-dns-77b58f4b85-5jtpq" Dec 06 05:59:00 crc kubenswrapper[4958]: I1206 05:59:00.599935 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-77b58f4b85-5jtpq" Dec 06 05:59:00 crc kubenswrapper[4958]: I1206 05:59:00.874980 4958 generic.go:334] "Generic (PLEG): container finished" podID="bc90cd5d-e6ef-4216-ae6f-496ff8228ccd" containerID="668f9c121e25e134ae2b91f0e3093ba2188bc87ce1ff78f4b87f59ff4e0f6993" exitCode=0 Dec 06 05:59:00 crc kubenswrapper[4958]: I1206 05:59:00.875064 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54599d8f7-zjb95" event={"ID":"bc90cd5d-e6ef-4216-ae6f-496ff8228ccd","Type":"ContainerDied","Data":"668f9c121e25e134ae2b91f0e3093ba2188bc87ce1ff78f4b87f59ff4e0f6993"} Dec 06 05:59:01 crc kubenswrapper[4958]: I1206 05:59:01.278562 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-77b58f4b85-5jtpq"] Dec 06 05:59:01 crc kubenswrapper[4958]: I1206 05:59:01.455267 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-54599d8f7-zjb95" Dec 06 05:59:01 crc kubenswrapper[4958]: I1206 05:59:01.533286 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mztnf\" (UniqueName: \"kubernetes.io/projected/bc90cd5d-e6ef-4216-ae6f-496ff8228ccd-kube-api-access-mztnf\") pod \"bc90cd5d-e6ef-4216-ae6f-496ff8228ccd\" (UID: \"bc90cd5d-e6ef-4216-ae6f-496ff8228ccd\") " Dec 06 05:59:01 crc kubenswrapper[4958]: I1206 05:59:01.533379 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bc90cd5d-e6ef-4216-ae6f-496ff8228ccd-dns-swift-storage-0\") pod \"bc90cd5d-e6ef-4216-ae6f-496ff8228ccd\" (UID: \"bc90cd5d-e6ef-4216-ae6f-496ff8228ccd\") " Dec 06 05:59:01 crc kubenswrapper[4958]: I1206 05:59:01.533422 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bc90cd5d-e6ef-4216-ae6f-496ff8228ccd-ovsdbserver-nb\") pod \"bc90cd5d-e6ef-4216-ae6f-496ff8228ccd\" (UID: \"bc90cd5d-e6ef-4216-ae6f-496ff8228ccd\") " Dec 06 05:59:01 crc kubenswrapper[4958]: I1206 05:59:01.533439 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bc90cd5d-e6ef-4216-ae6f-496ff8228ccd-ovsdbserver-sb\") pod \"bc90cd5d-e6ef-4216-ae6f-496ff8228ccd\" (UID: \"bc90cd5d-e6ef-4216-ae6f-496ff8228ccd\") " Dec 06 05:59:01 crc kubenswrapper[4958]: I1206 05:59:01.533458 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc90cd5d-e6ef-4216-ae6f-496ff8228ccd-config\") pod \"bc90cd5d-e6ef-4216-ae6f-496ff8228ccd\" (UID: \"bc90cd5d-e6ef-4216-ae6f-496ff8228ccd\") " Dec 06 05:59:01 crc kubenswrapper[4958]: I1206 05:59:01.533547 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bc90cd5d-e6ef-4216-ae6f-496ff8228ccd-dns-svc\") pod \"bc90cd5d-e6ef-4216-ae6f-496ff8228ccd\" (UID: \"bc90cd5d-e6ef-4216-ae6f-496ff8228ccd\") " Dec 06 05:59:01 crc kubenswrapper[4958]: I1206 05:59:01.549915 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc90cd5d-e6ef-4216-ae6f-496ff8228ccd-kube-api-access-mztnf" (OuterVolumeSpecName: "kube-api-access-mztnf") pod "bc90cd5d-e6ef-4216-ae6f-496ff8228ccd" (UID: "bc90cd5d-e6ef-4216-ae6f-496ff8228ccd"). InnerVolumeSpecName "kube-api-access-mztnf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 05:59:01 crc kubenswrapper[4958]: I1206 05:59:01.635805 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mztnf\" (UniqueName: \"kubernetes.io/projected/bc90cd5d-e6ef-4216-ae6f-496ff8228ccd-kube-api-access-mztnf\") on node \"crc\" DevicePath \"\"" Dec 06 05:59:01 crc kubenswrapper[4958]: I1206 05:59:01.643248 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc90cd5d-e6ef-4216-ae6f-496ff8228ccd-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bc90cd5d-e6ef-4216-ae6f-496ff8228ccd" (UID: "bc90cd5d-e6ef-4216-ae6f-496ff8228ccd"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:59:01 crc kubenswrapper[4958]: I1206 05:59:01.658845 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc90cd5d-e6ef-4216-ae6f-496ff8228ccd-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "bc90cd5d-e6ef-4216-ae6f-496ff8228ccd" (UID: "bc90cd5d-e6ef-4216-ae6f-496ff8228ccd"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:59:01 crc kubenswrapper[4958]: I1206 05:59:01.667545 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc90cd5d-e6ef-4216-ae6f-496ff8228ccd-config" (OuterVolumeSpecName: "config") pod "bc90cd5d-e6ef-4216-ae6f-496ff8228ccd" (UID: "bc90cd5d-e6ef-4216-ae6f-496ff8228ccd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:59:01 crc kubenswrapper[4958]: I1206 05:59:01.668715 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc90cd5d-e6ef-4216-ae6f-496ff8228ccd-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "bc90cd5d-e6ef-4216-ae6f-496ff8228ccd" (UID: "bc90cd5d-e6ef-4216-ae6f-496ff8228ccd"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:59:01 crc kubenswrapper[4958]: I1206 05:59:01.670249 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc90cd5d-e6ef-4216-ae6f-496ff8228ccd-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "bc90cd5d-e6ef-4216-ae6f-496ff8228ccd" (UID: "bc90cd5d-e6ef-4216-ae6f-496ff8228ccd"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 05:59:01 crc kubenswrapper[4958]: I1206 05:59:01.738077 4958 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bc90cd5d-e6ef-4216-ae6f-496ff8228ccd-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 06 05:59:01 crc kubenswrapper[4958]: I1206 05:59:01.738122 4958 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/bc90cd5d-e6ef-4216-ae6f-496ff8228ccd-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Dec 06 05:59:01 crc kubenswrapper[4958]: I1206 05:59:01.738135 4958 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bc90cd5d-e6ef-4216-ae6f-496ff8228ccd-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 06 05:59:01 crc kubenswrapper[4958]: I1206 05:59:01.738146 4958 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bc90cd5d-e6ef-4216-ae6f-496ff8228ccd-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 06 05:59:01 crc kubenswrapper[4958]: I1206 05:59:01.738161 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bc90cd5d-e6ef-4216-ae6f-496ff8228ccd-config\") on node \"crc\" DevicePath \"\"" Dec 06 05:59:01 crc kubenswrapper[4958]: I1206 05:59:01.896627 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77b58f4b85-5jtpq" event={"ID":"9cbacdd0-bf2e-4ead-bdaf-04cc0a08254e","Type":"ContainerStarted","Data":"5243ee1a37dfd7972a3dc0f986bbbe0970f992ac00b4278c1fc55ba95f21cdfc"} Dec 06 05:59:01 crc kubenswrapper[4958]: I1206 05:59:01.896681 4958 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/dnsmasq-dns-77b58f4b85-5jtpq" event={"ID":"9cbacdd0-bf2e-4ead-bdaf-04cc0a08254e","Type":"ContainerStarted","Data":"d358c719428ed014965b9d5f9f606c44ded8e95ad916bdb16e5b60bb304afae2"} Dec 06 05:59:01 crc kubenswrapper[4958]: I1206 05:59:01.898417 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54599d8f7-zjb95" event={"ID":"bc90cd5d-e6ef-4216-ae6f-496ff8228ccd","Type":"ContainerDied","Data":"7fbb28149a2781d563bcba5479468d8114bafc5f4a974e17573487596fd21c45"} Dec 06 05:59:01 crc kubenswrapper[4958]: I1206 05:59:01.898453 4958 scope.go:117] "RemoveContainer" containerID="668f9c121e25e134ae2b91f0e3093ba2188bc87ce1ff78f4b87f59ff4e0f6993" Dec 06 05:59:01 crc kubenswrapper[4958]: I1206 05:59:01.898502 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54599d8f7-zjb95" Dec 06 05:59:01 crc kubenswrapper[4958]: I1206 05:59:01.923799 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54599d8f7-zjb95"] Dec 06 05:59:01 crc kubenswrapper[4958]: I1206 05:59:01.934555 4958 scope.go:117] "RemoveContainer" containerID="ff0b9233a4b84cf6c6e9b4f9488e0ae95b004fcc216f188c47e0274ed7dce742" Dec 06 05:59:01 crc kubenswrapper[4958]: I1206 05:59:01.935643 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-54599d8f7-zjb95"] Dec 06 05:59:02 crc kubenswrapper[4958]: I1206 05:59:02.911603 4958 generic.go:334] "Generic (PLEG): container finished" podID="9cbacdd0-bf2e-4ead-bdaf-04cc0a08254e" containerID="5243ee1a37dfd7972a3dc0f986bbbe0970f992ac00b4278c1fc55ba95f21cdfc" exitCode=0 Dec 06 05:59:02 crc kubenswrapper[4958]: I1206 05:59:02.911677 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77b58f4b85-5jtpq" event={"ID":"9cbacdd0-bf2e-4ead-bdaf-04cc0a08254e","Type":"ContainerDied","Data":"5243ee1a37dfd7972a3dc0f986bbbe0970f992ac00b4278c1fc55ba95f21cdfc"} Dec 06 05:59:03 crc kubenswrapper[4958]: I1206 05:59:03.774170 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc90cd5d-e6ef-4216-ae6f-496ff8228ccd" path="/var/lib/kubelet/pods/bc90cd5d-e6ef-4216-ae6f-496ff8228ccd/volumes" Dec 06 05:59:03 crc kubenswrapper[4958]: I1206 05:59:03.924747 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-77b58f4b85-5jtpq" event={"ID":"9cbacdd0-bf2e-4ead-bdaf-04cc0a08254e","Type":"ContainerStarted","Data":"19b0c55941b8eab9ad4a45a40abcf81e9b838a6425f014d590cb09338cda3a00"} Dec 06 05:59:03 crc kubenswrapper[4958]: I1206 05:59:03.925020 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-77b58f4b85-5jtpq" Dec 06 05:59:03 crc kubenswrapper[4958]: I1206 05:59:03.949962 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-77b58f4b85-5jtpq" podStartSLOduration=3.949943057 podStartE2EDuration="3.949943057s" podCreationTimestamp="2025-12-06 05:59:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:59:03.946316398 +0000 UTC m=+1854.480087181" watchObservedRunningTime="2025-12-06 05:59:03.949943057 +0000 UTC m=+1854.483713820" Dec 06 05:59:07 crc kubenswrapper[4958]: I1206 05:59:07.762313 4958 scope.go:117] "RemoveContainer" containerID="3316d358950549dbdd244c3eebc5b98b6095cbbbc7ab60159a9410ea95b5b950" Dec 06 05:59:07 crc kubenswrapper[4958]: E1206 05:59:07.763133 4958 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 05:59:10 crc kubenswrapper[4958]: I1206 05:59:10.603304 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-77b58f4b85-5jtpq" Dec 06 05:59:10 crc kubenswrapper[4958]: I1206 05:59:10.679562 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bf6c7df67-l2kh6"] Dec 06 05:59:10 crc kubenswrapper[4958]: I1206 05:59:10.679825 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-bf6c7df67-l2kh6" podUID="c0d06a16-58e8-4aa3-8e7d-fd4c527078ee" containerName="dnsmasq-dns" containerID="cri-o://4826e9aad1f7545e240db2cbece4dbee114109110418d0a8f2f7a50fd64450d1" gracePeriod=10 Dec 06 05:59:12 crc kubenswrapper[4958]: I1206 05:59:12.003824 4958 generic.go:334] "Generic (PLEG): container finished" podID="c0d06a16-58e8-4aa3-8e7d-fd4c527078ee" containerID="4826e9aad1f7545e240db2cbece4dbee114109110418d0a8f2f7a50fd64450d1" exitCode=0 Dec 06 05:59:12 crc kubenswrapper[4958]: I1206 05:59:12.003885 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bf6c7df67-l2kh6" event={"ID":"c0d06a16-58e8-4aa3-8e7d-fd4c527078ee","Type":"ContainerDied","Data":"4826e9aad1f7545e240db2cbece4dbee114109110418d0a8f2f7a50fd64450d1"} Dec 06 05:59:12 crc kubenswrapper[4958]: I1206 05:59:12.302722 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-bf6c7df67-l2kh6"
Dec 06 05:59:12 crc kubenswrapper[4958]: I1206 05:59:12.457891 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0d06a16-58e8-4aa3-8e7d-fd4c527078ee-config\") pod \"c0d06a16-58e8-4aa3-8e7d-fd4c527078ee\" (UID: \"c0d06a16-58e8-4aa3-8e7d-fd4c527078ee\") "
Dec 06 05:59:12 crc kubenswrapper[4958]: I1206 05:59:12.458317 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c0d06a16-58e8-4aa3-8e7d-fd4c527078ee-dns-svc\") pod \"c0d06a16-58e8-4aa3-8e7d-fd4c527078ee\" (UID: \"c0d06a16-58e8-4aa3-8e7d-fd4c527078ee\") "
Dec 06 05:59:12 crc kubenswrapper[4958]: I1206 05:59:12.458565 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/c0d06a16-58e8-4aa3-8e7d-fd4c527078ee-openstack-edpm-ipam\") pod \"c0d06a16-58e8-4aa3-8e7d-fd4c527078ee\" (UID: \"c0d06a16-58e8-4aa3-8e7d-fd4c527078ee\") "
Dec 06 05:59:12 crc kubenswrapper[4958]: I1206 05:59:12.459112 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fndkz\" (UniqueName: \"kubernetes.io/projected/c0d06a16-58e8-4aa3-8e7d-fd4c527078ee-kube-api-access-fndkz\") pod \"c0d06a16-58e8-4aa3-8e7d-fd4c527078ee\" (UID: \"c0d06a16-58e8-4aa3-8e7d-fd4c527078ee\") "
Dec 06 05:59:12 crc kubenswrapper[4958]: I1206 05:59:12.459633 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c0d06a16-58e8-4aa3-8e7d-fd4c527078ee-ovsdbserver-sb\") pod \"c0d06a16-58e8-4aa3-8e7d-fd4c527078ee\" (UID: \"c0d06a16-58e8-4aa3-8e7d-fd4c527078ee\") "
Dec 06 05:59:12 crc kubenswrapper[4958]: I1206 05:59:12.459731 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c0d06a16-58e8-4aa3-8e7d-fd4c527078ee-ovsdbserver-nb\") pod \"c0d06a16-58e8-4aa3-8e7d-fd4c527078ee\" (UID: \"c0d06a16-58e8-4aa3-8e7d-fd4c527078ee\") "
Dec 06 05:59:12 crc kubenswrapper[4958]: I1206 05:59:12.459855 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c0d06a16-58e8-4aa3-8e7d-fd4c527078ee-dns-swift-storage-0\") pod \"c0d06a16-58e8-4aa3-8e7d-fd4c527078ee\" (UID: \"c0d06a16-58e8-4aa3-8e7d-fd4c527078ee\") "
Dec 06 05:59:12 crc kubenswrapper[4958]: I1206 05:59:12.463454 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0d06a16-58e8-4aa3-8e7d-fd4c527078ee-kube-api-access-fndkz" (OuterVolumeSpecName: "kube-api-access-fndkz") pod "c0d06a16-58e8-4aa3-8e7d-fd4c527078ee" (UID: "c0d06a16-58e8-4aa3-8e7d-fd4c527078ee"). InnerVolumeSpecName "kube-api-access-fndkz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 05:59:12 crc kubenswrapper[4958]: I1206 05:59:12.515412 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0d06a16-58e8-4aa3-8e7d-fd4c527078ee-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c0d06a16-58e8-4aa3-8e7d-fd4c527078ee" (UID: "c0d06a16-58e8-4aa3-8e7d-fd4c527078ee"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 05:59:12 crc kubenswrapper[4958]: I1206 05:59:12.524079 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0d06a16-58e8-4aa3-8e7d-fd4c527078ee-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c0d06a16-58e8-4aa3-8e7d-fd4c527078ee" (UID: "c0d06a16-58e8-4aa3-8e7d-fd4c527078ee"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 05:59:12 crc kubenswrapper[4958]: I1206 05:59:12.527437 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0d06a16-58e8-4aa3-8e7d-fd4c527078ee-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c0d06a16-58e8-4aa3-8e7d-fd4c527078ee" (UID: "c0d06a16-58e8-4aa3-8e7d-fd4c527078ee"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 05:59:12 crc kubenswrapper[4958]: I1206 05:59:12.540839 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0d06a16-58e8-4aa3-8e7d-fd4c527078ee-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "c0d06a16-58e8-4aa3-8e7d-fd4c527078ee" (UID: "c0d06a16-58e8-4aa3-8e7d-fd4c527078ee"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 05:59:12 crc kubenswrapper[4958]: I1206 05:59:12.541941 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0d06a16-58e8-4aa3-8e7d-fd4c527078ee-config" (OuterVolumeSpecName: "config") pod "c0d06a16-58e8-4aa3-8e7d-fd4c527078ee" (UID: "c0d06a16-58e8-4aa3-8e7d-fd4c527078ee"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 05:59:12 crc kubenswrapper[4958]: I1206 05:59:12.554615 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0d06a16-58e8-4aa3-8e7d-fd4c527078ee-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c0d06a16-58e8-4aa3-8e7d-fd4c527078ee" (UID: "c0d06a16-58e8-4aa3-8e7d-fd4c527078ee"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 05:59:12 crc kubenswrapper[4958]: I1206 05:59:12.562605 4958 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/c0d06a16-58e8-4aa3-8e7d-fd4c527078ee-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Dec 06 05:59:12 crc kubenswrapper[4958]: I1206 05:59:12.563287 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fndkz\" (UniqueName: \"kubernetes.io/projected/c0d06a16-58e8-4aa3-8e7d-fd4c527078ee-kube-api-access-fndkz\") on node \"crc\" DevicePath \"\""
Dec 06 05:59:12 crc kubenswrapper[4958]: I1206 05:59:12.563321 4958 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c0d06a16-58e8-4aa3-8e7d-fd4c527078ee-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Dec 06 05:59:12 crc kubenswrapper[4958]: I1206 05:59:12.563335 4958 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c0d06a16-58e8-4aa3-8e7d-fd4c527078ee-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Dec 06 05:59:12 crc kubenswrapper[4958]: I1206 05:59:12.563347 4958 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c0d06a16-58e8-4aa3-8e7d-fd4c527078ee-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Dec 06 05:59:12 crc kubenswrapper[4958]: I1206 05:59:12.563358 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c0d06a16-58e8-4aa3-8e7d-fd4c527078ee-config\") on node \"crc\" DevicePath \"\""
Dec 06 05:59:12 crc kubenswrapper[4958]: I1206 05:59:12.563369 4958 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c0d06a16-58e8-4aa3-8e7d-fd4c527078ee-dns-svc\") on node \"crc\" DevicePath \"\""
Dec 06 05:59:13 crc kubenswrapper[4958]: I1206 05:59:13.016070 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bf6c7df67-l2kh6" event={"ID":"c0d06a16-58e8-4aa3-8e7d-fd4c527078ee","Type":"ContainerDied","Data":"0efe253dc697adea875b70569a0dd0909331cb373ec5db6a087409cd68f21863"}
Dec 06 05:59:13 crc kubenswrapper[4958]: I1206 05:59:13.016130 4958 scope.go:117] "RemoveContainer" containerID="4826e9aad1f7545e240db2cbece4dbee114109110418d0a8f2f7a50fd64450d1"
Dec 06 05:59:13 crc kubenswrapper[4958]: I1206 05:59:13.016132 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bf6c7df67-l2kh6"
Dec 06 05:59:13 crc kubenswrapper[4958]: I1206 05:59:13.046343 4958 scope.go:117] "RemoveContainer" containerID="e7ab7e16eb6554e74e5182baeee6f2770355003f299dc72ec0f59eb6e2229772"
Dec 06 05:59:13 crc kubenswrapper[4958]: I1206 05:59:13.056889 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bf6c7df67-l2kh6"]
Dec 06 05:59:13 crc kubenswrapper[4958]: I1206 05:59:13.069193 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bf6c7df67-l2kh6"]
Dec 06 05:59:13 crc kubenswrapper[4958]: I1206 05:59:13.775892 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0d06a16-58e8-4aa3-8e7d-fd4c527078ee" path="/var/lib/kubelet/pods/c0d06a16-58e8-4aa3-8e7d-fd4c527078ee/volumes"
Dec 06 05:59:21 crc kubenswrapper[4958]: I1206 05:59:21.126540 4958 generic.go:334] "Generic (PLEG): container finished" podID="84d93a05-0621-49f6-ba81-ffc7b948ba5c" containerID="1227f07e685bd6da7e55e380603f81c3df8f885240d6401ef6dd3fe53078ebda" exitCode=0
Dec 06 05:59:21 crc kubenswrapper[4958]: I1206 05:59:21.126627 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"84d93a05-0621-49f6-ba81-ffc7b948ba5c","Type":"ContainerDied","Data":"1227f07e685bd6da7e55e380603f81c3df8f885240d6401ef6dd3fe53078ebda"}
Dec 06 05:59:21 crc kubenswrapper[4958]: I1206 05:59:21.128858 4958 generic.go:334] "Generic (PLEG): container finished" podID="9f0f5c93-d108-48ad-b3fd-c54d25ce982c" containerID="4550ddb6f77d3cf45848ed8408be8e13ea07772a37644d5ff06aefadf7e1ced3" exitCode=0
Dec 06 05:59:21 crc kubenswrapper[4958]: I1206 05:59:21.128890 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9f0f5c93-d108-48ad-b3fd-c54d25ce982c","Type":"ContainerDied","Data":"4550ddb6f77d3cf45848ed8408be8e13ea07772a37644d5ff06aefadf7e1ced3"}
Dec 06 05:59:22 crc kubenswrapper[4958]: I1206 05:59:22.138981 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"9f0f5c93-d108-48ad-b3fd-c54d25ce982c","Type":"ContainerStarted","Data":"2d02f49cbf3a90ec96122793169314ccfeab2b32c4313ee3e3a71e002d614552"}
Dec 06 05:59:22 crc kubenswrapper[4958]: I1206 05:59:22.139742 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0"
Dec 06 05:59:22 crc kubenswrapper[4958]: I1206 05:59:22.141571 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"84d93a05-0621-49f6-ba81-ffc7b948ba5c","Type":"ContainerStarted","Data":"f7fa2f3b5b3b87fef4852ebbff5fbbf0b494f68996b66c2767f27a40ba377e17"}
Dec 06 05:59:22 crc kubenswrapper[4958]: I1206 05:59:22.141767 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0"
Dec 06 05:59:22 crc kubenswrapper[4958]: I1206 05:59:22.171454 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=41.17143238 podStartE2EDuration="41.17143238s" podCreationTimestamp="2025-12-06 05:58:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:59:22.164777489 +0000 UTC m=+1872.698548272" watchObservedRunningTime="2025-12-06 05:59:22.17143238 +0000 UTC m=+1872.705203143"
Dec 06 05:59:22 crc kubenswrapper[4958]: I1206 05:59:22.199628 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=41.199605696 podStartE2EDuration="41.199605696s" podCreationTimestamp="2025-12-06 05:58:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 05:59:22.193515641 +0000 UTC m=+1872.727286404" watchObservedRunningTime="2025-12-06 05:59:22.199605696 +0000 UTC m=+1872.733376459"
Dec 06 05:59:22 crc kubenswrapper[4958]: I1206 05:59:22.762572 4958 scope.go:117] "RemoveContainer" containerID="3316d358950549dbdd244c3eebc5b98b6095cbbbc7ab60159a9410ea95b5b950"
Dec 06 05:59:22 crc kubenswrapper[4958]: E1206 05:59:22.763194 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4"
Dec 06 05:59:27 crc kubenswrapper[4958]: I1206 05:59:27.114418 4958 scope.go:117] "RemoveContainer" containerID="9cc46fb4cf6241ce5e4a1aba64180f40f862c74409f3f3db65256519557e593a"
Dec 06 05:59:29 crc kubenswrapper[4958]: I1206 05:59:29.151655 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zdhkb"]
Dec 06 05:59:29 crc kubenswrapper[4958]: E1206 05:59:29.152515 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0d06a16-58e8-4aa3-8e7d-fd4c527078ee" containerName="init"
Dec 06 05:59:29 crc kubenswrapper[4958]: I1206 05:59:29.152534 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0d06a16-58e8-4aa3-8e7d-fd4c527078ee" containerName="init"
Dec 06 05:59:29 crc kubenswrapper[4958]: E1206 05:59:29.152551 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc90cd5d-e6ef-4216-ae6f-496ff8228ccd" containerName="init"
Dec 06 05:59:29 crc kubenswrapper[4958]: I1206 05:59:29.152559 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc90cd5d-e6ef-4216-ae6f-496ff8228ccd" containerName="init"
Dec 06 05:59:29 crc kubenswrapper[4958]: E1206 05:59:29.152579 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0d06a16-58e8-4aa3-8e7d-fd4c527078ee" containerName="dnsmasq-dns"
Dec 06 05:59:29 crc kubenswrapper[4958]: I1206 05:59:29.152587 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0d06a16-58e8-4aa3-8e7d-fd4c527078ee" containerName="dnsmasq-dns"
Dec 06 05:59:29 crc kubenswrapper[4958]: E1206 05:59:29.152629 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc90cd5d-e6ef-4216-ae6f-496ff8228ccd" containerName="dnsmasq-dns"
Dec 06 05:59:29 crc kubenswrapper[4958]: I1206 05:59:29.152637 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc90cd5d-e6ef-4216-ae6f-496ff8228ccd" containerName="dnsmasq-dns"
Dec 06 05:59:29 crc kubenswrapper[4958]: I1206 05:59:29.152899 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc90cd5d-e6ef-4216-ae6f-496ff8228ccd" containerName="dnsmasq-dns"
Dec 06 05:59:29 crc kubenswrapper[4958]: I1206 05:59:29.152934 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0d06a16-58e8-4aa3-8e7d-fd4c527078ee" containerName="dnsmasq-dns"
Dec 06 05:59:29 crc kubenswrapper[4958]: I1206 05:59:29.153791 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zdhkb"
Dec 06 05:59:29 crc kubenswrapper[4958]: I1206 05:59:29.157479 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dqr5b"
Dec 06 05:59:29 crc kubenswrapper[4958]: I1206 05:59:29.157710 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Dec 06 05:59:29 crc kubenswrapper[4958]: I1206 05:59:29.157817 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Dec 06 05:59:29 crc kubenswrapper[4958]: I1206 05:59:29.158888 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Dec 06 05:59:29 crc kubenswrapper[4958]: I1206 05:59:29.163880 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zdhkb"]
Dec 06 05:59:29 crc kubenswrapper[4958]: I1206 05:59:29.281978 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b5992cb-ba4f-45da-bdfa-6ca4914a032f-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zdhkb\" (UID: \"2b5992cb-ba4f-45da-bdfa-6ca4914a032f\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zdhkb"
Dec 06 05:59:29 crc kubenswrapper[4958]: I1206 05:59:29.282100 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lh7nl\" (UniqueName: \"kubernetes.io/projected/2b5992cb-ba4f-45da-bdfa-6ca4914a032f-kube-api-access-lh7nl\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zdhkb\" (UID: \"2b5992cb-ba4f-45da-bdfa-6ca4914a032f\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zdhkb"
Dec 06 05:59:29 crc kubenswrapper[4958]: I1206 05:59:29.282149 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2b5992cb-ba4f-45da-bdfa-6ca4914a032f-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zdhkb\" (UID: \"2b5992cb-ba4f-45da-bdfa-6ca4914a032f\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zdhkb"
Dec 06 05:59:29 crc kubenswrapper[4958]: I1206 05:59:29.282499 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2b5992cb-ba4f-45da-bdfa-6ca4914a032f-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zdhkb\" (UID: \"2b5992cb-ba4f-45da-bdfa-6ca4914a032f\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zdhkb"
Dec 06 05:59:29 crc kubenswrapper[4958]: I1206 05:59:29.384192 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lh7nl\" (UniqueName: \"kubernetes.io/projected/2b5992cb-ba4f-45da-bdfa-6ca4914a032f-kube-api-access-lh7nl\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zdhkb\" (UID: \"2b5992cb-ba4f-45da-bdfa-6ca4914a032f\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zdhkb"
Dec 06 05:59:29 crc kubenswrapper[4958]: I1206 05:59:29.384259 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2b5992cb-ba4f-45da-bdfa-6ca4914a032f-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zdhkb\" (UID: \"2b5992cb-ba4f-45da-bdfa-6ca4914a032f\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zdhkb"
Dec 06 05:59:29 crc kubenswrapper[4958]: I1206 05:59:29.384287 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2b5992cb-ba4f-45da-bdfa-6ca4914a032f-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zdhkb\" (UID: \"2b5992cb-ba4f-45da-bdfa-6ca4914a032f\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zdhkb"
Dec 06 05:59:29 crc kubenswrapper[4958]: I1206 05:59:29.384443 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b5992cb-ba4f-45da-bdfa-6ca4914a032f-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zdhkb\" (UID: \"2b5992cb-ba4f-45da-bdfa-6ca4914a032f\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zdhkb"
Dec 06 05:59:29 crc kubenswrapper[4958]: I1206 05:59:29.391145 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2b5992cb-ba4f-45da-bdfa-6ca4914a032f-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zdhkb\" (UID: \"2b5992cb-ba4f-45da-bdfa-6ca4914a032f\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zdhkb"
Dec 06 05:59:29 crc kubenswrapper[4958]: I1206 05:59:29.391529 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b5992cb-ba4f-45da-bdfa-6ca4914a032f-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zdhkb\" (UID: \"2b5992cb-ba4f-45da-bdfa-6ca4914a032f\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zdhkb"
Dec 06 05:59:29 crc kubenswrapper[4958]: I1206 05:59:29.400289 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2b5992cb-ba4f-45da-bdfa-6ca4914a032f-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zdhkb\" (UID: \"2b5992cb-ba4f-45da-bdfa-6ca4914a032f\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zdhkb"
Dec 06 05:59:29 crc kubenswrapper[4958]: I1206 05:59:29.402704 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lh7nl\" (UniqueName: \"kubernetes.io/projected/2b5992cb-ba4f-45da-bdfa-6ca4914a032f-kube-api-access-lh7nl\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-zdhkb\" (UID: \"2b5992cb-ba4f-45da-bdfa-6ca4914a032f\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zdhkb"
Dec 06 05:59:29 crc kubenswrapper[4958]: I1206 05:59:29.481340 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zdhkb"
Dec 06 05:59:30 crc kubenswrapper[4958]: I1206 05:59:30.112368 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zdhkb"]
Dec 06 05:59:30 crc kubenswrapper[4958]: I1206 05:59:30.212973 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zdhkb" event={"ID":"2b5992cb-ba4f-45da-bdfa-6ca4914a032f","Type":"ContainerStarted","Data":"f57310b0d2bbfae9cdc6e79d6cb6e608e8c13a12ac024253b60b3a7ca3cf4764"}
Dec 06 05:59:32 crc kubenswrapper[4958]: I1206 05:59:32.097816 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="9f0f5c93-d108-48ad-b3fd-c54d25ce982c" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.229:5671: connect: connection refused"
Dec 06 05:59:32 crc kubenswrapper[4958]: I1206 05:59:32.116384 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="84d93a05-0621-49f6-ba81-ffc7b948ba5c" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.230:5671: connect: connection refused"
Dec 06 05:59:33 crc kubenswrapper[4958]: I1206 05:59:33.763443 4958 scope.go:117] "RemoveContainer" containerID="3316d358950549dbdd244c3eebc5b98b6095cbbbc7ab60159a9410ea95b5b950"
Dec 06 05:59:33 crc kubenswrapper[4958]: E1206 05:59:33.764309 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4"
Dec 06 05:59:34 crc kubenswrapper[4958]: I1206 05:59:34.246784 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-pvdb2" podUID="f44e552e-a8cb-4abf-bb5c-cfbde43b518b" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Dec 06 05:59:42 crc kubenswrapper[4958]: I1206 05:59:42.097565 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Dec 06 05:59:42 crc kubenswrapper[4958]: I1206 05:59:42.115695 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0"
Dec 06 05:59:43 crc kubenswrapper[4958]: I1206 05:59:43.347110 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zdhkb" event={"ID":"2b5992cb-ba4f-45da-bdfa-6ca4914a032f","Type":"ContainerStarted","Data":"b95a84b8962aaa792d3ec4c25b7fc9282e4bbc27ec58434cbe35436ece62185a"}
Dec 06 05:59:43 crc kubenswrapper[4958]: I1206 05:59:43.376276 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zdhkb" podStartSLOduration=2.16768376 podStartE2EDuration="14.376253401s" podCreationTimestamp="2025-12-06 05:59:29 +0000 UTC" firstStartedPulling="2025-12-06 05:59:30.111571422 +0000 UTC m=+1880.645342185" lastFinishedPulling="2025-12-06 05:59:42.320141063 +0000 UTC m=+1892.853911826" observedRunningTime="2025-12-06 05:59:43.363518735 +0000 UTC m=+1893.897289508" watchObservedRunningTime="2025-12-06 05:59:43.376253401 +0000 UTC m=+1893.910024164"
Dec 06 05:59:48 crc kubenswrapper[4958]: I1206 05:59:48.762576 4958 scope.go:117] "RemoveContainer" containerID="3316d358950549dbdd244c3eebc5b98b6095cbbbc7ab60159a9410ea95b5b950"
Dec 06 05:59:48 crc kubenswrapper[4958]: E1206 05:59:48.763429 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4"
Dec 06 05:59:55 crc kubenswrapper[4958]: I1206 05:59:55.480081 4958 generic.go:334] "Generic (PLEG): container finished" podID="2b5992cb-ba4f-45da-bdfa-6ca4914a032f" containerID="b95a84b8962aaa792d3ec4c25b7fc9282e4bbc27ec58434cbe35436ece62185a" exitCode=0
Dec 06 05:59:55 crc kubenswrapper[4958]: I1206 05:59:55.480171 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zdhkb" event={"ID":"2b5992cb-ba4f-45da-bdfa-6ca4914a032f","Type":"ContainerDied","Data":"b95a84b8962aaa792d3ec4c25b7fc9282e4bbc27ec58434cbe35436ece62185a"}
Dec 06 05:59:56 crc kubenswrapper[4958]: I1206 05:59:56.939418 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zdhkb"
Dec 06 05:59:57 crc kubenswrapper[4958]: I1206 05:59:57.071104 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b5992cb-ba4f-45da-bdfa-6ca4914a032f-repo-setup-combined-ca-bundle\") pod \"2b5992cb-ba4f-45da-bdfa-6ca4914a032f\" (UID: \"2b5992cb-ba4f-45da-bdfa-6ca4914a032f\") "
Dec 06 05:59:57 crc kubenswrapper[4958]: I1206 05:59:57.071263 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lh7nl\" (UniqueName: \"kubernetes.io/projected/2b5992cb-ba4f-45da-bdfa-6ca4914a032f-kube-api-access-lh7nl\") pod \"2b5992cb-ba4f-45da-bdfa-6ca4914a032f\" (UID: \"2b5992cb-ba4f-45da-bdfa-6ca4914a032f\") "
Dec 06 05:59:57 crc kubenswrapper[4958]: I1206 05:59:57.071444 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2b5992cb-ba4f-45da-bdfa-6ca4914a032f-ssh-key\") pod \"2b5992cb-ba4f-45da-bdfa-6ca4914a032f\" (UID: \"2b5992cb-ba4f-45da-bdfa-6ca4914a032f\") "
Dec 06 05:59:57 crc kubenswrapper[4958]: I1206 05:59:57.071496 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2b5992cb-ba4f-45da-bdfa-6ca4914a032f-inventory\") pod \"2b5992cb-ba4f-45da-bdfa-6ca4914a032f\" (UID: \"2b5992cb-ba4f-45da-bdfa-6ca4914a032f\") "
Dec 06 05:59:57 crc kubenswrapper[4958]: I1206 05:59:57.077034 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b5992cb-ba4f-45da-bdfa-6ca4914a032f-kube-api-access-lh7nl" (OuterVolumeSpecName: "kube-api-access-lh7nl") pod "2b5992cb-ba4f-45da-bdfa-6ca4914a032f" (UID: "2b5992cb-ba4f-45da-bdfa-6ca4914a032f"). InnerVolumeSpecName "kube-api-access-lh7nl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 05:59:57 crc kubenswrapper[4958]: I1206 05:59:57.077039 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b5992cb-ba4f-45da-bdfa-6ca4914a032f-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "2b5992cb-ba4f-45da-bdfa-6ca4914a032f" (UID: "2b5992cb-ba4f-45da-bdfa-6ca4914a032f"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 05:59:57 crc kubenswrapper[4958]: I1206 05:59:57.100118 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b5992cb-ba4f-45da-bdfa-6ca4914a032f-inventory" (OuterVolumeSpecName: "inventory") pod "2b5992cb-ba4f-45da-bdfa-6ca4914a032f" (UID: "2b5992cb-ba4f-45da-bdfa-6ca4914a032f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 05:59:57 crc kubenswrapper[4958]: I1206 05:59:57.101033 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b5992cb-ba4f-45da-bdfa-6ca4914a032f-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "2b5992cb-ba4f-45da-bdfa-6ca4914a032f" (UID: "2b5992cb-ba4f-45da-bdfa-6ca4914a032f"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 05:59:57 crc kubenswrapper[4958]: I1206 05:59:57.174105 4958 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b5992cb-ba4f-45da-bdfa-6ca4914a032f-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 06 05:59:57 crc kubenswrapper[4958]: I1206 05:59:57.174144 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lh7nl\" (UniqueName: \"kubernetes.io/projected/2b5992cb-ba4f-45da-bdfa-6ca4914a032f-kube-api-access-lh7nl\") on node \"crc\" DevicePath \"\""
Dec 06 05:59:57 crc kubenswrapper[4958]: I1206 05:59:57.174154 4958 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2b5992cb-ba4f-45da-bdfa-6ca4914a032f-ssh-key\") on node \"crc\" DevicePath \"\""
Dec 06 05:59:57 crc kubenswrapper[4958]: I1206 05:59:57.174166 4958 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2b5992cb-ba4f-45da-bdfa-6ca4914a032f-inventory\") on node \"crc\" DevicePath \"\""
Dec 06 05:59:57 crc kubenswrapper[4958]: I1206 05:59:57.502339 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zdhkb" event={"ID":"2b5992cb-ba4f-45da-bdfa-6ca4914a032f","Type":"ContainerDied","Data":"f57310b0d2bbfae9cdc6e79d6cb6e608e8c13a12ac024253b60b3a7ca3cf4764"}
Dec 06 05:59:57 crc kubenswrapper[4958]: I1206 05:59:57.502380 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f57310b0d2bbfae9cdc6e79d6cb6e608e8c13a12ac024253b60b3a7ca3cf4764"
Dec 06 05:59:57 crc kubenswrapper[4958]: I1206 05:59:57.502390 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-zdhkb"
Dec 06 05:59:57 crc kubenswrapper[4958]: I1206 05:59:57.571680 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-x2wg5"]
Dec 06 05:59:57 crc kubenswrapper[4958]: E1206 05:59:57.572433 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b5992cb-ba4f-45da-bdfa-6ca4914a032f" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Dec 06 05:59:57 crc kubenswrapper[4958]: I1206 05:59:57.572460 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b5992cb-ba4f-45da-bdfa-6ca4914a032f" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Dec 06 05:59:57 crc kubenswrapper[4958]: I1206 05:59:57.572745 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b5992cb-ba4f-45da-bdfa-6ca4914a032f" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Dec 06 05:59:57 crc kubenswrapper[4958]: I1206 05:59:57.573827 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-x2wg5"
Dec 06 05:59:57 crc kubenswrapper[4958]: I1206 05:59:57.576419 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Dec 06 05:59:57 crc kubenswrapper[4958]: I1206 05:59:57.576684 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dqr5b"
Dec 06 05:59:57 crc kubenswrapper[4958]: I1206 05:59:57.576875 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Dec 06 05:59:57 crc kubenswrapper[4958]: I1206 05:59:57.577023 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Dec 06 05:59:57 crc kubenswrapper[4958]: I1206 05:59:57.592181 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-x2wg5"]
Dec 06 05:59:57 crc kubenswrapper[4958]: I1206 05:59:57.683629 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ca7fd004-bb6c-4a8f-b0ac-bf8ab8d95805-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-x2wg5\" (UID: \"ca7fd004-bb6c-4a8f-b0ac-bf8ab8d95805\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-x2wg5"
Dec 06 05:59:57 crc kubenswrapper[4958]: I1206 05:59:57.683751 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ca7fd004-bb6c-4a8f-b0ac-bf8ab8d95805-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-x2wg5\" (UID: \"ca7fd004-bb6c-4a8f-b0ac-bf8ab8d95805\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-x2wg5"
Dec 06 05:59:57 crc kubenswrapper[4958]: I1206 05:59:57.683931 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twrtd\" (UniqueName: \"kubernetes.io/projected/ca7fd004-bb6c-4a8f-b0ac-bf8ab8d95805-kube-api-access-twrtd\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-x2wg5\" (UID: \"ca7fd004-bb6c-4a8f-b0ac-bf8ab8d95805\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-x2wg5"
Dec 06 05:59:57 crc kubenswrapper[4958]: I1206 05:59:57.786418 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ca7fd004-bb6c-4a8f-b0ac-bf8ab8d95805-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-x2wg5\" (UID: \"ca7fd004-bb6c-4a8f-b0ac-bf8ab8d95805\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-x2wg5"
Dec 06 05:59:57 crc kubenswrapper[4958]: I1206 05:59:57.786931 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ca7fd004-bb6c-4a8f-b0ac-bf8ab8d95805-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-x2wg5\" (UID: \"ca7fd004-bb6c-4a8f-b0ac-bf8ab8d95805\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-x2wg5"
Dec 06 05:59:57 crc kubenswrapper[4958]: I1206 05:59:57.787108 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twrtd\" (UniqueName: \"kubernetes.io/projected/ca7fd004-bb6c-4a8f-b0ac-bf8ab8d95805-kube-api-access-twrtd\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-x2wg5\" (UID: \"ca7fd004-bb6c-4a8f-b0ac-bf8ab8d95805\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-x2wg5"
Dec 06 05:59:57 crc kubenswrapper[4958]: I1206 05:59:57.799832 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ca7fd004-bb6c-4a8f-b0ac-bf8ab8d95805-ssh-key\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-x2wg5\" (UID: \"ca7fd004-bb6c-4a8f-b0ac-bf8ab8d95805\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-x2wg5"
Dec 06 05:59:57 crc kubenswrapper[4958]: I1206 05:59:57.800157 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ca7fd004-bb6c-4a8f-b0ac-bf8ab8d95805-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-x2wg5\" (UID: \"ca7fd004-bb6c-4a8f-b0ac-bf8ab8d95805\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-x2wg5"
Dec 06 05:59:57 crc kubenswrapper[4958]: I1206 05:59:57.804871 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twrtd\" (UniqueName: \"kubernetes.io/projected/ca7fd004-bb6c-4a8f-b0ac-bf8ab8d95805-kube-api-access-twrtd\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-x2wg5\" (UID: \"ca7fd004-bb6c-4a8f-b0ac-bf8ab8d95805\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-x2wg5"
Dec 06 05:59:57 crc kubenswrapper[4958]: I1206 05:59:57.907488 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-x2wg5"
Dec 06 05:59:58 crc kubenswrapper[4958]: W1206 05:59:58.661460 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podca7fd004_bb6c_4a8f_b0ac_bf8ab8d95805.slice/crio-83cea9585ab156d6f66fbb63ccbf58b4ad5b56aa2d62b493bcd7930fb335d852 WatchSource:0}: Error finding container 83cea9585ab156d6f66fbb63ccbf58b4ad5b56aa2d62b493bcd7930fb335d852: Status 404 returned error can't find the container with id 83cea9585ab156d6f66fbb63ccbf58b4ad5b56aa2d62b493bcd7930fb335d852
Dec 06 05:59:58 crc kubenswrapper[4958]: I1206 05:59:58.664228 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-x2wg5"]
Dec 06 05:59:59 crc kubenswrapper[4958]: I1206 05:59:59.525062 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-x2wg5" event={"ID":"ca7fd004-bb6c-4a8f-b0ac-bf8ab8d95805","Type":"ContainerStarted","Data":"608e7643e2d5fb46f805aae36490d80c049083dbad56d84ec26c9747d4a7a769"}
Dec 06 05:59:59 crc kubenswrapper[4958]: I1206 05:59:59.525414 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-x2wg5" event={"ID":"ca7fd004-bb6c-4a8f-b0ac-bf8ab8d95805","Type":"ContainerStarted","Data":"83cea9585ab156d6f66fbb63ccbf58b4ad5b56aa2d62b493bcd7930fb335d852"}
Dec 06 05:59:59 crc kubenswrapper[4958]: I1206 05:59:59.543270 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-x2wg5" podStartSLOduration=2.049546831 podStartE2EDuration="2.543249267s" podCreationTimestamp="2025-12-06 05:59:57 +0000 UTC" firstStartedPulling="2025-12-06 05:59:58.663958453 +0000 UTC m=+1909.197729216" lastFinishedPulling="2025-12-06 05:59:59.157660879 +0000 UTC m=+1909.691431652" observedRunningTime="2025-12-06 05:59:59.539979348 +0000 UTC m=+1910.073750111" watchObservedRunningTime="2025-12-06 05:59:59.543249267 +0000 UTC m=+1910.077020030"
Dec 06 06:00:00 crc kubenswrapper[4958]: I1206 06:00:00.147748 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416680-9lz9p"]
Dec 06 06:00:00 crc kubenswrapper[4958]: I1206 06:00:00.150740 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416680-9lz9p"
Dec 06 06:00:00 crc kubenswrapper[4958]: I1206 06:00:00.153064 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Dec 06 06:00:00 crc kubenswrapper[4958]: I1206 06:00:00.154288 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Dec 06 06:00:00 crc kubenswrapper[4958]: I1206 06:00:00.164008 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416680-9lz9p"]
Dec 06 06:00:00 crc kubenswrapper[4958]: I1206 06:00:00.333796 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xm258\" (UniqueName: \"kubernetes.io/projected/bafcdafc-d17c-4271-ad12-0a8fed0a33cc-kube-api-access-xm258\") pod \"collect-profiles-29416680-9lz9p\" (UID: \"bafcdafc-d17c-4271-ad12-0a8fed0a33cc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416680-9lz9p"
Dec 06 06:00:00 crc kubenswrapper[4958]: I1206 06:00:00.333901 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bafcdafc-d17c-4271-ad12-0a8fed0a33cc-secret-volume\") pod \"collect-profiles-29416680-9lz9p\" (UID: \"bafcdafc-d17c-4271-ad12-0a8fed0a33cc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416680-9lz9p"
Dec 06 06:00:00 crc kubenswrapper[4958]: I1206 06:00:00.333938 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bafcdafc-d17c-4271-ad12-0a8fed0a33cc-config-volume\") pod \"collect-profiles-29416680-9lz9p\" (UID: \"bafcdafc-d17c-4271-ad12-0a8fed0a33cc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416680-9lz9p"
Dec 06 06:00:00 crc kubenswrapper[4958]: I1206 06:00:00.435412 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xm258\" (UniqueName: \"kubernetes.io/projected/bafcdafc-d17c-4271-ad12-0a8fed0a33cc-kube-api-access-xm258\") pod \"collect-profiles-29416680-9lz9p\" (UID: \"bafcdafc-d17c-4271-ad12-0a8fed0a33cc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416680-9lz9p"
Dec 06 06:00:00 crc kubenswrapper[4958]: I1206 06:00:00.435513 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bafcdafc-d17c-4271-ad12-0a8fed0a33cc-secret-volume\") pod \"collect-profiles-29416680-9lz9p\" (UID: \"bafcdafc-d17c-4271-ad12-0a8fed0a33cc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416680-9lz9p"
Dec 06 06:00:00 crc kubenswrapper[4958]: I1206 06:00:00.435551 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bafcdafc-d17c-4271-ad12-0a8fed0a33cc-config-volume\") pod \"collect-profiles-29416680-9lz9p\" (UID: \"bafcdafc-d17c-4271-ad12-0a8fed0a33cc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416680-9lz9p"
Dec 06 06:00:00 crc kubenswrapper[4958]: I1206 06:00:00.438736 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bafcdafc-d17c-4271-ad12-0a8fed0a33cc-config-volume\") pod \"collect-profiles-29416680-9lz9p\" (UID: \"bafcdafc-d17c-4271-ad12-0a8fed0a33cc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416680-9lz9p"
Dec 06 06:00:00 crc kubenswrapper[4958]: I1206 06:00:00.444323 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bafcdafc-d17c-4271-ad12-0a8fed0a33cc-secret-volume\") pod \"collect-profiles-29416680-9lz9p\" (UID: \"bafcdafc-d17c-4271-ad12-0a8fed0a33cc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416680-9lz9p"
Dec 06 06:00:00 crc kubenswrapper[4958]: I1206 06:00:00.454038 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xm258\" (UniqueName: \"kubernetes.io/projected/bafcdafc-d17c-4271-ad12-0a8fed0a33cc-kube-api-access-xm258\") pod \"collect-profiles-29416680-9lz9p\" (UID: \"bafcdafc-d17c-4271-ad12-0a8fed0a33cc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416680-9lz9p"
Dec 06 06:00:00 crc kubenswrapper[4958]: I1206 06:00:00.502258 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416680-9lz9p"
Dec 06 06:00:00 crc kubenswrapper[4958]: I1206 06:00:00.954727 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416680-9lz9p"]
Dec 06 06:00:00 crc kubenswrapper[4958]: W1206 06:00:00.963839 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbafcdafc_d17c_4271_ad12_0a8fed0a33cc.slice/crio-0e9cfb97466042b2ef1c8b248c9966e35235f2eb1c859ff12d827ebae7a603ec WatchSource:0}: Error finding container 0e9cfb97466042b2ef1c8b248c9966e35235f2eb1c859ff12d827ebae7a603ec: Status 404 returned error can't find the container with id 0e9cfb97466042b2ef1c8b248c9966e35235f2eb1c859ff12d827ebae7a603ec
Dec 06 06:00:01 crc kubenswrapper[4958]: I1206 06:00:01.546141 4958 generic.go:334] "Generic (PLEG): container finished" podID="bafcdafc-d17c-4271-ad12-0a8fed0a33cc" containerID="59527b2596c4b6f68867b7891527a8d54a6151d269013e0563b724d2d390ee52" exitCode=0
Dec 06 06:00:01 crc kubenswrapper[4958]: I1206 06:00:01.546210 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416680-9lz9p" event={"ID":"bafcdafc-d17c-4271-ad12-0a8fed0a33cc","Type":"ContainerDied","Data":"59527b2596c4b6f68867b7891527a8d54a6151d269013e0563b724d2d390ee52"}
Dec 06 06:00:01 crc kubenswrapper[4958]: I1206 06:00:01.546561 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416680-9lz9p" event={"ID":"bafcdafc-d17c-4271-ad12-0a8fed0a33cc","Type":"ContainerStarted","Data":"0e9cfb97466042b2ef1c8b248c9966e35235f2eb1c859ff12d827ebae7a603ec"}
Dec 06 06:00:02 crc kubenswrapper[4958]: I1206 06:00:02.556875 4958 generic.go:334] "Generic (PLEG): container finished" podID="ca7fd004-bb6c-4a8f-b0ac-bf8ab8d95805" containerID="608e7643e2d5fb46f805aae36490d80c049083dbad56d84ec26c9747d4a7a769" exitCode=0
Dec 06 06:00:02 crc kubenswrapper[4958]: I1206 06:00:02.556948 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-x2wg5" event={"ID":"ca7fd004-bb6c-4a8f-b0ac-bf8ab8d95805","Type":"ContainerDied","Data":"608e7643e2d5fb46f805aae36490d80c049083dbad56d84ec26c9747d4a7a769"}
Dec 06 06:00:02 crc kubenswrapper[4958]: I1206 06:00:02.890979 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416680-9lz9p"
Dec 06 06:00:02 crc kubenswrapper[4958]: I1206 06:00:02.983901 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bafcdafc-d17c-4271-ad12-0a8fed0a33cc-secret-volume\") pod \"bafcdafc-d17c-4271-ad12-0a8fed0a33cc\" (UID: \"bafcdafc-d17c-4271-ad12-0a8fed0a33cc\") "
Dec 06 06:00:02 crc kubenswrapper[4958]: I1206 06:00:02.983955 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bafcdafc-d17c-4271-ad12-0a8fed0a33cc-config-volume\") pod \"bafcdafc-d17c-4271-ad12-0a8fed0a33cc\" (UID: \"bafcdafc-d17c-4271-ad12-0a8fed0a33cc\") "
Dec 06 06:00:02 crc kubenswrapper[4958]: I1206 06:00:02.984120 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xm258\" (UniqueName: \"kubernetes.io/projected/bafcdafc-d17c-4271-ad12-0a8fed0a33cc-kube-api-access-xm258\") pod \"bafcdafc-d17c-4271-ad12-0a8fed0a33cc\" (UID: \"bafcdafc-d17c-4271-ad12-0a8fed0a33cc\") "
Dec 06 06:00:02 crc kubenswrapper[4958]: I1206 06:00:02.984975 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bafcdafc-d17c-4271-ad12-0a8fed0a33cc-config-volume" (OuterVolumeSpecName: "config-volume") pod "bafcdafc-d17c-4271-ad12-0a8fed0a33cc" (UID: "bafcdafc-d17c-4271-ad12-0a8fed0a33cc"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 06:00:02 crc kubenswrapper[4958]: I1206 06:00:02.989707 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bafcdafc-d17c-4271-ad12-0a8fed0a33cc-kube-api-access-xm258" (OuterVolumeSpecName: "kube-api-access-xm258") pod "bafcdafc-d17c-4271-ad12-0a8fed0a33cc" (UID: "bafcdafc-d17c-4271-ad12-0a8fed0a33cc"). InnerVolumeSpecName "kube-api-access-xm258". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 06:00:02 crc kubenswrapper[4958]: I1206 06:00:02.991638 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bafcdafc-d17c-4271-ad12-0a8fed0a33cc-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "bafcdafc-d17c-4271-ad12-0a8fed0a33cc" (UID: "bafcdafc-d17c-4271-ad12-0a8fed0a33cc"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 06:00:03 crc kubenswrapper[4958]: I1206 06:00:03.086913 4958 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bafcdafc-d17c-4271-ad12-0a8fed0a33cc-secret-volume\") on node \"crc\" DevicePath \"\""
Dec 06 06:00:03 crc kubenswrapper[4958]: I1206 06:00:03.086960 4958 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bafcdafc-d17c-4271-ad12-0a8fed0a33cc-config-volume\") on node \"crc\" DevicePath \"\""
Dec 06 06:00:03 crc kubenswrapper[4958]: I1206 06:00:03.086972 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xm258\" (UniqueName: \"kubernetes.io/projected/bafcdafc-d17c-4271-ad12-0a8fed0a33cc-kube-api-access-xm258\") on node \"crc\" DevicePath \"\""
Dec 06 06:00:03 crc kubenswrapper[4958]: I1206 06:00:03.566969 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416680-9lz9p" event={"ID":"bafcdafc-d17c-4271-ad12-0a8fed0a33cc","Type":"ContainerDied","Data":"0e9cfb97466042b2ef1c8b248c9966e35235f2eb1c859ff12d827ebae7a603ec"}
Dec 06 06:00:03 crc kubenswrapper[4958]: I1206 06:00:03.567023 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e9cfb97466042b2ef1c8b248c9966e35235f2eb1c859ff12d827ebae7a603ec"
Dec 06 06:00:03 crc kubenswrapper[4958]: I1206 06:00:03.567024 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416680-9lz9p"
Dec 06 06:00:03 crc kubenswrapper[4958]: I1206 06:00:03.763357 4958 scope.go:117] "RemoveContainer" containerID="3316d358950549dbdd244c3eebc5b98b6095cbbbc7ab60159a9410ea95b5b950"
Dec 06 06:00:03 crc kubenswrapper[4958]: E1206 06:00:03.764253 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4"
Dec 06 06:00:03 crc kubenswrapper[4958]: I1206 06:00:03.986930 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-x2wg5"
Dec 06 06:00:04 crc kubenswrapper[4958]: I1206 06:00:04.106464 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twrtd\" (UniqueName: \"kubernetes.io/projected/ca7fd004-bb6c-4a8f-b0ac-bf8ab8d95805-kube-api-access-twrtd\") pod \"ca7fd004-bb6c-4a8f-b0ac-bf8ab8d95805\" (UID: \"ca7fd004-bb6c-4a8f-b0ac-bf8ab8d95805\") "
Dec 06 06:00:04 crc kubenswrapper[4958]: I1206 06:00:04.106737 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ca7fd004-bb6c-4a8f-b0ac-bf8ab8d95805-inventory\") pod \"ca7fd004-bb6c-4a8f-b0ac-bf8ab8d95805\" (UID: \"ca7fd004-bb6c-4a8f-b0ac-bf8ab8d95805\") "
Dec 06 06:00:04 crc kubenswrapper[4958]: I1206 06:00:04.106855 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ca7fd004-bb6c-4a8f-b0ac-bf8ab8d95805-ssh-key\") pod \"ca7fd004-bb6c-4a8f-b0ac-bf8ab8d95805\" (UID: \"ca7fd004-bb6c-4a8f-b0ac-bf8ab8d95805\") "
Dec 06 06:00:04 crc kubenswrapper[4958]: I1206 06:00:04.112099 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca7fd004-bb6c-4a8f-b0ac-bf8ab8d95805-kube-api-access-twrtd" (OuterVolumeSpecName: "kube-api-access-twrtd") pod "ca7fd004-bb6c-4a8f-b0ac-bf8ab8d95805" (UID: "ca7fd004-bb6c-4a8f-b0ac-bf8ab8d95805"). InnerVolumeSpecName "kube-api-access-twrtd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 06:00:04 crc kubenswrapper[4958]: I1206 06:00:04.139263 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca7fd004-bb6c-4a8f-b0ac-bf8ab8d95805-inventory" (OuterVolumeSpecName: "inventory") pod "ca7fd004-bb6c-4a8f-b0ac-bf8ab8d95805" (UID: "ca7fd004-bb6c-4a8f-b0ac-bf8ab8d95805"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 06:00:04 crc kubenswrapper[4958]: I1206 06:00:04.141935 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca7fd004-bb6c-4a8f-b0ac-bf8ab8d95805-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "ca7fd004-bb6c-4a8f-b0ac-bf8ab8d95805" (UID: "ca7fd004-bb6c-4a8f-b0ac-bf8ab8d95805"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 06:00:04 crc kubenswrapper[4958]: I1206 06:00:04.209203 4958 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ca7fd004-bb6c-4a8f-b0ac-bf8ab8d95805-inventory\") on node \"crc\" DevicePath \"\""
Dec 06 06:00:04 crc kubenswrapper[4958]: I1206 06:00:04.209263 4958 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ca7fd004-bb6c-4a8f-b0ac-bf8ab8d95805-ssh-key\") on node \"crc\" DevicePath \"\""
Dec 06 06:00:04 crc kubenswrapper[4958]: I1206 06:00:04.209275 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-twrtd\" (UniqueName: \"kubernetes.io/projected/ca7fd004-bb6c-4a8f-b0ac-bf8ab8d95805-kube-api-access-twrtd\") on node \"crc\" DevicePath \"\""
Dec 06 06:00:04 crc kubenswrapper[4958]: I1206 06:00:04.577658 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-x2wg5" event={"ID":"ca7fd004-bb6c-4a8f-b0ac-bf8ab8d95805","Type":"ContainerDied","Data":"83cea9585ab156d6f66fbb63ccbf58b4ad5b56aa2d62b493bcd7930fb335d852"}
Dec 06 06:00:04 crc kubenswrapper[4958]: I1206 06:00:04.577701 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="83cea9585ab156d6f66fbb63ccbf58b4ad5b56aa2d62b493bcd7930fb335d852"
Dec 06 06:00:04 crc kubenswrapper[4958]: I1206 06:00:04.577815 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-x2wg5"
Dec 06 06:00:04 crc kubenswrapper[4958]: I1206 06:00:04.653060 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-q4rn2"]
Dec 06 06:00:04 crc kubenswrapper[4958]: E1206 06:00:04.653579 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bafcdafc-d17c-4271-ad12-0a8fed0a33cc" containerName="collect-profiles"
Dec 06 06:00:04 crc kubenswrapper[4958]: I1206 06:00:04.653598 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="bafcdafc-d17c-4271-ad12-0a8fed0a33cc" containerName="collect-profiles"
Dec 06 06:00:04 crc kubenswrapper[4958]: E1206 06:00:04.653632 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca7fd004-bb6c-4a8f-b0ac-bf8ab8d95805" containerName="redhat-edpm-deployment-openstack-edpm-ipam"
Dec 06 06:00:04 crc kubenswrapper[4958]: I1206 06:00:04.653641 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca7fd004-bb6c-4a8f-b0ac-bf8ab8d95805" containerName="redhat-edpm-deployment-openstack-edpm-ipam"
Dec 06 06:00:04 crc kubenswrapper[4958]: I1206 06:00:04.653854 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="bafcdafc-d17c-4271-ad12-0a8fed0a33cc" containerName="collect-profiles"
Dec 06 06:00:04 crc kubenswrapper[4958]: I1206 06:00:04.653873 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca7fd004-bb6c-4a8f-b0ac-bf8ab8d95805" containerName="redhat-edpm-deployment-openstack-edpm-ipam"
Dec 06 06:00:04 crc kubenswrapper[4958]: I1206 06:00:04.654559 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-q4rn2"
Dec 06 06:00:04 crc kubenswrapper[4958]: I1206 06:00:04.656411 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Dec 06 06:00:04 crc kubenswrapper[4958]: I1206 06:00:04.656628 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dqr5b"
Dec 06 06:00:04 crc kubenswrapper[4958]: I1206 06:00:04.656697 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Dec 06 06:00:04 crc kubenswrapper[4958]: I1206 06:00:04.659001 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Dec 06 06:00:04 crc kubenswrapper[4958]: I1206 06:00:04.663532 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-q4rn2"]
Dec 06 06:00:04 crc kubenswrapper[4958]: I1206 06:00:04.819630 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2e1b78f3-f2b9-4304-b139-13f156e87cd1-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-q4rn2\" (UID: \"2e1b78f3-f2b9-4304-b139-13f156e87cd1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-q4rn2"
Dec 06 06:00:04 crc kubenswrapper[4958]: I1206 06:00:04.819746 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e1b78f3-f2b9-4304-b139-13f156e87cd1-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-q4rn2\" (UID: \"2e1b78f3-f2b9-4304-b139-13f156e87cd1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-q4rn2"
Dec 06 06:00:04 crc kubenswrapper[4958]: I1206 06:00:04.819782 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2e1b78f3-f2b9-4304-b139-13f156e87cd1-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-q4rn2\" (UID: \"2e1b78f3-f2b9-4304-b139-13f156e87cd1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-q4rn2"
Dec 06 06:00:04 crc kubenswrapper[4958]: I1206 06:00:04.819870 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4bvp\" (UniqueName: \"kubernetes.io/projected/2e1b78f3-f2b9-4304-b139-13f156e87cd1-kube-api-access-v4bvp\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-q4rn2\" (UID: \"2e1b78f3-f2b9-4304-b139-13f156e87cd1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-q4rn2"
Dec 06 06:00:04 crc kubenswrapper[4958]: I1206 06:00:04.921585 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2e1b78f3-f2b9-4304-b139-13f156e87cd1-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-q4rn2\" (UID: \"2e1b78f3-f2b9-4304-b139-13f156e87cd1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-q4rn2"
Dec 06 06:00:04 crc kubenswrapper[4958]: I1206 06:00:04.921716 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e1b78f3-f2b9-4304-b139-13f156e87cd1-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-q4rn2\" (UID: \"2e1b78f3-f2b9-4304-b139-13f156e87cd1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-q4rn2"
Dec 06 06:00:04 crc kubenswrapper[4958]: I1206 06:00:04.921758 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2e1b78f3-f2b9-4304-b139-13f156e87cd1-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-q4rn2\" (UID: \"2e1b78f3-f2b9-4304-b139-13f156e87cd1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-q4rn2"
Dec 06 06:00:04 crc kubenswrapper[4958]: I1206 06:00:04.921823 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4bvp\" (UniqueName: \"kubernetes.io/projected/2e1b78f3-f2b9-4304-b139-13f156e87cd1-kube-api-access-v4bvp\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-q4rn2\" (UID: \"2e1b78f3-f2b9-4304-b139-13f156e87cd1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-q4rn2"
Dec 06 06:00:04 crc kubenswrapper[4958]: I1206 06:00:04.928589 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e1b78f3-f2b9-4304-b139-13f156e87cd1-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-q4rn2\" (UID: \"2e1b78f3-f2b9-4304-b139-13f156e87cd1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-q4rn2"
Dec 06 06:00:04 crc kubenswrapper[4958]: I1206 06:00:04.928641 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2e1b78f3-f2b9-4304-b139-13f156e87cd1-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-q4rn2\" (UID: \"2e1b78f3-f2b9-4304-b139-13f156e87cd1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-q4rn2"
Dec 06 06:00:04 crc kubenswrapper[4958]: I1206 06:00:04.928931 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2e1b78f3-f2b9-4304-b139-13f156e87cd1-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-q4rn2\" (UID: \"2e1b78f3-f2b9-4304-b139-13f156e87cd1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-q4rn2"
Dec 06 06:00:04 crc kubenswrapper[4958]: I1206 06:00:04.940173 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4bvp\" (UniqueName: \"kubernetes.io/projected/2e1b78f3-f2b9-4304-b139-13f156e87cd1-kube-api-access-v4bvp\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-q4rn2\" (UID: \"2e1b78f3-f2b9-4304-b139-13f156e87cd1\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-q4rn2"
Dec 06 06:00:04 crc kubenswrapper[4958]: I1206 06:00:04.975920 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-q4rn2"
Dec 06 06:00:05 crc kubenswrapper[4958]: W1206 06:00:05.510304 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2e1b78f3_f2b9_4304_b139_13f156e87cd1.slice/crio-2cef38069a6dd45df38ff39a42e0af63b496796dc1b93fa242bd6a68813ee908 WatchSource:0}: Error finding container 2cef38069a6dd45df38ff39a42e0af63b496796dc1b93fa242bd6a68813ee908: Status 404 returned error can't find the container with id 2cef38069a6dd45df38ff39a42e0af63b496796dc1b93fa242bd6a68813ee908
Dec 06 06:00:05 crc kubenswrapper[4958]: I1206 06:00:05.512240 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-q4rn2"]
Dec 06 06:00:05 crc kubenswrapper[4958]: I1206 06:00:05.587458 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-q4rn2" event={"ID":"2e1b78f3-f2b9-4304-b139-13f156e87cd1","Type":"ContainerStarted","Data":"2cef38069a6dd45df38ff39a42e0af63b496796dc1b93fa242bd6a68813ee908"}
Dec 06 06:00:07 crc kubenswrapper[4958]: I1206 06:00:07.620500 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-q4rn2" event={"ID":"2e1b78f3-f2b9-4304-b139-13f156e87cd1","Type":"ContainerStarted","Data":"a89a0a7a80e6de31f2d0aa9ec33bc28a6979843b8bf8d6b0fafdc74825645015"}
Dec 06 06:00:08 crc kubenswrapper[4958]: I1206 06:00:08.654708 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-q4rn2" podStartSLOduration=3.098880492 podStartE2EDuration="4.654678868s" podCreationTimestamp="2025-12-06 06:00:04 +0000 UTC" firstStartedPulling="2025-12-06 06:00:05.512855063 +0000 UTC m=+1916.046625826" lastFinishedPulling="2025-12-06 06:00:07.068653439 +0000 UTC m=+1917.602424202" observedRunningTime="2025-12-06 06:00:08.650544595 +0000 UTC m=+1919.184315418" watchObservedRunningTime="2025-12-06 06:00:08.654678868 +0000 UTC m=+1919.188449671"
Dec 06 06:00:15 crc kubenswrapper[4958]: I1206 06:00:15.762679 4958 scope.go:117] "RemoveContainer" containerID="3316d358950549dbdd244c3eebc5b98b6095cbbbc7ab60159a9410ea95b5b950"
Dec 06 06:00:15 crc kubenswrapper[4958]: E1206 06:00:15.763561 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4"
Dec 06 06:00:26 crc kubenswrapper[4958]: I1206 06:00:26.761956 4958 scope.go:117] "RemoveContainer" containerID="3316d358950549dbdd244c3eebc5b98b6095cbbbc7ab60159a9410ea95b5b950"
Dec 06 06:00:26 crc kubenswrapper[4958]: E1206 06:00:26.762808 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4"
Dec 06 06:00:39 crc kubenswrapper[4958]: I1206 06:00:39.770231 4958 scope.go:117] "RemoveContainer" containerID="3316d358950549dbdd244c3eebc5b98b6095cbbbc7ab60159a9410ea95b5b950"
Dec 06 06:00:39 crc kubenswrapper[4958]: E1206 06:00:39.771361 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4"
Dec 06 06:00:53 crc kubenswrapper[4958]: I1206 06:00:53.761982 4958 scope.go:117] "RemoveContainer" containerID="3316d358950549dbdd244c3eebc5b98b6095cbbbc7ab60159a9410ea95b5b950"
Dec 06 06:00:53 crc kubenswrapper[4958]: E1206 06:00:53.762803 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4"
Dec 06 06:01:00 crc kubenswrapper[4958]: I1206 06:01:00.171380 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29416681-29sbf"]
Dec 06 06:01:00 crc kubenswrapper[4958]: I1206 06:01:00.173440 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29416681-29sbf"
Dec 06 06:01:00 crc kubenswrapper[4958]: I1206 06:01:00.197450 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29416681-29sbf"]
Dec 06 06:01:00 crc kubenswrapper[4958]: I1206 06:01:00.329417 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e1be52c-6724-4ce4-af65-1e3554f51d20-config-data\") pod \"keystone-cron-29416681-29sbf\" (UID: \"3e1be52c-6724-4ce4-af65-1e3554f51d20\") " pod="openstack/keystone-cron-29416681-29sbf"
Dec 06 06:01:00 crc kubenswrapper[4958]: I1206 06:01:00.329822 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vm8mh\" (UniqueName: \"kubernetes.io/projected/3e1be52c-6724-4ce4-af65-1e3554f51d20-kube-api-access-vm8mh\") pod \"keystone-cron-29416681-29sbf\" (UID: \"3e1be52c-6724-4ce4-af65-1e3554f51d20\") " pod="openstack/keystone-cron-29416681-29sbf"
Dec 06 06:01:00 crc kubenswrapper[4958]: I1206 06:01:00.329901 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e1be52c-6724-4ce4-af65-1e3554f51d20-combined-ca-bundle\") pod \"keystone-cron-29416681-29sbf\" (UID: \"3e1be52c-6724-4ce4-af65-1e3554f51d20\") " pod="openstack/keystone-cron-29416681-29sbf"
Dec 06 06:01:00 crc kubenswrapper[4958]: I1206 06:01:00.330068 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3e1be52c-6724-4ce4-af65-1e3554f51d20-fernet-keys\") pod \"keystone-cron-29416681-29sbf\" (UID: \"3e1be52c-6724-4ce4-af65-1e3554f51d20\") " pod="openstack/keystone-cron-29416681-29sbf"
Dec 06 06:01:00 crc kubenswrapper[4958]: I1206 06:01:00.432125 4958 reconciler_common.go:218] "operationExecutor.MountVolume
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e1be52c-6724-4ce4-af65-1e3554f51d20-combined-ca-bundle\") pod \"keystone-cron-29416681-29sbf\" (UID: \"3e1be52c-6724-4ce4-af65-1e3554f51d20\") " pod="openstack/keystone-cron-29416681-29sbf" Dec 06 06:01:00 crc kubenswrapper[4958]: I1206 06:01:00.432181 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3e1be52c-6724-4ce4-af65-1e3554f51d20-fernet-keys\") pod \"keystone-cron-29416681-29sbf\" (UID: \"3e1be52c-6724-4ce4-af65-1e3554f51d20\") " pod="openstack/keystone-cron-29416681-29sbf" Dec 06 06:01:00 crc kubenswrapper[4958]: I1206 06:01:00.432263 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e1be52c-6724-4ce4-af65-1e3554f51d20-config-data\") pod \"keystone-cron-29416681-29sbf\" (UID: \"3e1be52c-6724-4ce4-af65-1e3554f51d20\") " pod="openstack/keystone-cron-29416681-29sbf" Dec 06 06:01:00 crc kubenswrapper[4958]: I1206 06:01:00.432312 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vm8mh\" (UniqueName: \"kubernetes.io/projected/3e1be52c-6724-4ce4-af65-1e3554f51d20-kube-api-access-vm8mh\") pod \"keystone-cron-29416681-29sbf\" (UID: \"3e1be52c-6724-4ce4-af65-1e3554f51d20\") " pod="openstack/keystone-cron-29416681-29sbf" Dec 06 06:01:00 crc kubenswrapper[4958]: I1206 06:01:00.439139 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e1be52c-6724-4ce4-af65-1e3554f51d20-combined-ca-bundle\") pod \"keystone-cron-29416681-29sbf\" (UID: \"3e1be52c-6724-4ce4-af65-1e3554f51d20\") " pod="openstack/keystone-cron-29416681-29sbf" Dec 06 06:01:00 crc kubenswrapper[4958]: I1206 06:01:00.439315 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e1be52c-6724-4ce4-af65-1e3554f51d20-config-data\") pod \"keystone-cron-29416681-29sbf\" (UID: \"3e1be52c-6724-4ce4-af65-1e3554f51d20\") " pod="openstack/keystone-cron-29416681-29sbf" Dec 06 06:01:00 crc kubenswrapper[4958]: I1206 06:01:00.440272 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3e1be52c-6724-4ce4-af65-1e3554f51d20-fernet-keys\") pod \"keystone-cron-29416681-29sbf\" (UID: \"3e1be52c-6724-4ce4-af65-1e3554f51d20\") " pod="openstack/keystone-cron-29416681-29sbf" Dec 06 06:01:00 crc kubenswrapper[4958]: I1206 06:01:00.450908 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vm8mh\" (UniqueName: \"kubernetes.io/projected/3e1be52c-6724-4ce4-af65-1e3554f51d20-kube-api-access-vm8mh\") pod \"keystone-cron-29416681-29sbf\" (UID: \"3e1be52c-6724-4ce4-af65-1e3554f51d20\") " pod="openstack/keystone-cron-29416681-29sbf" Dec 06 06:01:00 crc kubenswrapper[4958]: I1206 06:01:00.512253 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29416681-29sbf" Dec 06 06:01:00 crc kubenswrapper[4958]: I1206 06:01:00.971961 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29416681-29sbf"] Dec 06 06:01:01 crc kubenswrapper[4958]: I1206 06:01:01.126609 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-588cbd45c9-xblwx" podUID="aae69e62-83f7-47d4-aecd-e883ed84a6ac" containerName="proxy-server" probeResult="failure" output="HTTP probe failed with statuscode: 502" Dec 06 06:01:01 crc kubenswrapper[4958]: I1206 06:01:01.137556 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29416681-29sbf" event={"ID":"3e1be52c-6724-4ce4-af65-1e3554f51d20","Type":"ContainerStarted","Data":"46852d42199b48e600d894345c4c9860945677634053d6efa1668300d57207b6"} Dec 06 06:01:02 crc kubenswrapper[4958]: I1206 06:01:02.151447 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29416681-29sbf" event={"ID":"3e1be52c-6724-4ce4-af65-1e3554f51d20","Type":"ContainerStarted","Data":"7a86e05754ab08c1dc23ee5dcbf47b2a7a7d614581c428042658aa6b02c509ce"} Dec 06 06:01:02 crc kubenswrapper[4958]: I1206 06:01:02.177264 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29416681-29sbf" podStartSLOduration=2.177246207 podStartE2EDuration="2.177246207s" podCreationTimestamp="2025-12-06 06:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:01:02.169977329 +0000 UTC m=+1972.703748102" watchObservedRunningTime="2025-12-06 06:01:02.177246207 +0000 UTC m=+1972.711016970" Dec 06 06:01:04 crc kubenswrapper[4958]: I1206 06:01:04.172505 4958 generic.go:334] "Generic (PLEG): container finished" podID="3e1be52c-6724-4ce4-af65-1e3554f51d20" containerID="7a86e05754ab08c1dc23ee5dcbf47b2a7a7d614581c428042658aa6b02c509ce" exitCode=0 Dec 06 06:01:04 crc kubenswrapper[4958]: I1206 06:01:04.172591 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29416681-29sbf" event={"ID":"3e1be52c-6724-4ce4-af65-1e3554f51d20","Type":"ContainerDied","Data":"7a86e05754ab08c1dc23ee5dcbf47b2a7a7d614581c428042658aa6b02c509ce"} Dec 06 06:01:05 crc kubenswrapper[4958]: I1206 06:01:05.565055 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29416681-29sbf" Dec 06 06:01:05 crc kubenswrapper[4958]: I1206 06:01:05.741888 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3e1be52c-6724-4ce4-af65-1e3554f51d20-fernet-keys\") pod \"3e1be52c-6724-4ce4-af65-1e3554f51d20\" (UID: \"3e1be52c-6724-4ce4-af65-1e3554f51d20\") " Dec 06 06:01:05 crc kubenswrapper[4958]: I1206 06:01:05.742103 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e1be52c-6724-4ce4-af65-1e3554f51d20-config-data\") pod \"3e1be52c-6724-4ce4-af65-1e3554f51d20\" (UID: \"3e1be52c-6724-4ce4-af65-1e3554f51d20\") " Dec 06 06:01:05 crc kubenswrapper[4958]: I1206 06:01:05.742134 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e1be52c-6724-4ce4-af65-1e3554f51d20-combined-ca-bundle\") pod \"3e1be52c-6724-4ce4-af65-1e3554f51d20\" (UID: \"3e1be52c-6724-4ce4-af65-1e3554f51d20\") " Dec 06 06:01:05 crc kubenswrapper[4958]: I1206 06:01:05.742455 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vm8mh\" (UniqueName: \"kubernetes.io/projected/3e1be52c-6724-4ce4-af65-1e3554f51d20-kube-api-access-vm8mh\") pod \"3e1be52c-6724-4ce4-af65-1e3554f51d20\" (UID: \"3e1be52c-6724-4ce4-af65-1e3554f51d20\") " Dec 06 06:01:05 crc kubenswrapper[4958]: I1206 06:01:05.747305 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e1be52c-6724-4ce4-af65-1e3554f51d20-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "3e1be52c-6724-4ce4-af65-1e3554f51d20" (UID: "3e1be52c-6724-4ce4-af65-1e3554f51d20"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:01:05 crc kubenswrapper[4958]: I1206 06:01:05.749691 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e1be52c-6724-4ce4-af65-1e3554f51d20-kube-api-access-vm8mh" (OuterVolumeSpecName: "kube-api-access-vm8mh") pod "3e1be52c-6724-4ce4-af65-1e3554f51d20" (UID: "3e1be52c-6724-4ce4-af65-1e3554f51d20"). InnerVolumeSpecName "kube-api-access-vm8mh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:01:05 crc kubenswrapper[4958]: I1206 06:01:05.762616 4958 scope.go:117] "RemoveContainer" containerID="3316d358950549dbdd244c3eebc5b98b6095cbbbc7ab60159a9410ea95b5b950" Dec 06 06:01:05 crc kubenswrapper[4958]: E1206 06:01:05.763049 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 06:01:05 crc kubenswrapper[4958]: I1206 06:01:05.773985 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e1be52c-6724-4ce4-af65-1e3554f51d20-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3e1be52c-6724-4ce4-af65-1e3554f51d20" (UID: "3e1be52c-6724-4ce4-af65-1e3554f51d20"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:01:05 crc kubenswrapper[4958]: I1206 06:01:05.844168 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e1be52c-6724-4ce4-af65-1e3554f51d20-config-data" (OuterVolumeSpecName: "config-data") pod "3e1be52c-6724-4ce4-af65-1e3554f51d20" (UID: "3e1be52c-6724-4ce4-af65-1e3554f51d20"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:01:05 crc kubenswrapper[4958]: I1206 06:01:05.846587 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vm8mh\" (UniqueName: \"kubernetes.io/projected/3e1be52c-6724-4ce4-af65-1e3554f51d20-kube-api-access-vm8mh\") on node \"crc\" DevicePath \"\"" Dec 06 06:01:05 crc kubenswrapper[4958]: I1206 06:01:05.846615 4958 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/3e1be52c-6724-4ce4-af65-1e3554f51d20-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 06 06:01:05 crc kubenswrapper[4958]: I1206 06:01:05.846644 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3e1be52c-6724-4ce4-af65-1e3554f51d20-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 06:01:05 crc kubenswrapper[4958]: I1206 06:01:05.846671 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3e1be52c-6724-4ce4-af65-1e3554f51d20-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 06:01:06 crc kubenswrapper[4958]: I1206 06:01:06.191591 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29416681-29sbf" event={"ID":"3e1be52c-6724-4ce4-af65-1e3554f51d20","Type":"ContainerDied","Data":"46852d42199b48e600d894345c4c9860945677634053d6efa1668300d57207b6"} Dec 06 06:01:06 crc kubenswrapper[4958]: I1206 06:01:06.191629 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="46852d42199b48e600d894345c4c9860945677634053d6efa1668300d57207b6" Dec 06 06:01:06 crc kubenswrapper[4958]: I1206 06:01:06.191642 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29416681-29sbf" Dec 06 06:01:09 crc kubenswrapper[4958]: I1206 06:01:09.069274 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-4069-account-create-update-b92cg"] Dec 06 06:01:09 crc kubenswrapper[4958]: I1206 06:01:09.084247 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-mdznl"] Dec 06 06:01:09 crc kubenswrapper[4958]: I1206 06:01:09.096735 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-mdznl"] Dec 06 06:01:09 crc kubenswrapper[4958]: I1206 06:01:09.106856 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-4069-account-create-update-b92cg"] Dec 06 06:01:09 crc kubenswrapper[4958]: I1206 06:01:09.774792 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5eddd3fa-04bf-46c5-984a-c5227d951195" path="/var/lib/kubelet/pods/5eddd3fa-04bf-46c5-984a-c5227d951195/volumes" Dec 06 06:01:09 crc kubenswrapper[4958]: I1206 06:01:09.775578 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88549045-eefd-497a-b779-8689cec8daa9" path="/var/lib/kubelet/pods/88549045-eefd-497a-b779-8689cec8daa9/volumes" Dec 06 06:01:10 crc kubenswrapper[4958]: I1206 06:01:10.031655 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-gl9kl"] Dec 06 06:01:10 crc kubenswrapper[4958]: I1206 06:01:10.041442 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-65dd-account-create-update-qtz4w"] Dec 06 06:01:10 crc kubenswrapper[4958]: I1206 06:01:10.049681 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-gl9kl"] Dec 06 06:01:10 crc kubenswrapper[4958]: I1206 06:01:10.058689 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-65dd-account-create-update-qtz4w"] Dec 06 06:01:11 crc kubenswrapper[4958]: I1206 06:01:11.779409 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="674bc2ce-08e0-49b2-850c-2a00e8e38faa" path="/var/lib/kubelet/pods/674bc2ce-08e0-49b2-850c-2a00e8e38faa/volumes" Dec 06 06:01:11 crc kubenswrapper[4958]: I1206 06:01:11.780926 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a573deaf-5c61-4ca3-97a6-c29d9ea40c29" path="/var/lib/kubelet/pods/a573deaf-5c61-4ca3-97a6-c29d9ea40c29/volumes" Dec 06 06:01:12 crc kubenswrapper[4958]: I1206 06:01:12.026321 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-db-create-nq6qf"] Dec 06 06:01:12 crc kubenswrapper[4958]: I1206 06:01:12.036580 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-db-create-nq6qf"] Dec 06 06:01:13 crc kubenswrapper[4958]: I1206 06:01:13.030602 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-9aa9-account-create-update-pvnth"] Dec 06 06:01:13 crc kubenswrapper[4958]: I1206 06:01:13.040285 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-9aa9-account-create-update-pvnth"] Dec 06 06:01:13 crc kubenswrapper[4958]: I1206 06:01:13.777777 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c3fe7c7-dcce-48c1-b71d-bc59f7fd1b12" path="/var/lib/kubelet/pods/0c3fe7c7-dcce-48c1-b71d-bc59f7fd1b12/volumes" Dec 06 06:01:13 crc kubenswrapper[4958]: I1206 06:01:13.779606 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef244a26-b26b-4f8d-addc-9772f3134412" 
path="/var/lib/kubelet/pods/ef244a26-b26b-4f8d-addc-9772f3134412/volumes" Dec 06 06:01:17 crc kubenswrapper[4958]: I1206 06:01:17.762430 4958 scope.go:117] "RemoveContainer" containerID="3316d358950549dbdd244c3eebc5b98b6095cbbbc7ab60159a9410ea95b5b950" Dec 06 06:01:17 crc kubenswrapper[4958]: E1206 06:01:17.763225 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 06:01:27 crc kubenswrapper[4958]: I1206 06:01:27.713321 4958 scope.go:117] "RemoveContainer" containerID="abb43ee5837056814c38308ef686d47eee7dc215b358853ea620c3ccfded466d" Dec 06 06:01:27 crc kubenswrapper[4958]: I1206 06:01:27.746281 4958 scope.go:117] "RemoveContainer" containerID="d5afa864150a18667071d0b2215ff211199f473c07058f36086b431c0ed278db" Dec 06 06:01:27 crc kubenswrapper[4958]: I1206 06:01:27.813455 4958 scope.go:117] "RemoveContainer" containerID="6ebf32502dd4f048793851af86282d35f1487580821032ee42bcdba69669c4f6" Dec 06 06:01:27 crc kubenswrapper[4958]: I1206 06:01:27.860867 4958 scope.go:117] "RemoveContainer" containerID="44f22053ed1a81671539c116b8b7ae5052de56ce4c6eb3c516c39ed75b88d61c" Dec 06 06:01:27 crc kubenswrapper[4958]: I1206 06:01:27.977865 4958 scope.go:117] "RemoveContainer" containerID="e309a834f9840ea280266c8e444d54ef489385c6d1f30a72f6321342c0c7d2fa" Dec 06 06:01:28 crc kubenswrapper[4958]: I1206 06:01:28.005017 4958 scope.go:117] "RemoveContainer" containerID="720acfc8796ca62ce00f83e6d37f0d00de571dd93c85c06b8a9eadac40dd9efe" Dec 06 06:01:32 crc kubenswrapper[4958]: I1206 06:01:32.762423 4958 scope.go:117] "RemoveContainer" containerID="3316d358950549dbdd244c3eebc5b98b6095cbbbc7ab60159a9410ea95b5b950" Dec 06 06:01:32 crc kubenswrapper[4958]: E1206 06:01:32.763174 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 06:01:41 crc kubenswrapper[4958]: I1206 06:01:41.041094 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-b0af-account-create-update-2sf4j"] Dec 06 06:01:41 crc kubenswrapper[4958]: I1206 06:01:41.051154 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-b0af-account-create-update-2sf4j"] Dec 06 06:01:41 crc kubenswrapper[4958]: I1206 06:01:41.061005 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-jdshg"] Dec 06 06:01:41 crc kubenswrapper[4958]: I1206 06:01:41.069927 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-jdshg"] Dec 06 06:01:41 crc kubenswrapper[4958]: I1206 06:01:41.773071 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f779858-5c94-4cbe-a940-d81de4d26b69" path="/var/lib/kubelet/pods/2f779858-5c94-4cbe-a940-d81de4d26b69/volumes" Dec 06 06:01:41 crc kubenswrapper[4958]: I1206 06:01:41.773774 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="c98d83aa-c74d-4bc3-bbbd-4fcb700f964d" path="/var/lib/kubelet/pods/c98d83aa-c74d-4bc3-bbbd-4fcb700f964d/volumes" Dec 06 06:01:45 crc kubenswrapper[4958]: I1206 06:01:45.762326 4958 scope.go:117] "RemoveContainer" containerID="3316d358950549dbdd244c3eebc5b98b6095cbbbc7ab60159a9410ea95b5b950" Dec 06 06:01:46 crc kubenswrapper[4958]: I1206 06:01:46.583208 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" event={"ID":"c13528c0-da5d-4d55-9155-2c29c33edfc4","Type":"ContainerStarted","Data":"0b8808661711b2a47f465c2efb584cb62903970706f647cb73f1aa813708baf6"} Dec 06 06:01:48 crc kubenswrapper[4958]: I1206 06:01:48.054615 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-3343-account-create-update-kng7l"] Dec 06 06:01:48 crc kubenswrapper[4958]: I1206 06:01:48.075555 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-7910-account-create-update-ng4vj"] Dec 06 06:01:48 crc kubenswrapper[4958]: I1206 06:01:48.084145 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-3343-account-create-update-kng7l"] Dec 06 06:01:48 crc kubenswrapper[4958]: I1206 06:01:48.091737 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-7910-account-create-update-ng4vj"] Dec 06 06:01:49 crc kubenswrapper[4958]: I1206 06:01:49.777321 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3954a2c3-13eb-442c-bce2-bcbb8e3c6eb1" path="/var/lib/kubelet/pods/3954a2c3-13eb-442c-bce2-bcbb8e3c6eb1/volumes" Dec 06 06:01:49 crc kubenswrapper[4958]: I1206 06:01:49.779171 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c9f41cd-4696-4b14-a48d-b202f0d6796b" path="/var/lib/kubelet/pods/5c9f41cd-4696-4b14-a48d-b202f0d6796b/volumes" Dec 06 06:01:52 crc kubenswrapper[4958]: I1206 06:01:52.031515 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-52pk5"] Dec 06 06:01:52 crc kubenswrapper[4958]: I1206 06:01:52.040742 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-qzlg2"] Dec 06 06:01:52 crc kubenswrapper[4958]: I1206 06:01:52.049794 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-97fb-account-create-update-htb6t"] Dec 06 06:01:52 crc kubenswrapper[4958]: I1206 06:01:52.057299 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-g92gz"] Dec 06 06:01:52 crc kubenswrapper[4958]: I1206 06:01:52.064795 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-qzlg2"] Dec 06 06:01:52 crc kubenswrapper[4958]: I1206 06:01:52.073427 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-97fb-account-create-update-htb6t"] Dec 06 06:01:52 crc kubenswrapper[4958]: I1206 06:01:52.081241 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-g92gz"] Dec 06 06:01:52 crc kubenswrapper[4958]: I1206 06:01:52.089378 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-52pk5"] Dec 06 06:01:53 crc kubenswrapper[4958]: I1206 06:01:53.772572 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="261f5bc1-8806-430c-bbca-2142d542071d" path="/var/lib/kubelet/pods/261f5bc1-8806-430c-bbca-2142d542071d/volumes" Dec 06 06:01:53 crc kubenswrapper[4958]: I1206 06:01:53.774572 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="437479ce-fa34-40d2-af1c-a611eaaecc20" path="/var/lib/kubelet/pods/437479ce-fa34-40d2-af1c-a611eaaecc20/volumes" Dec 06 06:01:53 crc kubenswrapper[4958]: I1206 06:01:53.775267 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48ea1e14-d33a-45cd-bc32-655d29f95017" path="/var/lib/kubelet/pods/48ea1e14-d33a-45cd-bc32-655d29f95017/volumes" Dec 06 06:01:53 crc kubenswrapper[4958]: I1206 06:01:53.775981 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94f50bb5-d668-42ce-b9da-9364fcf27a33" path="/var/lib/kubelet/pods/94f50bb5-d668-42ce-b9da-9364fcf27a33/volumes" Dec 06 06:02:23 crc kubenswrapper[4958]: I1206 06:02:23.044241 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/watcher-db-sync-7mfjj"] Dec 06 06:02:23 crc kubenswrapper[4958]: I1206 06:02:23.056536 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/watcher-db-sync-7mfjj"] Dec 06 06:02:23 crc kubenswrapper[4958]: I1206 06:02:23.774775 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35cf87c8-3462-476a-b396-26a24e954229" path="/var/lib/kubelet/pods/35cf87c8-3462-476a-b396-26a24e954229/volumes" Dec 06 06:02:28 crc kubenswrapper[4958]: I1206 06:02:28.175417 4958 scope.go:117] "RemoveContainer" containerID="35c3a25e2bf0e6f0d21e00c4854df1ca26f01197af270b66aa3fd10f1e0c8e76" Dec 06 06:02:28 crc kubenswrapper[4958]: I1206 06:02:28.203727 4958 scope.go:117] "RemoveContainer" containerID="4f03ec094cabc830abc0a6f6b61d702f067f9049b81fee717461cc82a365d74f" Dec 06 06:02:28 crc kubenswrapper[4958]: I1206 06:02:28.259327 4958 scope.go:117] "RemoveContainer" containerID="74bfa5e542e1f9319b59381ca932162675ff4c6feb60218cb029601cc8061269" Dec 06 06:02:28 crc kubenswrapper[4958]: I1206 06:02:28.304793 4958 scope.go:117] "RemoveContainer" containerID="8cadc66ee34e77dd52f36424defacf2c15137c82c2739b6501f014b8bf5edba8" Dec 06 06:02:28 crc kubenswrapper[4958]: I1206 06:02:28.353716 4958 scope.go:117] "RemoveContainer" containerID="013af68c6551086becee1b006abef6d349e5b62191a009e81d29bd4e3334b796" Dec 06 06:02:28 crc kubenswrapper[4958]: I1206 06:02:28.402906 4958 scope.go:117] "RemoveContainer" containerID="5382d69285f46677e1fd7aac87c968605b36f0534dbd31068498e940114f14ac" Dec 06 06:02:28 crc kubenswrapper[4958]: I1206 06:02:28.462151 4958 scope.go:117] "RemoveContainer" containerID="7c392effbb0545f503158b746e3db190f825eb641c79c693a1a7ec363541aaf3" Dec 06 06:02:28 crc kubenswrapper[4958]: I1206 06:02:28.494028 4958 scope.go:117] "RemoveContainer" containerID="649c0312e39db5ce1296558b3f583c1b63a9cf92dce3148cbcb0c7c186b9dbfc" Dec 06 06:02:28 crc kubenswrapper[4958]: I1206 06:02:28.514700 4958 scope.go:117] "RemoveContainer" containerID="e3ab2abb39669337388050cf9866bbd388d923c83c3e31f0c0be4632e8513b7d" Dec 06 06:02:28 crc kubenswrapper[4958]: I1206 06:02:28.535787 4958 scope.go:117] "RemoveContainer" containerID="f310b8d3468992f4cd4c60571fca9d866ff7bd03f0c133d7d908bf398ab47d30" Dec 06 06:02:32 crc kubenswrapper[4958]: I1206 06:02:32.029448 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-l4znm"] Dec 06 06:02:32 crc kubenswrapper[4958]: I1206 06:02:32.045409 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-l4znm"] Dec 06 06:02:33 crc kubenswrapper[4958]: I1206 06:02:33.773309 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b244aca-d463-42f1-b8f9-d96dca44f635" path="/var/lib/kubelet/pods/7b244aca-d463-42f1-b8f9-d96dca44f635/volumes" Dec 06 
06:02:39 crc kubenswrapper[4958]: I1206 06:02:39.541885 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-smfz8"] Dec 06 06:02:39 crc kubenswrapper[4958]: E1206 06:02:39.543157 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e1be52c-6724-4ce4-af65-1e3554f51d20" containerName="keystone-cron" Dec 06 06:02:39 crc kubenswrapper[4958]: I1206 06:02:39.543182 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e1be52c-6724-4ce4-af65-1e3554f51d20" containerName="keystone-cron" Dec 06 06:02:39 crc kubenswrapper[4958]: I1206 06:02:39.543542 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e1be52c-6724-4ce4-af65-1e3554f51d20" containerName="keystone-cron" Dec 06 06:02:39 crc kubenswrapper[4958]: I1206 06:02:39.545951 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-smfz8" Dec 06 06:02:39 crc kubenswrapper[4958]: I1206 06:02:39.555134 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-smfz8"] Dec 06 06:02:39 crc kubenswrapper[4958]: I1206 06:02:39.632883 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnppz\" (UniqueName: \"kubernetes.io/projected/69bb9590-4d4f-4f2b-b172-e337bebd27d4-kube-api-access-jnppz\") pod \"redhat-marketplace-smfz8\" (UID: \"69bb9590-4d4f-4f2b-b172-e337bebd27d4\") " pod="openshift-marketplace/redhat-marketplace-smfz8" Dec 06 06:02:39 crc kubenswrapper[4958]: I1206 06:02:39.633257 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69bb9590-4d4f-4f2b-b172-e337bebd27d4-catalog-content\") pod \"redhat-marketplace-smfz8\" (UID: \"69bb9590-4d4f-4f2b-b172-e337bebd27d4\") " pod="openshift-marketplace/redhat-marketplace-smfz8" Dec 06 06:02:39 crc kubenswrapper[4958]: I1206 06:02:39.633393 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69bb9590-4d4f-4f2b-b172-e337bebd27d4-utilities\") pod \"redhat-marketplace-smfz8\" (UID: \"69bb9590-4d4f-4f2b-b172-e337bebd27d4\") " pod="openshift-marketplace/redhat-marketplace-smfz8" Dec 06 06:02:39 crc kubenswrapper[4958]: I1206 06:02:39.734610 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69bb9590-4d4f-4f2b-b172-e337bebd27d4-catalog-content\") pod \"redhat-marketplace-smfz8\" (UID: \"69bb9590-4d4f-4f2b-b172-e337bebd27d4\") " pod="openshift-marketplace/redhat-marketplace-smfz8" Dec 06 06:02:39 crc kubenswrapper[4958]: I1206 06:02:39.734671 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69bb9590-4d4f-4f2b-b172-e337bebd27d4-utilities\") pod \"redhat-marketplace-smfz8\" (UID: \"69bb9590-4d4f-4f2b-b172-e337bebd27d4\") " pod="openshift-marketplace/redhat-marketplace-smfz8" Dec 06 06:02:39 crc kubenswrapper[4958]: I1206 06:02:39.734758 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnppz\" (UniqueName: \"kubernetes.io/projected/69bb9590-4d4f-4f2b-b172-e337bebd27d4-kube-api-access-jnppz\") pod \"redhat-marketplace-smfz8\" (UID: \"69bb9590-4d4f-4f2b-b172-e337bebd27d4\") " pod="openshift-marketplace/redhat-marketplace-smfz8" Dec 06 
06:02:39 crc kubenswrapper[4958]: I1206 06:02:39.735105 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69bb9590-4d4f-4f2b-b172-e337bebd27d4-catalog-content\") pod \"redhat-marketplace-smfz8\" (UID: \"69bb9590-4d4f-4f2b-b172-e337bebd27d4\") " pod="openshift-marketplace/redhat-marketplace-smfz8" Dec 06 06:02:39 crc kubenswrapper[4958]: I1206 06:02:39.735383 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69bb9590-4d4f-4f2b-b172-e337bebd27d4-utilities\") pod \"redhat-marketplace-smfz8\" (UID: \"69bb9590-4d4f-4f2b-b172-e337bebd27d4\") " pod="openshift-marketplace/redhat-marketplace-smfz8" Dec 06 06:02:39 crc kubenswrapper[4958]: I1206 06:02:39.757873 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnppz\" (UniqueName: \"kubernetes.io/projected/69bb9590-4d4f-4f2b-b172-e337bebd27d4-kube-api-access-jnppz\") pod \"redhat-marketplace-smfz8\" (UID: \"69bb9590-4d4f-4f2b-b172-e337bebd27d4\") " pod="openshift-marketplace/redhat-marketplace-smfz8" Dec 06 06:02:39 crc kubenswrapper[4958]: I1206 06:02:39.876143 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-smfz8" Dec 06 06:02:40 crc kubenswrapper[4958]: I1206 06:02:40.339573 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-smfz8"] Dec 06 06:02:41 crc kubenswrapper[4958]: I1206 06:02:41.120529 4958 generic.go:334] "Generic (PLEG): container finished" podID="69bb9590-4d4f-4f2b-b172-e337bebd27d4" containerID="9265b0ee6a71cdba44d44b18e59c71a31156044a019b0170e8d9843e2d7149c9" exitCode=0 Dec 06 06:02:41 crc kubenswrapper[4958]: I1206 06:02:41.120767 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-smfz8" event={"ID":"69bb9590-4d4f-4f2b-b172-e337bebd27d4","Type":"ContainerDied","Data":"9265b0ee6a71cdba44d44b18e59c71a31156044a019b0170e8d9843e2d7149c9"} Dec 06 06:02:41 crc kubenswrapper[4958]: I1206 06:02:41.120794 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-smfz8" event={"ID":"69bb9590-4d4f-4f2b-b172-e337bebd27d4","Type":"ContainerStarted","Data":"2892a1c9d34dd8f88a1db4d646ddc7f5268c0e7629f7b143ea3580742da0280e"} Dec 06 06:02:41 crc kubenswrapper[4958]: I1206 06:02:41.133709 4958 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 06 06:02:42 crc kubenswrapper[4958]: I1206 06:02:42.131245 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-smfz8" event={"ID":"69bb9590-4d4f-4f2b-b172-e337bebd27d4","Type":"ContainerStarted","Data":"429e92a33e1308140fd732a8abf341f387ba04822a9df593a42ea146fb7cbc31"} Dec 06 06:02:43 crc kubenswrapper[4958]: I1206 06:02:43.141081 4958 generic.go:334] "Generic (PLEG): container finished" podID="69bb9590-4d4f-4f2b-b172-e337bebd27d4" containerID="429e92a33e1308140fd732a8abf341f387ba04822a9df593a42ea146fb7cbc31" exitCode=0 Dec 06 06:02:43 crc kubenswrapper[4958]: I1206 06:02:43.141184 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-smfz8" event={"ID":"69bb9590-4d4f-4f2b-b172-e337bebd27d4","Type":"ContainerDied","Data":"429e92a33e1308140fd732a8abf341f387ba04822a9df593a42ea146fb7cbc31"} Dec 06 06:02:44 crc kubenswrapper[4958]: I1206 06:02:44.153844 4958 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-smfz8" event={"ID":"69bb9590-4d4f-4f2b-b172-e337bebd27d4","Type":"ContainerStarted","Data":"4f696a1f94fb5c3bdf839a6b941f585f17197f804e0d250aa868d89c1adb1b14"} Dec 06 06:02:44 crc kubenswrapper[4958]: I1206 06:02:44.182833 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-smfz8" podStartSLOduration=2.75609993 podStartE2EDuration="5.182810663s" podCreationTimestamp="2025-12-06 06:02:39 +0000 UTC" firstStartedPulling="2025-12-06 06:02:41.123958002 +0000 UTC m=+2071.657728765" lastFinishedPulling="2025-12-06 06:02:43.550668735 +0000 UTC m=+2074.084439498" observedRunningTime="2025-12-06 06:02:44.170901049 +0000 UTC m=+2074.704671812" watchObservedRunningTime="2025-12-06 06:02:44.182810663 +0000 UTC m=+2074.716581426" Dec 06 06:02:49 crc kubenswrapper[4958]: I1206 06:02:49.876758 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-smfz8" Dec 06 06:02:49 crc kubenswrapper[4958]: I1206 06:02:49.878114 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-smfz8" Dec 06 06:02:49 crc kubenswrapper[4958]: I1206 06:02:49.928610 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-smfz8" Dec 06 06:02:50 crc kubenswrapper[4958]: I1206 06:02:50.279545 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-smfz8" Dec 06 06:02:50 crc kubenswrapper[4958]: I1206 06:02:50.333092 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-smfz8"] Dec 06 06:02:52 crc kubenswrapper[4958]: I1206 06:02:52.251622 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-smfz8" podUID="69bb9590-4d4f-4f2b-b172-e337bebd27d4" containerName="registry-server" containerID="cri-o://4f696a1f94fb5c3bdf839a6b941f585f17197f804e0d250aa868d89c1adb1b14" gracePeriod=2 Dec 06 06:02:53 crc kubenswrapper[4958]: I1206 06:02:53.266598 4958 generic.go:334] "Generic (PLEG): container finished" podID="69bb9590-4d4f-4f2b-b172-e337bebd27d4" containerID="4f696a1f94fb5c3bdf839a6b941f585f17197f804e0d250aa868d89c1adb1b14" exitCode=0 Dec 06 06:02:53 crc kubenswrapper[4958]: I1206 06:02:53.266701 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-smfz8" event={"ID":"69bb9590-4d4f-4f2b-b172-e337bebd27d4","Type":"ContainerDied","Data":"4f696a1f94fb5c3bdf839a6b941f585f17197f804e0d250aa868d89c1adb1b14"} Dec 06 06:02:54 crc kubenswrapper[4958]: I1206 06:02:54.047201 4958 util.go:48] "No ready sandbox for pod can be found. 
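
The m=+2074.7... suffixes in the startup-latency entry above are Go monotonic-clock readings: klog formats time.Time values with their String method, which appends the monotonic offset (roughly seconds since process start, here about 2074s into the kubelet's run) whenever one is present. Durations computed between two such timestamps use the monotonic reading, so they are immune to wall-clock steps from NTP or manual adjustment. Illustration:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        t0 := time.Now()
        time.Sleep(50 * time.Millisecond)
        t1 := time.Now()
        fmt.Println(t0)          // wall time plus an "m=+..." monotonic suffix
        fmt.Println(t1.Sub(t0))  // ~50ms, measured on the monotonic clock
        fmt.Println(t0.Round(0)) // Round(0) strips the monotonic reading
    }
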
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-smfz8" Dec 06 06:02:54 crc kubenswrapper[4958]: I1206 06:02:54.161688 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69bb9590-4d4f-4f2b-b172-e337bebd27d4-catalog-content\") pod \"69bb9590-4d4f-4f2b-b172-e337bebd27d4\" (UID: \"69bb9590-4d4f-4f2b-b172-e337bebd27d4\") " Dec 06 06:02:54 crc kubenswrapper[4958]: I1206 06:02:54.161777 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69bb9590-4d4f-4f2b-b172-e337bebd27d4-utilities\") pod \"69bb9590-4d4f-4f2b-b172-e337bebd27d4\" (UID: \"69bb9590-4d4f-4f2b-b172-e337bebd27d4\") " Dec 06 06:02:54 crc kubenswrapper[4958]: I1206 06:02:54.161836 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jnppz\" (UniqueName: \"kubernetes.io/projected/69bb9590-4d4f-4f2b-b172-e337bebd27d4-kube-api-access-jnppz\") pod \"69bb9590-4d4f-4f2b-b172-e337bebd27d4\" (UID: \"69bb9590-4d4f-4f2b-b172-e337bebd27d4\") " Dec 06 06:02:54 crc kubenswrapper[4958]: I1206 06:02:54.162857 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69bb9590-4d4f-4f2b-b172-e337bebd27d4-utilities" (OuterVolumeSpecName: "utilities") pod "69bb9590-4d4f-4f2b-b172-e337bebd27d4" (UID: "69bb9590-4d4f-4f2b-b172-e337bebd27d4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:02:54 crc kubenswrapper[4958]: I1206 06:02:54.163530 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/69bb9590-4d4f-4f2b-b172-e337bebd27d4-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 06:02:54 crc kubenswrapper[4958]: I1206 06:02:54.168831 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69bb9590-4d4f-4f2b-b172-e337bebd27d4-kube-api-access-jnppz" (OuterVolumeSpecName: "kube-api-access-jnppz") pod "69bb9590-4d4f-4f2b-b172-e337bebd27d4" (UID: "69bb9590-4d4f-4f2b-b172-e337bebd27d4"). InnerVolumeSpecName "kube-api-access-jnppz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:02:54 crc kubenswrapper[4958]: I1206 06:02:54.189499 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/69bb9590-4d4f-4f2b-b172-e337bebd27d4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "69bb9590-4d4f-4f2b-b172-e337bebd27d4" (UID: "69bb9590-4d4f-4f2b-b172-e337bebd27d4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:02:54 crc kubenswrapper[4958]: I1206 06:02:54.266364 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/69bb9590-4d4f-4f2b-b172-e337bebd27d4-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 06:02:54 crc kubenswrapper[4958]: I1206 06:02:54.266420 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jnppz\" (UniqueName: \"kubernetes.io/projected/69bb9590-4d4f-4f2b-b172-e337bebd27d4-kube-api-access-jnppz\") on node \"crc\" DevicePath \"\"" Dec 06 06:02:54 crc kubenswrapper[4958]: I1206 06:02:54.280790 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-smfz8" event={"ID":"69bb9590-4d4f-4f2b-b172-e337bebd27d4","Type":"ContainerDied","Data":"2892a1c9d34dd8f88a1db4d646ddc7f5268c0e7629f7b143ea3580742da0280e"} Dec 06 06:02:54 crc kubenswrapper[4958]: I1206 06:02:54.280876 4958 scope.go:117] "RemoveContainer" containerID="4f696a1f94fb5c3bdf839a6b941f585f17197f804e0d250aa868d89c1adb1b14" Dec 06 06:02:54 crc kubenswrapper[4958]: I1206 06:02:54.281062 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-smfz8" Dec 06 06:02:54 crc kubenswrapper[4958]: I1206 06:02:54.316994 4958 scope.go:117] "RemoveContainer" containerID="429e92a33e1308140fd732a8abf341f387ba04822a9df593a42ea146fb7cbc31" Dec 06 06:02:54 crc kubenswrapper[4958]: I1206 06:02:54.342198 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-smfz8"] Dec 06 06:02:54 crc kubenswrapper[4958]: I1206 06:02:54.372625 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-smfz8"] Dec 06 06:02:54 crc kubenswrapper[4958]: I1206 06:02:54.378315 4958 scope.go:117] "RemoveContainer" containerID="9265b0ee6a71cdba44d44b18e59c71a31156044a019b0170e8d9843e2d7149c9" Dec 06 06:02:55 crc kubenswrapper[4958]: I1206 06:02:55.777258 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69bb9590-4d4f-4f2b-b172-e337bebd27d4" path="/var/lib/kubelet/pods/69bb9590-4d4f-4f2b-b172-e337bebd27d4/volumes" Dec 06 06:03:09 crc kubenswrapper[4958]: I1206 06:03:09.040751 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-jdfnk"] Dec 06 06:03:09 crc kubenswrapper[4958]: I1206 06:03:09.050040 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-jdfnk"] Dec 06 06:03:09 crc kubenswrapper[4958]: I1206 06:03:09.824723 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d1dc22d-53a9-4aee-989b-fc253cd276cd" path="/var/lib/kubelet/pods/9d1dc22d-53a9-4aee-989b-fc253cd276cd/volumes" Dec 06 06:03:28 crc kubenswrapper[4958]: I1206 06:03:28.719980 4958 scope.go:117] "RemoveContainer" containerID="4fb815c2a706f214a8537ed307b11c2a0bb67231931011937e5d016c3fe89571" Dec 06 06:03:28 crc kubenswrapper[4958]: I1206 06:03:28.754008 4958 scope.go:117] "RemoveContainer" containerID="19985583f93e44de7c0b244fde720dd3c31cec7252770a6aba20b5f0c149d448" Dec 06 06:03:46 crc kubenswrapper[4958]: I1206 06:03:46.787351 4958 generic.go:334] "Generic (PLEG): container finished" podID="2e1b78f3-f2b9-4304-b139-13f156e87cd1" containerID="a89a0a7a80e6de31f2d0aa9ec33bc28a6979843b8bf8d6b0fafdc74825645015" exitCode=0 Dec 06 06:03:46 crc kubenswrapper[4958]: I1206 06:03:46.787431 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-q4rn2" event={"ID":"2e1b78f3-f2b9-4304-b139-13f156e87cd1","Type":"ContainerDied","Data":"a89a0a7a80e6de31f2d0aa9ec33bc28a6979843b8bf8d6b0fafdc74825645015"} Dec 06 06:03:47 crc kubenswrapper[4958]: I1206 06:03:47.066525 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-wf7cs"] Dec 06 06:03:47 crc kubenswrapper[4958]: I1206 06:03:47.075516 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-wf7cs"] Dec 06 06:03:47 crc kubenswrapper[4958]: I1206 06:03:47.166105 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tv5wr"] Dec 06 06:03:47 crc kubenswrapper[4958]: E1206 06:03:47.166624 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69bb9590-4d4f-4f2b-b172-e337bebd27d4" containerName="registry-server" Dec 06 06:03:47 crc kubenswrapper[4958]: I1206 06:03:47.166648 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="69bb9590-4d4f-4f2b-b172-e337bebd27d4" containerName="registry-server" Dec 06 06:03:47 crc kubenswrapper[4958]: E1206 06:03:47.166669 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69bb9590-4d4f-4f2b-b172-e337bebd27d4" containerName="extract-utilities" Dec 06 06:03:47 crc kubenswrapper[4958]: I1206 06:03:47.166677 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="69bb9590-4d4f-4f2b-b172-e337bebd27d4" containerName="extract-utilities" Dec 06 06:03:47 crc kubenswrapper[4958]: E1206 06:03:47.166697 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69bb9590-4d4f-4f2b-b172-e337bebd27d4" containerName="extract-content" Dec 06 06:03:47 crc kubenswrapper[4958]: I1206 06:03:47.166707 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="69bb9590-4d4f-4f2b-b172-e337bebd27d4" containerName="extract-content" Dec 06 06:03:47 crc kubenswrapper[4958]: I1206 06:03:47.166950 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="69bb9590-4d4f-4f2b-b172-e337bebd27d4" containerName="registry-server" Dec 06 06:03:47 crc kubenswrapper[4958]: I1206 06:03:47.172600 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-tv5wr"
Dec 06 06:03:47 crc kubenswrapper[4958]: I1206 06:03:47.180080 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tv5wr"]
Dec 06 06:03:47 crc kubenswrapper[4958]: I1206 06:03:47.336425 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/780640d2-f29f-4b1a-a990-69dd303ed455-catalog-content\") pod \"certified-operators-tv5wr\" (UID: \"780640d2-f29f-4b1a-a990-69dd303ed455\") " pod="openshift-marketplace/certified-operators-tv5wr"
Dec 06 06:03:47 crc kubenswrapper[4958]: I1206 06:03:47.336556 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/780640d2-f29f-4b1a-a990-69dd303ed455-utilities\") pod \"certified-operators-tv5wr\" (UID: \"780640d2-f29f-4b1a-a990-69dd303ed455\") " pod="openshift-marketplace/certified-operators-tv5wr"
Dec 06 06:03:47 crc kubenswrapper[4958]: I1206 06:03:47.336640 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ncwh\" (UniqueName: \"kubernetes.io/projected/780640d2-f29f-4b1a-a990-69dd303ed455-kube-api-access-8ncwh\") pod \"certified-operators-tv5wr\" (UID: \"780640d2-f29f-4b1a-a990-69dd303ed455\") " pod="openshift-marketplace/certified-operators-tv5wr"
Dec 06 06:03:47 crc kubenswrapper[4958]: I1206 06:03:47.438541 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/780640d2-f29f-4b1a-a990-69dd303ed455-catalog-content\") pod \"certified-operators-tv5wr\" (UID: \"780640d2-f29f-4b1a-a990-69dd303ed455\") " pod="openshift-marketplace/certified-operators-tv5wr"
Dec 06 06:03:47 crc kubenswrapper[4958]: I1206 06:03:47.438613 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/780640d2-f29f-4b1a-a990-69dd303ed455-utilities\") pod \"certified-operators-tv5wr\" (UID: \"780640d2-f29f-4b1a-a990-69dd303ed455\") " pod="openshift-marketplace/certified-operators-tv5wr"
Dec 06 06:03:47 crc kubenswrapper[4958]: I1206 06:03:47.438674 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ncwh\" (UniqueName: \"kubernetes.io/projected/780640d2-f29f-4b1a-a990-69dd303ed455-kube-api-access-8ncwh\") pod \"certified-operators-tv5wr\" (UID: \"780640d2-f29f-4b1a-a990-69dd303ed455\") " pod="openshift-marketplace/certified-operators-tv5wr"
Dec 06 06:03:47 crc kubenswrapper[4958]: I1206 06:03:47.439059 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/780640d2-f29f-4b1a-a990-69dd303ed455-catalog-content\") pod \"certified-operators-tv5wr\" (UID: \"780640d2-f29f-4b1a-a990-69dd303ed455\") " pod="openshift-marketplace/certified-operators-tv5wr"
Dec 06 06:03:47 crc kubenswrapper[4958]: I1206 06:03:47.439224 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/780640d2-f29f-4b1a-a990-69dd303ed455-utilities\") pod \"certified-operators-tv5wr\" (UID: \"780640d2-f29f-4b1a-a990-69dd303ed455\") " pod="openshift-marketplace/certified-operators-tv5wr"
Dec 06 06:03:47 crc kubenswrapper[4958]: I1206 06:03:47.472290 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ncwh\" (UniqueName: \"kubernetes.io/projected/780640d2-f29f-4b1a-a990-69dd303ed455-kube-api-access-8ncwh\") pod \"certified-operators-tv5wr\" (UID: \"780640d2-f29f-4b1a-a990-69dd303ed455\") " pod="openshift-marketplace/certified-operators-tv5wr"
Dec 06 06:03:47 crc kubenswrapper[4958]: I1206 06:03:47.502881 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tv5wr"
Dec 06 06:03:47 crc kubenswrapper[4958]: I1206 06:03:47.812148 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="374b8326-0ba7-46d1-b438-85a5e865fdb5" path="/var/lib/kubelet/pods/374b8326-0ba7-46d1-b438-85a5e865fdb5/volumes"
Dec 06 06:03:48 crc kubenswrapper[4958]: I1206 06:03:48.003634 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tv5wr"]
Dec 06 06:03:48 crc kubenswrapper[4958]: I1206 06:03:48.267413 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-q4rn2"
Dec 06 06:03:48 crc kubenswrapper[4958]: I1206 06:03:48.379214 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e1b78f3-f2b9-4304-b139-13f156e87cd1-bootstrap-combined-ca-bundle\") pod \"2e1b78f3-f2b9-4304-b139-13f156e87cd1\" (UID: \"2e1b78f3-f2b9-4304-b139-13f156e87cd1\") "
Dec 06 06:03:48 crc kubenswrapper[4958]: I1206 06:03:48.379414 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4bvp\" (UniqueName: \"kubernetes.io/projected/2e1b78f3-f2b9-4304-b139-13f156e87cd1-kube-api-access-v4bvp\") pod \"2e1b78f3-f2b9-4304-b139-13f156e87cd1\" (UID: \"2e1b78f3-f2b9-4304-b139-13f156e87cd1\") "
Dec 06 06:03:48 crc kubenswrapper[4958]: I1206 06:03:48.379599 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2e1b78f3-f2b9-4304-b139-13f156e87cd1-ssh-key\") pod \"2e1b78f3-f2b9-4304-b139-13f156e87cd1\" (UID: \"2e1b78f3-f2b9-4304-b139-13f156e87cd1\") "
Dec 06 06:03:48 crc kubenswrapper[4958]: I1206 06:03:48.379692 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2e1b78f3-f2b9-4304-b139-13f156e87cd1-inventory\") pod \"2e1b78f3-f2b9-4304-b139-13f156e87cd1\" (UID: \"2e1b78f3-f2b9-4304-b139-13f156e87cd1\") "
Dec 06 06:03:48 crc kubenswrapper[4958]: I1206 06:03:48.386006 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e1b78f3-f2b9-4304-b139-13f156e87cd1-kube-api-access-v4bvp" (OuterVolumeSpecName: "kube-api-access-v4bvp") pod "2e1b78f3-f2b9-4304-b139-13f156e87cd1" (UID: "2e1b78f3-f2b9-4304-b139-13f156e87cd1"). InnerVolumeSpecName "kube-api-access-v4bvp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 06:03:48 crc kubenswrapper[4958]: I1206 06:03:48.387068 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e1b78f3-f2b9-4304-b139-13f156e87cd1-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "2e1b78f3-f2b9-4304-b139-13f156e87cd1" (UID: "2e1b78f3-f2b9-4304-b139-13f156e87cd1"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 06:03:48 crc kubenswrapper[4958]: I1206 06:03:48.412309 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e1b78f3-f2b9-4304-b139-13f156e87cd1-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "2e1b78f3-f2b9-4304-b139-13f156e87cd1" (UID: "2e1b78f3-f2b9-4304-b139-13f156e87cd1"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 06:03:48 crc kubenswrapper[4958]: I1206 06:03:48.414759 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e1b78f3-f2b9-4304-b139-13f156e87cd1-inventory" (OuterVolumeSpecName: "inventory") pod "2e1b78f3-f2b9-4304-b139-13f156e87cd1" (UID: "2e1b78f3-f2b9-4304-b139-13f156e87cd1"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 06:03:48 crc kubenswrapper[4958]: I1206 06:03:48.483837 4958 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e1b78f3-f2b9-4304-b139-13f156e87cd1-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 06 06:03:48 crc kubenswrapper[4958]: I1206 06:03:48.483876 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v4bvp\" (UniqueName: \"kubernetes.io/projected/2e1b78f3-f2b9-4304-b139-13f156e87cd1-kube-api-access-v4bvp\") on node \"crc\" DevicePath \"\""
Dec 06 06:03:48 crc kubenswrapper[4958]: I1206 06:03:48.483890 4958 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2e1b78f3-f2b9-4304-b139-13f156e87cd1-ssh-key\") on node \"crc\" DevicePath \"\""
Dec 06 06:03:48 crc kubenswrapper[4958]: I1206 06:03:48.483901 4958 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2e1b78f3-f2b9-4304-b139-13f156e87cd1-inventory\") on node \"crc\" DevicePath \"\""
Dec 06 06:03:48 crc kubenswrapper[4958]: I1206 06:03:48.816311 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-q4rn2"
Dec 06 06:03:48 crc kubenswrapper[4958]: I1206 06:03:48.816339 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-q4rn2" event={"ID":"2e1b78f3-f2b9-4304-b139-13f156e87cd1","Type":"ContainerDied","Data":"2cef38069a6dd45df38ff39a42e0af63b496796dc1b93fa242bd6a68813ee908"}
Dec 06 06:03:48 crc kubenswrapper[4958]: I1206 06:03:48.816696 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2cef38069a6dd45df38ff39a42e0af63b496796dc1b93fa242bd6a68813ee908"
Dec 06 06:03:48 crc kubenswrapper[4958]: I1206 06:03:48.818005 4958 generic.go:334] "Generic (PLEG): container finished" podID="780640d2-f29f-4b1a-a990-69dd303ed455" containerID="a5cdd2a7dfab7660759363d4ced1b72a94114efbea29c39352991f291531869b" exitCode=0
Dec 06 06:03:48 crc kubenswrapper[4958]: I1206 06:03:48.818042 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tv5wr" event={"ID":"780640d2-f29f-4b1a-a990-69dd303ed455","Type":"ContainerDied","Data":"a5cdd2a7dfab7660759363d4ced1b72a94114efbea29c39352991f291531869b"}
Dec 06 06:03:48 crc kubenswrapper[4958]: I1206 06:03:48.818060 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tv5wr" event={"ID":"780640d2-f29f-4b1a-a990-69dd303ed455","Type":"ContainerStarted","Data":"11871942770a09cc038bfc968fa41ee5ff8ac4c56f9f169149a2e4ebccb2bf87"}
Dec 06 06:03:48 crc kubenswrapper[4958]: I1206 06:03:48.900246 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fn4ml"]
Dec 06 06:03:48 crc kubenswrapper[4958]: E1206 06:03:48.900746 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e1b78f3-f2b9-4304-b139-13f156e87cd1" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Dec 06 06:03:48 crc kubenswrapper[4958]: I1206 06:03:48.900771 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e1b78f3-f2b9-4304-b139-13f156e87cd1" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Dec 06 06:03:48 crc kubenswrapper[4958]: I1206 06:03:48.901030 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e1b78f3-f2b9-4304-b139-13f156e87cd1" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam"
Dec 06 06:03:48 crc kubenswrapper[4958]: I1206 06:03:48.901839 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fn4ml"
Dec 06 06:03:48 crc kubenswrapper[4958]: I1206 06:03:48.907021 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Dec 06 06:03:48 crc kubenswrapper[4958]: I1206 06:03:48.907318 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Dec 06 06:03:48 crc kubenswrapper[4958]: I1206 06:03:48.907441 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dqr5b"
Dec 06 06:03:48 crc kubenswrapper[4958]: I1206 06:03:48.907723 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Dec 06 06:03:48 crc kubenswrapper[4958]: I1206 06:03:48.910393 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fn4ml"]
Dec 06 06:03:49 crc kubenswrapper[4958]: I1206 06:03:49.094174 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4f84a870-b9e0-49e4-847b-71322d38a901-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-fn4ml\" (UID: \"4f84a870-b9e0-49e4-847b-71322d38a901\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fn4ml"
Dec 06 06:03:49 crc kubenswrapper[4958]: I1206 06:03:49.094276 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftxpp\" (UniqueName: \"kubernetes.io/projected/4f84a870-b9e0-49e4-847b-71322d38a901-kube-api-access-ftxpp\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-fn4ml\" (UID: \"4f84a870-b9e0-49e4-847b-71322d38a901\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fn4ml"
Dec 06 06:03:49 crc kubenswrapper[4958]: I1206 06:03:49.094360 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4f84a870-b9e0-49e4-847b-71322d38a901-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-fn4ml\" (UID: \"4f84a870-b9e0-49e4-847b-71322d38a901\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fn4ml"
Dec 06 06:03:49 crc kubenswrapper[4958]: I1206 06:03:49.196043 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4f84a870-b9e0-49e4-847b-71322d38a901-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-fn4ml\" (UID: \"4f84a870-b9e0-49e4-847b-71322d38a901\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fn4ml"
Dec 06 06:03:49 crc kubenswrapper[4958]: I1206 06:03:49.196340 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4f84a870-b9e0-49e4-847b-71322d38a901-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-fn4ml\" (UID: \"4f84a870-b9e0-49e4-847b-71322d38a901\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fn4ml"
Dec 06 06:03:49 crc kubenswrapper[4958]: I1206 06:03:49.196596 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftxpp\" (UniqueName: \"kubernetes.io/projected/4f84a870-b9e0-49e4-847b-71322d38a901-kube-api-access-ftxpp\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-fn4ml\" (UID: \"4f84a870-b9e0-49e4-847b-71322d38a901\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fn4ml"
Dec 06 06:03:49 crc kubenswrapper[4958]: I1206 06:03:49.201708 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4f84a870-b9e0-49e4-847b-71322d38a901-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-fn4ml\" (UID: \"4f84a870-b9e0-49e4-847b-71322d38a901\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fn4ml"
Dec 06 06:03:49 crc kubenswrapper[4958]: I1206 06:03:49.202547 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4f84a870-b9e0-49e4-847b-71322d38a901-ssh-key\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-fn4ml\" (UID: \"4f84a870-b9e0-49e4-847b-71322d38a901\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fn4ml"
Dec 06 06:03:49 crc kubenswrapper[4958]: I1206 06:03:49.213651 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftxpp\" (UniqueName: \"kubernetes.io/projected/4f84a870-b9e0-49e4-847b-71322d38a901-kube-api-access-ftxpp\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-fn4ml\" (UID: \"4f84a870-b9e0-49e4-847b-71322d38a901\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fn4ml"
Dec 06 06:03:49 crc kubenswrapper[4958]: I1206 06:03:49.240072 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fn4ml"
Dec 06 06:03:49 crc kubenswrapper[4958]: I1206 06:03:49.750303 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fn4ml"]
Dec 06 06:03:49 crc kubenswrapper[4958]: I1206 06:03:49.825949 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fn4ml" event={"ID":"4f84a870-b9e0-49e4-847b-71322d38a901","Type":"ContainerStarted","Data":"c3edf76cf007c484b70e532626abe354da6f339578b82694dd098536ed0dea19"}
Dec 06 06:03:50 crc kubenswrapper[4958]: I1206 06:03:50.836291 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tv5wr" event={"ID":"780640d2-f29f-4b1a-a990-69dd303ed455","Type":"ContainerStarted","Data":"7e74ff081d0793516bc30d86eab96670a619d927fd099f0607cde58c019f9893"}
Dec 06 06:03:51 crc kubenswrapper[4958]: I1206 06:03:51.845575 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fn4ml" event={"ID":"4f84a870-b9e0-49e4-847b-71322d38a901","Type":"ContainerStarted","Data":"0fea946b1b5b41b3c564f15b2a2afc871613989f9f84edb26ea3be33acc22df4"}
Dec 06 06:03:51 crc kubenswrapper[4958]: I1206 06:03:51.849127 4958 generic.go:334] "Generic (PLEG): container finished" podID="780640d2-f29f-4b1a-a990-69dd303ed455" containerID="7e74ff081d0793516bc30d86eab96670a619d927fd099f0607cde58c019f9893" exitCode=0
Dec 06 06:03:51 crc kubenswrapper[4958]: I1206 06:03:51.849187 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tv5wr" event={"ID":"780640d2-f29f-4b1a-a990-69dd303ed455","Type":"ContainerDied","Data":"7e74ff081d0793516bc30d86eab96670a619d927fd099f0607cde58c019f9893"}
Dec 06 06:03:51 crc kubenswrapper[4958]: I1206 06:03:51.863870 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fn4ml" podStartSLOduration=2.298462144 podStartE2EDuration="3.863851903s" podCreationTimestamp="2025-12-06 06:03:48 +0000 UTC" firstStartedPulling="2025-12-06 06:03:49.758507673 +0000 UTC m=+2140.292278436" lastFinishedPulling="2025-12-06 06:03:51.323897392 +0000 UTC m=+2141.857668195" observedRunningTime="2025-12-06 06:03:51.863425021 +0000 UTC m=+2142.397195784" watchObservedRunningTime="2025-12-06 06:03:51.863851903 +0000 UTC m=+2142.397622666"
Dec 06 06:03:55 crc kubenswrapper[4958]: I1206 06:03:55.886813 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tv5wr" event={"ID":"780640d2-f29f-4b1a-a990-69dd303ed455","Type":"ContainerStarted","Data":"3f5ff191dba969ab0fcb2a82a9fd02ce0f594f56028ae88a95acf2345ed172b4"}
Dec 06 06:03:55 crc kubenswrapper[4958]: I1206 06:03:55.911161 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tv5wr" podStartSLOduration=5.12157907 podStartE2EDuration="8.911142893s" podCreationTimestamp="2025-12-06 06:03:47 +0000 UTC" firstStartedPulling="2025-12-06 06:03:48.82592439 +0000 UTC m=+2139.359695153" lastFinishedPulling="2025-12-06 06:03:52.615488213 +0000 UTC m=+2143.149258976" observedRunningTime="2025-12-06 06:03:55.90471267 +0000 UTC m=+2146.438483463" watchObservedRunningTime="2025-12-06 06:03:55.911142893 +0000 UTC m=+2146.444913656"
Dec 06 06:03:57 crc kubenswrapper[4958]: I1206 06:03:57.503600 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tv5wr"
Dec 06 06:03:57 crc kubenswrapper[4958]: I1206 06:03:57.503929 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tv5wr"
Dec 06 06:03:57 crc kubenswrapper[4958]: I1206 06:03:57.562177 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tv5wr"
Dec 06 06:04:07 crc kubenswrapper[4958]: I1206 06:04:07.552218 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tv5wr"
Dec 06 06:04:07 crc kubenswrapper[4958]: I1206 06:04:07.603866 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tv5wr"]
Dec 06 06:04:08 crc kubenswrapper[4958]: I1206 06:04:08.003811 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-tv5wr" podUID="780640d2-f29f-4b1a-a990-69dd303ed455" containerName="registry-server" containerID="cri-o://3f5ff191dba969ab0fcb2a82a9fd02ce0f594f56028ae88a95acf2345ed172b4" gracePeriod=2
Dec 06 06:04:08 crc kubenswrapper[4958]: I1206 06:04:08.507520 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tv5wr"
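The two "Observed pod startup duration" entries above are internally consistent: podStartE2EDuration equals watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration matches the E2E figure minus the image-pull window measured on the monotonic clock (the m=+ offsets). A minimal Go sketch checking the download-cache pod's numbers; the SLO-minus-pull relationship is inferred from the figures in this dump, not taken from kubelet documentation:

// startup_math.go: sanity-check pod_startup_latency_tracker fields.
package main

import (
	"fmt"
	"time"
)

func main() {
	created, _ := time.Parse("2006-01-02 15:04:05 -0700 MST", "2025-12-06 06:03:48 +0000 UTC")
	watched, _ := time.Parse("2006-01-02 15:04:05.000000000 -0700 MST", "2025-12-06 06:03:51.863851903 +0000 UTC")

	e2e := watched.Sub(created)
	fmt.Println(e2e) // 3.863851903s == podStartE2EDuration

	// Image-pull window on the monotonic clock: lastFinishedPulling minus
	// firstStartedPulling, using the m=+ offsets from the log entry.
	pull := 2141.857668195 - 2140.292278436 // 1.565389759s
	slo := e2e.Seconds() - pull
	fmt.Printf("%.9f\n", slo) // ~2.298462144, the logged podStartSLOduration
}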
Dec 06 06:04:08 crc kubenswrapper[4958]: I1206 06:04:08.545023 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8ncwh\" (UniqueName: \"kubernetes.io/projected/780640d2-f29f-4b1a-a990-69dd303ed455-kube-api-access-8ncwh\") pod \"780640d2-f29f-4b1a-a990-69dd303ed455\" (UID: \"780640d2-f29f-4b1a-a990-69dd303ed455\") "
Dec 06 06:04:08 crc kubenswrapper[4958]: I1206 06:04:08.547652 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/780640d2-f29f-4b1a-a990-69dd303ed455-catalog-content\") pod \"780640d2-f29f-4b1a-a990-69dd303ed455\" (UID: \"780640d2-f29f-4b1a-a990-69dd303ed455\") "
Dec 06 06:04:08 crc kubenswrapper[4958]: I1206 06:04:08.547856 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/780640d2-f29f-4b1a-a990-69dd303ed455-utilities\") pod \"780640d2-f29f-4b1a-a990-69dd303ed455\" (UID: \"780640d2-f29f-4b1a-a990-69dd303ed455\") "
Dec 06 06:04:08 crc kubenswrapper[4958]: I1206 06:04:08.549403 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/780640d2-f29f-4b1a-a990-69dd303ed455-utilities" (OuterVolumeSpecName: "utilities") pod "780640d2-f29f-4b1a-a990-69dd303ed455" (UID: "780640d2-f29f-4b1a-a990-69dd303ed455"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 06 06:04:08 crc kubenswrapper[4958]: I1206 06:04:08.551025 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/780640d2-f29f-4b1a-a990-69dd303ed455-kube-api-access-8ncwh" (OuterVolumeSpecName: "kube-api-access-8ncwh") pod "780640d2-f29f-4b1a-a990-69dd303ed455" (UID: "780640d2-f29f-4b1a-a990-69dd303ed455"). InnerVolumeSpecName "kube-api-access-8ncwh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 06:04:08 crc kubenswrapper[4958]: I1206 06:04:08.620116 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/780640d2-f29f-4b1a-a990-69dd303ed455-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "780640d2-f29f-4b1a-a990-69dd303ed455" (UID: "780640d2-f29f-4b1a-a990-69dd303ed455"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 06 06:04:08 crc kubenswrapper[4958]: I1206 06:04:08.650857 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/780640d2-f29f-4b1a-a990-69dd303ed455-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 06 06:04:08 crc kubenswrapper[4958]: I1206 06:04:08.650908 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/780640d2-f29f-4b1a-a990-69dd303ed455-utilities\") on node \"crc\" DevicePath \"\""
Dec 06 06:04:08 crc kubenswrapper[4958]: I1206 06:04:08.650924 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8ncwh\" (UniqueName: \"kubernetes.io/projected/780640d2-f29f-4b1a-a990-69dd303ed455-kube-api-access-8ncwh\") on node \"crc\" DevicePath \"\""
Dec 06 06:04:09 crc kubenswrapper[4958]: I1206 06:04:09.015047 4958 generic.go:334] "Generic (PLEG): container finished" podID="780640d2-f29f-4b1a-a990-69dd303ed455" containerID="3f5ff191dba969ab0fcb2a82a9fd02ce0f594f56028ae88a95acf2345ed172b4" exitCode=0
Dec 06 06:04:09 crc kubenswrapper[4958]: I1206 06:04:09.015118 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tv5wr"
Dec 06 06:04:09 crc kubenswrapper[4958]: I1206 06:04:09.015109 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tv5wr" event={"ID":"780640d2-f29f-4b1a-a990-69dd303ed455","Type":"ContainerDied","Data":"3f5ff191dba969ab0fcb2a82a9fd02ce0f594f56028ae88a95acf2345ed172b4"}
Dec 06 06:04:09 crc kubenswrapper[4958]: I1206 06:04:09.015186 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tv5wr" event={"ID":"780640d2-f29f-4b1a-a990-69dd303ed455","Type":"ContainerDied","Data":"11871942770a09cc038bfc968fa41ee5ff8ac4c56f9f169149a2e4ebccb2bf87"}
Dec 06 06:04:09 crc kubenswrapper[4958]: I1206 06:04:09.015213 4958 scope.go:117] "RemoveContainer" containerID="3f5ff191dba969ab0fcb2a82a9fd02ce0f594f56028ae88a95acf2345ed172b4"
Dec 06 06:04:09 crc kubenswrapper[4958]: I1206 06:04:09.037050 4958 scope.go:117] "RemoveContainer" containerID="7e74ff081d0793516bc30d86eab96670a619d927fd099f0607cde58c019f9893"
Dec 06 06:04:09 crc kubenswrapper[4958]: I1206 06:04:09.057433 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-tv5wr"]
Dec 06 06:04:09 crc kubenswrapper[4958]: I1206 06:04:09.067206 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-tv5wr"]
Dec 06 06:04:09 crc kubenswrapper[4958]: I1206 06:04:09.094551 4958 scope.go:117] "RemoveContainer" containerID="a5cdd2a7dfab7660759363d4ced1b72a94114efbea29c39352991f291531869b"
Dec 06 06:04:09 crc kubenswrapper[4958]: I1206 06:04:09.120325 4958 scope.go:117] "RemoveContainer" containerID="3f5ff191dba969ab0fcb2a82a9fd02ce0f594f56028ae88a95acf2345ed172b4"
Dec 06 06:04:09 crc kubenswrapper[4958]: E1206 06:04:09.120873 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f5ff191dba969ab0fcb2a82a9fd02ce0f594f56028ae88a95acf2345ed172b4\": container with ID starting with 3f5ff191dba969ab0fcb2a82a9fd02ce0f594f56028ae88a95acf2345ed172b4 not found: ID does not exist" containerID="3f5ff191dba969ab0fcb2a82a9fd02ce0f594f56028ae88a95acf2345ed172b4"
Dec 06 06:04:09 crc kubenswrapper[4958]: I1206 06:04:09.120912 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f5ff191dba969ab0fcb2a82a9fd02ce0f594f56028ae88a95acf2345ed172b4"} err="failed to get container status \"3f5ff191dba969ab0fcb2a82a9fd02ce0f594f56028ae88a95acf2345ed172b4\": rpc error: code = NotFound desc = could not find container \"3f5ff191dba969ab0fcb2a82a9fd02ce0f594f56028ae88a95acf2345ed172b4\": container with ID starting with 3f5ff191dba969ab0fcb2a82a9fd02ce0f594f56028ae88a95acf2345ed172b4 not found: ID does not exist"
Dec 06 06:04:09 crc kubenswrapper[4958]: I1206 06:04:09.120940 4958 scope.go:117] "RemoveContainer" containerID="7e74ff081d0793516bc30d86eab96670a619d927fd099f0607cde58c019f9893"
Dec 06 06:04:09 crc kubenswrapper[4958]: E1206 06:04:09.121163 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e74ff081d0793516bc30d86eab96670a619d927fd099f0607cde58c019f9893\": container with ID starting with 7e74ff081d0793516bc30d86eab96670a619d927fd099f0607cde58c019f9893 not found: ID does not exist" containerID="7e74ff081d0793516bc30d86eab96670a619d927fd099f0607cde58c019f9893"
Dec 06 06:04:09 crc kubenswrapper[4958]: I1206 06:04:09.121190 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e74ff081d0793516bc30d86eab96670a619d927fd099f0607cde58c019f9893"} err="failed to get container status \"7e74ff081d0793516bc30d86eab96670a619d927fd099f0607cde58c019f9893\": rpc error: code = NotFound desc = could not find container \"7e74ff081d0793516bc30d86eab96670a619d927fd099f0607cde58c019f9893\": container with ID starting with 7e74ff081d0793516bc30d86eab96670a619d927fd099f0607cde58c019f9893 not found: ID does not exist"
Dec 06 06:04:09 crc kubenswrapper[4958]: I1206 06:04:09.121206 4958 scope.go:117] "RemoveContainer" containerID="a5cdd2a7dfab7660759363d4ced1b72a94114efbea29c39352991f291531869b"
Dec 06 06:04:09 crc kubenswrapper[4958]: E1206 06:04:09.121579 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5cdd2a7dfab7660759363d4ced1b72a94114efbea29c39352991f291531869b\": container with ID starting with a5cdd2a7dfab7660759363d4ced1b72a94114efbea29c39352991f291531869b not found: ID does not exist" containerID="a5cdd2a7dfab7660759363d4ced1b72a94114efbea29c39352991f291531869b"
Dec 06 06:04:09 crc kubenswrapper[4958]: I1206 06:04:09.121610 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5cdd2a7dfab7660759363d4ced1b72a94114efbea29c39352991f291531869b"} err="failed to get container status \"a5cdd2a7dfab7660759363d4ced1b72a94114efbea29c39352991f291531869b\": rpc error: code = NotFound desc = could not find container \"a5cdd2a7dfab7660759363d4ced1b72a94114efbea29c39352991f291531869b\": container with ID starting with a5cdd2a7dfab7660759363d4ced1b72a94114efbea29c39352991f291531869b not found: ID does not exist"
Dec 06 06:04:09 crc kubenswrapper[4958]: I1206 06:04:09.776730 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="780640d2-f29f-4b1a-a990-69dd303ed455" path="/var/lib/kubelet/pods/780640d2-f29f-4b1a-a990-69dd303ed455/volumes"
Dec 06 06:04:09 crc kubenswrapper[4958]: I1206 06:04:09.866930 4958 patch_prober.go:28] interesting pod/machine-config-daemon-5ktnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
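The ContainerStatus/DeleteContainer failures above are benign: the containers were already removed along with their sandbox, so CRI-O answers the retried RemoveContainer calls with a gRPC NotFound, which kubelet logs and moves past. A sketch of how a CRI client conventionally classifies that case with gRPC status codes; removeContainer here is a hypothetical stand-in that simulates the runtime's response, not kubelet's actual call path:

// notfound.go: classify a gRPC NotFound from a container-removal call.
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// removeContainer simulates a CRI RemoveContainer call against a container
// that was already cleaned up with its pod sandbox.
func removeContainer(id string) error {
	return status.Errorf(codes.NotFound, "could not find container %q", id)
}

func main() {
	err := removeContainer("3f5ff191dba969ab0fcb2a82a9fd02ce0f594f56028ae88a95acf2345ed172b4")
	if st, ok := status.FromError(err); ok && st.Code() == codes.NotFound {
		// Already gone: the benign case logged above at
		// pod_container_deletor.go:53 before kubelet moves on.
		fmt.Println("container already removed:", st.Message())
		return
	}
	if err != nil {
		fmt.Println("unexpected removal failure:", err)
	}
}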
Dec 06 06:04:09 crc kubenswrapper[4958]: I1206 06:04:09.867048 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 06 06:04:28 crc kubenswrapper[4958]: I1206 06:04:28.890307 4958 scope.go:117] "RemoveContainer" containerID="7ca0ad2f546948a6763d4373471301556d03fad318057c21901da48975099837"
Dec 06 06:04:39 crc kubenswrapper[4958]: I1206 06:04:39.866404 4958 patch_prober.go:28] interesting pod/machine-config-daemon-5ktnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 06 06:04:39 crc kubenswrapper[4958]: I1206 06:04:39.867088 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 06 06:04:57 crc kubenswrapper[4958]: I1206 06:04:57.816117 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-946jp"]
Dec 06 06:04:57 crc kubenswrapper[4958]: E1206 06:04:57.817117 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="780640d2-f29f-4b1a-a990-69dd303ed455" containerName="registry-server"
Dec 06 06:04:57 crc kubenswrapper[4958]: I1206 06:04:57.817128 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="780640d2-f29f-4b1a-a990-69dd303ed455" containerName="registry-server"
Dec 06 06:04:57 crc kubenswrapper[4958]: E1206 06:04:57.817162 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="780640d2-f29f-4b1a-a990-69dd303ed455" containerName="extract-utilities"
Dec 06 06:04:57 crc kubenswrapper[4958]: I1206 06:04:57.817169 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="780640d2-f29f-4b1a-a990-69dd303ed455" containerName="extract-utilities"
Dec 06 06:04:57 crc kubenswrapper[4958]: E1206 06:04:57.817177 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="780640d2-f29f-4b1a-a990-69dd303ed455" containerName="extract-content"
Dec 06 06:04:57 crc kubenswrapper[4958]: I1206 06:04:57.817183 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="780640d2-f29f-4b1a-a990-69dd303ed455" containerName="extract-content"
Dec 06 06:04:57 crc kubenswrapper[4958]: I1206 06:04:57.817378 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="780640d2-f29f-4b1a-a990-69dd303ed455" containerName="registry-server"
Dec 06 06:04:57 crc kubenswrapper[4958]: I1206 06:04:57.818794 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-946jp"
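The recurring liveness failures above are an HTTP GET against http://127.0.0.1:8798/health (endpoint taken from the log output) that is refused because nothing is listening. A minimal sketch of the same check, not kubelet's prober implementation; the success range 200-399 mirrors how Kubernetes HTTP probes are defined:

// probecheck.go: reproduce the failing liveness check seen above.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 2 * time.Second}
	resp, err := client.Get("http://127.0.0.1:8798/health")
	if err != nil {
		// With the daemon down this prints a "connect: connection refused"
		// error, matching the probeResult="failure" output in the log.
		fmt.Println("probe failure:", err)
		return
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 200 && resp.StatusCode < 400 {
		fmt.Println("probe success:", resp.Status)
	} else {
		fmt.Println("probe failure:", resp.Status)
	}
}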
Dec 06 06:04:57 crc kubenswrapper[4958]: I1206 06:04:57.824431 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-946jp"]
Dec 06 06:04:57 crc kubenswrapper[4958]: I1206 06:04:57.927250 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95c0dfa7-e4ce-4884-a265-3ed2eb1a358b-catalog-content\") pod \"community-operators-946jp\" (UID: \"95c0dfa7-e4ce-4884-a265-3ed2eb1a358b\") " pod="openshift-marketplace/community-operators-946jp"
Dec 06 06:04:57 crc kubenswrapper[4958]: I1206 06:04:57.927729 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fz6l\" (UniqueName: \"kubernetes.io/projected/95c0dfa7-e4ce-4884-a265-3ed2eb1a358b-kube-api-access-9fz6l\") pod \"community-operators-946jp\" (UID: \"95c0dfa7-e4ce-4884-a265-3ed2eb1a358b\") " pod="openshift-marketplace/community-operators-946jp"
Dec 06 06:04:57 crc kubenswrapper[4958]: I1206 06:04:57.927891 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95c0dfa7-e4ce-4884-a265-3ed2eb1a358b-utilities\") pod \"community-operators-946jp\" (UID: \"95c0dfa7-e4ce-4884-a265-3ed2eb1a358b\") " pod="openshift-marketplace/community-operators-946jp"
Dec 06 06:04:58 crc kubenswrapper[4958]: I1206 06:04:58.029825 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9fz6l\" (UniqueName: \"kubernetes.io/projected/95c0dfa7-e4ce-4884-a265-3ed2eb1a358b-kube-api-access-9fz6l\") pod \"community-operators-946jp\" (UID: \"95c0dfa7-e4ce-4884-a265-3ed2eb1a358b\") " pod="openshift-marketplace/community-operators-946jp"
Dec 06 06:04:58 crc kubenswrapper[4958]: I1206 06:04:58.029974 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95c0dfa7-e4ce-4884-a265-3ed2eb1a358b-utilities\") pod \"community-operators-946jp\" (UID: \"95c0dfa7-e4ce-4884-a265-3ed2eb1a358b\") " pod="openshift-marketplace/community-operators-946jp"
Dec 06 06:04:58 crc kubenswrapper[4958]: I1206 06:04:58.030052 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95c0dfa7-e4ce-4884-a265-3ed2eb1a358b-catalog-content\") pod \"community-operators-946jp\" (UID: \"95c0dfa7-e4ce-4884-a265-3ed2eb1a358b\") " pod="openshift-marketplace/community-operators-946jp"
Dec 06 06:04:58 crc kubenswrapper[4958]: I1206 06:04:58.030548 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95c0dfa7-e4ce-4884-a265-3ed2eb1a358b-catalog-content\") pod \"community-operators-946jp\" (UID: \"95c0dfa7-e4ce-4884-a265-3ed2eb1a358b\") " pod="openshift-marketplace/community-operators-946jp"
Dec 06 06:04:58 crc kubenswrapper[4958]: I1206 06:04:58.030636 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95c0dfa7-e4ce-4884-a265-3ed2eb1a358b-utilities\") pod \"community-operators-946jp\" (UID: \"95c0dfa7-e4ce-4884-a265-3ed2eb1a358b\") " pod="openshift-marketplace/community-operators-946jp"
Dec 06 06:04:58 crc kubenswrapper[4958]: I1206 06:04:58.054317 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9fz6l\" (UniqueName: \"kubernetes.io/projected/95c0dfa7-e4ce-4884-a265-3ed2eb1a358b-kube-api-access-9fz6l\") pod \"community-operators-946jp\" (UID: \"95c0dfa7-e4ce-4884-a265-3ed2eb1a358b\") " pod="openshift-marketplace/community-operators-946jp"
Dec 06 06:04:58 crc kubenswrapper[4958]: I1206 06:04:58.169854 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-946jp"
Dec 06 06:04:58 crc kubenswrapper[4958]: I1206 06:04:58.679785 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-946jp"]
Dec 06 06:04:59 crc kubenswrapper[4958]: I1206 06:04:59.597240 4958 generic.go:334] "Generic (PLEG): container finished" podID="95c0dfa7-e4ce-4884-a265-3ed2eb1a358b" containerID="97d02f8efa70bbab88d43e64418cc52878b0f90667a413f813e157ad4ca18a20" exitCode=0
Dec 06 06:04:59 crc kubenswrapper[4958]: I1206 06:04:59.597576 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-946jp" event={"ID":"95c0dfa7-e4ce-4884-a265-3ed2eb1a358b","Type":"ContainerDied","Data":"97d02f8efa70bbab88d43e64418cc52878b0f90667a413f813e157ad4ca18a20"}
Dec 06 06:04:59 crc kubenswrapper[4958]: I1206 06:04:59.597612 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-946jp" event={"ID":"95c0dfa7-e4ce-4884-a265-3ed2eb1a358b","Type":"ContainerStarted","Data":"622fe3437c1a09a492419ca6465e02d35b176ec75cc7a6deb1fd8f4d53913ea1"}
Dec 06 06:05:00 crc kubenswrapper[4958]: I1206 06:05:00.205430 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-h4pk9"]
Dec 06 06:05:00 crc kubenswrapper[4958]: I1206 06:05:00.208210 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-h4pk9"
Dec 06 06:05:00 crc kubenswrapper[4958]: I1206 06:05:00.217590 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-h4pk9"]
Dec 06 06:05:00 crc kubenswrapper[4958]: I1206 06:05:00.300184 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b893d196-ea0d-4c1c-83de-1346edcba4e8-catalog-content\") pod \"redhat-operators-h4pk9\" (UID: \"b893d196-ea0d-4c1c-83de-1346edcba4e8\") " pod="openshift-marketplace/redhat-operators-h4pk9"
Dec 06 06:05:00 crc kubenswrapper[4958]: I1206 06:05:00.300278 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b893d196-ea0d-4c1c-83de-1346edcba4e8-utilities\") pod \"redhat-operators-h4pk9\" (UID: \"b893d196-ea0d-4c1c-83de-1346edcba4e8\") " pod="openshift-marketplace/redhat-operators-h4pk9"
Dec 06 06:05:00 crc kubenswrapper[4958]: I1206 06:05:00.300359 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nm8f\" (UniqueName: \"kubernetes.io/projected/b893d196-ea0d-4c1c-83de-1346edcba4e8-kube-api-access-2nm8f\") pod \"redhat-operators-h4pk9\" (UID: \"b893d196-ea0d-4c1c-83de-1346edcba4e8\") " pod="openshift-marketplace/redhat-operators-h4pk9"
Dec 06 06:05:00 crc kubenswrapper[4958]: I1206 06:05:00.401610 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b893d196-ea0d-4c1c-83de-1346edcba4e8-utilities\") pod \"redhat-operators-h4pk9\" (UID: \"b893d196-ea0d-4c1c-83de-1346edcba4e8\") " pod="openshift-marketplace/redhat-operators-h4pk9"
Dec 06 06:05:00 crc kubenswrapper[4958]: I1206 06:05:00.401713 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2nm8f\" (UniqueName: \"kubernetes.io/projected/b893d196-ea0d-4c1c-83de-1346edcba4e8-kube-api-access-2nm8f\") pod \"redhat-operators-h4pk9\" (UID: \"b893d196-ea0d-4c1c-83de-1346edcba4e8\") " pod="openshift-marketplace/redhat-operators-h4pk9"
Dec 06 06:05:00 crc kubenswrapper[4958]: I1206 06:05:00.401789 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b893d196-ea0d-4c1c-83de-1346edcba4e8-catalog-content\") pod \"redhat-operators-h4pk9\" (UID: \"b893d196-ea0d-4c1c-83de-1346edcba4e8\") " pod="openshift-marketplace/redhat-operators-h4pk9"
Dec 06 06:05:00 crc kubenswrapper[4958]: I1206 06:05:00.402218 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b893d196-ea0d-4c1c-83de-1346edcba4e8-catalog-content\") pod \"redhat-operators-h4pk9\" (UID: \"b893d196-ea0d-4c1c-83de-1346edcba4e8\") " pod="openshift-marketplace/redhat-operators-h4pk9"
Dec 06 06:05:00 crc kubenswrapper[4958]: I1206 06:05:00.402562 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b893d196-ea0d-4c1c-83de-1346edcba4e8-utilities\") pod \"redhat-operators-h4pk9\" (UID: \"b893d196-ea0d-4c1c-83de-1346edcba4e8\") " pod="openshift-marketplace/redhat-operators-h4pk9"
Dec 06 06:05:00 crc kubenswrapper[4958]: I1206 06:05:00.421715 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nm8f\" (UniqueName: \"kubernetes.io/projected/b893d196-ea0d-4c1c-83de-1346edcba4e8-kube-api-access-2nm8f\") pod \"redhat-operators-h4pk9\" (UID: \"b893d196-ea0d-4c1c-83de-1346edcba4e8\") " pod="openshift-marketplace/redhat-operators-h4pk9"
Dec 06 06:05:00 crc kubenswrapper[4958]: I1206 06:05:00.579421 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-h4pk9"
Dec 06 06:05:00 crc kubenswrapper[4958]: I1206 06:05:00.857954 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-h4pk9"]
Dec 06 06:05:01 crc kubenswrapper[4958]: I1206 06:05:01.615652 4958 generic.go:334] "Generic (PLEG): container finished" podID="b893d196-ea0d-4c1c-83de-1346edcba4e8" containerID="b51fd7a369e46aa689fd745c2cba668443b0665886ae4cda8eb196cab3c6c6ee" exitCode=0
Dec 06 06:05:01 crc kubenswrapper[4958]: I1206 06:05:01.615710 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h4pk9" event={"ID":"b893d196-ea0d-4c1c-83de-1346edcba4e8","Type":"ContainerDied","Data":"b51fd7a369e46aa689fd745c2cba668443b0665886ae4cda8eb196cab3c6c6ee"}
Dec 06 06:05:01 crc kubenswrapper[4958]: I1206 06:05:01.616026 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h4pk9" event={"ID":"b893d196-ea0d-4c1c-83de-1346edcba4e8","Type":"ContainerStarted","Data":"97fb70108c0f4be40cb14a6fae4dd7274e359f2425e15eed9ca7a7b3cc7f9729"}
Dec 06 06:05:01 crc kubenswrapper[4958]: I1206 06:05:01.619337 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-946jp" event={"ID":"95c0dfa7-e4ce-4884-a265-3ed2eb1a358b","Type":"ContainerStarted","Data":"9a0a6af9be4cb6e1c06ba2f2d9e81464de07e46e03e5cc3ccc66e7a80445b3d3"}
Dec 06 06:05:05 crc kubenswrapper[4958]: I1206 06:05:05.658054 4958 generic.go:334] "Generic (PLEG): container finished" podID="95c0dfa7-e4ce-4884-a265-3ed2eb1a358b" containerID="9a0a6af9be4cb6e1c06ba2f2d9e81464de07e46e03e5cc3ccc66e7a80445b3d3" exitCode=0
Dec 06 06:05:05 crc kubenswrapper[4958]: I1206 06:05:05.658125 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-946jp" event={"ID":"95c0dfa7-e4ce-4884-a265-3ed2eb1a358b","Type":"ContainerDied","Data":"9a0a6af9be4cb6e1c06ba2f2d9e81464de07e46e03e5cc3ccc66e7a80445b3d3"}
Dec 06 06:05:06 crc kubenswrapper[4958]: I1206 06:05:06.668589 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h4pk9" event={"ID":"b893d196-ea0d-4c1c-83de-1346edcba4e8","Type":"ContainerStarted","Data":"07b033b5dee5858a52d1191d2ee3fc6d6501c59fafda28d03709c6cd653160f3"}
Dec 06 06:05:09 crc kubenswrapper[4958]: I1206 06:05:09.865860 4958 patch_prober.go:28] interesting pod/machine-config-daemon-5ktnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 06 06:05:09 crc kubenswrapper[4958]: I1206 06:05:09.866527 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 06 06:05:09 crc kubenswrapper[4958]: I1206 06:05:09.866594 4958 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh"
Dec 06 06:05:09 crc kubenswrapper[4958]: I1206 06:05:09.867442 4958 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0b8808661711b2a47f465c2efb584cb62903970706f647cb73f1aa813708baf6"} pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 06 06:05:09 crc kubenswrapper[4958]: I1206 06:05:09.867593 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" containerName="machine-config-daemon" containerID="cri-o://0b8808661711b2a47f465c2efb584cb62903970706f647cb73f1aa813708baf6" gracePeriod=600
Dec 06 06:05:10 crc kubenswrapper[4958]: E1206 06:05:10.688936 4958 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc13528c0_da5d_4d55_9155_2c29c33edfc4.slice/crio-conmon-0b8808661711b2a47f465c2efb584cb62903970706f647cb73f1aa813708baf6.scope\": RecentStats: unable to find data in memory cache]"
Dec 06 06:05:10 crc kubenswrapper[4958]: I1206 06:05:10.709818 4958 generic.go:334] "Generic (PLEG): container finished" podID="c13528c0-da5d-4d55-9155-2c29c33edfc4" containerID="0b8808661711b2a47f465c2efb584cb62903970706f647cb73f1aa813708baf6" exitCode=0
Dec 06 06:05:10 crc kubenswrapper[4958]: I1206 06:05:10.710182 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" event={"ID":"c13528c0-da5d-4d55-9155-2c29c33edfc4","Type":"ContainerDied","Data":"0b8808661711b2a47f465c2efb584cb62903970706f647cb73f1aa813708baf6"}
Dec 06 06:05:10 crc kubenswrapper[4958]: I1206 06:05:10.710332 4958 scope.go:117] "RemoveContainer" containerID="3316d358950549dbdd244c3eebc5b98b6095cbbbc7ab60159a9410ea95b5b950"
Dec 06 06:05:10 crc kubenswrapper[4958]: I1206 06:05:10.713011 4958 generic.go:334] "Generic (PLEG): container finished" podID="b893d196-ea0d-4c1c-83de-1346edcba4e8" containerID="07b033b5dee5858a52d1191d2ee3fc6d6501c59fafda28d03709c6cd653160f3" exitCode=0
Dec 06 06:05:10 crc kubenswrapper[4958]: I1206 06:05:10.713049 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h4pk9" event={"ID":"b893d196-ea0d-4c1c-83de-1346edcba4e8","Type":"ContainerDied","Data":"07b033b5dee5858a52d1191d2ee3fc6d6501c59fafda28d03709c6cd653160f3"}
Dec 06 06:05:11 crc kubenswrapper[4958]: I1206 06:05:11.729272 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" event={"ID":"c13528c0-da5d-4d55-9155-2c29c33edfc4","Type":"ContainerStarted","Data":"78722ed8e9c7d0ba1e717d3279a2f99d9ad86a7c8ebd7a1c9aa99d270d4637d9"}
Dec 06 06:05:11 crc kubenswrapper[4958]: I1206 06:05:11.740155 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-946jp" event={"ID":"95c0dfa7-e4ce-4884-a265-3ed2eb1a358b","Type":"ContainerStarted","Data":"09a039e37baff589e88599e574433bb78843a6dee6984b7fbf8e12b9aa990607"}
Dec 06 06:05:11 crc kubenswrapper[4958]: I1206 06:05:11.809065 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-946jp" podStartSLOduration=3.8257283170000003 podStartE2EDuration="14.809039592s" podCreationTimestamp="2025-12-06 06:04:57 +0000 UTC" firstStartedPulling="2025-12-06 06:04:59.603738546 +0000 UTC m=+2210.137509309" lastFinishedPulling="2025-12-06 06:05:10.587049821 +0000 UTC m=+2221.120820584" observedRunningTime="2025-12-06 06:05:11.803099773 +0000 UTC m=+2222.336870536" watchObservedRunningTime="2025-12-06 06:05:11.809039592 +0000 UTC m=+2222.342810375"
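The event={...} payloads in the "SyncLoop (PLEG)" entries above are plain JSON: a pod UID, an event type (ContainerStarted, ContainerDied), and a container or sandbox ID. A sketch of extracting and decoding them from a dump like this; the regex is an assumption about this journal's layout:

// plegevents.go: decode PLEG event payloads from kubelet log lines on stdin.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
	"regexp"
)

type plegEvent struct {
	ID   string `json:"ID"`   // pod UID
	Type string `json:"Type"` // ContainerStarted, ContainerDied, ...
	Data string `json:"Data"` // container or sandbox ID
}

// the JSON object has no nested braces, so a non-greedy match suffices
var payload = regexp.MustCompile(`SyncLoop \(PLEG\).*event=(\{.*?\})`)

func main() {
	counts := map[string]int{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024)
	for sc.Scan() {
		m := payload.FindStringSubmatch(sc.Text())
		if m == nil {
			continue
		}
		var ev plegEvent
		if err := json.Unmarshal([]byte(m[1]), &ev); err != nil {
			continue
		}
		counts[ev.Type]++
		fmt.Printf("%s %s %.12s\n", ev.Type, ev.ID, ev.Data)
	}
	fmt.Println(counts)
}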
pod="openshift-marketplace/community-operators-946jp" podStartSLOduration=3.8257283170000003 podStartE2EDuration="14.809039592s" podCreationTimestamp="2025-12-06 06:04:57 +0000 UTC" firstStartedPulling="2025-12-06 06:04:59.603738546 +0000 UTC m=+2210.137509309" lastFinishedPulling="2025-12-06 06:05:10.587049821 +0000 UTC m=+2221.120820584" observedRunningTime="2025-12-06 06:05:11.803099773 +0000 UTC m=+2222.336870536" watchObservedRunningTime="2025-12-06 06:05:11.809039592 +0000 UTC m=+2222.342810375" Dec 06 06:05:12 crc kubenswrapper[4958]: I1206 06:05:12.759246 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h4pk9" event={"ID":"b893d196-ea0d-4c1c-83de-1346edcba4e8","Type":"ContainerStarted","Data":"487b1489c10726d815a2c95f062f6f364ebafebb5c840c5c10849c50d805ba45"} Dec 06 06:05:12 crc kubenswrapper[4958]: I1206 06:05:12.782392 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-h4pk9" podStartSLOduration=2.975140131 podStartE2EDuration="12.78236828s" podCreationTimestamp="2025-12-06 06:05:00 +0000 UTC" firstStartedPulling="2025-12-06 06:05:01.618839722 +0000 UTC m=+2212.152610485" lastFinishedPulling="2025-12-06 06:05:11.426067871 +0000 UTC m=+2221.959838634" observedRunningTime="2025-12-06 06:05:12.776816101 +0000 UTC m=+2223.310586864" watchObservedRunningTime="2025-12-06 06:05:12.78236828 +0000 UTC m=+2223.316139043" Dec 06 06:05:18 crc kubenswrapper[4958]: I1206 06:05:18.170934 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-946jp" Dec 06 06:05:18 crc kubenswrapper[4958]: I1206 06:05:18.171509 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-946jp" Dec 06 06:05:18 crc kubenswrapper[4958]: I1206 06:05:18.225133 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-946jp" Dec 06 06:05:18 crc kubenswrapper[4958]: I1206 06:05:18.869535 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-946jp" Dec 06 06:05:20 crc kubenswrapper[4958]: I1206 06:05:20.580678 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-h4pk9" Dec 06 06:05:20 crc kubenswrapper[4958]: I1206 06:05:20.580758 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-h4pk9" Dec 06 06:05:20 crc kubenswrapper[4958]: I1206 06:05:20.642996 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-h4pk9" Dec 06 06:05:20 crc kubenswrapper[4958]: I1206 06:05:20.870800 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-946jp"] Dec 06 06:05:20 crc kubenswrapper[4958]: I1206 06:05:20.871301 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-946jp" podUID="95c0dfa7-e4ce-4884-a265-3ed2eb1a358b" containerName="registry-server" containerID="cri-o://09a039e37baff589e88599e574433bb78843a6dee6984b7fbf8e12b9aa990607" gracePeriod=2 Dec 06 06:05:20 crc kubenswrapper[4958]: I1206 06:05:20.897988 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-h4pk9" Dec 06 06:05:21 crc kubenswrapper[4958]: 
I1206 06:05:21.860780 4958 generic.go:334] "Generic (PLEG): container finished" podID="95c0dfa7-e4ce-4884-a265-3ed2eb1a358b" containerID="09a039e37baff589e88599e574433bb78843a6dee6984b7fbf8e12b9aa990607" exitCode=0 Dec 06 06:05:21 crc kubenswrapper[4958]: I1206 06:05:21.860922 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-946jp" event={"ID":"95c0dfa7-e4ce-4884-a265-3ed2eb1a358b","Type":"ContainerDied","Data":"09a039e37baff589e88599e574433bb78843a6dee6984b7fbf8e12b9aa990607"} Dec 06 06:05:21 crc kubenswrapper[4958]: I1206 06:05:21.861539 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-946jp" event={"ID":"95c0dfa7-e4ce-4884-a265-3ed2eb1a358b","Type":"ContainerDied","Data":"622fe3437c1a09a492419ca6465e02d35b176ec75cc7a6deb1fd8f4d53913ea1"} Dec 06 06:05:21 crc kubenswrapper[4958]: I1206 06:05:21.861566 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="622fe3437c1a09a492419ca6465e02d35b176ec75cc7a6deb1fd8f4d53913ea1" Dec 06 06:05:21 crc kubenswrapper[4958]: I1206 06:05:21.933056 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-946jp" Dec 06 06:05:22 crc kubenswrapper[4958]: I1206 06:05:22.052128 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9fz6l\" (UniqueName: \"kubernetes.io/projected/95c0dfa7-e4ce-4884-a265-3ed2eb1a358b-kube-api-access-9fz6l\") pod \"95c0dfa7-e4ce-4884-a265-3ed2eb1a358b\" (UID: \"95c0dfa7-e4ce-4884-a265-3ed2eb1a358b\") " Dec 06 06:05:22 crc kubenswrapper[4958]: I1206 06:05:22.052432 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95c0dfa7-e4ce-4884-a265-3ed2eb1a358b-utilities\") pod \"95c0dfa7-e4ce-4884-a265-3ed2eb1a358b\" (UID: \"95c0dfa7-e4ce-4884-a265-3ed2eb1a358b\") " Dec 06 06:05:22 crc kubenswrapper[4958]: I1206 06:05:22.052703 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95c0dfa7-e4ce-4884-a265-3ed2eb1a358b-catalog-content\") pod \"95c0dfa7-e4ce-4884-a265-3ed2eb1a358b\" (UID: \"95c0dfa7-e4ce-4884-a265-3ed2eb1a358b\") " Dec 06 06:05:22 crc kubenswrapper[4958]: I1206 06:05:22.052966 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95c0dfa7-e4ce-4884-a265-3ed2eb1a358b-utilities" (OuterVolumeSpecName: "utilities") pod "95c0dfa7-e4ce-4884-a265-3ed2eb1a358b" (UID: "95c0dfa7-e4ce-4884-a265-3ed2eb1a358b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:05:22 crc kubenswrapper[4958]: I1206 06:05:22.053342 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/95c0dfa7-e4ce-4884-a265-3ed2eb1a358b-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 06:05:22 crc kubenswrapper[4958]: I1206 06:05:22.059671 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95c0dfa7-e4ce-4884-a265-3ed2eb1a358b-kube-api-access-9fz6l" (OuterVolumeSpecName: "kube-api-access-9fz6l") pod "95c0dfa7-e4ce-4884-a265-3ed2eb1a358b" (UID: "95c0dfa7-e4ce-4884-a265-3ed2eb1a358b"). InnerVolumeSpecName "kube-api-access-9fz6l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:05:22 crc kubenswrapper[4958]: I1206 06:05:22.115420 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95c0dfa7-e4ce-4884-a265-3ed2eb1a358b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "95c0dfa7-e4ce-4884-a265-3ed2eb1a358b" (UID: "95c0dfa7-e4ce-4884-a265-3ed2eb1a358b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:05:22 crc kubenswrapper[4958]: I1206 06:05:22.155183 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/95c0dfa7-e4ce-4884-a265-3ed2eb1a358b-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 06:05:22 crc kubenswrapper[4958]: I1206 06:05:22.155218 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9fz6l\" (UniqueName: \"kubernetes.io/projected/95c0dfa7-e4ce-4884-a265-3ed2eb1a358b-kube-api-access-9fz6l\") on node \"crc\" DevicePath \"\"" Dec 06 06:05:22 crc kubenswrapper[4958]: I1206 06:05:22.869185 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-946jp" Dec 06 06:05:22 crc kubenswrapper[4958]: I1206 06:05:22.902408 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-946jp"] Dec 06 06:05:22 crc kubenswrapper[4958]: I1206 06:05:22.911116 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-946jp"] Dec 06 06:05:23 crc kubenswrapper[4958]: I1206 06:05:23.061299 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-h4pk9"] Dec 06 06:05:23 crc kubenswrapper[4958]: I1206 06:05:23.061538 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-h4pk9" podUID="b893d196-ea0d-4c1c-83de-1346edcba4e8" containerName="registry-server" containerID="cri-o://487b1489c10726d815a2c95f062f6f364ebafebb5c840c5c10849c50d805ba45" gracePeriod=2 Dec 06 06:05:23 crc kubenswrapper[4958]: I1206 06:05:23.776337 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95c0dfa7-e4ce-4884-a265-3ed2eb1a358b" path="/var/lib/kubelet/pods/95c0dfa7-e4ce-4884-a265-3ed2eb1a358b/volumes" Dec 06 06:05:23 crc kubenswrapper[4958]: I1206 06:05:23.881520 4958 generic.go:334] "Generic (PLEG): container finished" podID="b893d196-ea0d-4c1c-83de-1346edcba4e8" containerID="487b1489c10726d815a2c95f062f6f364ebafebb5c840c5c10849c50d805ba45" exitCode=0 Dec 06 06:05:23 crc kubenswrapper[4958]: I1206 06:05:23.881603 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h4pk9" event={"ID":"b893d196-ea0d-4c1c-83de-1346edcba4e8","Type":"ContainerDied","Data":"487b1489c10726d815a2c95f062f6f364ebafebb5c840c5c10849c50d805ba45"} Dec 06 06:05:24 crc kubenswrapper[4958]: I1206 06:05:24.431711 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-h4pk9" Dec 06 06:05:24 crc kubenswrapper[4958]: I1206 06:05:24.606939 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2nm8f\" (UniqueName: \"kubernetes.io/projected/b893d196-ea0d-4c1c-83de-1346edcba4e8-kube-api-access-2nm8f\") pod \"b893d196-ea0d-4c1c-83de-1346edcba4e8\" (UID: \"b893d196-ea0d-4c1c-83de-1346edcba4e8\") " Dec 06 06:05:24 crc kubenswrapper[4958]: I1206 06:05:24.606991 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b893d196-ea0d-4c1c-83de-1346edcba4e8-utilities\") pod \"b893d196-ea0d-4c1c-83de-1346edcba4e8\" (UID: \"b893d196-ea0d-4c1c-83de-1346edcba4e8\") " Dec 06 06:05:24 crc kubenswrapper[4958]: I1206 06:05:24.607184 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b893d196-ea0d-4c1c-83de-1346edcba4e8-catalog-content\") pod \"b893d196-ea0d-4c1c-83de-1346edcba4e8\" (UID: \"b893d196-ea0d-4c1c-83de-1346edcba4e8\") " Dec 06 06:05:24 crc kubenswrapper[4958]: I1206 06:05:24.607943 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b893d196-ea0d-4c1c-83de-1346edcba4e8-utilities" (OuterVolumeSpecName: "utilities") pod "b893d196-ea0d-4c1c-83de-1346edcba4e8" (UID: "b893d196-ea0d-4c1c-83de-1346edcba4e8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:05:24 crc kubenswrapper[4958]: I1206 06:05:24.615837 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b893d196-ea0d-4c1c-83de-1346edcba4e8-kube-api-access-2nm8f" (OuterVolumeSpecName: "kube-api-access-2nm8f") pod "b893d196-ea0d-4c1c-83de-1346edcba4e8" (UID: "b893d196-ea0d-4c1c-83de-1346edcba4e8"). InnerVolumeSpecName "kube-api-access-2nm8f". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:05:24 crc kubenswrapper[4958]: I1206 06:05:24.710516 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2nm8f\" (UniqueName: \"kubernetes.io/projected/b893d196-ea0d-4c1c-83de-1346edcba4e8-kube-api-access-2nm8f\") on node \"crc\" DevicePath \"\"" Dec 06 06:05:24 crc kubenswrapper[4958]: I1206 06:05:24.710854 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b893d196-ea0d-4c1c-83de-1346edcba4e8-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 06:05:24 crc kubenswrapper[4958]: I1206 06:05:24.722566 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b893d196-ea0d-4c1c-83de-1346edcba4e8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b893d196-ea0d-4c1c-83de-1346edcba4e8" (UID: "b893d196-ea0d-4c1c-83de-1346edcba4e8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:05:24 crc kubenswrapper[4958]: I1206 06:05:24.813163 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b893d196-ea0d-4c1c-83de-1346edcba4e8-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 06:05:24 crc kubenswrapper[4958]: I1206 06:05:24.894393 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h4pk9" event={"ID":"b893d196-ea0d-4c1c-83de-1346edcba4e8","Type":"ContainerDied","Data":"97fb70108c0f4be40cb14a6fae4dd7274e359f2425e15eed9ca7a7b3cc7f9729"} Dec 06 06:05:24 crc kubenswrapper[4958]: I1206 06:05:24.894459 4958 scope.go:117] "RemoveContainer" containerID="487b1489c10726d815a2c95f062f6f364ebafebb5c840c5c10849c50d805ba45" Dec 06 06:05:24 crc kubenswrapper[4958]: I1206 06:05:24.894456 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-h4pk9" Dec 06 06:05:24 crc kubenswrapper[4958]: I1206 06:05:24.933174 4958 scope.go:117] "RemoveContainer" containerID="07b033b5dee5858a52d1191d2ee3fc6d6501c59fafda28d03709c6cd653160f3" Dec 06 06:05:24 crc kubenswrapper[4958]: I1206 06:05:24.942010 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-h4pk9"] Dec 06 06:05:24 crc kubenswrapper[4958]: I1206 06:05:24.951987 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-h4pk9"] Dec 06 06:05:24 crc kubenswrapper[4958]: I1206 06:05:24.965197 4958 scope.go:117] "RemoveContainer" containerID="b51fd7a369e46aa689fd745c2cba668443b0665886ae4cda8eb196cab3c6c6ee" Dec 06 06:05:25 crc kubenswrapper[4958]: I1206 06:05:25.773647 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b893d196-ea0d-4c1c-83de-1346edcba4e8" path="/var/lib/kubelet/pods/b893d196-ea0d-4c1c-83de-1346edcba4e8/volumes" Dec 06 06:05:35 crc kubenswrapper[4958]: I1206 06:05:35.039815 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-hn59h"] Dec 06 06:05:35 crc kubenswrapper[4958]: I1206 06:05:35.049360 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-hn59h"] Dec 06 06:05:35 crc kubenswrapper[4958]: I1206 06:05:35.789314 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f28060d-c759-4b4b-a643-bf8acb76c1b2" path="/var/lib/kubelet/pods/7f28060d-c759-4b4b-a643-bf8acb76c1b2/volumes" Dec 06 06:05:48 crc kubenswrapper[4958]: I1206 06:05:48.049697 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-znqzx"] Dec 06 06:05:48 crc kubenswrapper[4958]: I1206 06:05:48.063410 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-znqzx"] Dec 06 06:05:49 crc kubenswrapper[4958]: I1206 06:05:49.784502 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94a6d712-4bb0-458b-878a-99dd8d47a8f9" path="/var/lib/kubelet/pods/94a6d712-4bb0-458b-878a-99dd8d47a8f9/volumes" Dec 06 06:05:58 crc kubenswrapper[4958]: I1206 06:05:58.047715 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-x68cs"] Dec 06 06:05:58 crc kubenswrapper[4958]: I1206 06:05:58.061131 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-m2lhs"] Dec 06 06:05:58 crc kubenswrapper[4958]: I1206 06:05:58.072587 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-cell0-db-create-6hdqq"] Dec 06 06:05:58 crc kubenswrapper[4958]: I1206 06:05:58.086797 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-x68cs"] Dec 06 06:05:58 crc kubenswrapper[4958]: I1206 06:05:58.095302 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-6hdqq"] Dec 06 06:05:58 crc kubenswrapper[4958]: I1206 06:05:58.104009 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-m2lhs"] Dec 06 06:05:59 crc kubenswrapper[4958]: I1206 06:05:59.778619 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48ba018f-da06-4772-835f-257269ffed57" path="/var/lib/kubelet/pods/48ba018f-da06-4772-835f-257269ffed57/volumes" Dec 06 06:05:59 crc kubenswrapper[4958]: I1206 06:05:59.779204 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76d9f5a2-d6d4-4390-965a-b5d29b134dc1" path="/var/lib/kubelet/pods/76d9f5a2-d6d4-4390-965a-b5d29b134dc1/volumes" Dec 06 06:05:59 crc kubenswrapper[4958]: I1206 06:05:59.779935 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e23f416-b2db-49d9-8a31-f68d32ff9b51" path="/var/lib/kubelet/pods/8e23f416-b2db-49d9-8a31-f68d32ff9b51/volumes" Dec 06 06:06:00 crc kubenswrapper[4958]: I1206 06:06:00.029276 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-7249-account-create-update-bw52q"] Dec 06 06:06:00 crc kubenswrapper[4958]: I1206 06:06:00.037750 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-02e5-account-create-update-c75bh"] Dec 06 06:06:00 crc kubenswrapper[4958]: I1206 06:06:00.047421 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-d745-account-create-update-nn9n6"] Dec 06 06:06:00 crc kubenswrapper[4958]: I1206 06:06:00.055744 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-d745-account-create-update-nn9n6"] Dec 06 06:06:00 crc kubenswrapper[4958]: I1206 06:06:00.064883 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-7249-account-create-update-bw52q"] Dec 06 06:06:00 crc kubenswrapper[4958]: I1206 06:06:00.072600 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-02e5-account-create-update-c75bh"] Dec 06 06:06:01 crc kubenswrapper[4958]: I1206 06:06:01.773867 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a9cb1a1-4312-4bbf-b731-de283fd78834" path="/var/lib/kubelet/pods/4a9cb1a1-4312-4bbf-b731-de283fd78834/volumes" Dec 06 06:06:01 crc kubenswrapper[4958]: I1206 06:06:01.774642 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51b06b73-d14d-47de-84f1-feae2b3a1c9d" path="/var/lib/kubelet/pods/51b06b73-d14d-47de-84f1-feae2b3a1c9d/volumes" Dec 06 06:06:01 crc kubenswrapper[4958]: I1206 06:06:01.775295 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1602702-6e01-4d7b-ba0e-dc3dcc22b6ab" path="/var/lib/kubelet/pods/e1602702-6e01-4d7b-ba0e-dc3dcc22b6ab/volumes" Dec 06 06:06:06 crc kubenswrapper[4958]: I1206 06:06:06.028756 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-2scr7"] Dec 06 06:06:06 crc kubenswrapper[4958]: I1206 06:06:06.040796 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-2scr7"] Dec 06 06:06:07 crc kubenswrapper[4958]: I1206 06:06:07.776231 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="00f464ea-7983-4ab2-b2b1-07bf67c76e31" path="/var/lib/kubelet/pods/00f464ea-7983-4ab2-b2b1-07bf67c76e31/volumes" Dec 06 06:06:08 crc kubenswrapper[4958]: I1206 06:06:08.035280 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-lmfxp"] Dec 06 06:06:08 crc kubenswrapper[4958]: I1206 06:06:08.046126 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-lmfxp"] Dec 06 06:06:09 crc kubenswrapper[4958]: I1206 06:06:09.774847 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fee2c3d7-24fe-4966-878b-90147b8f5cfb" path="/var/lib/kubelet/pods/fee2c3d7-24fe-4966-878b-90147b8f5cfb/volumes" Dec 06 06:06:28 crc kubenswrapper[4958]: I1206 06:06:28.998564 4958 scope.go:117] "RemoveContainer" containerID="bac26e11f92b1a749db0c8aaeb3924a7ddc85e6689f47a46dedc5e75fd102321" Dec 06 06:06:29 crc kubenswrapper[4958]: I1206 06:06:29.029162 4958 scope.go:117] "RemoveContainer" containerID="7d2b63b1bf05453d97ce26f361e83bb04e992444900d0ab59480d44b3a6e6148" Dec 06 06:06:29 crc kubenswrapper[4958]: I1206 06:06:29.108425 4958 scope.go:117] "RemoveContainer" containerID="9e2f9575aab360650618540ed4ad74b301812a33fa9f5ed553ac84c13b344270" Dec 06 06:06:29 crc kubenswrapper[4958]: I1206 06:06:29.159803 4958 scope.go:117] "RemoveContainer" containerID="f1339a33b0b4b7caede7fbe80382f194526f94dc0db2eaf4bfc6d2f79edf19ca" Dec 06 06:06:29 crc kubenswrapper[4958]: I1206 06:06:29.203907 4958 scope.go:117] "RemoveContainer" containerID="12492a359b84a81d1d70381059e12ee74b7ca073c393f8a2dd0c93b82fed3506" Dec 06 06:06:29 crc kubenswrapper[4958]: I1206 06:06:29.252645 4958 scope.go:117] "RemoveContainer" containerID="ac3b752366f04d66b5e3823c17b5bd6eba379760acf58e23ff30951e5465504c" Dec 06 06:06:29 crc kubenswrapper[4958]: I1206 06:06:29.316899 4958 scope.go:117] "RemoveContainer" containerID="faf28f5f435a469ec7a4f687e61fd7c193ea9907cb3499c1d0d6a9849c2d47fc" Dec 06 06:06:29 crc kubenswrapper[4958]: I1206 06:06:29.338943 4958 scope.go:117] "RemoveContainer" containerID="0ee33f13d8a587dadbec3566136aeffe22163de9bdcb7be217d5121e270b9643" Dec 06 06:06:29 crc kubenswrapper[4958]: I1206 06:06:29.380401 4958 scope.go:117] "RemoveContainer" containerID="25420bf258570f8bfa0e1683097b68eb745bc3426ee0c7a15ed415e9b11b16a6" Dec 06 06:06:29 crc kubenswrapper[4958]: I1206 06:06:29.424354 4958 scope.go:117] "RemoveContainer" containerID="cddddc08f5e105cd63270d231de952de1e1f69bce1a7d233bef7a08f7d99bec4" Dec 06 06:06:49 crc kubenswrapper[4958]: I1206 06:06:49.046782 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-tw6sr"] Dec 06 06:06:49 crc kubenswrapper[4958]: I1206 06:06:49.058507 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-tw6sr"] Dec 06 06:06:49 crc kubenswrapper[4958]: I1206 06:06:49.777567 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d874f46b-0e0f-4304-8b7d-43a68d87dd5d" path="/var/lib/kubelet/pods/d874f46b-0e0f-4304-8b7d-43a68d87dd5d/volumes" Dec 06 06:06:58 crc kubenswrapper[4958]: I1206 06:06:58.866534 4958 generic.go:334] "Generic (PLEG): container finished" podID="4f84a870-b9e0-49e4-847b-71322d38a901" containerID="0fea946b1b5b41b3c564f15b2a2afc871613989f9f84edb26ea3be33acc22df4" exitCode=0 Dec 06 06:06:58 crc kubenswrapper[4958]: I1206 06:06:58.866594 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fn4ml" 
event={"ID":"4f84a870-b9e0-49e4-847b-71322d38a901","Type":"ContainerDied","Data":"0fea946b1b5b41b3c564f15b2a2afc871613989f9f84edb26ea3be33acc22df4"} Dec 06 06:07:00 crc kubenswrapper[4958]: I1206 06:07:00.292053 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fn4ml" Dec 06 06:07:00 crc kubenswrapper[4958]: I1206 06:07:00.388144 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4f84a870-b9e0-49e4-847b-71322d38a901-ssh-key\") pod \"4f84a870-b9e0-49e4-847b-71322d38a901\" (UID: \"4f84a870-b9e0-49e4-847b-71322d38a901\") " Dec 06 06:07:00 crc kubenswrapper[4958]: I1206 06:07:00.388251 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4f84a870-b9e0-49e4-847b-71322d38a901-inventory\") pod \"4f84a870-b9e0-49e4-847b-71322d38a901\" (UID: \"4f84a870-b9e0-49e4-847b-71322d38a901\") " Dec 06 06:07:00 crc kubenswrapper[4958]: I1206 06:07:00.388343 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftxpp\" (UniqueName: \"kubernetes.io/projected/4f84a870-b9e0-49e4-847b-71322d38a901-kube-api-access-ftxpp\") pod \"4f84a870-b9e0-49e4-847b-71322d38a901\" (UID: \"4f84a870-b9e0-49e4-847b-71322d38a901\") " Dec 06 06:07:00 crc kubenswrapper[4958]: I1206 06:07:00.397312 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f84a870-b9e0-49e4-847b-71322d38a901-kube-api-access-ftxpp" (OuterVolumeSpecName: "kube-api-access-ftxpp") pod "4f84a870-b9e0-49e4-847b-71322d38a901" (UID: "4f84a870-b9e0-49e4-847b-71322d38a901"). InnerVolumeSpecName "kube-api-access-ftxpp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:07:00 crc kubenswrapper[4958]: I1206 06:07:00.422268 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f84a870-b9e0-49e4-847b-71322d38a901-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "4f84a870-b9e0-49e4-847b-71322d38a901" (UID: "4f84a870-b9e0-49e4-847b-71322d38a901"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:07:00 crc kubenswrapper[4958]: I1206 06:07:00.422296 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f84a870-b9e0-49e4-847b-71322d38a901-inventory" (OuterVolumeSpecName: "inventory") pod "4f84a870-b9e0-49e4-847b-71322d38a901" (UID: "4f84a870-b9e0-49e4-847b-71322d38a901"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:07:00 crc kubenswrapper[4958]: I1206 06:07:00.490232 4958 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4f84a870-b9e0-49e4-847b-71322d38a901-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 06 06:07:00 crc kubenswrapper[4958]: I1206 06:07:00.490265 4958 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4f84a870-b9e0-49e4-847b-71322d38a901-inventory\") on node \"crc\" DevicePath \"\"" Dec 06 06:07:00 crc kubenswrapper[4958]: I1206 06:07:00.490275 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ftxpp\" (UniqueName: \"kubernetes.io/projected/4f84a870-b9e0-49e4-847b-71322d38a901-kube-api-access-ftxpp\") on node \"crc\" DevicePath \"\"" Dec 06 06:07:00 crc kubenswrapper[4958]: I1206 06:07:00.887363 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fn4ml" event={"ID":"4f84a870-b9e0-49e4-847b-71322d38a901","Type":"ContainerDied","Data":"c3edf76cf007c484b70e532626abe354da6f339578b82694dd098536ed0dea19"} Dec 06 06:07:00 crc kubenswrapper[4958]: I1206 06:07:00.887413 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c3edf76cf007c484b70e532626abe354da6f339578b82694dd098536ed0dea19" Dec 06 06:07:00 crc kubenswrapper[4958]: I1206 06:07:00.887410 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-fn4ml" Dec 06 06:07:00 crc kubenswrapper[4958]: I1206 06:07:00.976369 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gg4qv"] Dec 06 06:07:00 crc kubenswrapper[4958]: E1206 06:07:00.977075 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b893d196-ea0d-4c1c-83de-1346edcba4e8" containerName="extract-content" Dec 06 06:07:00 crc kubenswrapper[4958]: I1206 06:07:00.977094 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="b893d196-ea0d-4c1c-83de-1346edcba4e8" containerName="extract-content" Dec 06 06:07:00 crc kubenswrapper[4958]: E1206 06:07:00.977113 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f84a870-b9e0-49e4-847b-71322d38a901" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 06 06:07:00 crc kubenswrapper[4958]: I1206 06:07:00.977121 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f84a870-b9e0-49e4-847b-71322d38a901" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 06 06:07:00 crc kubenswrapper[4958]: E1206 06:07:00.977133 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95c0dfa7-e4ce-4884-a265-3ed2eb1a358b" containerName="extract-utilities" Dec 06 06:07:00 crc kubenswrapper[4958]: I1206 06:07:00.977139 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="95c0dfa7-e4ce-4884-a265-3ed2eb1a358b" containerName="extract-utilities" Dec 06 06:07:00 crc kubenswrapper[4958]: E1206 06:07:00.977150 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b893d196-ea0d-4c1c-83de-1346edcba4e8" containerName="registry-server" Dec 06 06:07:00 crc kubenswrapper[4958]: I1206 06:07:00.977158 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="b893d196-ea0d-4c1c-83de-1346edcba4e8" containerName="registry-server" Dec 06 06:07:00 crc kubenswrapper[4958]: E1206 06:07:00.977171 4958 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b893d196-ea0d-4c1c-83de-1346edcba4e8" containerName="extract-utilities" Dec 06 06:07:00 crc kubenswrapper[4958]: I1206 06:07:00.977177 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="b893d196-ea0d-4c1c-83de-1346edcba4e8" containerName="extract-utilities" Dec 06 06:07:00 crc kubenswrapper[4958]: E1206 06:07:00.977186 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95c0dfa7-e4ce-4884-a265-3ed2eb1a358b" containerName="registry-server" Dec 06 06:07:00 crc kubenswrapper[4958]: I1206 06:07:00.977192 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="95c0dfa7-e4ce-4884-a265-3ed2eb1a358b" containerName="registry-server" Dec 06 06:07:00 crc kubenswrapper[4958]: E1206 06:07:00.977219 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95c0dfa7-e4ce-4884-a265-3ed2eb1a358b" containerName="extract-content" Dec 06 06:07:00 crc kubenswrapper[4958]: I1206 06:07:00.977226 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="95c0dfa7-e4ce-4884-a265-3ed2eb1a358b" containerName="extract-content" Dec 06 06:07:00 crc kubenswrapper[4958]: I1206 06:07:00.977387 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="95c0dfa7-e4ce-4884-a265-3ed2eb1a358b" containerName="registry-server" Dec 06 06:07:00 crc kubenswrapper[4958]: I1206 06:07:00.977407 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f84a870-b9e0-49e4-847b-71322d38a901" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Dec 06 06:07:00 crc kubenswrapper[4958]: I1206 06:07:00.977416 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="b893d196-ea0d-4c1c-83de-1346edcba4e8" containerName="registry-server" Dec 06 06:07:00 crc kubenswrapper[4958]: I1206 06:07:00.978197 4958 util.go:30] "No sandbox for pod can be found. 
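
[Editor's note] The paired `cpu_manager` "RemoveStaleState: removing container" / `state_mem` "Deleted CPUSet assignment" / `memory_manager` lines above show the resource managers dropping per-container state for pods that no longer exist before admitting the newly ADDed pod. A minimal sketch of that bookkeeping, with hypothetical types standing in for the managers' checkpointed state:

```go
package main

import "fmt"

type key struct{ podUID, container string }

// stateStore stands in for the cpu/memory manager's checkpointed state:
// (podUID, containerName) -> assignment.
type stateStore map[key]string

// removeStaleState deletes every assignment whose pod is not in activePods,
// echoing the "RemoveStaleState removing state" log lines.
func (s stateStore) removeStaleState(activePods map[string]bool) {
	for k := range s {
		if !activePods[k.podUID] {
			fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n",
				k.podUID, k.container)
			delete(s, k)
		}
	}
}

func main() {
	s := stateStore{
		{"b893d196-ea0d-4c1c-83de-1346edcba4e8", "registry-server"}:                                        "0-3",
		{"4bea02da-8c2c-4d14-88cc-7228998df134", "configure-network-edpm-deployment-openstack-edpm-ipam"}: "0-3",
	}
	s.removeStaleState(map[string]bool{"4bea02da-8c2c-4d14-88cc-7228998df134": true})
}
```
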
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gg4qv" Dec 06 06:07:00 crc kubenswrapper[4958]: I1206 06:07:00.980343 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 06 06:07:00 crc kubenswrapper[4958]: I1206 06:07:00.980598 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 06 06:07:00 crc kubenswrapper[4958]: I1206 06:07:00.980863 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 06 06:07:00 crc kubenswrapper[4958]: I1206 06:07:00.981624 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dqr5b" Dec 06 06:07:00 crc kubenswrapper[4958]: I1206 06:07:00.991079 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gg4qv"] Dec 06 06:07:01 crc kubenswrapper[4958]: I1206 06:07:01.102735 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfb95\" (UniqueName: \"kubernetes.io/projected/4bea02da-8c2c-4d14-88cc-7228998df134-kube-api-access-pfb95\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-gg4qv\" (UID: \"4bea02da-8c2c-4d14-88cc-7228998df134\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gg4qv" Dec 06 06:07:01 crc kubenswrapper[4958]: I1206 06:07:01.102903 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4bea02da-8c2c-4d14-88cc-7228998df134-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-gg4qv\" (UID: \"4bea02da-8c2c-4d14-88cc-7228998df134\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gg4qv" Dec 06 06:07:01 crc kubenswrapper[4958]: I1206 06:07:01.102980 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4bea02da-8c2c-4d14-88cc-7228998df134-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-gg4qv\" (UID: \"4bea02da-8c2c-4d14-88cc-7228998df134\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gg4qv" Dec 06 06:07:01 crc kubenswrapper[4958]: I1206 06:07:01.205588 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfb95\" (UniqueName: \"kubernetes.io/projected/4bea02da-8c2c-4d14-88cc-7228998df134-kube-api-access-pfb95\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-gg4qv\" (UID: \"4bea02da-8c2c-4d14-88cc-7228998df134\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gg4qv" Dec 06 06:07:01 crc kubenswrapper[4958]: I1206 06:07:01.205829 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4bea02da-8c2c-4d14-88cc-7228998df134-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-gg4qv\" (UID: \"4bea02da-8c2c-4d14-88cc-7228998df134\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gg4qv" Dec 06 06:07:01 crc kubenswrapper[4958]: I1206 06:07:01.205910 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4bea02da-8c2c-4d14-88cc-7228998df134-inventory\") 
pod \"configure-network-edpm-deployment-openstack-edpm-ipam-gg4qv\" (UID: \"4bea02da-8c2c-4d14-88cc-7228998df134\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gg4qv" Dec 06 06:07:01 crc kubenswrapper[4958]: I1206 06:07:01.211938 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4bea02da-8c2c-4d14-88cc-7228998df134-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-gg4qv\" (UID: \"4bea02da-8c2c-4d14-88cc-7228998df134\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gg4qv" Dec 06 06:07:01 crc kubenswrapper[4958]: I1206 06:07:01.220151 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4bea02da-8c2c-4d14-88cc-7228998df134-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-gg4qv\" (UID: \"4bea02da-8c2c-4d14-88cc-7228998df134\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gg4qv" Dec 06 06:07:01 crc kubenswrapper[4958]: I1206 06:07:01.234527 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfb95\" (UniqueName: \"kubernetes.io/projected/4bea02da-8c2c-4d14-88cc-7228998df134-kube-api-access-pfb95\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-gg4qv\" (UID: \"4bea02da-8c2c-4d14-88cc-7228998df134\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gg4qv" Dec 06 06:07:01 crc kubenswrapper[4958]: I1206 06:07:01.303420 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gg4qv" Dec 06 06:07:01 crc kubenswrapper[4958]: I1206 06:07:01.832583 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gg4qv"] Dec 06 06:07:01 crc kubenswrapper[4958]: W1206 06:07:01.835305 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4bea02da_8c2c_4d14_88cc_7228998df134.slice/crio-2ea031cef96a3b72b066a1ac6635b838f63b564552e3a2d80a3f5796b8a24337 WatchSource:0}: Error finding container 2ea031cef96a3b72b066a1ac6635b838f63b564552e3a2d80a3f5796b8a24337: Status 404 returned error can't find the container with id 2ea031cef96a3b72b066a1ac6635b838f63b564552e3a2d80a3f5796b8a24337 Dec 06 06:07:01 crc kubenswrapper[4958]: I1206 06:07:01.895955 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gg4qv" event={"ID":"4bea02da-8c2c-4d14-88cc-7228998df134","Type":"ContainerStarted","Data":"2ea031cef96a3b72b066a1ac6635b838f63b564552e3a2d80a3f5796b8a24337"} Dec 06 06:07:02 crc kubenswrapper[4958]: I1206 06:07:02.906887 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gg4qv" event={"ID":"4bea02da-8c2c-4d14-88cc-7228998df134","Type":"ContainerStarted","Data":"17c6c318fbb7c66dccf6194f88cb14bca6aaf4bfaecfd4ca64b1bf78f8bdfce7"} Dec 06 06:07:02 crc kubenswrapper[4958]: I1206 06:07:02.927253 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gg4qv" podStartSLOduration=2.415379077 podStartE2EDuration="2.927228933s" podCreationTimestamp="2025-12-06 06:07:00 +0000 UTC" firstStartedPulling="2025-12-06 06:07:01.841605907 +0000 UTC 
m=+2332.375376670" lastFinishedPulling="2025-12-06 06:07:02.353455763 +0000 UTC m=+2332.887226526" observedRunningTime="2025-12-06 06:07:02.921489969 +0000 UTC m=+2333.455260742" watchObservedRunningTime="2025-12-06 06:07:02.927228933 +0000 UTC m=+2333.460999696" Dec 06 06:07:15 crc kubenswrapper[4958]: I1206 06:07:15.039347 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-8wfh5"] Dec 06 06:07:15 crc kubenswrapper[4958]: I1206 06:07:15.051824 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-8wfh5"] Dec 06 06:07:15 crc kubenswrapper[4958]: I1206 06:07:15.774791 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="35bc6ed4-e3c1-4b61-b522-3029d77819a9" path="/var/lib/kubelet/pods/35bc6ed4-e3c1-4b61-b522-3029d77819a9/volumes" Dec 06 06:07:19 crc kubenswrapper[4958]: I1206 06:07:19.040857 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-w6tsn"] Dec 06 06:07:19 crc kubenswrapper[4958]: I1206 06:07:19.051097 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-w6tsn"] Dec 06 06:07:19 crc kubenswrapper[4958]: I1206 06:07:19.774706 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0799b23-7b1b-4c6e-a99a-da256942169c" path="/var/lib/kubelet/pods/b0799b23-7b1b-4c6e-a99a-da256942169c/volumes" Dec 06 06:07:29 crc kubenswrapper[4958]: I1206 06:07:29.660905 4958 scope.go:117] "RemoveContainer" containerID="3b96a2d2d369a5a260a8c43cd4fa12019837af83e7d1a3b1c4faee246b559751" Dec 06 06:07:29 crc kubenswrapper[4958]: I1206 06:07:29.715057 4958 scope.go:117] "RemoveContainer" containerID="e5ad34d9784c2aa5962bb866d7483252462c0ee0b5993c203c97b6587e79b2b2" Dec 06 06:07:29 crc kubenswrapper[4958]: I1206 06:07:29.774448 4958 scope.go:117] "RemoveContainer" containerID="26d5a7814d44a883dcb479f1d33f7413a1cb6ffc6d9050d139f049035eec8ee5" Dec 06 06:07:39 crc kubenswrapper[4958]: I1206 06:07:39.866494 4958 patch_prober.go:28] interesting pod/machine-config-daemon-5ktnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 06:07:39 crc kubenswrapper[4958]: I1206 06:07:39.867744 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 06:07:59 crc kubenswrapper[4958]: I1206 06:07:59.039595 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-fdxd4"] Dec 06 06:07:59 crc kubenswrapper[4958]: I1206 06:07:59.050968 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-fdxd4"] Dec 06 06:07:59 crc kubenswrapper[4958]: I1206 06:07:59.774011 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="379f6203-0ef8-471d-ba4a-d5ce940c313b" path="/var/lib/kubelet/pods/379f6203-0ef8-471d-ba4a-d5ce940c313b/volumes" Dec 06 06:08:09 crc kubenswrapper[4958]: I1206 06:08:09.866439 4958 patch_prober.go:28] interesting pod/machine-config-daemon-5ktnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 06:08:09 crc kubenswrapper[4958]: I1206 06:08:09.867141 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 06:08:20 crc kubenswrapper[4958]: I1206 06:08:20.636440 4958 generic.go:334] "Generic (PLEG): container finished" podID="4bea02da-8c2c-4d14-88cc-7228998df134" containerID="17c6c318fbb7c66dccf6194f88cb14bca6aaf4bfaecfd4ca64b1bf78f8bdfce7" exitCode=0 Dec 06 06:08:20 crc kubenswrapper[4958]: I1206 06:08:20.636531 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gg4qv" event={"ID":"4bea02da-8c2c-4d14-88cc-7228998df134","Type":"ContainerDied","Data":"17c6c318fbb7c66dccf6194f88cb14bca6aaf4bfaecfd4ca64b1bf78f8bdfce7"} Dec 06 06:08:22 crc kubenswrapper[4958]: I1206 06:08:22.088454 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gg4qv" Dec 06 06:08:22 crc kubenswrapper[4958]: I1206 06:08:22.211676 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4bea02da-8c2c-4d14-88cc-7228998df134-ssh-key\") pod \"4bea02da-8c2c-4d14-88cc-7228998df134\" (UID: \"4bea02da-8c2c-4d14-88cc-7228998df134\") " Dec 06 06:08:22 crc kubenswrapper[4958]: I1206 06:08:22.211781 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pfb95\" (UniqueName: \"kubernetes.io/projected/4bea02da-8c2c-4d14-88cc-7228998df134-kube-api-access-pfb95\") pod \"4bea02da-8c2c-4d14-88cc-7228998df134\" (UID: \"4bea02da-8c2c-4d14-88cc-7228998df134\") " Dec 06 06:08:22 crc kubenswrapper[4958]: I1206 06:08:22.211843 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4bea02da-8c2c-4d14-88cc-7228998df134-inventory\") pod \"4bea02da-8c2c-4d14-88cc-7228998df134\" (UID: \"4bea02da-8c2c-4d14-88cc-7228998df134\") " Dec 06 06:08:22 crc kubenswrapper[4958]: I1206 06:08:22.220132 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bea02da-8c2c-4d14-88cc-7228998df134-kube-api-access-pfb95" (OuterVolumeSpecName: "kube-api-access-pfb95") pod "4bea02da-8c2c-4d14-88cc-7228998df134" (UID: "4bea02da-8c2c-4d14-88cc-7228998df134"). InnerVolumeSpecName "kube-api-access-pfb95". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:08:22 crc kubenswrapper[4958]: I1206 06:08:22.246015 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bea02da-8c2c-4d14-88cc-7228998df134-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "4bea02da-8c2c-4d14-88cc-7228998df134" (UID: "4bea02da-8c2c-4d14-88cc-7228998df134"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:08:22 crc kubenswrapper[4958]: I1206 06:08:22.251095 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bea02da-8c2c-4d14-88cc-7228998df134-inventory" (OuterVolumeSpecName: "inventory") pod "4bea02da-8c2c-4d14-88cc-7228998df134" (UID: "4bea02da-8c2c-4d14-88cc-7228998df134"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:08:22 crc kubenswrapper[4958]: I1206 06:08:22.314360 4958 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4bea02da-8c2c-4d14-88cc-7228998df134-inventory\") on node \"crc\" DevicePath \"\"" Dec 06 06:08:22 crc kubenswrapper[4958]: I1206 06:08:22.314391 4958 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/4bea02da-8c2c-4d14-88cc-7228998df134-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 06 06:08:22 crc kubenswrapper[4958]: I1206 06:08:22.314401 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pfb95\" (UniqueName: \"kubernetes.io/projected/4bea02da-8c2c-4d14-88cc-7228998df134-kube-api-access-pfb95\") on node \"crc\" DevicePath \"\"" Dec 06 06:08:22 crc kubenswrapper[4958]: I1206 06:08:22.656523 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gg4qv" event={"ID":"4bea02da-8c2c-4d14-88cc-7228998df134","Type":"ContainerDied","Data":"2ea031cef96a3b72b066a1ac6635b838f63b564552e3a2d80a3f5796b8a24337"} Dec 06 06:08:22 crc kubenswrapper[4958]: I1206 06:08:22.656583 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-gg4qv" Dec 06 06:08:22 crc kubenswrapper[4958]: I1206 06:08:22.656589 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ea031cef96a3b72b066a1ac6635b838f63b564552e3a2d80a3f5796b8a24337" Dec 06 06:08:22 crc kubenswrapper[4958]: I1206 06:08:22.752386 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mrdlb"] Dec 06 06:08:22 crc kubenswrapper[4958]: E1206 06:08:22.752895 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bea02da-8c2c-4d14-88cc-7228998df134" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Dec 06 06:08:22 crc kubenswrapper[4958]: I1206 06:08:22.752920 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bea02da-8c2c-4d14-88cc-7228998df134" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Dec 06 06:08:22 crc kubenswrapper[4958]: I1206 06:08:22.753152 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bea02da-8c2c-4d14-88cc-7228998df134" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Dec 06 06:08:22 crc kubenswrapper[4958]: I1206 06:08:22.753985 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mrdlb" Dec 06 06:08:22 crc kubenswrapper[4958]: I1206 06:08:22.756325 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 06 06:08:22 crc kubenswrapper[4958]: I1206 06:08:22.756614 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 06 06:08:22 crc kubenswrapper[4958]: I1206 06:08:22.756795 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 06 06:08:22 crc kubenswrapper[4958]: I1206 06:08:22.756922 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dqr5b" Dec 06 06:08:22 crc kubenswrapper[4958]: I1206 06:08:22.764923 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mrdlb"] Dec 06 06:08:22 crc kubenswrapper[4958]: I1206 06:08:22.925779 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fj88z\" (UniqueName: \"kubernetes.io/projected/99265c09-ff81-45cd-ae5e-501f1b7bfe69-kube-api-access-fj88z\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-mrdlb\" (UID: \"99265c09-ff81-45cd-ae5e-501f1b7bfe69\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mrdlb" Dec 06 06:08:22 crc kubenswrapper[4958]: I1206 06:08:22.925922 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/99265c09-ff81-45cd-ae5e-501f1b7bfe69-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-mrdlb\" (UID: \"99265c09-ff81-45cd-ae5e-501f1b7bfe69\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mrdlb" Dec 06 06:08:22 crc kubenswrapper[4958]: I1206 06:08:22.925978 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/99265c09-ff81-45cd-ae5e-501f1b7bfe69-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-mrdlb\" (UID: \"99265c09-ff81-45cd-ae5e-501f1b7bfe69\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mrdlb" Dec 06 06:08:23 crc kubenswrapper[4958]: I1206 06:08:23.027895 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fj88z\" (UniqueName: \"kubernetes.io/projected/99265c09-ff81-45cd-ae5e-501f1b7bfe69-kube-api-access-fj88z\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-mrdlb\" (UID: \"99265c09-ff81-45cd-ae5e-501f1b7bfe69\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mrdlb" Dec 06 06:08:23 crc kubenswrapper[4958]: I1206 06:08:23.028214 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/99265c09-ff81-45cd-ae5e-501f1b7bfe69-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-mrdlb\" (UID: \"99265c09-ff81-45cd-ae5e-501f1b7bfe69\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mrdlb" Dec 06 06:08:23 crc kubenswrapper[4958]: I1206 06:08:23.028273 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/99265c09-ff81-45cd-ae5e-501f1b7bfe69-ssh-key\") pod 
\"validate-network-edpm-deployment-openstack-edpm-ipam-mrdlb\" (UID: \"99265c09-ff81-45cd-ae5e-501f1b7bfe69\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mrdlb" Dec 06 06:08:23 crc kubenswrapper[4958]: I1206 06:08:23.034038 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/99265c09-ff81-45cd-ae5e-501f1b7bfe69-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-mrdlb\" (UID: \"99265c09-ff81-45cd-ae5e-501f1b7bfe69\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mrdlb" Dec 06 06:08:23 crc kubenswrapper[4958]: I1206 06:08:23.035605 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/99265c09-ff81-45cd-ae5e-501f1b7bfe69-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-mrdlb\" (UID: \"99265c09-ff81-45cd-ae5e-501f1b7bfe69\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mrdlb" Dec 06 06:08:23 crc kubenswrapper[4958]: I1206 06:08:23.049638 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fj88z\" (UniqueName: \"kubernetes.io/projected/99265c09-ff81-45cd-ae5e-501f1b7bfe69-kube-api-access-fj88z\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-mrdlb\" (UID: \"99265c09-ff81-45cd-ae5e-501f1b7bfe69\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mrdlb" Dec 06 06:08:23 crc kubenswrapper[4958]: I1206 06:08:23.077569 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mrdlb" Dec 06 06:08:23 crc kubenswrapper[4958]: I1206 06:08:23.618533 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mrdlb"] Dec 06 06:08:23 crc kubenswrapper[4958]: I1206 06:08:23.626201 4958 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 06 06:08:23 crc kubenswrapper[4958]: I1206 06:08:23.670936 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mrdlb" event={"ID":"99265c09-ff81-45cd-ae5e-501f1b7bfe69","Type":"ContainerStarted","Data":"076b96d9174d4a54a70b36e869b18e6a8036da10e815f9de3b609727737c20f2"} Dec 06 06:08:25 crc kubenswrapper[4958]: I1206 06:08:25.711452 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mrdlb" event={"ID":"99265c09-ff81-45cd-ae5e-501f1b7bfe69","Type":"ContainerStarted","Data":"4f9a09a08f98e85590cfff0f719b1e75469fb4cd4b11d1a3b3876058ef99fb05"} Dec 06 06:08:25 crc kubenswrapper[4958]: I1206 06:08:25.733347 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mrdlb" podStartSLOduration=2.502431125 podStartE2EDuration="3.733326688s" podCreationTimestamp="2025-12-06 06:08:22 +0000 UTC" firstStartedPulling="2025-12-06 06:08:23.625962483 +0000 UTC m=+2414.159733246" lastFinishedPulling="2025-12-06 06:08:24.856858046 +0000 UTC m=+2415.390628809" observedRunningTime="2025-12-06 06:08:25.729729012 +0000 UTC m=+2416.263499785" watchObservedRunningTime="2025-12-06 06:08:25.733326688 +0000 UTC m=+2416.267097451" Dec 06 06:08:29 crc kubenswrapper[4958]: I1206 06:08:29.873215 4958 scope.go:117] "RemoveContainer" 
containerID="b10fce3af1ec85dcbfb986a57988eee4168485600cd3036d6031c29e56a6c190" Dec 06 06:08:35 crc kubenswrapper[4958]: I1206 06:08:35.801213 4958 generic.go:334] "Generic (PLEG): container finished" podID="99265c09-ff81-45cd-ae5e-501f1b7bfe69" containerID="4f9a09a08f98e85590cfff0f719b1e75469fb4cd4b11d1a3b3876058ef99fb05" exitCode=0 Dec 06 06:08:35 crc kubenswrapper[4958]: I1206 06:08:35.801308 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mrdlb" event={"ID":"99265c09-ff81-45cd-ae5e-501f1b7bfe69","Type":"ContainerDied","Data":"4f9a09a08f98e85590cfff0f719b1e75469fb4cd4b11d1a3b3876058ef99fb05"} Dec 06 06:08:37 crc kubenswrapper[4958]: I1206 06:08:37.328619 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mrdlb" Dec 06 06:08:37 crc kubenswrapper[4958]: I1206 06:08:37.436860 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fj88z\" (UniqueName: \"kubernetes.io/projected/99265c09-ff81-45cd-ae5e-501f1b7bfe69-kube-api-access-fj88z\") pod \"99265c09-ff81-45cd-ae5e-501f1b7bfe69\" (UID: \"99265c09-ff81-45cd-ae5e-501f1b7bfe69\") " Dec 06 06:08:37 crc kubenswrapper[4958]: I1206 06:08:37.437291 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/99265c09-ff81-45cd-ae5e-501f1b7bfe69-inventory\") pod \"99265c09-ff81-45cd-ae5e-501f1b7bfe69\" (UID: \"99265c09-ff81-45cd-ae5e-501f1b7bfe69\") " Dec 06 06:08:37 crc kubenswrapper[4958]: I1206 06:08:37.437503 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/99265c09-ff81-45cd-ae5e-501f1b7bfe69-ssh-key\") pod \"99265c09-ff81-45cd-ae5e-501f1b7bfe69\" (UID: \"99265c09-ff81-45cd-ae5e-501f1b7bfe69\") " Dec 06 06:08:37 crc kubenswrapper[4958]: I1206 06:08:37.443793 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99265c09-ff81-45cd-ae5e-501f1b7bfe69-kube-api-access-fj88z" (OuterVolumeSpecName: "kube-api-access-fj88z") pod "99265c09-ff81-45cd-ae5e-501f1b7bfe69" (UID: "99265c09-ff81-45cd-ae5e-501f1b7bfe69"). InnerVolumeSpecName "kube-api-access-fj88z". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:08:37 crc kubenswrapper[4958]: I1206 06:08:37.471719 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99265c09-ff81-45cd-ae5e-501f1b7bfe69-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "99265c09-ff81-45cd-ae5e-501f1b7bfe69" (UID: "99265c09-ff81-45cd-ae5e-501f1b7bfe69"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:08:37 crc kubenswrapper[4958]: I1206 06:08:37.479631 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99265c09-ff81-45cd-ae5e-501f1b7bfe69-inventory" (OuterVolumeSpecName: "inventory") pod "99265c09-ff81-45cd-ae5e-501f1b7bfe69" (UID: "99265c09-ff81-45cd-ae5e-501f1b7bfe69"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:08:37 crc kubenswrapper[4958]: I1206 06:08:37.540095 4958 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/99265c09-ff81-45cd-ae5e-501f1b7bfe69-inventory\") on node \"crc\" DevicePath \"\"" Dec 06 06:08:37 crc kubenswrapper[4958]: I1206 06:08:37.540150 4958 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/99265c09-ff81-45cd-ae5e-501f1b7bfe69-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 06 06:08:37 crc kubenswrapper[4958]: I1206 06:08:37.540169 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fj88z\" (UniqueName: \"kubernetes.io/projected/99265c09-ff81-45cd-ae5e-501f1b7bfe69-kube-api-access-fj88z\") on node \"crc\" DevicePath \"\"" Dec 06 06:08:37 crc kubenswrapper[4958]: I1206 06:08:37.819784 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mrdlb" event={"ID":"99265c09-ff81-45cd-ae5e-501f1b7bfe69","Type":"ContainerDied","Data":"076b96d9174d4a54a70b36e869b18e6a8036da10e815f9de3b609727737c20f2"} Dec 06 06:08:37 crc kubenswrapper[4958]: I1206 06:08:37.819823 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="076b96d9174d4a54a70b36e869b18e6a8036da10e815f9de3b609727737c20f2" Dec 06 06:08:37 crc kubenswrapper[4958]: I1206 06:08:37.819918 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-mrdlb" Dec 06 06:08:37 crc kubenswrapper[4958]: I1206 06:08:37.918750 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-w6fh5"] Dec 06 06:08:37 crc kubenswrapper[4958]: E1206 06:08:37.919277 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99265c09-ff81-45cd-ae5e-501f1b7bfe69" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Dec 06 06:08:37 crc kubenswrapper[4958]: I1206 06:08:37.919303 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="99265c09-ff81-45cd-ae5e-501f1b7bfe69" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Dec 06 06:08:37 crc kubenswrapper[4958]: I1206 06:08:37.919572 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="99265c09-ff81-45cd-ae5e-501f1b7bfe69" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Dec 06 06:08:37 crc kubenswrapper[4958]: I1206 06:08:37.920322 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-w6fh5" Dec 06 06:08:37 crc kubenswrapper[4958]: I1206 06:08:37.940322 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 06 06:08:37 crc kubenswrapper[4958]: I1206 06:08:37.940390 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 06 06:08:37 crc kubenswrapper[4958]: I1206 06:08:37.940611 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 06 06:08:37 crc kubenswrapper[4958]: I1206 06:08:37.940613 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dqr5b" Dec 06 06:08:37 crc kubenswrapper[4958]: I1206 06:08:37.954892 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-w6fh5"] Dec 06 06:08:38 crc kubenswrapper[4958]: I1206 06:08:38.052209 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0abbc0a3-1277-4973-819c-d474acd69ee3-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-w6fh5\" (UID: \"0abbc0a3-1277-4973-819c-d474acd69ee3\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-w6fh5" Dec 06 06:08:38 crc kubenswrapper[4958]: I1206 06:08:38.052365 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2zr9\" (UniqueName: \"kubernetes.io/projected/0abbc0a3-1277-4973-819c-d474acd69ee3-kube-api-access-q2zr9\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-w6fh5\" (UID: \"0abbc0a3-1277-4973-819c-d474acd69ee3\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-w6fh5" Dec 06 06:08:38 crc kubenswrapper[4958]: I1206 06:08:38.052408 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0abbc0a3-1277-4973-819c-d474acd69ee3-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-w6fh5\" (UID: \"0abbc0a3-1277-4973-819c-d474acd69ee3\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-w6fh5" Dec 06 06:08:38 crc kubenswrapper[4958]: I1206 06:08:38.154004 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0abbc0a3-1277-4973-819c-d474acd69ee3-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-w6fh5\" (UID: \"0abbc0a3-1277-4973-819c-d474acd69ee3\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-w6fh5" Dec 06 06:08:38 crc kubenswrapper[4958]: I1206 06:08:38.154138 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q2zr9\" (UniqueName: \"kubernetes.io/projected/0abbc0a3-1277-4973-819c-d474acd69ee3-kube-api-access-q2zr9\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-w6fh5\" (UID: \"0abbc0a3-1277-4973-819c-d474acd69ee3\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-w6fh5" Dec 06 06:08:38 crc kubenswrapper[4958]: I1206 06:08:38.154169 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0abbc0a3-1277-4973-819c-d474acd69ee3-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-w6fh5\" (UID: 
\"0abbc0a3-1277-4973-819c-d474acd69ee3\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-w6fh5" Dec 06 06:08:38 crc kubenswrapper[4958]: I1206 06:08:38.158750 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0abbc0a3-1277-4973-819c-d474acd69ee3-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-w6fh5\" (UID: \"0abbc0a3-1277-4973-819c-d474acd69ee3\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-w6fh5" Dec 06 06:08:38 crc kubenswrapper[4958]: I1206 06:08:38.158890 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0abbc0a3-1277-4973-819c-d474acd69ee3-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-w6fh5\" (UID: \"0abbc0a3-1277-4973-819c-d474acd69ee3\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-w6fh5" Dec 06 06:08:38 crc kubenswrapper[4958]: I1206 06:08:38.173443 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q2zr9\" (UniqueName: \"kubernetes.io/projected/0abbc0a3-1277-4973-819c-d474acd69ee3-kube-api-access-q2zr9\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-w6fh5\" (UID: \"0abbc0a3-1277-4973-819c-d474acd69ee3\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-w6fh5" Dec 06 06:08:38 crc kubenswrapper[4958]: I1206 06:08:38.259760 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-w6fh5" Dec 06 06:08:38 crc kubenswrapper[4958]: I1206 06:08:38.794238 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-w6fh5"] Dec 06 06:08:38 crc kubenswrapper[4958]: W1206 06:08:38.796865 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0abbc0a3_1277_4973_819c_d474acd69ee3.slice/crio-d030ba9dbf24beda3f17f4051709f41da2d12cb4d8f1ba16e446f7cefe36823f WatchSource:0}: Error finding container d030ba9dbf24beda3f17f4051709f41da2d12cb4d8f1ba16e446f7cefe36823f: Status 404 returned error can't find the container with id d030ba9dbf24beda3f17f4051709f41da2d12cb4d8f1ba16e446f7cefe36823f Dec 06 06:08:38 crc kubenswrapper[4958]: I1206 06:08:38.829981 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-w6fh5" event={"ID":"0abbc0a3-1277-4973-819c-d474acd69ee3","Type":"ContainerStarted","Data":"d030ba9dbf24beda3f17f4051709f41da2d12cb4d8f1ba16e446f7cefe36823f"} Dec 06 06:08:39 crc kubenswrapper[4958]: I1206 06:08:39.866456 4958 patch_prober.go:28] interesting pod/machine-config-daemon-5ktnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 06:08:39 crc kubenswrapper[4958]: I1206 06:08:39.866826 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 06:08:39 crc kubenswrapper[4958]: I1206 06:08:39.866870 4958 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" Dec 06 06:08:39 crc kubenswrapper[4958]: I1206 06:08:39.867591 4958 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"78722ed8e9c7d0ba1e717d3279a2f99d9ad86a7c8ebd7a1c9aa99d270d4637d9"} pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 06 06:08:39 crc kubenswrapper[4958]: I1206 06:08:39.867638 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" containerName="machine-config-daemon" containerID="cri-o://78722ed8e9c7d0ba1e717d3279a2f99d9ad86a7c8ebd7a1c9aa99d270d4637d9" gracePeriod=600 Dec 06 06:08:40 crc kubenswrapper[4958]: E1206 06:08:40.498209 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 06:08:40 crc kubenswrapper[4958]: I1206 06:08:40.849398 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-w6fh5" event={"ID":"0abbc0a3-1277-4973-819c-d474acd69ee3","Type":"ContainerStarted","Data":"24b23896ab9f795db0202537cbe7334774c47a8c7c44ff03d22b150b2eaa4324"} Dec 06 06:08:40 crc kubenswrapper[4958]: I1206 06:08:40.852762 4958 generic.go:334] "Generic (PLEG): container finished" podID="c13528c0-da5d-4d55-9155-2c29c33edfc4" containerID="78722ed8e9c7d0ba1e717d3279a2f99d9ad86a7c8ebd7a1c9aa99d270d4637d9" exitCode=0 Dec 06 06:08:40 crc kubenswrapper[4958]: I1206 06:08:40.852942 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" event={"ID":"c13528c0-da5d-4d55-9155-2c29c33edfc4","Type":"ContainerDied","Data":"78722ed8e9c7d0ba1e717d3279a2f99d9ad86a7c8ebd7a1c9aa99d270d4637d9"} Dec 06 06:08:40 crc kubenswrapper[4958]: I1206 06:08:40.853072 4958 scope.go:117] "RemoveContainer" containerID="0b8808661711b2a47f465c2efb584cb62903970706f647cb73f1aa813708baf6" Dec 06 06:08:40 crc kubenswrapper[4958]: I1206 06:08:40.853840 4958 scope.go:117] "RemoveContainer" containerID="78722ed8e9c7d0ba1e717d3279a2f99d9ad86a7c8ebd7a1c9aa99d270d4637d9" Dec 06 06:08:40 crc kubenswrapper[4958]: E1206 06:08:40.854167 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 06:08:40 crc kubenswrapper[4958]: I1206 06:08:40.874998 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-w6fh5" podStartSLOduration=2.184287614 podStartE2EDuration="3.874979788s" podCreationTimestamp="2025-12-06 06:08:37 +0000 UTC" firstStartedPulling="2025-12-06 06:08:38.799531459 +0000 UTC 
m=+2429.333302222" lastFinishedPulling="2025-12-06 06:08:40.490223633 +0000 UTC m=+2431.023994396" observedRunningTime="2025-12-06 06:08:40.866665485 +0000 UTC m=+2431.400436248" watchObservedRunningTime="2025-12-06 06:08:40.874979788 +0000 UTC m=+2431.408750551" Dec 06 06:08:51 crc kubenswrapper[4958]: I1206 06:08:51.762295 4958 scope.go:117] "RemoveContainer" containerID="78722ed8e9c7d0ba1e717d3279a2f99d9ad86a7c8ebd7a1c9aa99d270d4637d9" Dec 06 06:08:51 crc kubenswrapper[4958]: E1206 06:08:51.763510 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 06:09:04 crc kubenswrapper[4958]: I1206 06:09:04.761729 4958 scope.go:117] "RemoveContainer" containerID="78722ed8e9c7d0ba1e717d3279a2f99d9ad86a7c8ebd7a1c9aa99d270d4637d9" Dec 06 06:09:04 crc kubenswrapper[4958]: E1206 06:09:04.764897 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 06:09:17 crc kubenswrapper[4958]: I1206 06:09:17.761544 4958 scope.go:117] "RemoveContainer" containerID="78722ed8e9c7d0ba1e717d3279a2f99d9ad86a7c8ebd7a1c9aa99d270d4637d9" Dec 06 06:09:17 crc kubenswrapper[4958]: E1206 06:09:17.762255 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 06:09:21 crc kubenswrapper[4958]: I1206 06:09:21.237936 4958 generic.go:334] "Generic (PLEG): container finished" podID="0abbc0a3-1277-4973-819c-d474acd69ee3" containerID="24b23896ab9f795db0202537cbe7334774c47a8c7c44ff03d22b150b2eaa4324" exitCode=0 Dec 06 06:09:21 crc kubenswrapper[4958]: I1206 06:09:21.238018 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-w6fh5" event={"ID":"0abbc0a3-1277-4973-819c-d474acd69ee3","Type":"ContainerDied","Data":"24b23896ab9f795db0202537cbe7334774c47a8c7c44ff03d22b150b2eaa4324"} Dec 06 06:09:22 crc kubenswrapper[4958]: I1206 06:09:22.669379 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-w6fh5" Dec 06 06:09:22 crc kubenswrapper[4958]: I1206 06:09:22.759973 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0abbc0a3-1277-4973-819c-d474acd69ee3-inventory\") pod \"0abbc0a3-1277-4973-819c-d474acd69ee3\" (UID: \"0abbc0a3-1277-4973-819c-d474acd69ee3\") " Dec 06 06:09:22 crc kubenswrapper[4958]: I1206 06:09:22.760322 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0abbc0a3-1277-4973-819c-d474acd69ee3-ssh-key\") pod \"0abbc0a3-1277-4973-819c-d474acd69ee3\" (UID: \"0abbc0a3-1277-4973-819c-d474acd69ee3\") " Dec 06 06:09:22 crc kubenswrapper[4958]: I1206 06:09:22.760399 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q2zr9\" (UniqueName: \"kubernetes.io/projected/0abbc0a3-1277-4973-819c-d474acd69ee3-kube-api-access-q2zr9\") pod \"0abbc0a3-1277-4973-819c-d474acd69ee3\" (UID: \"0abbc0a3-1277-4973-819c-d474acd69ee3\") " Dec 06 06:09:22 crc kubenswrapper[4958]: I1206 06:09:22.771320 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0abbc0a3-1277-4973-819c-d474acd69ee3-kube-api-access-q2zr9" (OuterVolumeSpecName: "kube-api-access-q2zr9") pod "0abbc0a3-1277-4973-819c-d474acd69ee3" (UID: "0abbc0a3-1277-4973-819c-d474acd69ee3"). InnerVolumeSpecName "kube-api-access-q2zr9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:09:22 crc kubenswrapper[4958]: I1206 06:09:22.804807 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0abbc0a3-1277-4973-819c-d474acd69ee3-inventory" (OuterVolumeSpecName: "inventory") pod "0abbc0a3-1277-4973-819c-d474acd69ee3" (UID: "0abbc0a3-1277-4973-819c-d474acd69ee3"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:09:22 crc kubenswrapper[4958]: I1206 06:09:22.807959 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0abbc0a3-1277-4973-819c-d474acd69ee3-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "0abbc0a3-1277-4973-819c-d474acd69ee3" (UID: "0abbc0a3-1277-4973-819c-d474acd69ee3"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:09:22 crc kubenswrapper[4958]: I1206 06:09:22.863335 4958 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0abbc0a3-1277-4973-819c-d474acd69ee3-inventory\") on node \"crc\" DevicePath \"\"" Dec 06 06:09:22 crc kubenswrapper[4958]: I1206 06:09:22.863365 4958 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0abbc0a3-1277-4973-819c-d474acd69ee3-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 06 06:09:22 crc kubenswrapper[4958]: I1206 06:09:22.863374 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q2zr9\" (UniqueName: \"kubernetes.io/projected/0abbc0a3-1277-4973-819c-d474acd69ee3-kube-api-access-q2zr9\") on node \"crc\" DevicePath \"\"" Dec 06 06:09:23 crc kubenswrapper[4958]: I1206 06:09:23.266638 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-w6fh5" event={"ID":"0abbc0a3-1277-4973-819c-d474acd69ee3","Type":"ContainerDied","Data":"d030ba9dbf24beda3f17f4051709f41da2d12cb4d8f1ba16e446f7cefe36823f"} Dec 06 06:09:23 crc kubenswrapper[4958]: I1206 06:09:23.266696 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d030ba9dbf24beda3f17f4051709f41da2d12cb4d8f1ba16e446f7cefe36823f" Dec 06 06:09:23 crc kubenswrapper[4958]: I1206 06:09:23.266800 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-w6fh5" Dec 06 06:09:23 crc kubenswrapper[4958]: I1206 06:09:23.345907 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-dv9sw"] Dec 06 06:09:23 crc kubenswrapper[4958]: E1206 06:09:23.346463 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0abbc0a3-1277-4973-819c-d474acd69ee3" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Dec 06 06:09:23 crc kubenswrapper[4958]: I1206 06:09:23.346501 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="0abbc0a3-1277-4973-819c-d474acd69ee3" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Dec 06 06:09:23 crc kubenswrapper[4958]: I1206 06:09:23.346759 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="0abbc0a3-1277-4973-819c-d474acd69ee3" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Dec 06 06:09:23 crc kubenswrapper[4958]: I1206 06:09:23.347704 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-dv9sw" Dec 06 06:09:23 crc kubenswrapper[4958]: I1206 06:09:23.350907 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 06 06:09:23 crc kubenswrapper[4958]: I1206 06:09:23.351104 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 06 06:09:23 crc kubenswrapper[4958]: I1206 06:09:23.351122 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 06 06:09:23 crc kubenswrapper[4958]: I1206 06:09:23.351156 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dqr5b" Dec 06 06:09:23 crc kubenswrapper[4958]: I1206 06:09:23.360933 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-dv9sw"] Dec 06 06:09:23 crc kubenswrapper[4958]: I1206 06:09:23.482065 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2be5e85c-0c8e-479d-bd13-97d8504f980f-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-dv9sw\" (UID: \"2be5e85c-0c8e-479d-bd13-97d8504f980f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-dv9sw" Dec 06 06:09:23 crc kubenswrapper[4958]: I1206 06:09:23.482259 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kd5jm\" (UniqueName: \"kubernetes.io/projected/2be5e85c-0c8e-479d-bd13-97d8504f980f-kube-api-access-kd5jm\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-dv9sw\" (UID: \"2be5e85c-0c8e-479d-bd13-97d8504f980f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-dv9sw" Dec 06 06:09:23 crc kubenswrapper[4958]: I1206 06:09:23.482399 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2be5e85c-0c8e-479d-bd13-97d8504f980f-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-dv9sw\" (UID: \"2be5e85c-0c8e-479d-bd13-97d8504f980f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-dv9sw" Dec 06 06:09:23 crc kubenswrapper[4958]: I1206 06:09:23.584641 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2be5e85c-0c8e-479d-bd13-97d8504f980f-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-dv9sw\" (UID: \"2be5e85c-0c8e-479d-bd13-97d8504f980f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-dv9sw" Dec 06 06:09:23 crc kubenswrapper[4958]: I1206 06:09:23.584707 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kd5jm\" (UniqueName: \"kubernetes.io/projected/2be5e85c-0c8e-479d-bd13-97d8504f980f-kube-api-access-kd5jm\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-dv9sw\" (UID: \"2be5e85c-0c8e-479d-bd13-97d8504f980f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-dv9sw" Dec 06 06:09:23 crc kubenswrapper[4958]: I1206 06:09:23.584753 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2be5e85c-0c8e-479d-bd13-97d8504f980f-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-dv9sw\" 
(UID: \"2be5e85c-0c8e-479d-bd13-97d8504f980f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-dv9sw" Dec 06 06:09:23 crc kubenswrapper[4958]: I1206 06:09:23.590087 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2be5e85c-0c8e-479d-bd13-97d8504f980f-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-dv9sw\" (UID: \"2be5e85c-0c8e-479d-bd13-97d8504f980f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-dv9sw" Dec 06 06:09:23 crc kubenswrapper[4958]: I1206 06:09:23.590331 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2be5e85c-0c8e-479d-bd13-97d8504f980f-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-dv9sw\" (UID: \"2be5e85c-0c8e-479d-bd13-97d8504f980f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-dv9sw" Dec 06 06:09:23 crc kubenswrapper[4958]: I1206 06:09:23.610435 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kd5jm\" (UniqueName: \"kubernetes.io/projected/2be5e85c-0c8e-479d-bd13-97d8504f980f-kube-api-access-kd5jm\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-dv9sw\" (UID: \"2be5e85c-0c8e-479d-bd13-97d8504f980f\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-dv9sw" Dec 06 06:09:23 crc kubenswrapper[4958]: I1206 06:09:23.665515 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-dv9sw" Dec 06 06:09:24 crc kubenswrapper[4958]: I1206 06:09:24.268878 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-dv9sw"] Dec 06 06:09:24 crc kubenswrapper[4958]: W1206 06:09:24.270593 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2be5e85c_0c8e_479d_bd13_97d8504f980f.slice/crio-f84329257c55bef290208aef5928f0998f04e01e9d5630a64f55b82389907244 WatchSource:0}: Error finding container f84329257c55bef290208aef5928f0998f04e01e9d5630a64f55b82389907244: Status 404 returned error can't find the container with id f84329257c55bef290208aef5928f0998f04e01e9d5630a64f55b82389907244 Dec 06 06:09:25 crc kubenswrapper[4958]: I1206 06:09:25.286357 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-dv9sw" event={"ID":"2be5e85c-0c8e-479d-bd13-97d8504f980f","Type":"ContainerStarted","Data":"f84329257c55bef290208aef5928f0998f04e01e9d5630a64f55b82389907244"} Dec 06 06:09:26 crc kubenswrapper[4958]: I1206 06:09:26.304294 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-dv9sw" event={"ID":"2be5e85c-0c8e-479d-bd13-97d8504f980f","Type":"ContainerStarted","Data":"31d404c7531acdc1c2d00d26805aec15babc8f04de6b970944d178c4564e10fd"} Dec 06 06:09:26 crc kubenswrapper[4958]: I1206 06:09:26.322136 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-dv9sw" podStartSLOduration=2.554657153 podStartE2EDuration="3.322118967s" podCreationTimestamp="2025-12-06 06:09:23 +0000 UTC" firstStartedPulling="2025-12-06 06:09:24.272771995 +0000 UTC m=+2474.806542758" lastFinishedPulling="2025-12-06 06:09:25.040233809 +0000 UTC m=+2475.574004572" observedRunningTime="2025-12-06 
06:09:26.319827096 +0000 UTC m=+2476.853597869" watchObservedRunningTime="2025-12-06 06:09:26.322118967 +0000 UTC m=+2476.855889730" Dec 06 06:09:28 crc kubenswrapper[4958]: I1206 06:09:28.762308 4958 scope.go:117] "RemoveContainer" containerID="78722ed8e9c7d0ba1e717d3279a2f99d9ad86a7c8ebd7a1c9aa99d270d4637d9" Dec 06 06:09:28 crc kubenswrapper[4958]: E1206 06:09:28.762911 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 06:09:40 crc kubenswrapper[4958]: I1206 06:09:40.762063 4958 scope.go:117] "RemoveContainer" containerID="78722ed8e9c7d0ba1e717d3279a2f99d9ad86a7c8ebd7a1c9aa99d270d4637d9" Dec 06 06:09:40 crc kubenswrapper[4958]: E1206 06:09:40.763076 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 06:09:52 crc kubenswrapper[4958]: I1206 06:09:52.762340 4958 scope.go:117] "RemoveContainer" containerID="78722ed8e9c7d0ba1e717d3279a2f99d9ad86a7c8ebd7a1c9aa99d270d4637d9" Dec 06 06:09:52 crc kubenswrapper[4958]: E1206 06:09:52.763104 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 06:10:07 crc kubenswrapper[4958]: I1206 06:10:07.764581 4958 scope.go:117] "RemoveContainer" containerID="78722ed8e9c7d0ba1e717d3279a2f99d9ad86a7c8ebd7a1c9aa99d270d4637d9" Dec 06 06:10:07 crc kubenswrapper[4958]: E1206 06:10:07.765286 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 06:10:16 crc kubenswrapper[4958]: I1206 06:10:16.766439 4958 generic.go:334] "Generic (PLEG): container finished" podID="2be5e85c-0c8e-479d-bd13-97d8504f980f" containerID="31d404c7531acdc1c2d00d26805aec15babc8f04de6b970944d178c4564e10fd" exitCode=0 Dec 06 06:10:16 crc kubenswrapper[4958]: I1206 06:10:16.766542 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-dv9sw" event={"ID":"2be5e85c-0c8e-479d-bd13-97d8504f980f","Type":"ContainerDied","Data":"31d404c7531acdc1c2d00d26805aec15babc8f04de6b970944d178c4564e10fd"} Dec 06 06:10:18 crc kubenswrapper[4958]: I1206 06:10:18.333001 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-dv9sw" Dec 06 06:10:18 crc kubenswrapper[4958]: I1206 06:10:18.429996 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kd5jm\" (UniqueName: \"kubernetes.io/projected/2be5e85c-0c8e-479d-bd13-97d8504f980f-kube-api-access-kd5jm\") pod \"2be5e85c-0c8e-479d-bd13-97d8504f980f\" (UID: \"2be5e85c-0c8e-479d-bd13-97d8504f980f\") " Dec 06 06:10:18 crc kubenswrapper[4958]: I1206 06:10:18.430135 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2be5e85c-0c8e-479d-bd13-97d8504f980f-inventory\") pod \"2be5e85c-0c8e-479d-bd13-97d8504f980f\" (UID: \"2be5e85c-0c8e-479d-bd13-97d8504f980f\") " Dec 06 06:10:18 crc kubenswrapper[4958]: I1206 06:10:18.430223 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2be5e85c-0c8e-479d-bd13-97d8504f980f-ssh-key\") pod \"2be5e85c-0c8e-479d-bd13-97d8504f980f\" (UID: \"2be5e85c-0c8e-479d-bd13-97d8504f980f\") " Dec 06 06:10:18 crc kubenswrapper[4958]: I1206 06:10:18.435485 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2be5e85c-0c8e-479d-bd13-97d8504f980f-kube-api-access-kd5jm" (OuterVolumeSpecName: "kube-api-access-kd5jm") pod "2be5e85c-0c8e-479d-bd13-97d8504f980f" (UID: "2be5e85c-0c8e-479d-bd13-97d8504f980f"). InnerVolumeSpecName "kube-api-access-kd5jm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:10:18 crc kubenswrapper[4958]: I1206 06:10:18.464164 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2be5e85c-0c8e-479d-bd13-97d8504f980f-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "2be5e85c-0c8e-479d-bd13-97d8504f980f" (UID: "2be5e85c-0c8e-479d-bd13-97d8504f980f"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:10:18 crc kubenswrapper[4958]: I1206 06:10:18.466680 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2be5e85c-0c8e-479d-bd13-97d8504f980f-inventory" (OuterVolumeSpecName: "inventory") pod "2be5e85c-0c8e-479d-bd13-97d8504f980f" (UID: "2be5e85c-0c8e-479d-bd13-97d8504f980f"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:10:18 crc kubenswrapper[4958]: I1206 06:10:18.532645 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kd5jm\" (UniqueName: \"kubernetes.io/projected/2be5e85c-0c8e-479d-bd13-97d8504f980f-kube-api-access-kd5jm\") on node \"crc\" DevicePath \"\"" Dec 06 06:10:18 crc kubenswrapper[4958]: I1206 06:10:18.532677 4958 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2be5e85c-0c8e-479d-bd13-97d8504f980f-inventory\") on node \"crc\" DevicePath \"\"" Dec 06 06:10:18 crc kubenswrapper[4958]: I1206 06:10:18.532685 4958 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2be5e85c-0c8e-479d-bd13-97d8504f980f-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 06 06:10:18 crc kubenswrapper[4958]: I1206 06:10:18.833450 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-dv9sw" event={"ID":"2be5e85c-0c8e-479d-bd13-97d8504f980f","Type":"ContainerDied","Data":"f84329257c55bef290208aef5928f0998f04e01e9d5630a64f55b82389907244"} Dec 06 06:10:18 crc kubenswrapper[4958]: I1206 06:10:18.833540 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-dv9sw" Dec 06 06:10:18 crc kubenswrapper[4958]: I1206 06:10:18.839616 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f84329257c55bef290208aef5928f0998f04e01e9d5630a64f55b82389907244" Dec 06 06:10:18 crc kubenswrapper[4958]: I1206 06:10:18.882705 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-pmd7p"] Dec 06 06:10:18 crc kubenswrapper[4958]: E1206 06:10:18.883448 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2be5e85c-0c8e-479d-bd13-97d8504f980f" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Dec 06 06:10:18 crc kubenswrapper[4958]: I1206 06:10:18.883503 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="2be5e85c-0c8e-479d-bd13-97d8504f980f" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Dec 06 06:10:18 crc kubenswrapper[4958]: I1206 06:10:18.883762 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="2be5e85c-0c8e-479d-bd13-97d8504f980f" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Dec 06 06:10:18 crc kubenswrapper[4958]: I1206 06:10:18.884655 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-pmd7p" Dec 06 06:10:18 crc kubenswrapper[4958]: I1206 06:10:18.891429 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 06 06:10:18 crc kubenswrapper[4958]: I1206 06:10:18.891654 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 06 06:10:18 crc kubenswrapper[4958]: I1206 06:10:18.891854 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 06 06:10:18 crc kubenswrapper[4958]: I1206 06:10:18.891960 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dqr5b" Dec 06 06:10:18 crc kubenswrapper[4958]: I1206 06:10:18.906064 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-pmd7p"] Dec 06 06:10:18 crc kubenswrapper[4958]: I1206 06:10:18.940367 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ebf03971-98ab-468e-ae6d-66f13a9ba5cc-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-pmd7p\" (UID: \"ebf03971-98ab-468e-ae6d-66f13a9ba5cc\") " pod="openstack/ssh-known-hosts-edpm-deployment-pmd7p" Dec 06 06:10:18 crc kubenswrapper[4958]: I1206 06:10:18.940439 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzb9x\" (UniqueName: \"kubernetes.io/projected/ebf03971-98ab-468e-ae6d-66f13a9ba5cc-kube-api-access-vzb9x\") pod \"ssh-known-hosts-edpm-deployment-pmd7p\" (UID: \"ebf03971-98ab-468e-ae6d-66f13a9ba5cc\") " pod="openstack/ssh-known-hosts-edpm-deployment-pmd7p" Dec 06 06:10:18 crc kubenswrapper[4958]: I1206 06:10:18.940480 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/ebf03971-98ab-468e-ae6d-66f13a9ba5cc-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-pmd7p\" (UID: \"ebf03971-98ab-468e-ae6d-66f13a9ba5cc\") " pod="openstack/ssh-known-hosts-edpm-deployment-pmd7p" Dec 06 06:10:19 crc kubenswrapper[4958]: I1206 06:10:19.041988 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/ebf03971-98ab-468e-ae6d-66f13a9ba5cc-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-pmd7p\" (UID: \"ebf03971-98ab-468e-ae6d-66f13a9ba5cc\") " pod="openstack/ssh-known-hosts-edpm-deployment-pmd7p" Dec 06 06:10:19 crc kubenswrapper[4958]: I1206 06:10:19.042148 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ebf03971-98ab-468e-ae6d-66f13a9ba5cc-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-pmd7p\" (UID: \"ebf03971-98ab-468e-ae6d-66f13a9ba5cc\") " pod="openstack/ssh-known-hosts-edpm-deployment-pmd7p" Dec 06 06:10:19 crc kubenswrapper[4958]: I1206 06:10:19.042212 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzb9x\" (UniqueName: \"kubernetes.io/projected/ebf03971-98ab-468e-ae6d-66f13a9ba5cc-kube-api-access-vzb9x\") pod \"ssh-known-hosts-edpm-deployment-pmd7p\" (UID: \"ebf03971-98ab-468e-ae6d-66f13a9ba5cc\") " pod="openstack/ssh-known-hosts-edpm-deployment-pmd7p" Dec 06 06:10:19 crc 
kubenswrapper[4958]: I1206 06:10:19.046738 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ebf03971-98ab-468e-ae6d-66f13a9ba5cc-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-pmd7p\" (UID: \"ebf03971-98ab-468e-ae6d-66f13a9ba5cc\") " pod="openstack/ssh-known-hosts-edpm-deployment-pmd7p" Dec 06 06:10:19 crc kubenswrapper[4958]: I1206 06:10:19.047373 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/ebf03971-98ab-468e-ae6d-66f13a9ba5cc-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-pmd7p\" (UID: \"ebf03971-98ab-468e-ae6d-66f13a9ba5cc\") " pod="openstack/ssh-known-hosts-edpm-deployment-pmd7p" Dec 06 06:10:19 crc kubenswrapper[4958]: I1206 06:10:19.060982 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzb9x\" (UniqueName: \"kubernetes.io/projected/ebf03971-98ab-468e-ae6d-66f13a9ba5cc-kube-api-access-vzb9x\") pod \"ssh-known-hosts-edpm-deployment-pmd7p\" (UID: \"ebf03971-98ab-468e-ae6d-66f13a9ba5cc\") " pod="openstack/ssh-known-hosts-edpm-deployment-pmd7p" Dec 06 06:10:19 crc kubenswrapper[4958]: I1206 06:10:19.222550 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-pmd7p" Dec 06 06:10:19 crc kubenswrapper[4958]: I1206 06:10:19.731893 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-pmd7p"] Dec 06 06:10:19 crc kubenswrapper[4958]: W1206 06:10:19.734998 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podebf03971_98ab_468e_ae6d_66f13a9ba5cc.slice/crio-ab412af6127752b268367b4497eda027517a2e04b5e4bdb0ce97d5a0604dcdaf WatchSource:0}: Error finding container ab412af6127752b268367b4497eda027517a2e04b5e4bdb0ce97d5a0604dcdaf: Status 404 returned error can't find the container with id ab412af6127752b268367b4497eda027517a2e04b5e4bdb0ce97d5a0604dcdaf Dec 06 06:10:19 crc kubenswrapper[4958]: I1206 06:10:19.841082 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-pmd7p" event={"ID":"ebf03971-98ab-468e-ae6d-66f13a9ba5cc","Type":"ContainerStarted","Data":"ab412af6127752b268367b4497eda027517a2e04b5e4bdb0ce97d5a0604dcdaf"} Dec 06 06:10:20 crc kubenswrapper[4958]: I1206 06:10:20.762407 4958 scope.go:117] "RemoveContainer" containerID="78722ed8e9c7d0ba1e717d3279a2f99d9ad86a7c8ebd7a1c9aa99d270d4637d9" Dec 06 06:10:20 crc kubenswrapper[4958]: E1206 06:10:20.763026 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 06:10:20 crc kubenswrapper[4958]: I1206 06:10:20.850695 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-pmd7p" event={"ID":"ebf03971-98ab-468e-ae6d-66f13a9ba5cc","Type":"ContainerStarted","Data":"08dff823e5e452e8f1dadf98c9f0bf3649412d2d678aff0f9b0fd986f6eff4d1"} Dec 06 06:10:20 crc kubenswrapper[4958]: I1206 06:10:20.869907 4958 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-pmd7p" podStartSLOduration=2.296113201 podStartE2EDuration="2.86988894s" podCreationTimestamp="2025-12-06 06:10:18 +0000 UTC" firstStartedPulling="2025-12-06 06:10:19.737050587 +0000 UTC m=+2530.270821350" lastFinishedPulling="2025-12-06 06:10:20.310826306 +0000 UTC m=+2530.844597089" observedRunningTime="2025-12-06 06:10:20.864592777 +0000 UTC m=+2531.398363550" watchObservedRunningTime="2025-12-06 06:10:20.86988894 +0000 UTC m=+2531.403659703" Dec 06 06:10:27 crc kubenswrapper[4958]: I1206 06:10:27.922261 4958 generic.go:334] "Generic (PLEG): container finished" podID="ebf03971-98ab-468e-ae6d-66f13a9ba5cc" containerID="08dff823e5e452e8f1dadf98c9f0bf3649412d2d678aff0f9b0fd986f6eff4d1" exitCode=0 Dec 06 06:10:27 crc kubenswrapper[4958]: I1206 06:10:27.922359 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-pmd7p" event={"ID":"ebf03971-98ab-468e-ae6d-66f13a9ba5cc","Type":"ContainerDied","Data":"08dff823e5e452e8f1dadf98c9f0bf3649412d2d678aff0f9b0fd986f6eff4d1"} Dec 06 06:10:29 crc kubenswrapper[4958]: I1206 06:10:29.388320 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-pmd7p" Dec 06 06:10:29 crc kubenswrapper[4958]: I1206 06:10:29.435686 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/ebf03971-98ab-468e-ae6d-66f13a9ba5cc-inventory-0\") pod \"ebf03971-98ab-468e-ae6d-66f13a9ba5cc\" (UID: \"ebf03971-98ab-468e-ae6d-66f13a9ba5cc\") " Dec 06 06:10:29 crc kubenswrapper[4958]: I1206 06:10:29.435754 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ebf03971-98ab-468e-ae6d-66f13a9ba5cc-ssh-key-openstack-edpm-ipam\") pod \"ebf03971-98ab-468e-ae6d-66f13a9ba5cc\" (UID: \"ebf03971-98ab-468e-ae6d-66f13a9ba5cc\") " Dec 06 06:10:29 crc kubenswrapper[4958]: I1206 06:10:29.435869 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vzb9x\" (UniqueName: \"kubernetes.io/projected/ebf03971-98ab-468e-ae6d-66f13a9ba5cc-kube-api-access-vzb9x\") pod \"ebf03971-98ab-468e-ae6d-66f13a9ba5cc\" (UID: \"ebf03971-98ab-468e-ae6d-66f13a9ba5cc\") " Dec 06 06:10:29 crc kubenswrapper[4958]: I1206 06:10:29.441761 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebf03971-98ab-468e-ae6d-66f13a9ba5cc-kube-api-access-vzb9x" (OuterVolumeSpecName: "kube-api-access-vzb9x") pod "ebf03971-98ab-468e-ae6d-66f13a9ba5cc" (UID: "ebf03971-98ab-468e-ae6d-66f13a9ba5cc"). InnerVolumeSpecName "kube-api-access-vzb9x". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:10:29 crc kubenswrapper[4958]: I1206 06:10:29.466361 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebf03971-98ab-468e-ae6d-66f13a9ba5cc-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ebf03971-98ab-468e-ae6d-66f13a9ba5cc" (UID: "ebf03971-98ab-468e-ae6d-66f13a9ba5cc"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:10:29 crc kubenswrapper[4958]: I1206 06:10:29.488686 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ebf03971-98ab-468e-ae6d-66f13a9ba5cc-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "ebf03971-98ab-468e-ae6d-66f13a9ba5cc" (UID: "ebf03971-98ab-468e-ae6d-66f13a9ba5cc"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:10:29 crc kubenswrapper[4958]: I1206 06:10:29.538049 4958 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/ebf03971-98ab-468e-ae6d-66f13a9ba5cc-inventory-0\") on node \"crc\" DevicePath \"\"" Dec 06 06:10:29 crc kubenswrapper[4958]: I1206 06:10:29.538080 4958 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ebf03971-98ab-468e-ae6d-66f13a9ba5cc-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Dec 06 06:10:29 crc kubenswrapper[4958]: I1206 06:10:29.538092 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vzb9x\" (UniqueName: \"kubernetes.io/projected/ebf03971-98ab-468e-ae6d-66f13a9ba5cc-kube-api-access-vzb9x\") on node \"crc\" DevicePath \"\"" Dec 06 06:10:29 crc kubenswrapper[4958]: I1206 06:10:29.943988 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-pmd7p" event={"ID":"ebf03971-98ab-468e-ae6d-66f13a9ba5cc","Type":"ContainerDied","Data":"ab412af6127752b268367b4497eda027517a2e04b5e4bdb0ce97d5a0604dcdaf"} Dec 06 06:10:29 crc kubenswrapper[4958]: I1206 06:10:29.944025 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ab412af6127752b268367b4497eda027517a2e04b5e4bdb0ce97d5a0604dcdaf" Dec 06 06:10:29 crc kubenswrapper[4958]: I1206 06:10:29.944030 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-pmd7p" Dec 06 06:10:30 crc kubenswrapper[4958]: I1206 06:10:30.035081 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-q2z48"] Dec 06 06:10:30 crc kubenswrapper[4958]: E1206 06:10:30.035760 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ebf03971-98ab-468e-ae6d-66f13a9ba5cc" containerName="ssh-known-hosts-edpm-deployment" Dec 06 06:10:30 crc kubenswrapper[4958]: I1206 06:10:30.035795 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="ebf03971-98ab-468e-ae6d-66f13a9ba5cc" containerName="ssh-known-hosts-edpm-deployment" Dec 06 06:10:30 crc kubenswrapper[4958]: I1206 06:10:30.036218 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="ebf03971-98ab-468e-ae6d-66f13a9ba5cc" containerName="ssh-known-hosts-edpm-deployment" Dec 06 06:10:30 crc kubenswrapper[4958]: I1206 06:10:30.037297 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-q2z48" Dec 06 06:10:30 crc kubenswrapper[4958]: I1206 06:10:30.039161 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dqr5b" Dec 06 06:10:30 crc kubenswrapper[4958]: I1206 06:10:30.039340 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 06 06:10:30 crc kubenswrapper[4958]: I1206 06:10:30.039529 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 06 06:10:30 crc kubenswrapper[4958]: I1206 06:10:30.039568 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 06 06:10:30 crc kubenswrapper[4958]: I1206 06:10:30.044067 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-q2z48"] Dec 06 06:10:30 crc kubenswrapper[4958]: I1206 06:10:30.148418 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/21024c56-19d8-4bad-a676-cefec2f196a2-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-q2z48\" (UID: \"21024c56-19d8-4bad-a676-cefec2f196a2\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-q2z48" Dec 06 06:10:30 crc kubenswrapper[4958]: I1206 06:10:30.148519 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtdg8\" (UniqueName: \"kubernetes.io/projected/21024c56-19d8-4bad-a676-cefec2f196a2-kube-api-access-rtdg8\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-q2z48\" (UID: \"21024c56-19d8-4bad-a676-cefec2f196a2\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-q2z48" Dec 06 06:10:30 crc kubenswrapper[4958]: I1206 06:10:30.148556 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/21024c56-19d8-4bad-a676-cefec2f196a2-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-q2z48\" (UID: \"21024c56-19d8-4bad-a676-cefec2f196a2\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-q2z48" Dec 06 06:10:30 crc kubenswrapper[4958]: I1206 06:10:30.249910 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/21024c56-19d8-4bad-a676-cefec2f196a2-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-q2z48\" (UID: \"21024c56-19d8-4bad-a676-cefec2f196a2\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-q2z48" Dec 06 06:10:30 crc kubenswrapper[4958]: I1206 06:10:30.250363 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtdg8\" (UniqueName: \"kubernetes.io/projected/21024c56-19d8-4bad-a676-cefec2f196a2-kube-api-access-rtdg8\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-q2z48\" (UID: \"21024c56-19d8-4bad-a676-cefec2f196a2\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-q2z48" Dec 06 06:10:30 crc kubenswrapper[4958]: I1206 06:10:30.250411 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/21024c56-19d8-4bad-a676-cefec2f196a2-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-q2z48\" (UID: \"21024c56-19d8-4bad-a676-cefec2f196a2\") " 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-q2z48" Dec 06 06:10:30 crc kubenswrapper[4958]: I1206 06:10:30.254152 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/21024c56-19d8-4bad-a676-cefec2f196a2-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-q2z48\" (UID: \"21024c56-19d8-4bad-a676-cefec2f196a2\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-q2z48" Dec 06 06:10:30 crc kubenswrapper[4958]: I1206 06:10:30.254540 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/21024c56-19d8-4bad-a676-cefec2f196a2-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-q2z48\" (UID: \"21024c56-19d8-4bad-a676-cefec2f196a2\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-q2z48" Dec 06 06:10:30 crc kubenswrapper[4958]: I1206 06:10:30.267304 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtdg8\" (UniqueName: \"kubernetes.io/projected/21024c56-19d8-4bad-a676-cefec2f196a2-kube-api-access-rtdg8\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-q2z48\" (UID: \"21024c56-19d8-4bad-a676-cefec2f196a2\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-q2z48" Dec 06 06:10:30 crc kubenswrapper[4958]: I1206 06:10:30.357046 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-q2z48" Dec 06 06:10:30 crc kubenswrapper[4958]: I1206 06:10:30.925715 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-q2z48"] Dec 06 06:10:30 crc kubenswrapper[4958]: I1206 06:10:30.956407 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-q2z48" event={"ID":"21024c56-19d8-4bad-a676-cefec2f196a2","Type":"ContainerStarted","Data":"e47bbe26aadf04b970272472b54df63a71951a93b101a2d5d7d3c86bb850e354"} Dec 06 06:10:31 crc kubenswrapper[4958]: I1206 06:10:31.967166 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-q2z48" event={"ID":"21024c56-19d8-4bad-a676-cefec2f196a2","Type":"ContainerStarted","Data":"9be62ba32ed65c1cc382ba73e82d8c8dea5b0d2038eed08bfcc1cbeb03bc3a24"} Dec 06 06:10:31 crc kubenswrapper[4958]: I1206 06:10:31.982367 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-q2z48" podStartSLOduration=1.489127512 podStartE2EDuration="1.982351358s" podCreationTimestamp="2025-12-06 06:10:30 +0000 UTC" firstStartedPulling="2025-12-06 06:10:30.916178795 +0000 UTC m=+2541.449949558" lastFinishedPulling="2025-12-06 06:10:31.409402611 +0000 UTC m=+2541.943173404" observedRunningTime="2025-12-06 06:10:31.978452433 +0000 UTC m=+2542.512223206" watchObservedRunningTime="2025-12-06 06:10:31.982351358 +0000 UTC m=+2542.516122121" Dec 06 06:10:32 crc kubenswrapper[4958]: I1206 06:10:32.762267 4958 scope.go:117] "RemoveContainer" containerID="78722ed8e9c7d0ba1e717d3279a2f99d9ad86a7c8ebd7a1c9aa99d270d4637d9" Dec 06 06:10:32 crc kubenswrapper[4958]: E1206 06:10:32.762653 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 06:10:40 crc kubenswrapper[4958]: I1206 06:10:40.036238 4958 generic.go:334] "Generic (PLEG): container finished" podID="21024c56-19d8-4bad-a676-cefec2f196a2" containerID="9be62ba32ed65c1cc382ba73e82d8c8dea5b0d2038eed08bfcc1cbeb03bc3a24" exitCode=0 Dec 06 06:10:40 crc kubenswrapper[4958]: I1206 06:10:40.036313 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-q2z48" event={"ID":"21024c56-19d8-4bad-a676-cefec2f196a2","Type":"ContainerDied","Data":"9be62ba32ed65c1cc382ba73e82d8c8dea5b0d2038eed08bfcc1cbeb03bc3a24"} Dec 06 06:10:41 crc kubenswrapper[4958]: I1206 06:10:41.474092 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-q2z48" Dec 06 06:10:41 crc kubenswrapper[4958]: I1206 06:10:41.578325 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/21024c56-19d8-4bad-a676-cefec2f196a2-inventory\") pod \"21024c56-19d8-4bad-a676-cefec2f196a2\" (UID: \"21024c56-19d8-4bad-a676-cefec2f196a2\") " Dec 06 06:10:41 crc kubenswrapper[4958]: I1206 06:10:41.578412 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/21024c56-19d8-4bad-a676-cefec2f196a2-ssh-key\") pod \"21024c56-19d8-4bad-a676-cefec2f196a2\" (UID: \"21024c56-19d8-4bad-a676-cefec2f196a2\") " Dec 06 06:10:41 crc kubenswrapper[4958]: I1206 06:10:41.578773 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rtdg8\" (UniqueName: \"kubernetes.io/projected/21024c56-19d8-4bad-a676-cefec2f196a2-kube-api-access-rtdg8\") pod \"21024c56-19d8-4bad-a676-cefec2f196a2\" (UID: \"21024c56-19d8-4bad-a676-cefec2f196a2\") " Dec 06 06:10:41 crc kubenswrapper[4958]: I1206 06:10:41.586718 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21024c56-19d8-4bad-a676-cefec2f196a2-kube-api-access-rtdg8" (OuterVolumeSpecName: "kube-api-access-rtdg8") pod "21024c56-19d8-4bad-a676-cefec2f196a2" (UID: "21024c56-19d8-4bad-a676-cefec2f196a2"). InnerVolumeSpecName "kube-api-access-rtdg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:10:41 crc kubenswrapper[4958]: I1206 06:10:41.606836 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21024c56-19d8-4bad-a676-cefec2f196a2-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "21024c56-19d8-4bad-a676-cefec2f196a2" (UID: "21024c56-19d8-4bad-a676-cefec2f196a2"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:10:41 crc kubenswrapper[4958]: I1206 06:10:41.622486 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21024c56-19d8-4bad-a676-cefec2f196a2-inventory" (OuterVolumeSpecName: "inventory") pod "21024c56-19d8-4bad-a676-cefec2f196a2" (UID: "21024c56-19d8-4bad-a676-cefec2f196a2"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:10:41 crc kubenswrapper[4958]: I1206 06:10:41.683363 4958 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/21024c56-19d8-4bad-a676-cefec2f196a2-inventory\") on node \"crc\" DevicePath \"\"" Dec 06 06:10:41 crc kubenswrapper[4958]: I1206 06:10:41.683392 4958 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/21024c56-19d8-4bad-a676-cefec2f196a2-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 06 06:10:41 crc kubenswrapper[4958]: I1206 06:10:41.683405 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rtdg8\" (UniqueName: \"kubernetes.io/projected/21024c56-19d8-4bad-a676-cefec2f196a2-kube-api-access-rtdg8\") on node \"crc\" DevicePath \"\"" Dec 06 06:10:42 crc kubenswrapper[4958]: I1206 06:10:42.062901 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-q2z48" event={"ID":"21024c56-19d8-4bad-a676-cefec2f196a2","Type":"ContainerDied","Data":"e47bbe26aadf04b970272472b54df63a71951a93b101a2d5d7d3c86bb850e354"} Dec 06 06:10:42 crc kubenswrapper[4958]: I1206 06:10:42.062957 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e47bbe26aadf04b970272472b54df63a71951a93b101a2d5d7d3c86bb850e354" Dec 06 06:10:42 crc kubenswrapper[4958]: I1206 06:10:42.062962 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-q2z48" Dec 06 06:10:42 crc kubenswrapper[4958]: I1206 06:10:42.148052 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-s2txf"] Dec 06 06:10:42 crc kubenswrapper[4958]: E1206 06:10:42.148562 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21024c56-19d8-4bad-a676-cefec2f196a2" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Dec 06 06:10:42 crc kubenswrapper[4958]: I1206 06:10:42.148587 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="21024c56-19d8-4bad-a676-cefec2f196a2" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Dec 06 06:10:42 crc kubenswrapper[4958]: I1206 06:10:42.148835 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="21024c56-19d8-4bad-a676-cefec2f196a2" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Dec 06 06:10:42 crc kubenswrapper[4958]: I1206 06:10:42.149837 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-s2txf" Dec 06 06:10:42 crc kubenswrapper[4958]: I1206 06:10:42.154959 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 06 06:10:42 crc kubenswrapper[4958]: I1206 06:10:42.154984 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 06 06:10:42 crc kubenswrapper[4958]: I1206 06:10:42.154959 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 06 06:10:42 crc kubenswrapper[4958]: I1206 06:10:42.155057 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dqr5b" Dec 06 06:10:42 crc kubenswrapper[4958]: I1206 06:10:42.162073 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-s2txf"] Dec 06 06:10:42 crc kubenswrapper[4958]: I1206 06:10:42.191907 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bd40c7a7-faba-4269-b1a5-13e691342c9a-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-s2txf\" (UID: \"bd40c7a7-faba-4269-b1a5-13e691342c9a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-s2txf" Dec 06 06:10:42 crc kubenswrapper[4958]: I1206 06:10:42.192017 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjrb9\" (UniqueName: \"kubernetes.io/projected/bd40c7a7-faba-4269-b1a5-13e691342c9a-kube-api-access-gjrb9\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-s2txf\" (UID: \"bd40c7a7-faba-4269-b1a5-13e691342c9a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-s2txf" Dec 06 06:10:42 crc kubenswrapper[4958]: I1206 06:10:42.192059 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bd40c7a7-faba-4269-b1a5-13e691342c9a-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-s2txf\" (UID: \"bd40c7a7-faba-4269-b1a5-13e691342c9a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-s2txf" Dec 06 06:10:42 crc kubenswrapper[4958]: I1206 06:10:42.293545 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bd40c7a7-faba-4269-b1a5-13e691342c9a-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-s2txf\" (UID: \"bd40c7a7-faba-4269-b1a5-13e691342c9a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-s2txf" Dec 06 06:10:42 crc kubenswrapper[4958]: I1206 06:10:42.293855 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjrb9\" (UniqueName: \"kubernetes.io/projected/bd40c7a7-faba-4269-b1a5-13e691342c9a-kube-api-access-gjrb9\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-s2txf\" (UID: \"bd40c7a7-faba-4269-b1a5-13e691342c9a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-s2txf" Dec 06 06:10:42 crc kubenswrapper[4958]: I1206 06:10:42.293888 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bd40c7a7-faba-4269-b1a5-13e691342c9a-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-s2txf\" (UID: 
\"bd40c7a7-faba-4269-b1a5-13e691342c9a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-s2txf" Dec 06 06:10:42 crc kubenswrapper[4958]: I1206 06:10:42.297435 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bd40c7a7-faba-4269-b1a5-13e691342c9a-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-s2txf\" (UID: \"bd40c7a7-faba-4269-b1a5-13e691342c9a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-s2txf" Dec 06 06:10:42 crc kubenswrapper[4958]: I1206 06:10:42.298955 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bd40c7a7-faba-4269-b1a5-13e691342c9a-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-s2txf\" (UID: \"bd40c7a7-faba-4269-b1a5-13e691342c9a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-s2txf" Dec 06 06:10:42 crc kubenswrapper[4958]: I1206 06:10:42.307735 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjrb9\" (UniqueName: \"kubernetes.io/projected/bd40c7a7-faba-4269-b1a5-13e691342c9a-kube-api-access-gjrb9\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-s2txf\" (UID: \"bd40c7a7-faba-4269-b1a5-13e691342c9a\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-s2txf" Dec 06 06:10:42 crc kubenswrapper[4958]: I1206 06:10:42.470765 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-s2txf" Dec 06 06:10:43 crc kubenswrapper[4958]: I1206 06:10:43.029553 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-s2txf"] Dec 06 06:10:43 crc kubenswrapper[4958]: I1206 06:10:43.075160 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-s2txf" event={"ID":"bd40c7a7-faba-4269-b1a5-13e691342c9a","Type":"ContainerStarted","Data":"2738f73d791a21e798a265399b7543c5f11fdb18d3254d36c3ced9b6fe2c4e06"} Dec 06 06:10:44 crc kubenswrapper[4958]: I1206 06:10:44.087556 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-s2txf" event={"ID":"bd40c7a7-faba-4269-b1a5-13e691342c9a","Type":"ContainerStarted","Data":"fbdc113fa00f8a8cf76bfdebd03b8776549c7c259b4f0706445e0988beb0c257"} Dec 06 06:10:44 crc kubenswrapper[4958]: I1206 06:10:44.106272 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-s2txf" podStartSLOduration=1.501428725 podStartE2EDuration="2.106247288s" podCreationTimestamp="2025-12-06 06:10:42 +0000 UTC" firstStartedPulling="2025-12-06 06:10:43.037747393 +0000 UTC m=+2553.571518157" lastFinishedPulling="2025-12-06 06:10:43.642565957 +0000 UTC m=+2554.176336720" observedRunningTime="2025-12-06 06:10:44.101947822 +0000 UTC m=+2554.635718585" watchObservedRunningTime="2025-12-06 06:10:44.106247288 +0000 UTC m=+2554.640018051" Dec 06 06:10:46 crc kubenswrapper[4958]: I1206 06:10:46.762461 4958 scope.go:117] "RemoveContainer" containerID="78722ed8e9c7d0ba1e717d3279a2f99d9ad86a7c8ebd7a1c9aa99d270d4637d9" Dec 06 06:10:46 crc kubenswrapper[4958]: E1206 06:10:46.763398 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 06:10:53 crc kubenswrapper[4958]: I1206 06:10:53.173180 4958 generic.go:334] "Generic (PLEG): container finished" podID="bd40c7a7-faba-4269-b1a5-13e691342c9a" containerID="fbdc113fa00f8a8cf76bfdebd03b8776549c7c259b4f0706445e0988beb0c257" exitCode=0 Dec 06 06:10:53 crc kubenswrapper[4958]: I1206 06:10:53.173271 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-s2txf" event={"ID":"bd40c7a7-faba-4269-b1a5-13e691342c9a","Type":"ContainerDied","Data":"fbdc113fa00f8a8cf76bfdebd03b8776549c7c259b4f0706445e0988beb0c257"} Dec 06 06:10:54 crc kubenswrapper[4958]: I1206 06:10:54.638651 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-s2txf" Dec 06 06:10:54 crc kubenswrapper[4958]: I1206 06:10:54.732683 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bd40c7a7-faba-4269-b1a5-13e691342c9a-ssh-key\") pod \"bd40c7a7-faba-4269-b1a5-13e691342c9a\" (UID: \"bd40c7a7-faba-4269-b1a5-13e691342c9a\") " Dec 06 06:10:54 crc kubenswrapper[4958]: I1206 06:10:54.732794 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bd40c7a7-faba-4269-b1a5-13e691342c9a-inventory\") pod \"bd40c7a7-faba-4269-b1a5-13e691342c9a\" (UID: \"bd40c7a7-faba-4269-b1a5-13e691342c9a\") " Dec 06 06:10:54 crc kubenswrapper[4958]: I1206 06:10:54.732854 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjrb9\" (UniqueName: \"kubernetes.io/projected/bd40c7a7-faba-4269-b1a5-13e691342c9a-kube-api-access-gjrb9\") pod \"bd40c7a7-faba-4269-b1a5-13e691342c9a\" (UID: \"bd40c7a7-faba-4269-b1a5-13e691342c9a\") " Dec 06 06:10:54 crc kubenswrapper[4958]: I1206 06:10:54.740971 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd40c7a7-faba-4269-b1a5-13e691342c9a-kube-api-access-gjrb9" (OuterVolumeSpecName: "kube-api-access-gjrb9") pod "bd40c7a7-faba-4269-b1a5-13e691342c9a" (UID: "bd40c7a7-faba-4269-b1a5-13e691342c9a"). InnerVolumeSpecName "kube-api-access-gjrb9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:10:54 crc kubenswrapper[4958]: I1206 06:10:54.762640 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd40c7a7-faba-4269-b1a5-13e691342c9a-inventory" (OuterVolumeSpecName: "inventory") pod "bd40c7a7-faba-4269-b1a5-13e691342c9a" (UID: "bd40c7a7-faba-4269-b1a5-13e691342c9a"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:10:54 crc kubenswrapper[4958]: I1206 06:10:54.764886 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd40c7a7-faba-4269-b1a5-13e691342c9a-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "bd40c7a7-faba-4269-b1a5-13e691342c9a" (UID: "bd40c7a7-faba-4269-b1a5-13e691342c9a"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:10:54 crc kubenswrapper[4958]: I1206 06:10:54.837418 4958 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bd40c7a7-faba-4269-b1a5-13e691342c9a-inventory\") on node \"crc\" DevicePath \"\"" Dec 06 06:10:54 crc kubenswrapper[4958]: I1206 06:10:54.837460 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gjrb9\" (UniqueName: \"kubernetes.io/projected/bd40c7a7-faba-4269-b1a5-13e691342c9a-kube-api-access-gjrb9\") on node \"crc\" DevicePath \"\"" Dec 06 06:10:54 crc kubenswrapper[4958]: I1206 06:10:54.837487 4958 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bd40c7a7-faba-4269-b1a5-13e691342c9a-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 06 06:10:55 crc kubenswrapper[4958]: I1206 06:10:55.200216 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-s2txf" event={"ID":"bd40c7a7-faba-4269-b1a5-13e691342c9a","Type":"ContainerDied","Data":"2738f73d791a21e798a265399b7543c5f11fdb18d3254d36c3ced9b6fe2c4e06"} Dec 06 06:10:55 crc kubenswrapper[4958]: I1206 06:10:55.200266 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2738f73d791a21e798a265399b7543c5f11fdb18d3254d36c3ced9b6fe2c4e06" Dec 06 06:10:55 crc kubenswrapper[4958]: I1206 06:10:55.200358 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-s2txf" Dec 06 06:10:55 crc kubenswrapper[4958]: I1206 06:10:55.289961 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-znbfc"] Dec 06 06:10:55 crc kubenswrapper[4958]: E1206 06:10:55.290358 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd40c7a7-faba-4269-b1a5-13e691342c9a" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Dec 06 06:10:55 crc kubenswrapper[4958]: I1206 06:10:55.290372 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd40c7a7-faba-4269-b1a5-13e691342c9a" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Dec 06 06:10:55 crc kubenswrapper[4958]: I1206 06:10:55.290577 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd40c7a7-faba-4269-b1a5-13e691342c9a" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Dec 06 06:10:55 crc kubenswrapper[4958]: I1206 06:10:55.291256 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-znbfc" Dec 06 06:10:55 crc kubenswrapper[4958]: I1206 06:10:55.293583 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 06 06:10:55 crc kubenswrapper[4958]: I1206 06:10:55.293786 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Dec 06 06:10:55 crc kubenswrapper[4958]: I1206 06:10:55.293837 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Dec 06 06:10:55 crc kubenswrapper[4958]: I1206 06:10:55.293948 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Dec 06 06:10:55 crc kubenswrapper[4958]: I1206 06:10:55.294961 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 06 06:10:55 crc kubenswrapper[4958]: I1206 06:10:55.295444 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dqr5b" Dec 06 06:10:55 crc kubenswrapper[4958]: I1206 06:10:55.296046 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Dec 06 06:10:55 crc kubenswrapper[4958]: I1206 06:10:55.297040 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 06 06:10:55 crc kubenswrapper[4958]: I1206 06:10:55.311921 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-znbfc"] Dec 06 06:10:55 crc kubenswrapper[4958]: I1206 06:10:55.451142 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/727529de-528b-4f43-b581-5bfdfcdca081-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-znbfc\" (UID: \"727529de-528b-4f43-b581-5bfdfcdca081\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-znbfc" Dec 06 06:10:55 crc kubenswrapper[4958]: I1206 06:10:55.451206 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/727529de-528b-4f43-b581-5bfdfcdca081-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-znbfc\" (UID: \"727529de-528b-4f43-b581-5bfdfcdca081\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-znbfc" Dec 06 06:10:55 crc kubenswrapper[4958]: I1206 06:10:55.451231 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/727529de-528b-4f43-b581-5bfdfcdca081-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-znbfc\" (UID: \"727529de-528b-4f43-b581-5bfdfcdca081\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-znbfc" Dec 06 06:10:55 crc kubenswrapper[4958]: I1206 06:10:55.451253 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/727529de-528b-4f43-b581-5bfdfcdca081-repo-setup-combined-ca-bundle\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-znbfc\" (UID: \"727529de-528b-4f43-b581-5bfdfcdca081\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-znbfc" Dec 06 06:10:55 crc kubenswrapper[4958]: I1206 06:10:55.451282 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/727529de-528b-4f43-b581-5bfdfcdca081-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-znbfc\" (UID: \"727529de-528b-4f43-b581-5bfdfcdca081\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-znbfc" Dec 06 06:10:55 crc kubenswrapper[4958]: I1206 06:10:55.451611 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/727529de-528b-4f43-b581-5bfdfcdca081-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-znbfc\" (UID: \"727529de-528b-4f43-b581-5bfdfcdca081\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-znbfc" Dec 06 06:10:55 crc kubenswrapper[4958]: I1206 06:10:55.451873 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/727529de-528b-4f43-b581-5bfdfcdca081-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-znbfc\" (UID: \"727529de-528b-4f43-b581-5bfdfcdca081\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-znbfc" Dec 06 06:10:55 crc kubenswrapper[4958]: I1206 06:10:55.451942 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/727529de-528b-4f43-b581-5bfdfcdca081-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-znbfc\" (UID: \"727529de-528b-4f43-b581-5bfdfcdca081\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-znbfc" Dec 06 06:10:55 crc kubenswrapper[4958]: I1206 06:10:55.452016 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/727529de-528b-4f43-b581-5bfdfcdca081-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-znbfc\" (UID: \"727529de-528b-4f43-b581-5bfdfcdca081\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-znbfc" Dec 06 06:10:55 crc kubenswrapper[4958]: I1206 06:10:55.452219 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/727529de-528b-4f43-b581-5bfdfcdca081-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-znbfc\" (UID: \"727529de-528b-4f43-b581-5bfdfcdca081\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-znbfc" Dec 06 06:10:55 crc kubenswrapper[4958]: I1206 06:10:55.452329 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/727529de-528b-4f43-b581-5bfdfcdca081-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-znbfc\" (UID: \"727529de-528b-4f43-b581-5bfdfcdca081\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-znbfc" Dec 06 06:10:55 crc kubenswrapper[4958]: I1206 06:10:55.452431 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/727529de-528b-4f43-b581-5bfdfcdca081-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-znbfc\" (UID: \"727529de-528b-4f43-b581-5bfdfcdca081\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-znbfc" Dec 06 06:10:55 crc kubenswrapper[4958]: I1206 06:10:55.452577 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qz2gh\" (UniqueName: \"kubernetes.io/projected/727529de-528b-4f43-b581-5bfdfcdca081-kube-api-access-qz2gh\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-znbfc\" (UID: \"727529de-528b-4f43-b581-5bfdfcdca081\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-znbfc" Dec 06 06:10:55 crc kubenswrapper[4958]: I1206 06:10:55.452710 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/727529de-528b-4f43-b581-5bfdfcdca081-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-znbfc\" (UID: \"727529de-528b-4f43-b581-5bfdfcdca081\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-znbfc" Dec 06 06:10:55 crc kubenswrapper[4958]: I1206 06:10:55.554594 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/727529de-528b-4f43-b581-5bfdfcdca081-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-znbfc\" (UID: \"727529de-528b-4f43-b581-5bfdfcdca081\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-znbfc" Dec 06 06:10:55 crc kubenswrapper[4958]: I1206 06:10:55.554635 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/727529de-528b-4f43-b581-5bfdfcdca081-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-znbfc\" (UID: \"727529de-528b-4f43-b581-5bfdfcdca081\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-znbfc" Dec 06 06:10:55 crc kubenswrapper[4958]: I1206 06:10:55.554666 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/727529de-528b-4f43-b581-5bfdfcdca081-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-znbfc\" (UID: \"727529de-528b-4f43-b581-5bfdfcdca081\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-znbfc" Dec 06 06:10:55 crc kubenswrapper[4958]: I1206 06:10:55.554699 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/727529de-528b-4f43-b581-5bfdfcdca081-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-znbfc\" (UID: \"727529de-528b-4f43-b581-5bfdfcdca081\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-znbfc" Dec 06 06:10:55 crc kubenswrapper[4958]: I1206 06:10:55.554749 4958 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/727529de-528b-4f43-b581-5bfdfcdca081-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-znbfc\" (UID: \"727529de-528b-4f43-b581-5bfdfcdca081\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-znbfc" Dec 06 06:10:55 crc kubenswrapper[4958]: I1206 06:10:55.554790 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/727529de-528b-4f43-b581-5bfdfcdca081-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-znbfc\" (UID: \"727529de-528b-4f43-b581-5bfdfcdca081\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-znbfc" Dec 06 06:10:55 crc kubenswrapper[4958]: I1206 06:10:55.554807 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/727529de-528b-4f43-b581-5bfdfcdca081-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-znbfc\" (UID: \"727529de-528b-4f43-b581-5bfdfcdca081\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-znbfc" Dec 06 06:10:55 crc kubenswrapper[4958]: I1206 06:10:55.554828 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/727529de-528b-4f43-b581-5bfdfcdca081-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-znbfc\" (UID: \"727529de-528b-4f43-b581-5bfdfcdca081\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-znbfc" Dec 06 06:10:55 crc kubenswrapper[4958]: I1206 06:10:55.554865 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/727529de-528b-4f43-b581-5bfdfcdca081-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-znbfc\" (UID: \"727529de-528b-4f43-b581-5bfdfcdca081\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-znbfc" Dec 06 06:10:55 crc kubenswrapper[4958]: I1206 06:10:55.554887 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/727529de-528b-4f43-b581-5bfdfcdca081-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-znbfc\" (UID: \"727529de-528b-4f43-b581-5bfdfcdca081\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-znbfc" Dec 06 06:10:55 crc kubenswrapper[4958]: I1206 06:10:55.554908 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/727529de-528b-4f43-b581-5bfdfcdca081-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-znbfc\" (UID: \"727529de-528b-4f43-b581-5bfdfcdca081\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-znbfc" Dec 06 06:10:55 crc kubenswrapper[4958]: I1206 06:10:55.554933 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qz2gh\" (UniqueName: \"kubernetes.io/projected/727529de-528b-4f43-b581-5bfdfcdca081-kube-api-access-qz2gh\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-znbfc\" (UID: 
\"727529de-528b-4f43-b581-5bfdfcdca081\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-znbfc" Dec 06 06:10:55 crc kubenswrapper[4958]: I1206 06:10:55.554959 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/727529de-528b-4f43-b581-5bfdfcdca081-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-znbfc\" (UID: \"727529de-528b-4f43-b581-5bfdfcdca081\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-znbfc" Dec 06 06:10:55 crc kubenswrapper[4958]: I1206 06:10:55.554992 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/727529de-528b-4f43-b581-5bfdfcdca081-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-znbfc\" (UID: \"727529de-528b-4f43-b581-5bfdfcdca081\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-znbfc" Dec 06 06:10:55 crc kubenswrapper[4958]: I1206 06:10:55.559064 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/727529de-528b-4f43-b581-5bfdfcdca081-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-znbfc\" (UID: \"727529de-528b-4f43-b581-5bfdfcdca081\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-znbfc" Dec 06 06:10:55 crc kubenswrapper[4958]: I1206 06:10:55.559940 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/727529de-528b-4f43-b581-5bfdfcdca081-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-znbfc\" (UID: \"727529de-528b-4f43-b581-5bfdfcdca081\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-znbfc" Dec 06 06:10:55 crc kubenswrapper[4958]: I1206 06:10:55.560825 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/727529de-528b-4f43-b581-5bfdfcdca081-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-znbfc\" (UID: \"727529de-528b-4f43-b581-5bfdfcdca081\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-znbfc" Dec 06 06:10:55 crc kubenswrapper[4958]: I1206 06:10:55.561143 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/727529de-528b-4f43-b581-5bfdfcdca081-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-znbfc\" (UID: \"727529de-528b-4f43-b581-5bfdfcdca081\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-znbfc" Dec 06 06:10:55 crc kubenswrapper[4958]: I1206 06:10:55.562021 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/727529de-528b-4f43-b581-5bfdfcdca081-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-znbfc\" (UID: \"727529de-528b-4f43-b581-5bfdfcdca081\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-znbfc" Dec 06 06:10:55 crc kubenswrapper[4958]: I1206 06:10:55.562914 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/727529de-528b-4f43-b581-5bfdfcdca081-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-znbfc\" (UID: \"727529de-528b-4f43-b581-5bfdfcdca081\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-znbfc" Dec 06 06:10:55 crc kubenswrapper[4958]: I1206 06:10:55.564811 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/727529de-528b-4f43-b581-5bfdfcdca081-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-znbfc\" (UID: \"727529de-528b-4f43-b581-5bfdfcdca081\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-znbfc" Dec 06 06:10:55 crc kubenswrapper[4958]: I1206 06:10:55.564978 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/727529de-528b-4f43-b581-5bfdfcdca081-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-znbfc\" (UID: \"727529de-528b-4f43-b581-5bfdfcdca081\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-znbfc" Dec 06 06:10:55 crc kubenswrapper[4958]: I1206 06:10:55.565298 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/727529de-528b-4f43-b581-5bfdfcdca081-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-znbfc\" (UID: \"727529de-528b-4f43-b581-5bfdfcdca081\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-znbfc" Dec 06 06:10:55 crc kubenswrapper[4958]: I1206 06:10:55.565908 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/727529de-528b-4f43-b581-5bfdfcdca081-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-znbfc\" (UID: \"727529de-528b-4f43-b581-5bfdfcdca081\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-znbfc" Dec 06 06:10:55 crc kubenswrapper[4958]: I1206 06:10:55.566703 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/727529de-528b-4f43-b581-5bfdfcdca081-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-znbfc\" (UID: \"727529de-528b-4f43-b581-5bfdfcdca081\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-znbfc" Dec 06 06:10:55 crc kubenswrapper[4958]: I1206 06:10:55.568604 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/727529de-528b-4f43-b581-5bfdfcdca081-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-znbfc\" (UID: \"727529de-528b-4f43-b581-5bfdfcdca081\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-znbfc" Dec 06 06:10:55 crc kubenswrapper[4958]: I1206 06:10:55.568994 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/727529de-528b-4f43-b581-5bfdfcdca081-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-znbfc\" (UID: \"727529de-528b-4f43-b581-5bfdfcdca081\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-znbfc" 
Dec 06 06:10:55 crc kubenswrapper[4958]: I1206 06:10:55.577308 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qz2gh\" (UniqueName: \"kubernetes.io/projected/727529de-528b-4f43-b581-5bfdfcdca081-kube-api-access-qz2gh\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-znbfc\" (UID: \"727529de-528b-4f43-b581-5bfdfcdca081\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-znbfc" Dec 06 06:10:55 crc kubenswrapper[4958]: I1206 06:10:55.651205 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-znbfc" Dec 06 06:10:56 crc kubenswrapper[4958]: I1206 06:10:56.164792 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-znbfc"] Dec 06 06:10:56 crc kubenswrapper[4958]: I1206 06:10:56.208484 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-znbfc" event={"ID":"727529de-528b-4f43-b581-5bfdfcdca081","Type":"ContainerStarted","Data":"4f33e3c38ae5d37eaf4ab6a6e4c2e7baa3dcaddf4eba2eb54c2b6dcbf9847d10"} Dec 06 06:10:57 crc kubenswrapper[4958]: I1206 06:10:57.225050 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-znbfc" event={"ID":"727529de-528b-4f43-b581-5bfdfcdca081","Type":"ContainerStarted","Data":"7b8b7bc02f78c4236f1123b4f5ad1668ba5fc4884729049982c02e1a780453d8"} Dec 06 06:10:57 crc kubenswrapper[4958]: I1206 06:10:57.265863 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-znbfc" podStartSLOduration=1.8119353519999999 podStartE2EDuration="2.26583873s" podCreationTimestamp="2025-12-06 06:10:55 +0000 UTC" firstStartedPulling="2025-12-06 06:10:56.167410141 +0000 UTC m=+2566.701180904" lastFinishedPulling="2025-12-06 06:10:56.621313479 +0000 UTC m=+2567.155084282" observedRunningTime="2025-12-06 06:10:57.257103585 +0000 UTC m=+2567.790874368" watchObservedRunningTime="2025-12-06 06:10:57.26583873 +0000 UTC m=+2567.799609503" Dec 06 06:10:58 crc kubenswrapper[4958]: I1206 06:10:58.761904 4958 scope.go:117] "RemoveContainer" containerID="78722ed8e9c7d0ba1e717d3279a2f99d9ad86a7c8ebd7a1c9aa99d270d4637d9" Dec 06 06:10:58 crc kubenswrapper[4958]: E1206 06:10:58.762560 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 06:11:10 crc kubenswrapper[4958]: I1206 06:11:10.761803 4958 scope.go:117] "RemoveContainer" containerID="78722ed8e9c7d0ba1e717d3279a2f99d9ad86a7c8ebd7a1c9aa99d270d4637d9" Dec 06 06:11:10 crc kubenswrapper[4958]: E1206 06:11:10.762689 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 06:11:22 
crc kubenswrapper[4958]: I1206 06:11:22.762777 4958 scope.go:117] "RemoveContainer" containerID="78722ed8e9c7d0ba1e717d3279a2f99d9ad86a7c8ebd7a1c9aa99d270d4637d9" Dec 06 06:11:22 crc kubenswrapper[4958]: E1206 06:11:22.763895 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 06:11:30 crc kubenswrapper[4958]: I1206 06:11:30.033305 4958 scope.go:117] "RemoveContainer" containerID="97d02f8efa70bbab88d43e64418cc52878b0f90667a413f813e157ad4ca18a20" Dec 06 06:11:30 crc kubenswrapper[4958]: I1206 06:11:30.064163 4958 scope.go:117] "RemoveContainer" containerID="9a0a6af9be4cb6e1c06ba2f2d9e81464de07e46e03e5cc3ccc66e7a80445b3d3" Dec 06 06:11:30 crc kubenswrapper[4958]: I1206 06:11:30.109988 4958 scope.go:117] "RemoveContainer" containerID="09a039e37baff589e88599e574433bb78843a6dee6984b7fbf8e12b9aa990607" Dec 06 06:11:32 crc kubenswrapper[4958]: I1206 06:11:32.566661 4958 generic.go:334] "Generic (PLEG): container finished" podID="727529de-528b-4f43-b581-5bfdfcdca081" containerID="7b8b7bc02f78c4236f1123b4f5ad1668ba5fc4884729049982c02e1a780453d8" exitCode=0 Dec 06 06:11:32 crc kubenswrapper[4958]: I1206 06:11:32.566800 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-znbfc" event={"ID":"727529de-528b-4f43-b581-5bfdfcdca081","Type":"ContainerDied","Data":"7b8b7bc02f78c4236f1123b4f5ad1668ba5fc4884729049982c02e1a780453d8"} Dec 06 06:11:33 crc kubenswrapper[4958]: I1206 06:11:33.954379 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-znbfc" Dec 06 06:11:34 crc kubenswrapper[4958]: I1206 06:11:34.023202 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/727529de-528b-4f43-b581-5bfdfcdca081-bootstrap-combined-ca-bundle\") pod \"727529de-528b-4f43-b581-5bfdfcdca081\" (UID: \"727529de-528b-4f43-b581-5bfdfcdca081\") " Dec 06 06:11:34 crc kubenswrapper[4958]: I1206 06:11:34.023246 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/727529de-528b-4f43-b581-5bfdfcdca081-openstack-edpm-ipam-ovn-default-certs-0\") pod \"727529de-528b-4f43-b581-5bfdfcdca081\" (UID: \"727529de-528b-4f43-b581-5bfdfcdca081\") " Dec 06 06:11:34 crc kubenswrapper[4958]: I1206 06:11:34.023288 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/727529de-528b-4f43-b581-5bfdfcdca081-libvirt-combined-ca-bundle\") pod \"727529de-528b-4f43-b581-5bfdfcdca081\" (UID: \"727529de-528b-4f43-b581-5bfdfcdca081\") " Dec 06 06:11:34 crc kubenswrapper[4958]: I1206 06:11:34.023308 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/727529de-528b-4f43-b581-5bfdfcdca081-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"727529de-528b-4f43-b581-5bfdfcdca081\" (UID: \"727529de-528b-4f43-b581-5bfdfcdca081\") " Dec 06 06:11:34 crc kubenswrapper[4958]: I1206 06:11:34.023329 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/727529de-528b-4f43-b581-5bfdfcdca081-neutron-metadata-combined-ca-bundle\") pod \"727529de-528b-4f43-b581-5bfdfcdca081\" (UID: \"727529de-528b-4f43-b581-5bfdfcdca081\") " Dec 06 06:11:34 crc kubenswrapper[4958]: I1206 06:11:34.023356 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/727529de-528b-4f43-b581-5bfdfcdca081-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"727529de-528b-4f43-b581-5bfdfcdca081\" (UID: \"727529de-528b-4f43-b581-5bfdfcdca081\") " Dec 06 06:11:34 crc kubenswrapper[4958]: I1206 06:11:34.023405 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/727529de-528b-4f43-b581-5bfdfcdca081-ovn-combined-ca-bundle\") pod \"727529de-528b-4f43-b581-5bfdfcdca081\" (UID: \"727529de-528b-4f43-b581-5bfdfcdca081\") " Dec 06 06:11:34 crc kubenswrapper[4958]: I1206 06:11:34.023430 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/727529de-528b-4f43-b581-5bfdfcdca081-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"727529de-528b-4f43-b581-5bfdfcdca081\" (UID: \"727529de-528b-4f43-b581-5bfdfcdca081\") " Dec 06 06:11:34 crc kubenswrapper[4958]: I1206 06:11:34.023452 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/727529de-528b-4f43-b581-5bfdfcdca081-ssh-key\") pod 
\"727529de-528b-4f43-b581-5bfdfcdca081\" (UID: \"727529de-528b-4f43-b581-5bfdfcdca081\") " Dec 06 06:11:34 crc kubenswrapper[4958]: I1206 06:11:34.023510 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/727529de-528b-4f43-b581-5bfdfcdca081-repo-setup-combined-ca-bundle\") pod \"727529de-528b-4f43-b581-5bfdfcdca081\" (UID: \"727529de-528b-4f43-b581-5bfdfcdca081\") " Dec 06 06:11:34 crc kubenswrapper[4958]: I1206 06:11:34.023540 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/727529de-528b-4f43-b581-5bfdfcdca081-inventory\") pod \"727529de-528b-4f43-b581-5bfdfcdca081\" (UID: \"727529de-528b-4f43-b581-5bfdfcdca081\") " Dec 06 06:11:34 crc kubenswrapper[4958]: I1206 06:11:34.023572 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qz2gh\" (UniqueName: \"kubernetes.io/projected/727529de-528b-4f43-b581-5bfdfcdca081-kube-api-access-qz2gh\") pod \"727529de-528b-4f43-b581-5bfdfcdca081\" (UID: \"727529de-528b-4f43-b581-5bfdfcdca081\") " Dec 06 06:11:34 crc kubenswrapper[4958]: I1206 06:11:34.023648 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/727529de-528b-4f43-b581-5bfdfcdca081-telemetry-combined-ca-bundle\") pod \"727529de-528b-4f43-b581-5bfdfcdca081\" (UID: \"727529de-528b-4f43-b581-5bfdfcdca081\") " Dec 06 06:11:34 crc kubenswrapper[4958]: I1206 06:11:34.023675 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/727529de-528b-4f43-b581-5bfdfcdca081-nova-combined-ca-bundle\") pod \"727529de-528b-4f43-b581-5bfdfcdca081\" (UID: \"727529de-528b-4f43-b581-5bfdfcdca081\") " Dec 06 06:11:34 crc kubenswrapper[4958]: I1206 06:11:34.030118 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/727529de-528b-4f43-b581-5bfdfcdca081-kube-api-access-qz2gh" (OuterVolumeSpecName: "kube-api-access-qz2gh") pod "727529de-528b-4f43-b581-5bfdfcdca081" (UID: "727529de-528b-4f43-b581-5bfdfcdca081"). InnerVolumeSpecName "kube-api-access-qz2gh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:11:34 crc kubenswrapper[4958]: I1206 06:11:34.032021 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/727529de-528b-4f43-b581-5bfdfcdca081-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "727529de-528b-4f43-b581-5bfdfcdca081" (UID: "727529de-528b-4f43-b581-5bfdfcdca081"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:11:34 crc kubenswrapper[4958]: I1206 06:11:34.032041 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/727529de-528b-4f43-b581-5bfdfcdca081-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "727529de-528b-4f43-b581-5bfdfcdca081" (UID: "727529de-528b-4f43-b581-5bfdfcdca081"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:11:34 crc kubenswrapper[4958]: I1206 06:11:34.032338 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/727529de-528b-4f43-b581-5bfdfcdca081-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "727529de-528b-4f43-b581-5bfdfcdca081" (UID: "727529de-528b-4f43-b581-5bfdfcdca081"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:11:34 crc kubenswrapper[4958]: I1206 06:11:34.032383 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/727529de-528b-4f43-b581-5bfdfcdca081-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "727529de-528b-4f43-b581-5bfdfcdca081" (UID: "727529de-528b-4f43-b581-5bfdfcdca081"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:11:34 crc kubenswrapper[4958]: I1206 06:11:34.032551 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/727529de-528b-4f43-b581-5bfdfcdca081-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "727529de-528b-4f43-b581-5bfdfcdca081" (UID: "727529de-528b-4f43-b581-5bfdfcdca081"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:11:34 crc kubenswrapper[4958]: I1206 06:11:34.032675 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/727529de-528b-4f43-b581-5bfdfcdca081-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "727529de-528b-4f43-b581-5bfdfcdca081" (UID: "727529de-528b-4f43-b581-5bfdfcdca081"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:11:34 crc kubenswrapper[4958]: I1206 06:11:34.032739 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/727529de-528b-4f43-b581-5bfdfcdca081-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "727529de-528b-4f43-b581-5bfdfcdca081" (UID: "727529de-528b-4f43-b581-5bfdfcdca081"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:11:34 crc kubenswrapper[4958]: I1206 06:11:34.034427 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/727529de-528b-4f43-b581-5bfdfcdca081-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "727529de-528b-4f43-b581-5bfdfcdca081" (UID: "727529de-528b-4f43-b581-5bfdfcdca081"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:11:34 crc kubenswrapper[4958]: I1206 06:11:34.035423 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/727529de-528b-4f43-b581-5bfdfcdca081-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "727529de-528b-4f43-b581-5bfdfcdca081" (UID: "727529de-528b-4f43-b581-5bfdfcdca081"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:11:34 crc kubenswrapper[4958]: I1206 06:11:34.035640 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/727529de-528b-4f43-b581-5bfdfcdca081-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "727529de-528b-4f43-b581-5bfdfcdca081" (UID: "727529de-528b-4f43-b581-5bfdfcdca081"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:11:34 crc kubenswrapper[4958]: I1206 06:11:34.036067 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/727529de-528b-4f43-b581-5bfdfcdca081-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "727529de-528b-4f43-b581-5bfdfcdca081" (UID: "727529de-528b-4f43-b581-5bfdfcdca081"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:11:34 crc kubenswrapper[4958]: I1206 06:11:34.056888 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/727529de-528b-4f43-b581-5bfdfcdca081-inventory" (OuterVolumeSpecName: "inventory") pod "727529de-528b-4f43-b581-5bfdfcdca081" (UID: "727529de-528b-4f43-b581-5bfdfcdca081"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:11:34 crc kubenswrapper[4958]: I1206 06:11:34.058725 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/727529de-528b-4f43-b581-5bfdfcdca081-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "727529de-528b-4f43-b581-5bfdfcdca081" (UID: "727529de-528b-4f43-b581-5bfdfcdca081"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:11:34 crc kubenswrapper[4958]: I1206 06:11:34.125613 4958 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/727529de-528b-4f43-b581-5bfdfcdca081-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 06:11:34 crc kubenswrapper[4958]: I1206 06:11:34.125640 4958 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/727529de-528b-4f43-b581-5bfdfcdca081-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 06:11:34 crc kubenswrapper[4958]: I1206 06:11:34.125650 4958 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/727529de-528b-4f43-b581-5bfdfcdca081-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 06:11:34 crc kubenswrapper[4958]: I1206 06:11:34.125659 4958 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/727529de-528b-4f43-b581-5bfdfcdca081-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Dec 06 06:11:34 crc kubenswrapper[4958]: I1206 06:11:34.125669 4958 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/727529de-528b-4f43-b581-5bfdfcdca081-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 06:11:34 crc kubenswrapper[4958]: I1206 06:11:34.125678 4958 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/727529de-528b-4f43-b581-5bfdfcdca081-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Dec 06 06:11:34 crc kubenswrapper[4958]: I1206 06:11:34.125689 4958 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/727529de-528b-4f43-b581-5bfdfcdca081-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 06:11:34 crc kubenswrapper[4958]: I1206 06:11:34.125699 4958 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/727529de-528b-4f43-b581-5bfdfcdca081-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Dec 06 06:11:34 crc kubenswrapper[4958]: I1206 06:11:34.125710 4958 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/727529de-528b-4f43-b581-5bfdfcdca081-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 06:11:34 crc kubenswrapper[4958]: I1206 06:11:34.125719 4958 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/727529de-528b-4f43-b581-5bfdfcdca081-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Dec 06 06:11:34 crc kubenswrapper[4958]: I1206 06:11:34.125727 4958 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/727529de-528b-4f43-b581-5bfdfcdca081-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 06 06:11:34 crc kubenswrapper[4958]: I1206 06:11:34.125736 4958 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/727529de-528b-4f43-b581-5bfdfcdca081-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 06:11:34 crc kubenswrapper[4958]: I1206 06:11:34.125746 4958 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/727529de-528b-4f43-b581-5bfdfcdca081-inventory\") on node \"crc\" DevicePath \"\"" Dec 06 06:11:34 crc kubenswrapper[4958]: I1206 06:11:34.125754 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qz2gh\" (UniqueName: \"kubernetes.io/projected/727529de-528b-4f43-b581-5bfdfcdca081-kube-api-access-qz2gh\") on node \"crc\" DevicePath \"\"" Dec 06 06:11:34 crc kubenswrapper[4958]: I1206 06:11:34.587528 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-znbfc" event={"ID":"727529de-528b-4f43-b581-5bfdfcdca081","Type":"ContainerDied","Data":"4f33e3c38ae5d37eaf4ab6a6e4c2e7baa3dcaddf4eba2eb54c2b6dcbf9847d10"} Dec 06 06:11:34 crc kubenswrapper[4958]: I1206 06:11:34.587915 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f33e3c38ae5d37eaf4ab6a6e4c2e7baa3dcaddf4eba2eb54c2b6dcbf9847d10" Dec 06 06:11:34 crc kubenswrapper[4958]: I1206 06:11:34.587623 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-znbfc" Dec 06 06:11:34 crc kubenswrapper[4958]: I1206 06:11:34.691915 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-t4kct"] Dec 06 06:11:34 crc kubenswrapper[4958]: E1206 06:11:34.692339 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="727529de-528b-4f43-b581-5bfdfcdca081" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Dec 06 06:11:34 crc kubenswrapper[4958]: I1206 06:11:34.692357 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="727529de-528b-4f43-b581-5bfdfcdca081" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Dec 06 06:11:34 crc kubenswrapper[4958]: I1206 06:11:34.692559 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="727529de-528b-4f43-b581-5bfdfcdca081" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Dec 06 06:11:34 crc kubenswrapper[4958]: I1206 06:11:34.693323 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t4kct" Dec 06 06:11:34 crc kubenswrapper[4958]: I1206 06:11:34.694888 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 06 06:11:34 crc kubenswrapper[4958]: I1206 06:11:34.694891 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Dec 06 06:11:34 crc kubenswrapper[4958]: I1206 06:11:34.695282 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 06 06:11:34 crc kubenswrapper[4958]: I1206 06:11:34.695430 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dqr5b" Dec 06 06:11:34 crc kubenswrapper[4958]: I1206 06:11:34.695752 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 06 06:11:34 crc kubenswrapper[4958]: I1206 06:11:34.712710 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-t4kct"] Dec 06 06:11:34 crc kubenswrapper[4958]: I1206 06:11:34.739885 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jg9s\" (UniqueName: \"kubernetes.io/projected/7f33cda0-d358-47d3-8f73-fac395b8b627-kube-api-access-6jg9s\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-t4kct\" (UID: \"7f33cda0-d358-47d3-8f73-fac395b8b627\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t4kct" Dec 06 06:11:34 crc kubenswrapper[4958]: I1206 06:11:34.740011 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f33cda0-d358-47d3-8f73-fac395b8b627-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-t4kct\" (UID: \"7f33cda0-d358-47d3-8f73-fac395b8b627\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t4kct" Dec 06 06:11:34 crc kubenswrapper[4958]: I1206 06:11:34.740047 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7f33cda0-d358-47d3-8f73-fac395b8b627-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-t4kct\" (UID: \"7f33cda0-d358-47d3-8f73-fac395b8b627\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t4kct" Dec 06 06:11:34 crc kubenswrapper[4958]: I1206 06:11:34.740179 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/7f33cda0-d358-47d3-8f73-fac395b8b627-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-t4kct\" (UID: \"7f33cda0-d358-47d3-8f73-fac395b8b627\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t4kct" Dec 06 06:11:34 crc kubenswrapper[4958]: I1206 06:11:34.740209 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7f33cda0-d358-47d3-8f73-fac395b8b627-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-t4kct\" (UID: \"7f33cda0-d358-47d3-8f73-fac395b8b627\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t4kct" Dec 06 06:11:34 crc kubenswrapper[4958]: I1206 06:11:34.842572 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/7f33cda0-d358-47d3-8f73-fac395b8b627-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-t4kct\" (UID: \"7f33cda0-d358-47d3-8f73-fac395b8b627\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t4kct" Dec 06 06:11:34 crc kubenswrapper[4958]: I1206 06:11:34.842625 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7f33cda0-d358-47d3-8f73-fac395b8b627-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-t4kct\" (UID: \"7f33cda0-d358-47d3-8f73-fac395b8b627\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t4kct" Dec 06 06:11:34 crc kubenswrapper[4958]: I1206 06:11:34.842827 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/7f33cda0-d358-47d3-8f73-fac395b8b627-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-t4kct\" (UID: \"7f33cda0-d358-47d3-8f73-fac395b8b627\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t4kct" Dec 06 06:11:34 crc kubenswrapper[4958]: I1206 06:11:34.842855 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7f33cda0-d358-47d3-8f73-fac395b8b627-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-t4kct\" (UID: \"7f33cda0-d358-47d3-8f73-fac395b8b627\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t4kct" Dec 06 06:11:34 crc kubenswrapper[4958]: I1206 06:11:34.842964 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jg9s\" (UniqueName: \"kubernetes.io/projected/7f33cda0-d358-47d3-8f73-fac395b8b627-kube-api-access-6jg9s\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-t4kct\" (UID: \"7f33cda0-d358-47d3-8f73-fac395b8b627\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t4kct" Dec 06 06:11:34 crc kubenswrapper[4958]: I1206 06:11:34.844726 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/7f33cda0-d358-47d3-8f73-fac395b8b627-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-t4kct\" (UID: \"7f33cda0-d358-47d3-8f73-fac395b8b627\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t4kct" Dec 06 06:11:34 crc kubenswrapper[4958]: I1206 06:11:34.848010 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f33cda0-d358-47d3-8f73-fac395b8b627-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-t4kct\" (UID: \"7f33cda0-d358-47d3-8f73-fac395b8b627\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t4kct" Dec 06 06:11:34 crc kubenswrapper[4958]: I1206 06:11:34.848301 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7f33cda0-d358-47d3-8f73-fac395b8b627-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-t4kct\" (UID: \"7f33cda0-d358-47d3-8f73-fac395b8b627\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t4kct" Dec 06 06:11:34 crc kubenswrapper[4958]: I1206 06:11:34.856164 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7f33cda0-d358-47d3-8f73-fac395b8b627-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-t4kct\" (UID: \"7f33cda0-d358-47d3-8f73-fac395b8b627\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t4kct" Dec 06 06:11:34 crc kubenswrapper[4958]: I1206 06:11:34.862428 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jg9s\" (UniqueName: \"kubernetes.io/projected/7f33cda0-d358-47d3-8f73-fac395b8b627-kube-api-access-6jg9s\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-t4kct\" (UID: \"7f33cda0-d358-47d3-8f73-fac395b8b627\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t4kct" Dec 06 06:11:35 crc kubenswrapper[4958]: I1206 06:11:35.012459 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t4kct" Dec 06 06:11:35 crc kubenswrapper[4958]: I1206 06:11:35.369243 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-t4kct"] Dec 06 06:11:35 crc kubenswrapper[4958]: I1206 06:11:35.596483 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t4kct" event={"ID":"7f33cda0-d358-47d3-8f73-fac395b8b627","Type":"ContainerStarted","Data":"19f9b7da799e88e1f88ac41d55211a90d0b4976299167a41bda2a34973dddc78"} Dec 06 06:11:36 crc kubenswrapper[4958]: I1206 06:11:36.605132 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t4kct" event={"ID":"7f33cda0-d358-47d3-8f73-fac395b8b627","Type":"ContainerStarted","Data":"bf665a7f700fa5d802e859065dfbc07d4a2565a764bb12e411f299a45febd670"} Dec 06 06:11:36 crc kubenswrapper[4958]: I1206 06:11:36.624563 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t4kct" podStartSLOduration=2.193033314 podStartE2EDuration="2.62454203s" podCreationTimestamp="2025-12-06 06:11:34 +0000 UTC" firstStartedPulling="2025-12-06 06:11:35.369559785 +0000 UTC m=+2605.903330548" lastFinishedPulling="2025-12-06 06:11:35.801068501 +0000 UTC m=+2606.334839264" observedRunningTime="2025-12-06 06:11:36.617388708 +0000 UTC m=+2607.151159471" watchObservedRunningTime="2025-12-06 06:11:36.62454203 +0000 UTC m=+2607.158312793" Dec 06 06:11:36 crc kubenswrapper[4958]: I1206 06:11:36.762152 4958 scope.go:117] "RemoveContainer" containerID="78722ed8e9c7d0ba1e717d3279a2f99d9ad86a7c8ebd7a1c9aa99d270d4637d9" Dec 06 06:11:36 crc kubenswrapper[4958]: E1206 06:11:36.762388 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 06:11:49 crc kubenswrapper[4958]: I1206 06:11:49.776524 4958 scope.go:117] "RemoveContainer" containerID="78722ed8e9c7d0ba1e717d3279a2f99d9ad86a7c8ebd7a1c9aa99d270d4637d9" Dec 06 06:11:49 crc kubenswrapper[4958]: E1206 06:11:49.777294 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 06:12:03 crc 
Dec 06 06:12:03 crc kubenswrapper[4958]: E1206 06:12:03.763012 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4"
Dec 06 06:12:16 crc kubenswrapper[4958]: I1206 06:12:16.762306 4958 scope.go:117] "RemoveContainer" containerID="78722ed8e9c7d0ba1e717d3279a2f99d9ad86a7c8ebd7a1c9aa99d270d4637d9"
Dec 06 06:12:16 crc kubenswrapper[4958]: E1206 06:12:16.763093 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4"
Dec 06 06:12:29 crc kubenswrapper[4958]: I1206 06:12:29.769356 4958 scope.go:117] "RemoveContainer" containerID="78722ed8e9c7d0ba1e717d3279a2f99d9ad86a7c8ebd7a1c9aa99d270d4637d9"
Dec 06 06:12:29 crc kubenswrapper[4958]: E1206 06:12:29.770143 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4"
Dec 06 06:12:39 crc kubenswrapper[4958]: I1206 06:12:39.206304 4958 generic.go:334] "Generic (PLEG): container finished" podID="7f33cda0-d358-47d3-8f73-fac395b8b627" containerID="bf665a7f700fa5d802e859065dfbc07d4a2565a764bb12e411f299a45febd670" exitCode=0
Dec 06 06:12:39 crc kubenswrapper[4958]: I1206 06:12:39.206388 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t4kct" event={"ID":"7f33cda0-d358-47d3-8f73-fac395b8b627","Type":"ContainerDied","Data":"bf665a7f700fa5d802e859065dfbc07d4a2565a764bb12e411f299a45febd670"}
Dec 06 06:12:40 crc kubenswrapper[4958]: I1206 06:12:40.641270 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t4kct"
Dec 06 06:12:40 crc kubenswrapper[4958]: I1206 06:12:40.762443 4958 scope.go:117] "RemoveContainer" containerID="78722ed8e9c7d0ba1e717d3279a2f99d9ad86a7c8ebd7a1c9aa99d270d4637d9"
Dec 06 06:12:40 crc kubenswrapper[4958]: E1206 06:12:40.762711 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4"
Dec 06 06:12:40 crc kubenswrapper[4958]: I1206 06:12:40.797459 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6jg9s\" (UniqueName: \"kubernetes.io/projected/7f33cda0-d358-47d3-8f73-fac395b8b627-kube-api-access-6jg9s\") pod \"7f33cda0-d358-47d3-8f73-fac395b8b627\" (UID: \"7f33cda0-d358-47d3-8f73-fac395b8b627\") "
Dec 06 06:12:40 crc kubenswrapper[4958]: I1206 06:12:40.797602 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f33cda0-d358-47d3-8f73-fac395b8b627-ovn-combined-ca-bundle\") pod \"7f33cda0-d358-47d3-8f73-fac395b8b627\" (UID: \"7f33cda0-d358-47d3-8f73-fac395b8b627\") "
Dec 06 06:12:40 crc kubenswrapper[4958]: I1206 06:12:40.797631 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/7f33cda0-d358-47d3-8f73-fac395b8b627-ovncontroller-config-0\") pod \"7f33cda0-d358-47d3-8f73-fac395b8b627\" (UID: \"7f33cda0-d358-47d3-8f73-fac395b8b627\") "
Dec 06 06:12:40 crc kubenswrapper[4958]: I1206 06:12:40.797724 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7f33cda0-d358-47d3-8f73-fac395b8b627-ssh-key\") pod \"7f33cda0-d358-47d3-8f73-fac395b8b627\" (UID: \"7f33cda0-d358-47d3-8f73-fac395b8b627\") "
Dec 06 06:12:40 crc kubenswrapper[4958]: I1206 06:12:40.797810 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7f33cda0-d358-47d3-8f73-fac395b8b627-inventory\") pod \"7f33cda0-d358-47d3-8f73-fac395b8b627\" (UID: \"7f33cda0-d358-47d3-8f73-fac395b8b627\") "
Dec 06 06:12:40 crc kubenswrapper[4958]: I1206 06:12:40.803463 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f33cda0-d358-47d3-8f73-fac395b8b627-kube-api-access-6jg9s" (OuterVolumeSpecName: "kube-api-access-6jg9s") pod "7f33cda0-d358-47d3-8f73-fac395b8b627" (UID: "7f33cda0-d358-47d3-8f73-fac395b8b627"). InnerVolumeSpecName "kube-api-access-6jg9s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 06:12:40 crc kubenswrapper[4958]: I1206 06:12:40.803511 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f33cda0-d358-47d3-8f73-fac395b8b627-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "7f33cda0-d358-47d3-8f73-fac395b8b627" (UID: "7f33cda0-d358-47d3-8f73-fac395b8b627"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:12:40 crc kubenswrapper[4958]: I1206 06:12:40.824316 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7f33cda0-d358-47d3-8f73-fac395b8b627-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "7f33cda0-d358-47d3-8f73-fac395b8b627" (UID: "7f33cda0-d358-47d3-8f73-fac395b8b627"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:12:40 crc kubenswrapper[4958]: I1206 06:12:40.830860 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f33cda0-d358-47d3-8f73-fac395b8b627-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "7f33cda0-d358-47d3-8f73-fac395b8b627" (UID: "7f33cda0-d358-47d3-8f73-fac395b8b627"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:12:40 crc kubenswrapper[4958]: I1206 06:12:40.841028 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7f33cda0-d358-47d3-8f73-fac395b8b627-inventory" (OuterVolumeSpecName: "inventory") pod "7f33cda0-d358-47d3-8f73-fac395b8b627" (UID: "7f33cda0-d358-47d3-8f73-fac395b8b627"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:12:40 crc kubenswrapper[4958]: I1206 06:12:40.902322 4958 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7f33cda0-d358-47d3-8f73-fac395b8b627-inventory\") on node \"crc\" DevicePath \"\"" Dec 06 06:12:40 crc kubenswrapper[4958]: I1206 06:12:40.902386 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6jg9s\" (UniqueName: \"kubernetes.io/projected/7f33cda0-d358-47d3-8f73-fac395b8b627-kube-api-access-6jg9s\") on node \"crc\" DevicePath \"\"" Dec 06 06:12:40 crc kubenswrapper[4958]: I1206 06:12:40.902422 4958 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7f33cda0-d358-47d3-8f73-fac395b8b627-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 06:12:40 crc kubenswrapper[4958]: I1206 06:12:40.902431 4958 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/7f33cda0-d358-47d3-8f73-fac395b8b627-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Dec 06 06:12:40 crc kubenswrapper[4958]: I1206 06:12:40.902439 4958 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7f33cda0-d358-47d3-8f73-fac395b8b627-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 06 06:12:41 crc kubenswrapper[4958]: I1206 06:12:41.227674 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t4kct" event={"ID":"7f33cda0-d358-47d3-8f73-fac395b8b627","Type":"ContainerDied","Data":"19f9b7da799e88e1f88ac41d55211a90d0b4976299167a41bda2a34973dddc78"} Dec 06 06:12:41 crc kubenswrapper[4958]: I1206 06:12:41.227712 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="19f9b7da799e88e1f88ac41d55211a90d0b4976299167a41bda2a34973dddc78" Dec 06 06:12:41 crc kubenswrapper[4958]: I1206 06:12:41.227794 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-t4kct" Dec 06 06:12:41 crc kubenswrapper[4958]: I1206 06:12:41.335295 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4j4g7"] Dec 06 06:12:41 crc kubenswrapper[4958]: E1206 06:12:41.335999 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f33cda0-d358-47d3-8f73-fac395b8b627" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Dec 06 06:12:41 crc kubenswrapper[4958]: I1206 06:12:41.336016 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f33cda0-d358-47d3-8f73-fac395b8b627" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Dec 06 06:12:41 crc kubenswrapper[4958]: I1206 06:12:41.336252 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f33cda0-d358-47d3-8f73-fac395b8b627" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Dec 06 06:12:41 crc kubenswrapper[4958]: I1206 06:12:41.337071 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4j4g7" Dec 06 06:12:41 crc kubenswrapper[4958]: I1206 06:12:41.340697 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 06 06:12:41 crc kubenswrapper[4958]: I1206 06:12:41.340743 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dqr5b" Dec 06 06:12:41 crc kubenswrapper[4958]: I1206 06:12:41.340913 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Dec 06 06:12:41 crc kubenswrapper[4958]: I1206 06:12:41.347103 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4j4g7"] Dec 06 06:12:41 crc kubenswrapper[4958]: I1206 06:12:41.358386 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 06 06:12:41 crc kubenswrapper[4958]: I1206 06:12:41.358518 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Dec 06 06:12:41 crc kubenswrapper[4958]: I1206 06:12:41.358729 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 06 06:12:41 crc kubenswrapper[4958]: I1206 06:12:41.411614 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/64ab61d3-8a40-4d22-bae0-25f7dd034eda-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-4j4g7\" (UID: \"64ab61d3-8a40-4d22-bae0-25f7dd034eda\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4j4g7" Dec 06 06:12:41 crc kubenswrapper[4958]: I1206 06:12:41.411735 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64ab61d3-8a40-4d22-bae0-25f7dd034eda-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-4j4g7\" (UID: \"64ab61d3-8a40-4d22-bae0-25f7dd034eda\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4j4g7" Dec 06 06:12:41 crc kubenswrapper[4958]: I1206 06:12:41.411797 4958 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/64ab61d3-8a40-4d22-bae0-25f7dd034eda-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-4j4g7\" (UID: \"64ab61d3-8a40-4d22-bae0-25f7dd034eda\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4j4g7" Dec 06 06:12:41 crc kubenswrapper[4958]: I1206 06:12:41.411884 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/64ab61d3-8a40-4d22-bae0-25f7dd034eda-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-4j4g7\" (UID: \"64ab61d3-8a40-4d22-bae0-25f7dd034eda\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4j4g7" Dec 06 06:12:41 crc kubenswrapper[4958]: I1206 06:12:41.412036 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/64ab61d3-8a40-4d22-bae0-25f7dd034eda-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-4j4g7\" (UID: \"64ab61d3-8a40-4d22-bae0-25f7dd034eda\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4j4g7" Dec 06 06:12:41 crc kubenswrapper[4958]: I1206 06:12:41.412066 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sljqh\" (UniqueName: \"kubernetes.io/projected/64ab61d3-8a40-4d22-bae0-25f7dd034eda-kube-api-access-sljqh\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-4j4g7\" (UID: \"64ab61d3-8a40-4d22-bae0-25f7dd034eda\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4j4g7" Dec 06 06:12:41 crc kubenswrapper[4958]: I1206 06:12:41.512993 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/64ab61d3-8a40-4d22-bae0-25f7dd034eda-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-4j4g7\" (UID: \"64ab61d3-8a40-4d22-bae0-25f7dd034eda\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4j4g7" Dec 06 06:12:41 crc kubenswrapper[4958]: I1206 06:12:41.513077 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/64ab61d3-8a40-4d22-bae0-25f7dd034eda-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-4j4g7\" (UID: \"64ab61d3-8a40-4d22-bae0-25f7dd034eda\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4j4g7" Dec 06 06:12:41 crc kubenswrapper[4958]: I1206 06:12:41.513101 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sljqh\" (UniqueName: \"kubernetes.io/projected/64ab61d3-8a40-4d22-bae0-25f7dd034eda-kube-api-access-sljqh\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-4j4g7\" (UID: \"64ab61d3-8a40-4d22-bae0-25f7dd034eda\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4j4g7" Dec 06 06:12:41 crc kubenswrapper[4958]: I1206 06:12:41.513142 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/64ab61d3-8a40-4d22-bae0-25f7dd034eda-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-4j4g7\" (UID: 
\"64ab61d3-8a40-4d22-bae0-25f7dd034eda\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4j4g7" Dec 06 06:12:41 crc kubenswrapper[4958]: I1206 06:12:41.513194 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64ab61d3-8a40-4d22-bae0-25f7dd034eda-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-4j4g7\" (UID: \"64ab61d3-8a40-4d22-bae0-25f7dd034eda\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4j4g7" Dec 06 06:12:41 crc kubenswrapper[4958]: I1206 06:12:41.513231 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/64ab61d3-8a40-4d22-bae0-25f7dd034eda-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-4j4g7\" (UID: \"64ab61d3-8a40-4d22-bae0-25f7dd034eda\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4j4g7" Dec 06 06:12:41 crc kubenswrapper[4958]: I1206 06:12:41.524463 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/64ab61d3-8a40-4d22-bae0-25f7dd034eda-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-4j4g7\" (UID: \"64ab61d3-8a40-4d22-bae0-25f7dd034eda\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4j4g7" Dec 06 06:12:41 crc kubenswrapper[4958]: I1206 06:12:41.524914 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/64ab61d3-8a40-4d22-bae0-25f7dd034eda-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-4j4g7\" (UID: \"64ab61d3-8a40-4d22-bae0-25f7dd034eda\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4j4g7" Dec 06 06:12:41 crc kubenswrapper[4958]: I1206 06:12:41.525095 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/64ab61d3-8a40-4d22-bae0-25f7dd034eda-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-4j4g7\" (UID: \"64ab61d3-8a40-4d22-bae0-25f7dd034eda\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4j4g7" Dec 06 06:12:41 crc kubenswrapper[4958]: I1206 06:12:41.525342 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64ab61d3-8a40-4d22-bae0-25f7dd034eda-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-4j4g7\" (UID: \"64ab61d3-8a40-4d22-bae0-25f7dd034eda\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4j4g7" Dec 06 06:12:41 crc kubenswrapper[4958]: I1206 06:12:41.525730 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/64ab61d3-8a40-4d22-bae0-25f7dd034eda-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-4j4g7\" (UID: \"64ab61d3-8a40-4d22-bae0-25f7dd034eda\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4j4g7" Dec 06 06:12:41 crc kubenswrapper[4958]: I1206 06:12:41.544767 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sljqh\" 
(UniqueName: \"kubernetes.io/projected/64ab61d3-8a40-4d22-bae0-25f7dd034eda-kube-api-access-sljqh\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-4j4g7\" (UID: \"64ab61d3-8a40-4d22-bae0-25f7dd034eda\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4j4g7" Dec 06 06:12:41 crc kubenswrapper[4958]: I1206 06:12:41.664240 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4j4g7" Dec 06 06:12:42 crc kubenswrapper[4958]: I1206 06:12:42.211087 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4j4g7"] Dec 06 06:12:42 crc kubenswrapper[4958]: I1206 06:12:42.238194 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4j4g7" event={"ID":"64ab61d3-8a40-4d22-bae0-25f7dd034eda","Type":"ContainerStarted","Data":"2f9926003530c43ec71dc43ed1409938e679cb0f4d9d13042a0cceb48838b294"} Dec 06 06:12:43 crc kubenswrapper[4958]: I1206 06:12:43.249502 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4j4g7" event={"ID":"64ab61d3-8a40-4d22-bae0-25f7dd034eda","Type":"ContainerStarted","Data":"c958defdff223e6c765b3b9145fbf3c4aac224ae017e1473829e49588f3d2ae9"} Dec 06 06:12:43 crc kubenswrapper[4958]: I1206 06:12:43.269703 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4j4g7" podStartSLOduration=1.716069435 podStartE2EDuration="2.269682984s" podCreationTimestamp="2025-12-06 06:12:41 +0000 UTC" firstStartedPulling="2025-12-06 06:12:42.212002119 +0000 UTC m=+2672.745772892" lastFinishedPulling="2025-12-06 06:12:42.765615678 +0000 UTC m=+2673.299386441" observedRunningTime="2025-12-06 06:12:43.26583079 +0000 UTC m=+2673.799601553" watchObservedRunningTime="2025-12-06 06:12:43.269682984 +0000 UTC m=+2673.803453747" Dec 06 06:12:53 crc kubenswrapper[4958]: I1206 06:12:53.762076 4958 scope.go:117] "RemoveContainer" containerID="78722ed8e9c7d0ba1e717d3279a2f99d9ad86a7c8ebd7a1c9aa99d270d4637d9" Dec 06 06:12:53 crc kubenswrapper[4958]: E1206 06:12:53.763017 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 06:13:04 crc kubenswrapper[4958]: I1206 06:13:04.762153 4958 scope.go:117] "RemoveContainer" containerID="78722ed8e9c7d0ba1e717d3279a2f99d9ad86a7c8ebd7a1c9aa99d270d4637d9" Dec 06 06:13:04 crc kubenswrapper[4958]: E1206 06:13:04.763027 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 06:13:16 crc kubenswrapper[4958]: I1206 06:13:16.762572 4958 scope.go:117] "RemoveContainer" 
containerID="78722ed8e9c7d0ba1e717d3279a2f99d9ad86a7c8ebd7a1c9aa99d270d4637d9" Dec 06 06:13:16 crc kubenswrapper[4958]: E1206 06:13:16.763374 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 06:13:28 crc kubenswrapper[4958]: I1206 06:13:28.763079 4958 scope.go:117] "RemoveContainer" containerID="78722ed8e9c7d0ba1e717d3279a2f99d9ad86a7c8ebd7a1c9aa99d270d4637d9" Dec 06 06:13:28 crc kubenswrapper[4958]: E1206 06:13:28.764040 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 06:13:32 crc kubenswrapper[4958]: I1206 06:13:32.750757 4958 generic.go:334] "Generic (PLEG): container finished" podID="64ab61d3-8a40-4d22-bae0-25f7dd034eda" containerID="c958defdff223e6c765b3b9145fbf3c4aac224ae017e1473829e49588f3d2ae9" exitCode=0 Dec 06 06:13:32 crc kubenswrapper[4958]: I1206 06:13:32.750832 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4j4g7" event={"ID":"64ab61d3-8a40-4d22-bae0-25f7dd034eda","Type":"ContainerDied","Data":"c958defdff223e6c765b3b9145fbf3c4aac224ae017e1473829e49588f3d2ae9"} Dec 06 06:13:34 crc kubenswrapper[4958]: I1206 06:13:34.281725 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4j4g7" Dec 06 06:13:34 crc kubenswrapper[4958]: I1206 06:13:34.368293 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/64ab61d3-8a40-4d22-bae0-25f7dd034eda-ssh-key\") pod \"64ab61d3-8a40-4d22-bae0-25f7dd034eda\" (UID: \"64ab61d3-8a40-4d22-bae0-25f7dd034eda\") " Dec 06 06:13:34 crc kubenswrapper[4958]: I1206 06:13:34.368440 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sljqh\" (UniqueName: \"kubernetes.io/projected/64ab61d3-8a40-4d22-bae0-25f7dd034eda-kube-api-access-sljqh\") pod \"64ab61d3-8a40-4d22-bae0-25f7dd034eda\" (UID: \"64ab61d3-8a40-4d22-bae0-25f7dd034eda\") " Dec 06 06:13:34 crc kubenswrapper[4958]: I1206 06:13:34.368464 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/64ab61d3-8a40-4d22-bae0-25f7dd034eda-neutron-ovn-metadata-agent-neutron-config-0\") pod \"64ab61d3-8a40-4d22-bae0-25f7dd034eda\" (UID: \"64ab61d3-8a40-4d22-bae0-25f7dd034eda\") " Dec 06 06:13:34 crc kubenswrapper[4958]: I1206 06:13:34.368537 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64ab61d3-8a40-4d22-bae0-25f7dd034eda-neutron-metadata-combined-ca-bundle\") pod \"64ab61d3-8a40-4d22-bae0-25f7dd034eda\" (UID: \"64ab61d3-8a40-4d22-bae0-25f7dd034eda\") " Dec 06 06:13:34 crc kubenswrapper[4958]: I1206 06:13:34.368599 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/64ab61d3-8a40-4d22-bae0-25f7dd034eda-inventory\") pod \"64ab61d3-8a40-4d22-bae0-25f7dd034eda\" (UID: \"64ab61d3-8a40-4d22-bae0-25f7dd034eda\") " Dec 06 06:13:34 crc kubenswrapper[4958]: I1206 06:13:34.368658 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/64ab61d3-8a40-4d22-bae0-25f7dd034eda-nova-metadata-neutron-config-0\") pod \"64ab61d3-8a40-4d22-bae0-25f7dd034eda\" (UID: \"64ab61d3-8a40-4d22-bae0-25f7dd034eda\") " Dec 06 06:13:34 crc kubenswrapper[4958]: I1206 06:13:34.373949 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/64ab61d3-8a40-4d22-bae0-25f7dd034eda-kube-api-access-sljqh" (OuterVolumeSpecName: "kube-api-access-sljqh") pod "64ab61d3-8a40-4d22-bae0-25f7dd034eda" (UID: "64ab61d3-8a40-4d22-bae0-25f7dd034eda"). InnerVolumeSpecName "kube-api-access-sljqh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:13:34 crc kubenswrapper[4958]: I1206 06:13:34.375723 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64ab61d3-8a40-4d22-bae0-25f7dd034eda-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "64ab61d3-8a40-4d22-bae0-25f7dd034eda" (UID: "64ab61d3-8a40-4d22-bae0-25f7dd034eda"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:13:34 crc kubenswrapper[4958]: I1206 06:13:34.397090 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64ab61d3-8a40-4d22-bae0-25f7dd034eda-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "64ab61d3-8a40-4d22-bae0-25f7dd034eda" (UID: "64ab61d3-8a40-4d22-bae0-25f7dd034eda"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:13:34 crc kubenswrapper[4958]: I1206 06:13:34.399654 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64ab61d3-8a40-4d22-bae0-25f7dd034eda-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "64ab61d3-8a40-4d22-bae0-25f7dd034eda" (UID: "64ab61d3-8a40-4d22-bae0-25f7dd034eda"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:13:34 crc kubenswrapper[4958]: I1206 06:13:34.401580 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64ab61d3-8a40-4d22-bae0-25f7dd034eda-inventory" (OuterVolumeSpecName: "inventory") pod "64ab61d3-8a40-4d22-bae0-25f7dd034eda" (UID: "64ab61d3-8a40-4d22-bae0-25f7dd034eda"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:13:34 crc kubenswrapper[4958]: I1206 06:13:34.419396 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/64ab61d3-8a40-4d22-bae0-25f7dd034eda-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "64ab61d3-8a40-4d22-bae0-25f7dd034eda" (UID: "64ab61d3-8a40-4d22-bae0-25f7dd034eda"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:13:34 crc kubenswrapper[4958]: I1206 06:13:34.471089 4958 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/64ab61d3-8a40-4d22-bae0-25f7dd034eda-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 06 06:13:34 crc kubenswrapper[4958]: I1206 06:13:34.471128 4958 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/64ab61d3-8a40-4d22-bae0-25f7dd034eda-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Dec 06 06:13:34 crc kubenswrapper[4958]: I1206 06:13:34.471140 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sljqh\" (UniqueName: \"kubernetes.io/projected/64ab61d3-8a40-4d22-bae0-25f7dd034eda-kube-api-access-sljqh\") on node \"crc\" DevicePath \"\"" Dec 06 06:13:34 crc kubenswrapper[4958]: I1206 06:13:34.471150 4958 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/64ab61d3-8a40-4d22-bae0-25f7dd034eda-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 06:13:34 crc kubenswrapper[4958]: I1206 06:13:34.471161 4958 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/64ab61d3-8a40-4d22-bae0-25f7dd034eda-inventory\") on node \"crc\" DevicePath \"\"" Dec 06 06:13:34 crc kubenswrapper[4958]: I1206 06:13:34.471171 4958 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/64ab61d3-8a40-4d22-bae0-25f7dd034eda-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Dec 06 06:13:34 crc kubenswrapper[4958]: I1206 06:13:34.768389 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4j4g7" event={"ID":"64ab61d3-8a40-4d22-bae0-25f7dd034eda","Type":"ContainerDied","Data":"2f9926003530c43ec71dc43ed1409938e679cb0f4d9d13042a0cceb48838b294"} Dec 06 06:13:34 crc kubenswrapper[4958]: I1206 06:13:34.768424 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f9926003530c43ec71dc43ed1409938e679cb0f4d9d13042a0cceb48838b294" Dec 06 06:13:34 crc kubenswrapper[4958]: I1206 06:13:34.768442 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-4j4g7" Dec 06 06:13:34 crc kubenswrapper[4958]: I1206 06:13:34.866727 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-m5rrv"] Dec 06 06:13:34 crc kubenswrapper[4958]: E1206 06:13:34.867431 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="64ab61d3-8a40-4d22-bae0-25f7dd034eda" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Dec 06 06:13:34 crc kubenswrapper[4958]: I1206 06:13:34.867455 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="64ab61d3-8a40-4d22-bae0-25f7dd034eda" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Dec 06 06:13:34 crc kubenswrapper[4958]: I1206 06:13:34.867681 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="64ab61d3-8a40-4d22-bae0-25f7dd034eda" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Dec 06 06:13:34 crc kubenswrapper[4958]: I1206 06:13:34.868541 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-m5rrv" Dec 06 06:13:34 crc kubenswrapper[4958]: I1206 06:13:34.873111 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Dec 06 06:13:34 crc kubenswrapper[4958]: I1206 06:13:34.873142 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 06 06:13:34 crc kubenswrapper[4958]: I1206 06:13:34.873299 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 06 06:13:34 crc kubenswrapper[4958]: I1206 06:13:34.873398 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 06 06:13:34 crc kubenswrapper[4958]: I1206 06:13:34.873496 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dqr5b" Dec 06 06:13:34 crc kubenswrapper[4958]: I1206 06:13:34.898236 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-m5rrv"] Dec 06 06:13:34 crc kubenswrapper[4958]: I1206 06:13:34.981242 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5378d94e-8c86-4393-9dc6-dda81d635c12-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-m5rrv\" (UID: \"5378d94e-8c86-4393-9dc6-dda81d635c12\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-m5rrv" Dec 06 06:13:34 crc kubenswrapper[4958]: I1206 06:13:34.981318 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5378d94e-8c86-4393-9dc6-dda81d635c12-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-m5rrv\" (UID: \"5378d94e-8c86-4393-9dc6-dda81d635c12\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-m5rrv" Dec 06 06:13:34 crc kubenswrapper[4958]: I1206 06:13:34.981343 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sc4zd\" (UniqueName: \"kubernetes.io/projected/5378d94e-8c86-4393-9dc6-dda81d635c12-kube-api-access-sc4zd\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-m5rrv\" (UID: \"5378d94e-8c86-4393-9dc6-dda81d635c12\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-m5rrv" Dec 06 06:13:34 crc kubenswrapper[4958]: I1206 06:13:34.981432 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/5378d94e-8c86-4393-9dc6-dda81d635c12-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-m5rrv\" (UID: \"5378d94e-8c86-4393-9dc6-dda81d635c12\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-m5rrv" Dec 06 06:13:34 crc kubenswrapper[4958]: I1206 06:13:34.981672 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5378d94e-8c86-4393-9dc6-dda81d635c12-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-m5rrv\" (UID: \"5378d94e-8c86-4393-9dc6-dda81d635c12\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-m5rrv" Dec 06 06:13:35 crc kubenswrapper[4958]: I1206 06:13:35.083793 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5378d94e-8c86-4393-9dc6-dda81d635c12-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-m5rrv\" (UID: \"5378d94e-8c86-4393-9dc6-dda81d635c12\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-m5rrv" Dec 06 06:13:35 crc kubenswrapper[4958]: I1206 06:13:35.083853 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5378d94e-8c86-4393-9dc6-dda81d635c12-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-m5rrv\" (UID: \"5378d94e-8c86-4393-9dc6-dda81d635c12\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-m5rrv" Dec 06 06:13:35 crc kubenswrapper[4958]: I1206 06:13:35.083881 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sc4zd\" (UniqueName: \"kubernetes.io/projected/5378d94e-8c86-4393-9dc6-dda81d635c12-kube-api-access-sc4zd\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-m5rrv\" (UID: \"5378d94e-8c86-4393-9dc6-dda81d635c12\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-m5rrv" Dec 06 06:13:35 crc kubenswrapper[4958]: I1206 06:13:35.083925 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/5378d94e-8c86-4393-9dc6-dda81d635c12-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-m5rrv\" (UID: \"5378d94e-8c86-4393-9dc6-dda81d635c12\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-m5rrv" Dec 06 06:13:35 crc kubenswrapper[4958]: I1206 06:13:35.084037 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5378d94e-8c86-4393-9dc6-dda81d635c12-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-m5rrv\" (UID: \"5378d94e-8c86-4393-9dc6-dda81d635c12\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-m5rrv" Dec 06 06:13:35 crc kubenswrapper[4958]: I1206 06:13:35.088222 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5378d94e-8c86-4393-9dc6-dda81d635c12-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-m5rrv\" (UID: \"5378d94e-8c86-4393-9dc6-dda81d635c12\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-m5rrv" Dec 06 06:13:35 crc kubenswrapper[4958]: I1206 06:13:35.089180 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5378d94e-8c86-4393-9dc6-dda81d635c12-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-m5rrv\" (UID: \"5378d94e-8c86-4393-9dc6-dda81d635c12\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-m5rrv" Dec 06 06:13:35 crc kubenswrapper[4958]: I1206 06:13:35.091115 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5378d94e-8c86-4393-9dc6-dda81d635c12-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-m5rrv\" (UID: \"5378d94e-8c86-4393-9dc6-dda81d635c12\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-m5rrv" Dec 06 06:13:35 crc kubenswrapper[4958]: I1206 06:13:35.091767 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/5378d94e-8c86-4393-9dc6-dda81d635c12-libvirt-secret-0\") pod 
\"libvirt-edpm-deployment-openstack-edpm-ipam-m5rrv\" (UID: \"5378d94e-8c86-4393-9dc6-dda81d635c12\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-m5rrv" Dec 06 06:13:35 crc kubenswrapper[4958]: I1206 06:13:35.105403 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sc4zd\" (UniqueName: \"kubernetes.io/projected/5378d94e-8c86-4393-9dc6-dda81d635c12-kube-api-access-sc4zd\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-m5rrv\" (UID: \"5378d94e-8c86-4393-9dc6-dda81d635c12\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-m5rrv" Dec 06 06:13:35 crc kubenswrapper[4958]: I1206 06:13:35.207266 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-m5rrv" Dec 06 06:13:35 crc kubenswrapper[4958]: I1206 06:13:35.728632 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-m5rrv"] Dec 06 06:13:35 crc kubenswrapper[4958]: I1206 06:13:35.734435 4958 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 06 06:13:35 crc kubenswrapper[4958]: I1206 06:13:35.791185 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-m5rrv" event={"ID":"5378d94e-8c86-4393-9dc6-dda81d635c12","Type":"ContainerStarted","Data":"ac9d80c81d8bb1c408383c9152eb81fad1cda9dcad7ab193d421cdf3d006085b"} Dec 06 06:13:36 crc kubenswrapper[4958]: I1206 06:13:36.143936 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wj9qc"] Dec 06 06:13:36 crc kubenswrapper[4958]: I1206 06:13:36.146677 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wj9qc" Dec 06 06:13:36 crc kubenswrapper[4958]: I1206 06:13:36.155499 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wj9qc"] Dec 06 06:13:36 crc kubenswrapper[4958]: I1206 06:13:36.314532 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vns2p\" (UniqueName: \"kubernetes.io/projected/b1123209-44b8-4e5d-81e4-d188c4d45588-kube-api-access-vns2p\") pod \"redhat-marketplace-wj9qc\" (UID: \"b1123209-44b8-4e5d-81e4-d188c4d45588\") " pod="openshift-marketplace/redhat-marketplace-wj9qc" Dec 06 06:13:36 crc kubenswrapper[4958]: I1206 06:13:36.314699 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1123209-44b8-4e5d-81e4-d188c4d45588-catalog-content\") pod \"redhat-marketplace-wj9qc\" (UID: \"b1123209-44b8-4e5d-81e4-d188c4d45588\") " pod="openshift-marketplace/redhat-marketplace-wj9qc" Dec 06 06:13:36 crc kubenswrapper[4958]: I1206 06:13:36.314837 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1123209-44b8-4e5d-81e4-d188c4d45588-utilities\") pod \"redhat-marketplace-wj9qc\" (UID: \"b1123209-44b8-4e5d-81e4-d188c4d45588\") " pod="openshift-marketplace/redhat-marketplace-wj9qc" Dec 06 06:13:36 crc kubenswrapper[4958]: I1206 06:13:36.416902 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1123209-44b8-4e5d-81e4-d188c4d45588-utilities\") pod \"redhat-marketplace-wj9qc\" (UID: \"b1123209-44b8-4e5d-81e4-d188c4d45588\") " pod="openshift-marketplace/redhat-marketplace-wj9qc" Dec 06 06:13:36 crc kubenswrapper[4958]: I1206 06:13:36.417015 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vns2p\" (UniqueName: \"kubernetes.io/projected/b1123209-44b8-4e5d-81e4-d188c4d45588-kube-api-access-vns2p\") pod \"redhat-marketplace-wj9qc\" (UID: \"b1123209-44b8-4e5d-81e4-d188c4d45588\") " pod="openshift-marketplace/redhat-marketplace-wj9qc" Dec 06 06:13:36 crc kubenswrapper[4958]: I1206 06:13:36.417144 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1123209-44b8-4e5d-81e4-d188c4d45588-catalog-content\") pod \"redhat-marketplace-wj9qc\" (UID: \"b1123209-44b8-4e5d-81e4-d188c4d45588\") " pod="openshift-marketplace/redhat-marketplace-wj9qc" Dec 06 06:13:36 crc kubenswrapper[4958]: I1206 06:13:36.417719 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1123209-44b8-4e5d-81e4-d188c4d45588-catalog-content\") pod \"redhat-marketplace-wj9qc\" (UID: \"b1123209-44b8-4e5d-81e4-d188c4d45588\") " pod="openshift-marketplace/redhat-marketplace-wj9qc" Dec 06 06:13:36 crc kubenswrapper[4958]: I1206 06:13:36.417873 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1123209-44b8-4e5d-81e4-d188c4d45588-utilities\") pod \"redhat-marketplace-wj9qc\" (UID: \"b1123209-44b8-4e5d-81e4-d188c4d45588\") " pod="openshift-marketplace/redhat-marketplace-wj9qc" Dec 06 06:13:36 crc kubenswrapper[4958]: I1206 06:13:36.437493 4958 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-vns2p\" (UniqueName: \"kubernetes.io/projected/b1123209-44b8-4e5d-81e4-d188c4d45588-kube-api-access-vns2p\") pod \"redhat-marketplace-wj9qc\" (UID: \"b1123209-44b8-4e5d-81e4-d188c4d45588\") " pod="openshift-marketplace/redhat-marketplace-wj9qc" Dec 06 06:13:36 crc kubenswrapper[4958]: I1206 06:13:36.527135 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wj9qc" Dec 06 06:13:36 crc kubenswrapper[4958]: I1206 06:13:36.801284 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-m5rrv" event={"ID":"5378d94e-8c86-4393-9dc6-dda81d635c12","Type":"ContainerStarted","Data":"e81933d7882e64532286f00bfe20e053722865c7b45b21ba605546e5eeca61f8"} Dec 06 06:13:36 crc kubenswrapper[4958]: I1206 06:13:36.840116 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-m5rrv" podStartSLOduration=2.293473387 podStartE2EDuration="2.840095267s" podCreationTimestamp="2025-12-06 06:13:34 +0000 UTC" firstStartedPulling="2025-12-06 06:13:35.734218129 +0000 UTC m=+2726.267988892" lastFinishedPulling="2025-12-06 06:13:36.280840009 +0000 UTC m=+2726.814610772" observedRunningTime="2025-12-06 06:13:36.830723936 +0000 UTC m=+2727.364494699" watchObservedRunningTime="2025-12-06 06:13:36.840095267 +0000 UTC m=+2727.373866030" Dec 06 06:13:37 crc kubenswrapper[4958]: I1206 06:13:37.004210 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wj9qc"] Dec 06 06:13:37 crc kubenswrapper[4958]: W1206 06:13:37.013071 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1123209_44b8_4e5d_81e4_d188c4d45588.slice/crio-9af9ea53dbbce278b0cdca281f7c424af881d9a5a294cd1edca4283b5c329ecb WatchSource:0}: Error finding container 9af9ea53dbbce278b0cdca281f7c424af881d9a5a294cd1edca4283b5c329ecb: Status 404 returned error can't find the container with id 9af9ea53dbbce278b0cdca281f7c424af881d9a5a294cd1edca4283b5c329ecb Dec 06 06:13:37 crc kubenswrapper[4958]: I1206 06:13:37.812823 4958 generic.go:334] "Generic (PLEG): container finished" podID="b1123209-44b8-4e5d-81e4-d188c4d45588" containerID="2414c92fb7c79ccc6fa64b6f054bf96b65d653980afebde3d7c2d06bb9f97018" exitCode=0 Dec 06 06:13:37 crc kubenswrapper[4958]: I1206 06:13:37.812934 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wj9qc" event={"ID":"b1123209-44b8-4e5d-81e4-d188c4d45588","Type":"ContainerDied","Data":"2414c92fb7c79ccc6fa64b6f054bf96b65d653980afebde3d7c2d06bb9f97018"} Dec 06 06:13:37 crc kubenswrapper[4958]: I1206 06:13:37.812987 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wj9qc" event={"ID":"b1123209-44b8-4e5d-81e4-d188c4d45588","Type":"ContainerStarted","Data":"9af9ea53dbbce278b0cdca281f7c424af881d9a5a294cd1edca4283b5c329ecb"} Dec 06 06:13:40 crc kubenswrapper[4958]: I1206 06:13:40.763058 4958 scope.go:117] "RemoveContainer" containerID="78722ed8e9c7d0ba1e717d3279a2f99d9ad86a7c8ebd7a1c9aa99d270d4637d9" Dec 06 06:13:40 crc kubenswrapper[4958]: I1206 06:13:40.842604 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wj9qc" 
event={"ID":"b1123209-44b8-4e5d-81e4-d188c4d45588","Type":"ContainerStarted","Data":"aee431ddf0673441c822d3bf725957cd2d8ef1dd687ebb39426225ccc9abfe7f"} Dec 06 06:13:41 crc kubenswrapper[4958]: I1206 06:13:41.857646 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" event={"ID":"c13528c0-da5d-4d55-9155-2c29c33edfc4","Type":"ContainerStarted","Data":"c86c9d4c02c43f332686713b14d9e13d995299506394c0fdebba92a66268415a"} Dec 06 06:13:41 crc kubenswrapper[4958]: I1206 06:13:41.863718 4958 generic.go:334] "Generic (PLEG): container finished" podID="b1123209-44b8-4e5d-81e4-d188c4d45588" containerID="aee431ddf0673441c822d3bf725957cd2d8ef1dd687ebb39426225ccc9abfe7f" exitCode=0 Dec 06 06:13:41 crc kubenswrapper[4958]: I1206 06:13:41.863752 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wj9qc" event={"ID":"b1123209-44b8-4e5d-81e4-d188c4d45588","Type":"ContainerDied","Data":"aee431ddf0673441c822d3bf725957cd2d8ef1dd687ebb39426225ccc9abfe7f"} Dec 06 06:13:42 crc kubenswrapper[4958]: I1206 06:13:42.873869 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wj9qc" event={"ID":"b1123209-44b8-4e5d-81e4-d188c4d45588","Type":"ContainerStarted","Data":"20b0ee010a0b82354454f0ca25097b3144a549fb93d23f75169ae128994b7593"} Dec 06 06:13:42 crc kubenswrapper[4958]: I1206 06:13:42.896083 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wj9qc" podStartSLOduration=2.272980906 podStartE2EDuration="6.896065732s" podCreationTimestamp="2025-12-06 06:13:36 +0000 UTC" firstStartedPulling="2025-12-06 06:13:37.81486823 +0000 UTC m=+2728.348638993" lastFinishedPulling="2025-12-06 06:13:42.437953056 +0000 UTC m=+2732.971723819" observedRunningTime="2025-12-06 06:13:42.887970195 +0000 UTC m=+2733.421740958" watchObservedRunningTime="2025-12-06 06:13:42.896065732 +0000 UTC m=+2733.429836495" Dec 06 06:13:46 crc kubenswrapper[4958]: I1206 06:13:46.527352 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wj9qc" Dec 06 06:13:46 crc kubenswrapper[4958]: I1206 06:13:46.527907 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wj9qc" Dec 06 06:13:46 crc kubenswrapper[4958]: I1206 06:13:46.575949 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wj9qc" Dec 06 06:13:50 crc kubenswrapper[4958]: I1206 06:13:50.768574 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xxsvg"] Dec 06 06:13:50 crc kubenswrapper[4958]: I1206 06:13:50.771080 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xxsvg" Dec 06 06:13:50 crc kubenswrapper[4958]: I1206 06:13:50.789824 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xxsvg"] Dec 06 06:13:50 crc kubenswrapper[4958]: I1206 06:13:50.838976 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7z4km\" (UniqueName: \"kubernetes.io/projected/fdefabdb-ec06-491e-afea-b95ac109ac4e-kube-api-access-7z4km\") pod \"certified-operators-xxsvg\" (UID: \"fdefabdb-ec06-491e-afea-b95ac109ac4e\") " pod="openshift-marketplace/certified-operators-xxsvg" Dec 06 06:13:50 crc kubenswrapper[4958]: I1206 06:13:50.839161 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdefabdb-ec06-491e-afea-b95ac109ac4e-utilities\") pod \"certified-operators-xxsvg\" (UID: \"fdefabdb-ec06-491e-afea-b95ac109ac4e\") " pod="openshift-marketplace/certified-operators-xxsvg" Dec 06 06:13:50 crc kubenswrapper[4958]: I1206 06:13:50.839200 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdefabdb-ec06-491e-afea-b95ac109ac4e-catalog-content\") pod \"certified-operators-xxsvg\" (UID: \"fdefabdb-ec06-491e-afea-b95ac109ac4e\") " pod="openshift-marketplace/certified-operators-xxsvg" Dec 06 06:13:50 crc kubenswrapper[4958]: I1206 06:13:50.940876 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdefabdb-ec06-491e-afea-b95ac109ac4e-utilities\") pod \"certified-operators-xxsvg\" (UID: \"fdefabdb-ec06-491e-afea-b95ac109ac4e\") " pod="openshift-marketplace/certified-operators-xxsvg" Dec 06 06:13:50 crc kubenswrapper[4958]: I1206 06:13:50.940924 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdefabdb-ec06-491e-afea-b95ac109ac4e-catalog-content\") pod \"certified-operators-xxsvg\" (UID: \"fdefabdb-ec06-491e-afea-b95ac109ac4e\") " pod="openshift-marketplace/certified-operators-xxsvg" Dec 06 06:13:50 crc kubenswrapper[4958]: I1206 06:13:50.941054 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7z4km\" (UniqueName: \"kubernetes.io/projected/fdefabdb-ec06-491e-afea-b95ac109ac4e-kube-api-access-7z4km\") pod \"certified-operators-xxsvg\" (UID: \"fdefabdb-ec06-491e-afea-b95ac109ac4e\") " pod="openshift-marketplace/certified-operators-xxsvg" Dec 06 06:13:50 crc kubenswrapper[4958]: I1206 06:13:50.942377 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdefabdb-ec06-491e-afea-b95ac109ac4e-catalog-content\") pod \"certified-operators-xxsvg\" (UID: \"fdefabdb-ec06-491e-afea-b95ac109ac4e\") " pod="openshift-marketplace/certified-operators-xxsvg" Dec 06 06:13:50 crc kubenswrapper[4958]: I1206 06:13:50.942635 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdefabdb-ec06-491e-afea-b95ac109ac4e-utilities\") pod \"certified-operators-xxsvg\" (UID: \"fdefabdb-ec06-491e-afea-b95ac109ac4e\") " pod="openshift-marketplace/certified-operators-xxsvg" Dec 06 06:13:50 crc kubenswrapper[4958]: I1206 06:13:50.978375 4958 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-7z4km\" (UniqueName: \"kubernetes.io/projected/fdefabdb-ec06-491e-afea-b95ac109ac4e-kube-api-access-7z4km\") pod \"certified-operators-xxsvg\" (UID: \"fdefabdb-ec06-491e-afea-b95ac109ac4e\") " pod="openshift-marketplace/certified-operators-xxsvg" Dec 06 06:13:51 crc kubenswrapper[4958]: I1206 06:13:51.088073 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xxsvg" Dec 06 06:13:51 crc kubenswrapper[4958]: I1206 06:13:51.584528 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xxsvg"] Dec 06 06:13:51 crc kubenswrapper[4958]: I1206 06:13:51.965921 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xxsvg" event={"ID":"fdefabdb-ec06-491e-afea-b95ac109ac4e","Type":"ContainerStarted","Data":"234e69e01f25ff5d9b0c2aac1d3dd4185907c8223690733c8899a8cb651a943e"} Dec 06 06:13:52 crc kubenswrapper[4958]: I1206 06:13:52.987451 4958 generic.go:334] "Generic (PLEG): container finished" podID="fdefabdb-ec06-491e-afea-b95ac109ac4e" containerID="926fafe1365b4e23ec595992c78683e31477547a098742796af3510e24737de0" exitCode=0 Dec 06 06:13:52 crc kubenswrapper[4958]: I1206 06:13:52.987528 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xxsvg" event={"ID":"fdefabdb-ec06-491e-afea-b95ac109ac4e","Type":"ContainerDied","Data":"926fafe1365b4e23ec595992c78683e31477547a098742796af3510e24737de0"} Dec 06 06:13:54 crc kubenswrapper[4958]: I1206 06:13:53.999521 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xxsvg" event={"ID":"fdefabdb-ec06-491e-afea-b95ac109ac4e","Type":"ContainerStarted","Data":"daf7aa16c8597bb6fb142d5eff337dd2595b8cc8dd522d451c9b0bb16e0e8eaa"} Dec 06 06:13:55 crc kubenswrapper[4958]: I1206 06:13:55.011812 4958 generic.go:334] "Generic (PLEG): container finished" podID="fdefabdb-ec06-491e-afea-b95ac109ac4e" containerID="daf7aa16c8597bb6fb142d5eff337dd2595b8cc8dd522d451c9b0bb16e0e8eaa" exitCode=0 Dec 06 06:13:55 crc kubenswrapper[4958]: I1206 06:13:55.011865 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xxsvg" event={"ID":"fdefabdb-ec06-491e-afea-b95ac109ac4e","Type":"ContainerDied","Data":"daf7aa16c8597bb6fb142d5eff337dd2595b8cc8dd522d451c9b0bb16e0e8eaa"} Dec 06 06:13:56 crc kubenswrapper[4958]: I1206 06:13:56.587143 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wj9qc" Dec 06 06:13:57 crc kubenswrapper[4958]: I1206 06:13:57.041008 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xxsvg" event={"ID":"fdefabdb-ec06-491e-afea-b95ac109ac4e","Type":"ContainerStarted","Data":"6a01c34e72547101e5190dbb3305eb5f9adfadea85c8303ac6f3d0b63d05c288"} Dec 06 06:13:57 crc kubenswrapper[4958]: I1206 06:13:57.062923 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xxsvg" podStartSLOduration=3.865773708 podStartE2EDuration="7.062900452s" podCreationTimestamp="2025-12-06 06:13:50 +0000 UTC" firstStartedPulling="2025-12-06 06:13:52.990556926 +0000 UTC m=+2743.524327689" lastFinishedPulling="2025-12-06 06:13:56.18768367 +0000 UTC m=+2746.721454433" observedRunningTime="2025-12-06 06:13:57.058408482 +0000 UTC m=+2747.592179265" 
watchObservedRunningTime="2025-12-06 06:13:57.062900452 +0000 UTC m=+2747.596671215" Dec 06 06:13:57 crc kubenswrapper[4958]: I1206 06:13:57.112228 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wj9qc"] Dec 06 06:13:57 crc kubenswrapper[4958]: I1206 06:13:57.112499 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wj9qc" podUID="b1123209-44b8-4e5d-81e4-d188c4d45588" containerName="registry-server" containerID="cri-o://20b0ee010a0b82354454f0ca25097b3144a549fb93d23f75169ae128994b7593" gracePeriod=2 Dec 06 06:13:57 crc kubenswrapper[4958]: I1206 06:13:57.573977 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wj9qc" Dec 06 06:13:57 crc kubenswrapper[4958]: I1206 06:13:57.675959 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1123209-44b8-4e5d-81e4-d188c4d45588-catalog-content\") pod \"b1123209-44b8-4e5d-81e4-d188c4d45588\" (UID: \"b1123209-44b8-4e5d-81e4-d188c4d45588\") " Dec 06 06:13:57 crc kubenswrapper[4958]: I1206 06:13:57.676035 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1123209-44b8-4e5d-81e4-d188c4d45588-utilities\") pod \"b1123209-44b8-4e5d-81e4-d188c4d45588\" (UID: \"b1123209-44b8-4e5d-81e4-d188c4d45588\") " Dec 06 06:13:57 crc kubenswrapper[4958]: I1206 06:13:57.676277 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vns2p\" (UniqueName: \"kubernetes.io/projected/b1123209-44b8-4e5d-81e4-d188c4d45588-kube-api-access-vns2p\") pod \"b1123209-44b8-4e5d-81e4-d188c4d45588\" (UID: \"b1123209-44b8-4e5d-81e4-d188c4d45588\") " Dec 06 06:13:57 crc kubenswrapper[4958]: I1206 06:13:57.678147 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1123209-44b8-4e5d-81e4-d188c4d45588-utilities" (OuterVolumeSpecName: "utilities") pod "b1123209-44b8-4e5d-81e4-d188c4d45588" (UID: "b1123209-44b8-4e5d-81e4-d188c4d45588"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:13:57 crc kubenswrapper[4958]: I1206 06:13:57.683261 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1123209-44b8-4e5d-81e4-d188c4d45588-kube-api-access-vns2p" (OuterVolumeSpecName: "kube-api-access-vns2p") pod "b1123209-44b8-4e5d-81e4-d188c4d45588" (UID: "b1123209-44b8-4e5d-81e4-d188c4d45588"). InnerVolumeSpecName "kube-api-access-vns2p". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:13:57 crc kubenswrapper[4958]: I1206 06:13:57.696451 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1123209-44b8-4e5d-81e4-d188c4d45588-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b1123209-44b8-4e5d-81e4-d188c4d45588" (UID: "b1123209-44b8-4e5d-81e4-d188c4d45588"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:13:57 crc kubenswrapper[4958]: I1206 06:13:57.786540 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1123209-44b8-4e5d-81e4-d188c4d45588-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 06:13:57 crc kubenswrapper[4958]: I1206 06:13:57.786584 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1123209-44b8-4e5d-81e4-d188c4d45588-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 06:13:57 crc kubenswrapper[4958]: I1206 06:13:57.786597 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vns2p\" (UniqueName: \"kubernetes.io/projected/b1123209-44b8-4e5d-81e4-d188c4d45588-kube-api-access-vns2p\") on node \"crc\" DevicePath \"\"" Dec 06 06:13:58 crc kubenswrapper[4958]: I1206 06:13:58.056427 4958 generic.go:334] "Generic (PLEG): container finished" podID="b1123209-44b8-4e5d-81e4-d188c4d45588" containerID="20b0ee010a0b82354454f0ca25097b3144a549fb93d23f75169ae128994b7593" exitCode=0 Dec 06 06:13:58 crc kubenswrapper[4958]: I1206 06:13:58.057465 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wj9qc" Dec 06 06:13:58 crc kubenswrapper[4958]: I1206 06:13:58.057991 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wj9qc" event={"ID":"b1123209-44b8-4e5d-81e4-d188c4d45588","Type":"ContainerDied","Data":"20b0ee010a0b82354454f0ca25097b3144a549fb93d23f75169ae128994b7593"} Dec 06 06:13:58 crc kubenswrapper[4958]: I1206 06:13:58.058021 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wj9qc" event={"ID":"b1123209-44b8-4e5d-81e4-d188c4d45588","Type":"ContainerDied","Data":"9af9ea53dbbce278b0cdca281f7c424af881d9a5a294cd1edca4283b5c329ecb"} Dec 06 06:13:58 crc kubenswrapper[4958]: I1206 06:13:58.058039 4958 scope.go:117] "RemoveContainer" containerID="20b0ee010a0b82354454f0ca25097b3144a549fb93d23f75169ae128994b7593" Dec 06 06:13:58 crc kubenswrapper[4958]: I1206 06:13:58.092360 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wj9qc"] Dec 06 06:13:58 crc kubenswrapper[4958]: I1206 06:13:58.094594 4958 scope.go:117] "RemoveContainer" containerID="aee431ddf0673441c822d3bf725957cd2d8ef1dd687ebb39426225ccc9abfe7f" Dec 06 06:13:58 crc kubenswrapper[4958]: I1206 06:13:58.102284 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wj9qc"] Dec 06 06:13:58 crc kubenswrapper[4958]: I1206 06:13:58.137687 4958 scope.go:117] "RemoveContainer" containerID="2414c92fb7c79ccc6fa64b6f054bf96b65d653980afebde3d7c2d06bb9f97018" Dec 06 06:13:58 crc kubenswrapper[4958]: I1206 06:13:58.186774 4958 scope.go:117] "RemoveContainer" containerID="20b0ee010a0b82354454f0ca25097b3144a549fb93d23f75169ae128994b7593" Dec 06 06:13:58 crc kubenswrapper[4958]: E1206 06:13:58.187553 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20b0ee010a0b82354454f0ca25097b3144a549fb93d23f75169ae128994b7593\": container with ID starting with 20b0ee010a0b82354454f0ca25097b3144a549fb93d23f75169ae128994b7593 not found: ID does not exist" containerID="20b0ee010a0b82354454f0ca25097b3144a549fb93d23f75169ae128994b7593" Dec 06 06:13:58 crc kubenswrapper[4958]: I1206 06:13:58.187610 4958 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20b0ee010a0b82354454f0ca25097b3144a549fb93d23f75169ae128994b7593"} err="failed to get container status \"20b0ee010a0b82354454f0ca25097b3144a549fb93d23f75169ae128994b7593\": rpc error: code = NotFound desc = could not find container \"20b0ee010a0b82354454f0ca25097b3144a549fb93d23f75169ae128994b7593\": container with ID starting with 20b0ee010a0b82354454f0ca25097b3144a549fb93d23f75169ae128994b7593 not found: ID does not exist" Dec 06 06:13:58 crc kubenswrapper[4958]: I1206 06:13:58.187644 4958 scope.go:117] "RemoveContainer" containerID="aee431ddf0673441c822d3bf725957cd2d8ef1dd687ebb39426225ccc9abfe7f" Dec 06 06:13:58 crc kubenswrapper[4958]: E1206 06:13:58.188098 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aee431ddf0673441c822d3bf725957cd2d8ef1dd687ebb39426225ccc9abfe7f\": container with ID starting with aee431ddf0673441c822d3bf725957cd2d8ef1dd687ebb39426225ccc9abfe7f not found: ID does not exist" containerID="aee431ddf0673441c822d3bf725957cd2d8ef1dd687ebb39426225ccc9abfe7f" Dec 06 06:13:58 crc kubenswrapper[4958]: I1206 06:13:58.188135 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aee431ddf0673441c822d3bf725957cd2d8ef1dd687ebb39426225ccc9abfe7f"} err="failed to get container status \"aee431ddf0673441c822d3bf725957cd2d8ef1dd687ebb39426225ccc9abfe7f\": rpc error: code = NotFound desc = could not find container \"aee431ddf0673441c822d3bf725957cd2d8ef1dd687ebb39426225ccc9abfe7f\": container with ID starting with aee431ddf0673441c822d3bf725957cd2d8ef1dd687ebb39426225ccc9abfe7f not found: ID does not exist" Dec 06 06:13:58 crc kubenswrapper[4958]: I1206 06:13:58.188157 4958 scope.go:117] "RemoveContainer" containerID="2414c92fb7c79ccc6fa64b6f054bf96b65d653980afebde3d7c2d06bb9f97018" Dec 06 06:13:58 crc kubenswrapper[4958]: E1206 06:13:58.188500 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2414c92fb7c79ccc6fa64b6f054bf96b65d653980afebde3d7c2d06bb9f97018\": container with ID starting with 2414c92fb7c79ccc6fa64b6f054bf96b65d653980afebde3d7c2d06bb9f97018 not found: ID does not exist" containerID="2414c92fb7c79ccc6fa64b6f054bf96b65d653980afebde3d7c2d06bb9f97018" Dec 06 06:13:58 crc kubenswrapper[4958]: I1206 06:13:58.188534 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2414c92fb7c79ccc6fa64b6f054bf96b65d653980afebde3d7c2d06bb9f97018"} err="failed to get container status \"2414c92fb7c79ccc6fa64b6f054bf96b65d653980afebde3d7c2d06bb9f97018\": rpc error: code = NotFound desc = could not find container \"2414c92fb7c79ccc6fa64b6f054bf96b65d653980afebde3d7c2d06bb9f97018\": container with ID starting with 2414c92fb7c79ccc6fa64b6f054bf96b65d653980afebde3d7c2d06bb9f97018 not found: ID does not exist" Dec 06 06:13:59 crc kubenswrapper[4958]: I1206 06:13:59.779254 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1123209-44b8-4e5d-81e4-d188c4d45588" path="/var/lib/kubelet/pods/b1123209-44b8-4e5d-81e4-d188c4d45588/volumes" Dec 06 06:14:01 crc kubenswrapper[4958]: I1206 06:14:01.088322 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xxsvg" Dec 06 06:14:01 crc kubenswrapper[4958]: I1206 06:14:01.089571 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/certified-operators-xxsvg" Dec 06 06:14:01 crc kubenswrapper[4958]: I1206 06:14:01.139259 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xxsvg" Dec 06 06:14:02 crc kubenswrapper[4958]: I1206 06:14:02.143789 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xxsvg" Dec 06 06:14:02 crc kubenswrapper[4958]: I1206 06:14:02.308365 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xxsvg"] Dec 06 06:14:04 crc kubenswrapper[4958]: I1206 06:14:04.109401 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xxsvg" podUID="fdefabdb-ec06-491e-afea-b95ac109ac4e" containerName="registry-server" containerID="cri-o://6a01c34e72547101e5190dbb3305eb5f9adfadea85c8303ac6f3d0b63d05c288" gracePeriod=2 Dec 06 06:14:04 crc kubenswrapper[4958]: I1206 06:14:04.586383 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xxsvg" Dec 06 06:14:04 crc kubenswrapper[4958]: I1206 06:14:04.621491 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7z4km\" (UniqueName: \"kubernetes.io/projected/fdefabdb-ec06-491e-afea-b95ac109ac4e-kube-api-access-7z4km\") pod \"fdefabdb-ec06-491e-afea-b95ac109ac4e\" (UID: \"fdefabdb-ec06-491e-afea-b95ac109ac4e\") " Dec 06 06:14:04 crc kubenswrapper[4958]: I1206 06:14:04.621552 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdefabdb-ec06-491e-afea-b95ac109ac4e-utilities\") pod \"fdefabdb-ec06-491e-afea-b95ac109ac4e\" (UID: \"fdefabdb-ec06-491e-afea-b95ac109ac4e\") " Dec 06 06:14:04 crc kubenswrapper[4958]: I1206 06:14:04.621662 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdefabdb-ec06-491e-afea-b95ac109ac4e-catalog-content\") pod \"fdefabdb-ec06-491e-afea-b95ac109ac4e\" (UID: \"fdefabdb-ec06-491e-afea-b95ac109ac4e\") " Dec 06 06:14:04 crc kubenswrapper[4958]: I1206 06:14:04.624950 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fdefabdb-ec06-491e-afea-b95ac109ac4e-utilities" (OuterVolumeSpecName: "utilities") pod "fdefabdb-ec06-491e-afea-b95ac109ac4e" (UID: "fdefabdb-ec06-491e-afea-b95ac109ac4e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:14:04 crc kubenswrapper[4958]: I1206 06:14:04.634695 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdefabdb-ec06-491e-afea-b95ac109ac4e-kube-api-access-7z4km" (OuterVolumeSpecName: "kube-api-access-7z4km") pod "fdefabdb-ec06-491e-afea-b95ac109ac4e" (UID: "fdefabdb-ec06-491e-afea-b95ac109ac4e"). InnerVolumeSpecName "kube-api-access-7z4km". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:14:04 crc kubenswrapper[4958]: I1206 06:14:04.713393 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fdefabdb-ec06-491e-afea-b95ac109ac4e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fdefabdb-ec06-491e-afea-b95ac109ac4e" (UID: "fdefabdb-ec06-491e-afea-b95ac109ac4e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:14:04 crc kubenswrapper[4958]: I1206 06:14:04.724197 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7z4km\" (UniqueName: \"kubernetes.io/projected/fdefabdb-ec06-491e-afea-b95ac109ac4e-kube-api-access-7z4km\") on node \"crc\" DevicePath \"\"" Dec 06 06:14:04 crc kubenswrapper[4958]: I1206 06:14:04.724242 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdefabdb-ec06-491e-afea-b95ac109ac4e-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 06:14:04 crc kubenswrapper[4958]: I1206 06:14:04.724259 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdefabdb-ec06-491e-afea-b95ac109ac4e-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 06:14:05 crc kubenswrapper[4958]: I1206 06:14:05.120687 4958 generic.go:334] "Generic (PLEG): container finished" podID="fdefabdb-ec06-491e-afea-b95ac109ac4e" containerID="6a01c34e72547101e5190dbb3305eb5f9adfadea85c8303ac6f3d0b63d05c288" exitCode=0 Dec 06 06:14:05 crc kubenswrapper[4958]: I1206 06:14:05.120748 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xxsvg" Dec 06 06:14:05 crc kubenswrapper[4958]: I1206 06:14:05.120767 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xxsvg" event={"ID":"fdefabdb-ec06-491e-afea-b95ac109ac4e","Type":"ContainerDied","Data":"6a01c34e72547101e5190dbb3305eb5f9adfadea85c8303ac6f3d0b63d05c288"} Dec 06 06:14:05 crc kubenswrapper[4958]: I1206 06:14:05.122323 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xxsvg" event={"ID":"fdefabdb-ec06-491e-afea-b95ac109ac4e","Type":"ContainerDied","Data":"234e69e01f25ff5d9b0c2aac1d3dd4185907c8223690733c8899a8cb651a943e"} Dec 06 06:14:05 crc kubenswrapper[4958]: I1206 06:14:05.122367 4958 scope.go:117] "RemoveContainer" containerID="6a01c34e72547101e5190dbb3305eb5f9adfadea85c8303ac6f3d0b63d05c288" Dec 06 06:14:05 crc kubenswrapper[4958]: I1206 06:14:05.169604 4958 scope.go:117] "RemoveContainer" containerID="daf7aa16c8597bb6fb142d5eff337dd2595b8cc8dd522d451c9b0bb16e0e8eaa" Dec 06 06:14:05 crc kubenswrapper[4958]: I1206 06:14:05.172442 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xxsvg"] Dec 06 06:14:05 crc kubenswrapper[4958]: I1206 06:14:05.183349 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xxsvg"] Dec 06 06:14:05 crc kubenswrapper[4958]: I1206 06:14:05.202264 4958 scope.go:117] "RemoveContainer" containerID="926fafe1365b4e23ec595992c78683e31477547a098742796af3510e24737de0" Dec 06 06:14:05 crc kubenswrapper[4958]: I1206 06:14:05.242225 4958 scope.go:117] "RemoveContainer" containerID="6a01c34e72547101e5190dbb3305eb5f9adfadea85c8303ac6f3d0b63d05c288" Dec 06 06:14:05 crc kubenswrapper[4958]: E1206 06:14:05.242712 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6a01c34e72547101e5190dbb3305eb5f9adfadea85c8303ac6f3d0b63d05c288\": container with ID starting with 6a01c34e72547101e5190dbb3305eb5f9adfadea85c8303ac6f3d0b63d05c288 not found: ID does not exist" containerID="6a01c34e72547101e5190dbb3305eb5f9adfadea85c8303ac6f3d0b63d05c288" Dec 06 06:14:05 crc kubenswrapper[4958]: I1206 06:14:05.242768 
4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a01c34e72547101e5190dbb3305eb5f9adfadea85c8303ac6f3d0b63d05c288"} err="failed to get container status \"6a01c34e72547101e5190dbb3305eb5f9adfadea85c8303ac6f3d0b63d05c288\": rpc error: code = NotFound desc = could not find container \"6a01c34e72547101e5190dbb3305eb5f9adfadea85c8303ac6f3d0b63d05c288\": container with ID starting with 6a01c34e72547101e5190dbb3305eb5f9adfadea85c8303ac6f3d0b63d05c288 not found: ID does not exist" Dec 06 06:14:05 crc kubenswrapper[4958]: I1206 06:14:05.242803 4958 scope.go:117] "RemoveContainer" containerID="daf7aa16c8597bb6fb142d5eff337dd2595b8cc8dd522d451c9b0bb16e0e8eaa" Dec 06 06:14:05 crc kubenswrapper[4958]: E1206 06:14:05.243129 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"daf7aa16c8597bb6fb142d5eff337dd2595b8cc8dd522d451c9b0bb16e0e8eaa\": container with ID starting with daf7aa16c8597bb6fb142d5eff337dd2595b8cc8dd522d451c9b0bb16e0e8eaa not found: ID does not exist" containerID="daf7aa16c8597bb6fb142d5eff337dd2595b8cc8dd522d451c9b0bb16e0e8eaa" Dec 06 06:14:05 crc kubenswrapper[4958]: I1206 06:14:05.243236 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"daf7aa16c8597bb6fb142d5eff337dd2595b8cc8dd522d451c9b0bb16e0e8eaa"} err="failed to get container status \"daf7aa16c8597bb6fb142d5eff337dd2595b8cc8dd522d451c9b0bb16e0e8eaa\": rpc error: code = NotFound desc = could not find container \"daf7aa16c8597bb6fb142d5eff337dd2595b8cc8dd522d451c9b0bb16e0e8eaa\": container with ID starting with daf7aa16c8597bb6fb142d5eff337dd2595b8cc8dd522d451c9b0bb16e0e8eaa not found: ID does not exist" Dec 06 06:14:05 crc kubenswrapper[4958]: I1206 06:14:05.243323 4958 scope.go:117] "RemoveContainer" containerID="926fafe1365b4e23ec595992c78683e31477547a098742796af3510e24737de0" Dec 06 06:14:05 crc kubenswrapper[4958]: E1206 06:14:05.243701 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"926fafe1365b4e23ec595992c78683e31477547a098742796af3510e24737de0\": container with ID starting with 926fafe1365b4e23ec595992c78683e31477547a098742796af3510e24737de0 not found: ID does not exist" containerID="926fafe1365b4e23ec595992c78683e31477547a098742796af3510e24737de0" Dec 06 06:14:05 crc kubenswrapper[4958]: I1206 06:14:05.243732 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"926fafe1365b4e23ec595992c78683e31477547a098742796af3510e24737de0"} err="failed to get container status \"926fafe1365b4e23ec595992c78683e31477547a098742796af3510e24737de0\": rpc error: code = NotFound desc = could not find container \"926fafe1365b4e23ec595992c78683e31477547a098742796af3510e24737de0\": container with ID starting with 926fafe1365b4e23ec595992c78683e31477547a098742796af3510e24737de0 not found: ID does not exist" Dec 06 06:14:05 crc kubenswrapper[4958]: I1206 06:14:05.773789 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fdefabdb-ec06-491e-afea-b95ac109ac4e" path="/var/lib/kubelet/pods/fdefabdb-ec06-491e-afea-b95ac109ac4e/volumes" Dec 06 06:15:00 crc kubenswrapper[4958]: I1206 06:15:00.150784 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416695-8zcjs"] Dec 06 06:15:00 crc kubenswrapper[4958]: E1206 06:15:00.151878 4958 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="fdefabdb-ec06-491e-afea-b95ac109ac4e" containerName="extract-content" Dec 06 06:15:00 crc kubenswrapper[4958]: I1206 06:15:00.151926 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdefabdb-ec06-491e-afea-b95ac109ac4e" containerName="extract-content" Dec 06 06:15:00 crc kubenswrapper[4958]: E1206 06:15:00.151943 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1123209-44b8-4e5d-81e4-d188c4d45588" containerName="extract-utilities" Dec 06 06:15:00 crc kubenswrapper[4958]: I1206 06:15:00.151950 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1123209-44b8-4e5d-81e4-d188c4d45588" containerName="extract-utilities" Dec 06 06:15:00 crc kubenswrapper[4958]: E1206 06:15:00.151970 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdefabdb-ec06-491e-afea-b95ac109ac4e" containerName="extract-utilities" Dec 06 06:15:00 crc kubenswrapper[4958]: I1206 06:15:00.151976 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdefabdb-ec06-491e-afea-b95ac109ac4e" containerName="extract-utilities" Dec 06 06:15:00 crc kubenswrapper[4958]: E1206 06:15:00.151987 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1123209-44b8-4e5d-81e4-d188c4d45588" containerName="extract-content" Dec 06 06:15:00 crc kubenswrapper[4958]: I1206 06:15:00.151993 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1123209-44b8-4e5d-81e4-d188c4d45588" containerName="extract-content" Dec 06 06:15:00 crc kubenswrapper[4958]: E1206 06:15:00.152001 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1123209-44b8-4e5d-81e4-d188c4d45588" containerName="registry-server" Dec 06 06:15:00 crc kubenswrapper[4958]: I1206 06:15:00.152008 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1123209-44b8-4e5d-81e4-d188c4d45588" containerName="registry-server" Dec 06 06:15:00 crc kubenswrapper[4958]: E1206 06:15:00.152027 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdefabdb-ec06-491e-afea-b95ac109ac4e" containerName="registry-server" Dec 06 06:15:00 crc kubenswrapper[4958]: I1206 06:15:00.152032 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdefabdb-ec06-491e-afea-b95ac109ac4e" containerName="registry-server" Dec 06 06:15:00 crc kubenswrapper[4958]: I1206 06:15:00.152210 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdefabdb-ec06-491e-afea-b95ac109ac4e" containerName="registry-server" Dec 06 06:15:00 crc kubenswrapper[4958]: I1206 06:15:00.152228 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1123209-44b8-4e5d-81e4-d188c4d45588" containerName="registry-server" Dec 06 06:15:00 crc kubenswrapper[4958]: I1206 06:15:00.154590 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416695-8zcjs" Dec 06 06:15:00 crc kubenswrapper[4958]: I1206 06:15:00.156916 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 06 06:15:00 crc kubenswrapper[4958]: I1206 06:15:00.161062 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 06 06:15:00 crc kubenswrapper[4958]: I1206 06:15:00.163775 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416695-8zcjs"] Dec 06 06:15:00 crc kubenswrapper[4958]: I1206 06:15:00.234415 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bee7a71b-67a6-47c5-80f5-e1ab66d05404-secret-volume\") pod \"collect-profiles-29416695-8zcjs\" (UID: \"bee7a71b-67a6-47c5-80f5-e1ab66d05404\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416695-8zcjs" Dec 06 06:15:00 crc kubenswrapper[4958]: I1206 06:15:00.234657 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-md4bx\" (UniqueName: \"kubernetes.io/projected/bee7a71b-67a6-47c5-80f5-e1ab66d05404-kube-api-access-md4bx\") pod \"collect-profiles-29416695-8zcjs\" (UID: \"bee7a71b-67a6-47c5-80f5-e1ab66d05404\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416695-8zcjs" Dec 06 06:15:00 crc kubenswrapper[4958]: I1206 06:15:00.234689 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bee7a71b-67a6-47c5-80f5-e1ab66d05404-config-volume\") pod \"collect-profiles-29416695-8zcjs\" (UID: \"bee7a71b-67a6-47c5-80f5-e1ab66d05404\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416695-8zcjs" Dec 06 06:15:00 crc kubenswrapper[4958]: I1206 06:15:00.336283 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bee7a71b-67a6-47c5-80f5-e1ab66d05404-secret-volume\") pod \"collect-profiles-29416695-8zcjs\" (UID: \"bee7a71b-67a6-47c5-80f5-e1ab66d05404\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416695-8zcjs" Dec 06 06:15:00 crc kubenswrapper[4958]: I1206 06:15:00.336435 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-md4bx\" (UniqueName: \"kubernetes.io/projected/bee7a71b-67a6-47c5-80f5-e1ab66d05404-kube-api-access-md4bx\") pod \"collect-profiles-29416695-8zcjs\" (UID: \"bee7a71b-67a6-47c5-80f5-e1ab66d05404\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416695-8zcjs" Dec 06 06:15:00 crc kubenswrapper[4958]: I1206 06:15:00.336458 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bee7a71b-67a6-47c5-80f5-e1ab66d05404-config-volume\") pod \"collect-profiles-29416695-8zcjs\" (UID: \"bee7a71b-67a6-47c5-80f5-e1ab66d05404\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416695-8zcjs" Dec 06 06:15:00 crc kubenswrapper[4958]: I1206 06:15:00.337353 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bee7a71b-67a6-47c5-80f5-e1ab66d05404-config-volume\") pod 
\"collect-profiles-29416695-8zcjs\" (UID: \"bee7a71b-67a6-47c5-80f5-e1ab66d05404\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416695-8zcjs" Dec 06 06:15:00 crc kubenswrapper[4958]: I1206 06:15:00.348325 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bee7a71b-67a6-47c5-80f5-e1ab66d05404-secret-volume\") pod \"collect-profiles-29416695-8zcjs\" (UID: \"bee7a71b-67a6-47c5-80f5-e1ab66d05404\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416695-8zcjs" Dec 06 06:15:00 crc kubenswrapper[4958]: I1206 06:15:00.356256 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-md4bx\" (UniqueName: \"kubernetes.io/projected/bee7a71b-67a6-47c5-80f5-e1ab66d05404-kube-api-access-md4bx\") pod \"collect-profiles-29416695-8zcjs\" (UID: \"bee7a71b-67a6-47c5-80f5-e1ab66d05404\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416695-8zcjs" Dec 06 06:15:00 crc kubenswrapper[4958]: I1206 06:15:00.479556 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416695-8zcjs" Dec 06 06:15:00 crc kubenswrapper[4958]: I1206 06:15:00.936520 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416695-8zcjs"] Dec 06 06:15:01 crc kubenswrapper[4958]: I1206 06:15:01.643637 4958 generic.go:334] "Generic (PLEG): container finished" podID="bee7a71b-67a6-47c5-80f5-e1ab66d05404" containerID="2562ddcacdc60910ec59d730b122cde8d9134584e86eb1f198cad34d3a8cedbe" exitCode=0 Dec 06 06:15:01 crc kubenswrapper[4958]: I1206 06:15:01.643839 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416695-8zcjs" event={"ID":"bee7a71b-67a6-47c5-80f5-e1ab66d05404","Type":"ContainerDied","Data":"2562ddcacdc60910ec59d730b122cde8d9134584e86eb1f198cad34d3a8cedbe"} Dec 06 06:15:01 crc kubenswrapper[4958]: I1206 06:15:01.643939 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416695-8zcjs" event={"ID":"bee7a71b-67a6-47c5-80f5-e1ab66d05404","Type":"ContainerStarted","Data":"a0d8ac39252efab4770ff87f8af956c2d83fa26596a80ed8bf2423c2805fcf04"} Dec 06 06:15:03 crc kubenswrapper[4958]: I1206 06:15:03.053466 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416695-8zcjs" Dec 06 06:15:03 crc kubenswrapper[4958]: I1206 06:15:03.086427 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bee7a71b-67a6-47c5-80f5-e1ab66d05404-secret-volume\") pod \"bee7a71b-67a6-47c5-80f5-e1ab66d05404\" (UID: \"bee7a71b-67a6-47c5-80f5-e1ab66d05404\") " Dec 06 06:15:03 crc kubenswrapper[4958]: I1206 06:15:03.086576 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-md4bx\" (UniqueName: \"kubernetes.io/projected/bee7a71b-67a6-47c5-80f5-e1ab66d05404-kube-api-access-md4bx\") pod \"bee7a71b-67a6-47c5-80f5-e1ab66d05404\" (UID: \"bee7a71b-67a6-47c5-80f5-e1ab66d05404\") " Dec 06 06:15:03 crc kubenswrapper[4958]: I1206 06:15:03.086740 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bee7a71b-67a6-47c5-80f5-e1ab66d05404-config-volume\") pod \"bee7a71b-67a6-47c5-80f5-e1ab66d05404\" (UID: \"bee7a71b-67a6-47c5-80f5-e1ab66d05404\") " Dec 06 06:15:03 crc kubenswrapper[4958]: I1206 06:15:03.087433 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bee7a71b-67a6-47c5-80f5-e1ab66d05404-config-volume" (OuterVolumeSpecName: "config-volume") pod "bee7a71b-67a6-47c5-80f5-e1ab66d05404" (UID: "bee7a71b-67a6-47c5-80f5-e1ab66d05404"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:15:03 crc kubenswrapper[4958]: I1206 06:15:03.087880 4958 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bee7a71b-67a6-47c5-80f5-e1ab66d05404-config-volume\") on node \"crc\" DevicePath \"\"" Dec 06 06:15:03 crc kubenswrapper[4958]: I1206 06:15:03.092990 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bee7a71b-67a6-47c5-80f5-e1ab66d05404-kube-api-access-md4bx" (OuterVolumeSpecName: "kube-api-access-md4bx") pod "bee7a71b-67a6-47c5-80f5-e1ab66d05404" (UID: "bee7a71b-67a6-47c5-80f5-e1ab66d05404"). InnerVolumeSpecName "kube-api-access-md4bx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:15:03 crc kubenswrapper[4958]: I1206 06:15:03.093020 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bee7a71b-67a6-47c5-80f5-e1ab66d05404-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "bee7a71b-67a6-47c5-80f5-e1ab66d05404" (UID: "bee7a71b-67a6-47c5-80f5-e1ab66d05404"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:15:03 crc kubenswrapper[4958]: I1206 06:15:03.189920 4958 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bee7a71b-67a6-47c5-80f5-e1ab66d05404-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 06 06:15:03 crc kubenswrapper[4958]: I1206 06:15:03.189951 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-md4bx\" (UniqueName: \"kubernetes.io/projected/bee7a71b-67a6-47c5-80f5-e1ab66d05404-kube-api-access-md4bx\") on node \"crc\" DevicePath \"\"" Dec 06 06:15:03 crc kubenswrapper[4958]: I1206 06:15:03.663147 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416695-8zcjs" event={"ID":"bee7a71b-67a6-47c5-80f5-e1ab66d05404","Type":"ContainerDied","Data":"a0d8ac39252efab4770ff87f8af956c2d83fa26596a80ed8bf2423c2805fcf04"} Dec 06 06:15:03 crc kubenswrapper[4958]: I1206 06:15:03.663202 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a0d8ac39252efab4770ff87f8af956c2d83fa26596a80ed8bf2423c2805fcf04" Dec 06 06:15:03 crc kubenswrapper[4958]: I1206 06:15:03.663210 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416695-8zcjs" Dec 06 06:15:04 crc kubenswrapper[4958]: I1206 06:15:04.130544 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416650-s7hr4"] Dec 06 06:15:04 crc kubenswrapper[4958]: I1206 06:15:04.138672 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416650-s7hr4"] Dec 06 06:15:05 crc kubenswrapper[4958]: I1206 06:15:05.776013 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40faedf1-f03f-4c51-8577-f11f34488d09" path="/var/lib/kubelet/pods/40faedf1-f03f-4c51-8577-f11f34488d09/volumes" Dec 06 06:15:07 crc kubenswrapper[4958]: I1206 06:15:07.326724 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6g2mt"] Dec 06 06:15:07 crc kubenswrapper[4958]: E1206 06:15:07.327665 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bee7a71b-67a6-47c5-80f5-e1ab66d05404" containerName="collect-profiles" Dec 06 06:15:07 crc kubenswrapper[4958]: I1206 06:15:07.327681 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="bee7a71b-67a6-47c5-80f5-e1ab66d05404" containerName="collect-profiles" Dec 06 06:15:07 crc kubenswrapper[4958]: I1206 06:15:07.327927 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="bee7a71b-67a6-47c5-80f5-e1ab66d05404" containerName="collect-profiles" Dec 06 06:15:07 crc kubenswrapper[4958]: I1206 06:15:07.329836 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6g2mt" Dec 06 06:15:07 crc kubenswrapper[4958]: I1206 06:15:07.339980 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6g2mt"] Dec 06 06:15:07 crc kubenswrapper[4958]: I1206 06:15:07.368663 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66nkg\" (UniqueName: \"kubernetes.io/projected/3e0e97b4-80af-4cc8-98f4-36a10d02545f-kube-api-access-66nkg\") pod \"redhat-operators-6g2mt\" (UID: \"3e0e97b4-80af-4cc8-98f4-36a10d02545f\") " pod="openshift-marketplace/redhat-operators-6g2mt" Dec 06 06:15:07 crc kubenswrapper[4958]: I1206 06:15:07.368860 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e0e97b4-80af-4cc8-98f4-36a10d02545f-utilities\") pod \"redhat-operators-6g2mt\" (UID: \"3e0e97b4-80af-4cc8-98f4-36a10d02545f\") " pod="openshift-marketplace/redhat-operators-6g2mt" Dec 06 06:15:07 crc kubenswrapper[4958]: I1206 06:15:07.368890 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e0e97b4-80af-4cc8-98f4-36a10d02545f-catalog-content\") pod \"redhat-operators-6g2mt\" (UID: \"3e0e97b4-80af-4cc8-98f4-36a10d02545f\") " pod="openshift-marketplace/redhat-operators-6g2mt" Dec 06 06:15:07 crc kubenswrapper[4958]: I1206 06:15:07.470713 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e0e97b4-80af-4cc8-98f4-36a10d02545f-utilities\") pod \"redhat-operators-6g2mt\" (UID: \"3e0e97b4-80af-4cc8-98f4-36a10d02545f\") " pod="openshift-marketplace/redhat-operators-6g2mt" Dec 06 06:15:07 crc kubenswrapper[4958]: I1206 06:15:07.470772 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e0e97b4-80af-4cc8-98f4-36a10d02545f-catalog-content\") pod \"redhat-operators-6g2mt\" (UID: \"3e0e97b4-80af-4cc8-98f4-36a10d02545f\") " pod="openshift-marketplace/redhat-operators-6g2mt" Dec 06 06:15:07 crc kubenswrapper[4958]: I1206 06:15:07.470906 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66nkg\" (UniqueName: \"kubernetes.io/projected/3e0e97b4-80af-4cc8-98f4-36a10d02545f-kube-api-access-66nkg\") pod \"redhat-operators-6g2mt\" (UID: \"3e0e97b4-80af-4cc8-98f4-36a10d02545f\") " pod="openshift-marketplace/redhat-operators-6g2mt" Dec 06 06:15:07 crc kubenswrapper[4958]: I1206 06:15:07.471380 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e0e97b4-80af-4cc8-98f4-36a10d02545f-utilities\") pod \"redhat-operators-6g2mt\" (UID: \"3e0e97b4-80af-4cc8-98f4-36a10d02545f\") " pod="openshift-marketplace/redhat-operators-6g2mt" Dec 06 06:15:07 crc kubenswrapper[4958]: I1206 06:15:07.471380 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e0e97b4-80af-4cc8-98f4-36a10d02545f-catalog-content\") pod \"redhat-operators-6g2mt\" (UID: \"3e0e97b4-80af-4cc8-98f4-36a10d02545f\") " pod="openshift-marketplace/redhat-operators-6g2mt" Dec 06 06:15:07 crc kubenswrapper[4958]: I1206 06:15:07.512316 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-66nkg\" (UniqueName: \"kubernetes.io/projected/3e0e97b4-80af-4cc8-98f4-36a10d02545f-kube-api-access-66nkg\") pod \"redhat-operators-6g2mt\" (UID: \"3e0e97b4-80af-4cc8-98f4-36a10d02545f\") " pod="openshift-marketplace/redhat-operators-6g2mt" Dec 06 06:15:07 crc kubenswrapper[4958]: I1206 06:15:07.668377 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6g2mt" Dec 06 06:15:08 crc kubenswrapper[4958]: I1206 06:15:08.149131 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6g2mt"] Dec 06 06:15:08 crc kubenswrapper[4958]: I1206 06:15:08.706156 4958 generic.go:334] "Generic (PLEG): container finished" podID="3e0e97b4-80af-4cc8-98f4-36a10d02545f" containerID="6d98e8cd2590054ba56931708bda51df15808c1353f45a7b646909c4e7a83127" exitCode=0 Dec 06 06:15:08 crc kubenswrapper[4958]: I1206 06:15:08.706238 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6g2mt" event={"ID":"3e0e97b4-80af-4cc8-98f4-36a10d02545f","Type":"ContainerDied","Data":"6d98e8cd2590054ba56931708bda51df15808c1353f45a7b646909c4e7a83127"} Dec 06 06:15:08 crc kubenswrapper[4958]: I1206 06:15:08.706456 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6g2mt" event={"ID":"3e0e97b4-80af-4cc8-98f4-36a10d02545f","Type":"ContainerStarted","Data":"9a97276384991b9cb54b47ae3d8e62ff6c9df0b6be8c1fbb26714d04bd03d390"} Dec 06 06:15:09 crc kubenswrapper[4958]: I1206 06:15:09.717866 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6g2mt" event={"ID":"3e0e97b4-80af-4cc8-98f4-36a10d02545f","Type":"ContainerStarted","Data":"2ee3cdf670c41e529fd75921c53765d4967c0862b58a137e130f9e767d196376"} Dec 06 06:15:11 crc kubenswrapper[4958]: I1206 06:15:11.736930 4958 generic.go:334] "Generic (PLEG): container finished" podID="3e0e97b4-80af-4cc8-98f4-36a10d02545f" containerID="2ee3cdf670c41e529fd75921c53765d4967c0862b58a137e130f9e767d196376" exitCode=0 Dec 06 06:15:11 crc kubenswrapper[4958]: I1206 06:15:11.736992 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6g2mt" event={"ID":"3e0e97b4-80af-4cc8-98f4-36a10d02545f","Type":"ContainerDied","Data":"2ee3cdf670c41e529fd75921c53765d4967c0862b58a137e130f9e767d196376"} Dec 06 06:15:14 crc kubenswrapper[4958]: I1206 06:15:14.770673 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6g2mt" event={"ID":"3e0e97b4-80af-4cc8-98f4-36a10d02545f","Type":"ContainerStarted","Data":"4d4d442d0312718d4ba1e699de1e495d65f96c72f1f457be8cd79f28226909ee"} Dec 06 06:15:14 crc kubenswrapper[4958]: I1206 06:15:14.791413 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6g2mt" podStartSLOduration=2.5296281130000002 podStartE2EDuration="7.791397268s" podCreationTimestamp="2025-12-06 06:15:07 +0000 UTC" firstStartedPulling="2025-12-06 06:15:08.708551383 +0000 UTC m=+2819.242322146" lastFinishedPulling="2025-12-06 06:15:13.970320538 +0000 UTC m=+2824.504091301" observedRunningTime="2025-12-06 06:15:14.788651324 +0000 UTC m=+2825.322422087" watchObservedRunningTime="2025-12-06 06:15:14.791397268 +0000 UTC m=+2825.325168031" Dec 06 06:15:17 crc kubenswrapper[4958]: I1206 06:15:17.669010 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6g2mt" 
Dec 06 06:15:17 crc kubenswrapper[4958]: I1206 06:15:17.669320 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6g2mt" Dec 06 06:15:18 crc kubenswrapper[4958]: I1206 06:15:18.712205 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6g2mt" podUID="3e0e97b4-80af-4cc8-98f4-36a10d02545f" containerName="registry-server" probeResult="failure" output=< Dec 06 06:15:18 crc kubenswrapper[4958]: timeout: failed to connect service ":50051" within 1s Dec 06 06:15:18 crc kubenswrapper[4958]: > Dec 06 06:15:27 crc kubenswrapper[4958]: I1206 06:15:27.722066 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6g2mt" Dec 06 06:15:27 crc kubenswrapper[4958]: I1206 06:15:27.777048 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6g2mt" Dec 06 06:15:27 crc kubenswrapper[4958]: I1206 06:15:27.990153 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6g2mt"] Dec 06 06:15:28 crc kubenswrapper[4958]: I1206 06:15:28.939064 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6g2mt" podUID="3e0e97b4-80af-4cc8-98f4-36a10d02545f" containerName="registry-server" containerID="cri-o://4d4d442d0312718d4ba1e699de1e495d65f96c72f1f457be8cd79f28226909ee" gracePeriod=2 Dec 06 06:15:29 crc kubenswrapper[4958]: I1206 06:15:29.441208 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6g2mt" Dec 06 06:15:29 crc kubenswrapper[4958]: I1206 06:15:29.561041 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e0e97b4-80af-4cc8-98f4-36a10d02545f-utilities\") pod \"3e0e97b4-80af-4cc8-98f4-36a10d02545f\" (UID: \"3e0e97b4-80af-4cc8-98f4-36a10d02545f\") " Dec 06 06:15:29 crc kubenswrapper[4958]: I1206 06:15:29.561136 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e0e97b4-80af-4cc8-98f4-36a10d02545f-catalog-content\") pod \"3e0e97b4-80af-4cc8-98f4-36a10d02545f\" (UID: \"3e0e97b4-80af-4cc8-98f4-36a10d02545f\") " Dec 06 06:15:29 crc kubenswrapper[4958]: I1206 06:15:29.561252 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-66nkg\" (UniqueName: \"kubernetes.io/projected/3e0e97b4-80af-4cc8-98f4-36a10d02545f-kube-api-access-66nkg\") pod \"3e0e97b4-80af-4cc8-98f4-36a10d02545f\" (UID: \"3e0e97b4-80af-4cc8-98f4-36a10d02545f\") " Dec 06 06:15:29 crc kubenswrapper[4958]: I1206 06:15:29.563352 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e0e97b4-80af-4cc8-98f4-36a10d02545f-utilities" (OuterVolumeSpecName: "utilities") pod "3e0e97b4-80af-4cc8-98f4-36a10d02545f" (UID: "3e0e97b4-80af-4cc8-98f4-36a10d02545f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:15:29 crc kubenswrapper[4958]: I1206 06:15:29.567802 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e0e97b4-80af-4cc8-98f4-36a10d02545f-kube-api-access-66nkg" (OuterVolumeSpecName: "kube-api-access-66nkg") pod "3e0e97b4-80af-4cc8-98f4-36a10d02545f" (UID: "3e0e97b4-80af-4cc8-98f4-36a10d02545f"). InnerVolumeSpecName "kube-api-access-66nkg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:15:29 crc kubenswrapper[4958]: I1206 06:15:29.664218 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3e0e97b4-80af-4cc8-98f4-36a10d02545f-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 06:15:29 crc kubenswrapper[4958]: I1206 06:15:29.664259 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-66nkg\" (UniqueName: \"kubernetes.io/projected/3e0e97b4-80af-4cc8-98f4-36a10d02545f-kube-api-access-66nkg\") on node \"crc\" DevicePath \"\"" Dec 06 06:15:29 crc kubenswrapper[4958]: I1206 06:15:29.685670 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3e0e97b4-80af-4cc8-98f4-36a10d02545f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3e0e97b4-80af-4cc8-98f4-36a10d02545f" (UID: "3e0e97b4-80af-4cc8-98f4-36a10d02545f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:15:29 crc kubenswrapper[4958]: I1206 06:15:29.766380 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3e0e97b4-80af-4cc8-98f4-36a10d02545f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 06:15:29 crc kubenswrapper[4958]: I1206 06:15:29.951601 4958 generic.go:334] "Generic (PLEG): container finished" podID="3e0e97b4-80af-4cc8-98f4-36a10d02545f" containerID="4d4d442d0312718d4ba1e699de1e495d65f96c72f1f457be8cd79f28226909ee" exitCode=0 Dec 06 06:15:29 crc kubenswrapper[4958]: I1206 06:15:29.951647 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6g2mt" event={"ID":"3e0e97b4-80af-4cc8-98f4-36a10d02545f","Type":"ContainerDied","Data":"4d4d442d0312718d4ba1e699de1e495d65f96c72f1f457be8cd79f28226909ee"} Dec 06 06:15:29 crc kubenswrapper[4958]: I1206 06:15:29.951694 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6g2mt" event={"ID":"3e0e97b4-80af-4cc8-98f4-36a10d02545f","Type":"ContainerDied","Data":"9a97276384991b9cb54b47ae3d8e62ff6c9df0b6be8c1fbb26714d04bd03d390"} Dec 06 06:15:29 crc kubenswrapper[4958]: I1206 06:15:29.951690 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6g2mt" Dec 06 06:15:29 crc kubenswrapper[4958]: I1206 06:15:29.951826 4958 scope.go:117] "RemoveContainer" containerID="4d4d442d0312718d4ba1e699de1e495d65f96c72f1f457be8cd79f28226909ee" Dec 06 06:15:29 crc kubenswrapper[4958]: I1206 06:15:29.980219 4958 scope.go:117] "RemoveContainer" containerID="2ee3cdf670c41e529fd75921c53765d4967c0862b58a137e130f9e767d196376" Dec 06 06:15:29 crc kubenswrapper[4958]: I1206 06:15:29.988966 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6g2mt"] Dec 06 06:15:30 crc kubenswrapper[4958]: I1206 06:15:30.002144 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6g2mt"] Dec 06 06:15:30 crc kubenswrapper[4958]: I1206 06:15:30.007301 4958 scope.go:117] "RemoveContainer" containerID="6d98e8cd2590054ba56931708bda51df15808c1353f45a7b646909c4e7a83127" Dec 06 06:15:30 crc kubenswrapper[4958]: I1206 06:15:30.056730 4958 scope.go:117] "RemoveContainer" containerID="4d4d442d0312718d4ba1e699de1e495d65f96c72f1f457be8cd79f28226909ee" Dec 06 06:15:30 crc kubenswrapper[4958]: E1206 06:15:30.057105 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d4d442d0312718d4ba1e699de1e495d65f96c72f1f457be8cd79f28226909ee\": container with ID starting with 4d4d442d0312718d4ba1e699de1e495d65f96c72f1f457be8cd79f28226909ee not found: ID does not exist" containerID="4d4d442d0312718d4ba1e699de1e495d65f96c72f1f457be8cd79f28226909ee" Dec 06 06:15:30 crc kubenswrapper[4958]: I1206 06:15:30.057147 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d4d442d0312718d4ba1e699de1e495d65f96c72f1f457be8cd79f28226909ee"} err="failed to get container status \"4d4d442d0312718d4ba1e699de1e495d65f96c72f1f457be8cd79f28226909ee\": rpc error: code = NotFound desc = could not find container \"4d4d442d0312718d4ba1e699de1e495d65f96c72f1f457be8cd79f28226909ee\": container with ID starting with 4d4d442d0312718d4ba1e699de1e495d65f96c72f1f457be8cd79f28226909ee not found: ID does not exist" Dec 06 06:15:30 crc kubenswrapper[4958]: I1206 06:15:30.057172 4958 scope.go:117] "RemoveContainer" containerID="2ee3cdf670c41e529fd75921c53765d4967c0862b58a137e130f9e767d196376" Dec 06 06:15:30 crc kubenswrapper[4958]: E1206 06:15:30.057486 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ee3cdf670c41e529fd75921c53765d4967c0862b58a137e130f9e767d196376\": container with ID starting with 2ee3cdf670c41e529fd75921c53765d4967c0862b58a137e130f9e767d196376 not found: ID does not exist" containerID="2ee3cdf670c41e529fd75921c53765d4967c0862b58a137e130f9e767d196376" Dec 06 06:15:30 crc kubenswrapper[4958]: I1206 06:15:30.057514 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ee3cdf670c41e529fd75921c53765d4967c0862b58a137e130f9e767d196376"} err="failed to get container status \"2ee3cdf670c41e529fd75921c53765d4967c0862b58a137e130f9e767d196376\": rpc error: code = NotFound desc = could not find container \"2ee3cdf670c41e529fd75921c53765d4967c0862b58a137e130f9e767d196376\": container with ID starting with 2ee3cdf670c41e529fd75921c53765d4967c0862b58a137e130f9e767d196376 not found: ID does not exist" Dec 06 06:15:30 crc kubenswrapper[4958]: I1206 06:15:30.057532 4958 scope.go:117] "RemoveContainer" 
containerID="6d98e8cd2590054ba56931708bda51df15808c1353f45a7b646909c4e7a83127" Dec 06 06:15:30 crc kubenswrapper[4958]: E1206 06:15:30.057748 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d98e8cd2590054ba56931708bda51df15808c1353f45a7b646909c4e7a83127\": container with ID starting with 6d98e8cd2590054ba56931708bda51df15808c1353f45a7b646909c4e7a83127 not found: ID does not exist" containerID="6d98e8cd2590054ba56931708bda51df15808c1353f45a7b646909c4e7a83127" Dec 06 06:15:30 crc kubenswrapper[4958]: I1206 06:15:30.057772 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d98e8cd2590054ba56931708bda51df15808c1353f45a7b646909c4e7a83127"} err="failed to get container status \"6d98e8cd2590054ba56931708bda51df15808c1353f45a7b646909c4e7a83127\": rpc error: code = NotFound desc = could not find container \"6d98e8cd2590054ba56931708bda51df15808c1353f45a7b646909c4e7a83127\": container with ID starting with 6d98e8cd2590054ba56931708bda51df15808c1353f45a7b646909c4e7a83127 not found: ID does not exist" Dec 06 06:15:30 crc kubenswrapper[4958]: I1206 06:15:30.279064 4958 scope.go:117] "RemoveContainer" containerID="989c48790bf8935d614fc84cf222bf2437aea7bbfbc4742f34be129298cfa81d" Dec 06 06:15:31 crc kubenswrapper[4958]: I1206 06:15:31.773422 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e0e97b4-80af-4cc8-98f4-36a10d02545f" path="/var/lib/kubelet/pods/3e0e97b4-80af-4cc8-98f4-36a10d02545f/volumes" Dec 06 06:16:09 crc kubenswrapper[4958]: I1206 06:16:09.866610 4958 patch_prober.go:28] interesting pod/machine-config-daemon-5ktnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 06:16:09 crc kubenswrapper[4958]: I1206 06:16:09.867326 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 06:16:39 crc kubenswrapper[4958]: I1206 06:16:39.865986 4958 patch_prober.go:28] interesting pod/machine-config-daemon-5ktnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 06:16:39 crc kubenswrapper[4958]: I1206 06:16:39.867391 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 06:17:09 crc kubenswrapper[4958]: I1206 06:17:09.866147 4958 patch_prober.go:28] interesting pod/machine-config-daemon-5ktnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 06:17:09 crc kubenswrapper[4958]: I1206 06:17:09.866834 4958 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 06:17:09 crc kubenswrapper[4958]: I1206 06:17:09.866882 4958 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" Dec 06 06:17:09 crc kubenswrapper[4958]: I1206 06:17:09.910161 4958 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c86c9d4c02c43f332686713b14d9e13d995299506394c0fdebba92a66268415a"} pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 06 06:17:09 crc kubenswrapper[4958]: I1206 06:17:09.910260 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" containerName="machine-config-daemon" containerID="cri-o://c86c9d4c02c43f332686713b14d9e13d995299506394c0fdebba92a66268415a" gracePeriod=600 Dec 06 06:17:10 crc kubenswrapper[4958]: I1206 06:17:10.921636 4958 generic.go:334] "Generic (PLEG): container finished" podID="c13528c0-da5d-4d55-9155-2c29c33edfc4" containerID="c86c9d4c02c43f332686713b14d9e13d995299506394c0fdebba92a66268415a" exitCode=0 Dec 06 06:17:10 crc kubenswrapper[4958]: I1206 06:17:10.921703 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" event={"ID":"c13528c0-da5d-4d55-9155-2c29c33edfc4","Type":"ContainerDied","Data":"c86c9d4c02c43f332686713b14d9e13d995299506394c0fdebba92a66268415a"} Dec 06 06:17:10 crc kubenswrapper[4958]: I1206 06:17:10.922288 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" event={"ID":"c13528c0-da5d-4d55-9155-2c29c33edfc4","Type":"ContainerStarted","Data":"e5f89a3f3ebb3189d7b7afed5d40ed149c6033267fe638ed5096dc97104ae6ad"} Dec 06 06:17:10 crc kubenswrapper[4958]: I1206 06:17:10.922324 4958 scope.go:117] "RemoveContainer" containerID="78722ed8e9c7d0ba1e717d3279a2f99d9ad86a7c8ebd7a1c9aa99d270d4637d9" Dec 06 06:17:47 crc kubenswrapper[4958]: I1206 06:17:47.259717 4958 generic.go:334] "Generic (PLEG): container finished" podID="5378d94e-8c86-4393-9dc6-dda81d635c12" containerID="e81933d7882e64532286f00bfe20e053722865c7b45b21ba605546e5eeca61f8" exitCode=0 Dec 06 06:17:47 crc kubenswrapper[4958]: I1206 06:17:47.259814 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-m5rrv" event={"ID":"5378d94e-8c86-4393-9dc6-dda81d635c12","Type":"ContainerDied","Data":"e81933d7882e64532286f00bfe20e053722865c7b45b21ba605546e5eeca61f8"} Dec 06 06:17:48 crc kubenswrapper[4958]: I1206 06:17:48.672945 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-m5rrv" Dec 06 06:17:48 crc kubenswrapper[4958]: I1206 06:17:48.751201 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sc4zd\" (UniqueName: \"kubernetes.io/projected/5378d94e-8c86-4393-9dc6-dda81d635c12-kube-api-access-sc4zd\") pod \"5378d94e-8c86-4393-9dc6-dda81d635c12\" (UID: \"5378d94e-8c86-4393-9dc6-dda81d635c12\") " Dec 06 06:17:48 crc kubenswrapper[4958]: I1206 06:17:48.751275 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/5378d94e-8c86-4393-9dc6-dda81d635c12-libvirt-secret-0\") pod \"5378d94e-8c86-4393-9dc6-dda81d635c12\" (UID: \"5378d94e-8c86-4393-9dc6-dda81d635c12\") " Dec 06 06:17:48 crc kubenswrapper[4958]: I1206 06:17:48.751367 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5378d94e-8c86-4393-9dc6-dda81d635c12-ssh-key\") pod \"5378d94e-8c86-4393-9dc6-dda81d635c12\" (UID: \"5378d94e-8c86-4393-9dc6-dda81d635c12\") " Dec 06 06:17:48 crc kubenswrapper[4958]: I1206 06:17:48.751396 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5378d94e-8c86-4393-9dc6-dda81d635c12-libvirt-combined-ca-bundle\") pod \"5378d94e-8c86-4393-9dc6-dda81d635c12\" (UID: \"5378d94e-8c86-4393-9dc6-dda81d635c12\") " Dec 06 06:17:48 crc kubenswrapper[4958]: I1206 06:17:48.751448 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5378d94e-8c86-4393-9dc6-dda81d635c12-inventory\") pod \"5378d94e-8c86-4393-9dc6-dda81d635c12\" (UID: \"5378d94e-8c86-4393-9dc6-dda81d635c12\") " Dec 06 06:17:48 crc kubenswrapper[4958]: I1206 06:17:48.757415 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5378d94e-8c86-4393-9dc6-dda81d635c12-kube-api-access-sc4zd" (OuterVolumeSpecName: "kube-api-access-sc4zd") pod "5378d94e-8c86-4393-9dc6-dda81d635c12" (UID: "5378d94e-8c86-4393-9dc6-dda81d635c12"). InnerVolumeSpecName "kube-api-access-sc4zd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:17:48 crc kubenswrapper[4958]: I1206 06:17:48.758786 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5378d94e-8c86-4393-9dc6-dda81d635c12-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "5378d94e-8c86-4393-9dc6-dda81d635c12" (UID: "5378d94e-8c86-4393-9dc6-dda81d635c12"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:17:48 crc kubenswrapper[4958]: I1206 06:17:48.779137 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5378d94e-8c86-4393-9dc6-dda81d635c12-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "5378d94e-8c86-4393-9dc6-dda81d635c12" (UID: "5378d94e-8c86-4393-9dc6-dda81d635c12"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:17:48 crc kubenswrapper[4958]: I1206 06:17:48.781774 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5378d94e-8c86-4393-9dc6-dda81d635c12-inventory" (OuterVolumeSpecName: "inventory") pod "5378d94e-8c86-4393-9dc6-dda81d635c12" (UID: "5378d94e-8c86-4393-9dc6-dda81d635c12"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:17:48 crc kubenswrapper[4958]: I1206 06:17:48.801936 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5378d94e-8c86-4393-9dc6-dda81d635c12-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "5378d94e-8c86-4393-9dc6-dda81d635c12" (UID: "5378d94e-8c86-4393-9dc6-dda81d635c12"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:17:48 crc kubenswrapper[4958]: I1206 06:17:48.854226 4958 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5378d94e-8c86-4393-9dc6-dda81d635c12-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 06:17:48 crc kubenswrapper[4958]: I1206 06:17:48.854260 4958 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5378d94e-8c86-4393-9dc6-dda81d635c12-inventory\") on node \"crc\" DevicePath \"\"" Dec 06 06:17:48 crc kubenswrapper[4958]: I1206 06:17:48.854271 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sc4zd\" (UniqueName: \"kubernetes.io/projected/5378d94e-8c86-4393-9dc6-dda81d635c12-kube-api-access-sc4zd\") on node \"crc\" DevicePath \"\"" Dec 06 06:17:48 crc kubenswrapper[4958]: I1206 06:17:48.854281 4958 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/5378d94e-8c86-4393-9dc6-dda81d635c12-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Dec 06 06:17:48 crc kubenswrapper[4958]: I1206 06:17:48.854288 4958 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5378d94e-8c86-4393-9dc6-dda81d635c12-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 06 06:17:49 crc kubenswrapper[4958]: I1206 06:17:49.277856 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-m5rrv" event={"ID":"5378d94e-8c86-4393-9dc6-dda81d635c12","Type":"ContainerDied","Data":"ac9d80c81d8bb1c408383c9152eb81fad1cda9dcad7ab193d421cdf3d006085b"} Dec 06 06:17:49 crc kubenswrapper[4958]: I1206 06:17:49.278278 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac9d80c81d8bb1c408383c9152eb81fad1cda9dcad7ab193d421cdf3d006085b" Dec 06 06:17:49 crc kubenswrapper[4958]: I1206 06:17:49.277976 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-m5rrv" Dec 06 06:17:49 crc kubenswrapper[4958]: I1206 06:17:49.375727 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-sbjj6"] Dec 06 06:17:49 crc kubenswrapper[4958]: E1206 06:17:49.376135 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e0e97b4-80af-4cc8-98f4-36a10d02545f" containerName="extract-utilities" Dec 06 06:17:49 crc kubenswrapper[4958]: I1206 06:17:49.376152 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e0e97b4-80af-4cc8-98f4-36a10d02545f" containerName="extract-utilities" Dec 06 06:17:49 crc kubenswrapper[4958]: E1206 06:17:49.376179 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5378d94e-8c86-4393-9dc6-dda81d635c12" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Dec 06 06:17:49 crc kubenswrapper[4958]: I1206 06:17:49.376187 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="5378d94e-8c86-4393-9dc6-dda81d635c12" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Dec 06 06:17:49 crc kubenswrapper[4958]: E1206 06:17:49.376202 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e0e97b4-80af-4cc8-98f4-36a10d02545f" containerName="extract-content" Dec 06 06:17:49 crc kubenswrapper[4958]: I1206 06:17:49.376207 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e0e97b4-80af-4cc8-98f4-36a10d02545f" containerName="extract-content" Dec 06 06:17:49 crc kubenswrapper[4958]: E1206 06:17:49.376236 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e0e97b4-80af-4cc8-98f4-36a10d02545f" containerName="registry-server" Dec 06 06:17:49 crc kubenswrapper[4958]: I1206 06:17:49.376243 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e0e97b4-80af-4cc8-98f4-36a10d02545f" containerName="registry-server" Dec 06 06:17:49 crc kubenswrapper[4958]: I1206 06:17:49.376449 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="5378d94e-8c86-4393-9dc6-dda81d635c12" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Dec 06 06:17:49 crc kubenswrapper[4958]: I1206 06:17:49.376465 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e0e97b4-80af-4cc8-98f4-36a10d02545f" containerName="registry-server" Dec 06 06:17:49 crc kubenswrapper[4958]: I1206 06:17:49.377153 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sbjj6" Dec 06 06:17:49 crc kubenswrapper[4958]: I1206 06:17:49.382075 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Dec 06 06:17:49 crc kubenswrapper[4958]: I1206 06:17:49.382120 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Dec 06 06:17:49 crc kubenswrapper[4958]: I1206 06:17:49.382214 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 06 06:17:49 crc kubenswrapper[4958]: I1206 06:17:49.382349 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dqr5b" Dec 06 06:17:49 crc kubenswrapper[4958]: I1206 06:17:49.382505 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 06 06:17:49 crc kubenswrapper[4958]: I1206 06:17:49.382519 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 06 06:17:49 crc kubenswrapper[4958]: I1206 06:17:49.383518 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Dec 06 06:17:49 crc kubenswrapper[4958]: I1206 06:17:49.400637 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-sbjj6"] Dec 06 06:17:49 crc kubenswrapper[4958]: I1206 06:17:49.470461 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/d85c7ce5-d270-4525-83a5-266d104bcc79-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sbjj6\" (UID: \"d85c7ce5-d270-4525-83a5-266d104bcc79\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sbjj6" Dec 06 06:17:49 crc kubenswrapper[4958]: I1206 06:17:49.470735 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d85c7ce5-d270-4525-83a5-266d104bcc79-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sbjj6\" (UID: \"d85c7ce5-d270-4525-83a5-266d104bcc79\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sbjj6" Dec 06 06:17:49 crc kubenswrapper[4958]: I1206 06:17:49.470924 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/d85c7ce5-d270-4525-83a5-266d104bcc79-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sbjj6\" (UID: \"d85c7ce5-d270-4525-83a5-266d104bcc79\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sbjj6" Dec 06 06:17:49 crc kubenswrapper[4958]: I1206 06:17:49.470991 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9b45c\" (UniqueName: \"kubernetes.io/projected/d85c7ce5-d270-4525-83a5-266d104bcc79-kube-api-access-9b45c\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sbjj6\" (UID: \"d85c7ce5-d270-4525-83a5-266d104bcc79\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sbjj6" Dec 06 06:17:49 crc kubenswrapper[4958]: I1206 06:17:49.471038 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: 
\"kubernetes.io/secret/d85c7ce5-d270-4525-83a5-266d104bcc79-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sbjj6\" (UID: \"d85c7ce5-d270-4525-83a5-266d104bcc79\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sbjj6" Dec 06 06:17:49 crc kubenswrapper[4958]: I1206 06:17:49.471079 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d85c7ce5-d270-4525-83a5-266d104bcc79-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sbjj6\" (UID: \"d85c7ce5-d270-4525-83a5-266d104bcc79\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sbjj6" Dec 06 06:17:49 crc kubenswrapper[4958]: I1206 06:17:49.471110 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/d85c7ce5-d270-4525-83a5-266d104bcc79-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sbjj6\" (UID: \"d85c7ce5-d270-4525-83a5-266d104bcc79\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sbjj6" Dec 06 06:17:49 crc kubenswrapper[4958]: I1206 06:17:49.471245 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/d85c7ce5-d270-4525-83a5-266d104bcc79-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sbjj6\" (UID: \"d85c7ce5-d270-4525-83a5-266d104bcc79\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sbjj6" Dec 06 06:17:49 crc kubenswrapper[4958]: I1206 06:17:49.471294 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d85c7ce5-d270-4525-83a5-266d104bcc79-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sbjj6\" (UID: \"d85c7ce5-d270-4525-83a5-266d104bcc79\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sbjj6" Dec 06 06:17:49 crc kubenswrapper[4958]: I1206 06:17:49.573541 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9b45c\" (UniqueName: \"kubernetes.io/projected/d85c7ce5-d270-4525-83a5-266d104bcc79-kube-api-access-9b45c\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sbjj6\" (UID: \"d85c7ce5-d270-4525-83a5-266d104bcc79\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sbjj6" Dec 06 06:17:49 crc kubenswrapper[4958]: I1206 06:17:49.573599 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/d85c7ce5-d270-4525-83a5-266d104bcc79-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sbjj6\" (UID: \"d85c7ce5-d270-4525-83a5-266d104bcc79\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sbjj6" Dec 06 06:17:49 crc kubenswrapper[4958]: I1206 06:17:49.573627 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d85c7ce5-d270-4525-83a5-266d104bcc79-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sbjj6\" (UID: \"d85c7ce5-d270-4525-83a5-266d104bcc79\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sbjj6" Dec 06 06:17:49 crc kubenswrapper[4958]: I1206 06:17:49.573648 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: 
\"kubernetes.io/secret/d85c7ce5-d270-4525-83a5-266d104bcc79-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sbjj6\" (UID: \"d85c7ce5-d270-4525-83a5-266d104bcc79\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sbjj6" Dec 06 06:17:49 crc kubenswrapper[4958]: I1206 06:17:49.573707 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/d85c7ce5-d270-4525-83a5-266d104bcc79-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sbjj6\" (UID: \"d85c7ce5-d270-4525-83a5-266d104bcc79\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sbjj6" Dec 06 06:17:49 crc kubenswrapper[4958]: I1206 06:17:49.573751 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d85c7ce5-d270-4525-83a5-266d104bcc79-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sbjj6\" (UID: \"d85c7ce5-d270-4525-83a5-266d104bcc79\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sbjj6" Dec 06 06:17:49 crc kubenswrapper[4958]: I1206 06:17:49.573793 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/d85c7ce5-d270-4525-83a5-266d104bcc79-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sbjj6\" (UID: \"d85c7ce5-d270-4525-83a5-266d104bcc79\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sbjj6" Dec 06 06:17:49 crc kubenswrapper[4958]: I1206 06:17:49.573859 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d85c7ce5-d270-4525-83a5-266d104bcc79-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sbjj6\" (UID: \"d85c7ce5-d270-4525-83a5-266d104bcc79\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sbjj6" Dec 06 06:17:49 crc kubenswrapper[4958]: I1206 06:17:49.573905 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/d85c7ce5-d270-4525-83a5-266d104bcc79-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sbjj6\" (UID: \"d85c7ce5-d270-4525-83a5-266d104bcc79\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sbjj6" Dec 06 06:17:49 crc kubenswrapper[4958]: I1206 06:17:49.574959 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/d85c7ce5-d270-4525-83a5-266d104bcc79-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sbjj6\" (UID: \"d85c7ce5-d270-4525-83a5-266d104bcc79\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sbjj6" Dec 06 06:17:49 crc kubenswrapper[4958]: I1206 06:17:49.580916 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d85c7ce5-d270-4525-83a5-266d104bcc79-ssh-key\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sbjj6\" (UID: \"d85c7ce5-d270-4525-83a5-266d104bcc79\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sbjj6" Dec 06 06:17:49 crc kubenswrapper[4958]: I1206 06:17:49.580918 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/d85c7ce5-d270-4525-83a5-266d104bcc79-nova-migration-ssh-key-1\") pod 
\"nova-edpm-deployment-openstack-edpm-ipam-sbjj6\" (UID: \"d85c7ce5-d270-4525-83a5-266d104bcc79\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sbjj6" Dec 06 06:17:49 crc kubenswrapper[4958]: I1206 06:17:49.580957 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/d85c7ce5-d270-4525-83a5-266d104bcc79-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sbjj6\" (UID: \"d85c7ce5-d270-4525-83a5-266d104bcc79\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sbjj6" Dec 06 06:17:49 crc kubenswrapper[4958]: I1206 06:17:49.583434 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/d85c7ce5-d270-4525-83a5-266d104bcc79-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sbjj6\" (UID: \"d85c7ce5-d270-4525-83a5-266d104bcc79\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sbjj6" Dec 06 06:17:49 crc kubenswrapper[4958]: I1206 06:17:49.583734 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d85c7ce5-d270-4525-83a5-266d104bcc79-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sbjj6\" (UID: \"d85c7ce5-d270-4525-83a5-266d104bcc79\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sbjj6" Dec 06 06:17:49 crc kubenswrapper[4958]: I1206 06:17:49.584627 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d85c7ce5-d270-4525-83a5-266d104bcc79-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sbjj6\" (UID: \"d85c7ce5-d270-4525-83a5-266d104bcc79\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sbjj6" Dec 06 06:17:49 crc kubenswrapper[4958]: I1206 06:17:49.588798 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/d85c7ce5-d270-4525-83a5-266d104bcc79-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sbjj6\" (UID: \"d85c7ce5-d270-4525-83a5-266d104bcc79\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sbjj6" Dec 06 06:17:49 crc kubenswrapper[4958]: I1206 06:17:49.599817 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9b45c\" (UniqueName: \"kubernetes.io/projected/d85c7ce5-d270-4525-83a5-266d104bcc79-kube-api-access-9b45c\") pod \"nova-edpm-deployment-openstack-edpm-ipam-sbjj6\" (UID: \"d85c7ce5-d270-4525-83a5-266d104bcc79\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sbjj6" Dec 06 06:17:49 crc kubenswrapper[4958]: I1206 06:17:49.697686 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sbjj6" Dec 06 06:17:50 crc kubenswrapper[4958]: I1206 06:17:50.237999 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-sbjj6"] Dec 06 06:17:50 crc kubenswrapper[4958]: I1206 06:17:50.288561 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sbjj6" event={"ID":"d85c7ce5-d270-4525-83a5-266d104bcc79","Type":"ContainerStarted","Data":"3f87b71e999ca03ff2676789859ca0af7bbe46e8204312d9682b8470b04015cf"} Dec 06 06:17:51 crc kubenswrapper[4958]: I1206 06:17:51.302398 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sbjj6" event={"ID":"d85c7ce5-d270-4525-83a5-266d104bcc79","Type":"ContainerStarted","Data":"29f20a874f65d24fc2e24655bb471515f83c44d8dc57fb97920f72b0d3a57734"} Dec 06 06:17:51 crc kubenswrapper[4958]: I1206 06:17:51.322653 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sbjj6" podStartSLOduration=1.9239270400000001 podStartE2EDuration="2.322635613s" podCreationTimestamp="2025-12-06 06:17:49 +0000 UTC" firstStartedPulling="2025-12-06 06:17:50.241152048 +0000 UTC m=+2980.774922811" lastFinishedPulling="2025-12-06 06:17:50.639860621 +0000 UTC m=+2981.173631384" observedRunningTime="2025-12-06 06:17:51.317976408 +0000 UTC m=+2981.851747171" watchObservedRunningTime="2025-12-06 06:17:51.322635613 +0000 UTC m=+2981.856406376" Dec 06 06:19:39 crc kubenswrapper[4958]: I1206 06:19:39.865818 4958 patch_prober.go:28] interesting pod/machine-config-daemon-5ktnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 06:19:39 crc kubenswrapper[4958]: I1206 06:19:39.866454 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 06:20:09 crc kubenswrapper[4958]: I1206 06:20:09.866388 4958 patch_prober.go:28] interesting pod/machine-config-daemon-5ktnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 06:20:09 crc kubenswrapper[4958]: I1206 06:20:09.867831 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 06:20:38 crc kubenswrapper[4958]: I1206 06:20:38.000871 4958 generic.go:334] "Generic (PLEG): container finished" podID="d85c7ce5-d270-4525-83a5-266d104bcc79" containerID="29f20a874f65d24fc2e24655bb471515f83c44d8dc57fb97920f72b0d3a57734" exitCode=0 Dec 06 06:20:38 crc kubenswrapper[4958]: I1206 06:20:38.000934 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sbjj6" 
event={"ID":"d85c7ce5-d270-4525-83a5-266d104bcc79","Type":"ContainerDied","Data":"29f20a874f65d24fc2e24655bb471515f83c44d8dc57fb97920f72b0d3a57734"} Dec 06 06:20:39 crc kubenswrapper[4958]: I1206 06:20:39.057807 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-npxnw"] Dec 06 06:20:39 crc kubenswrapper[4958]: I1206 06:20:39.060708 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-npxnw" Dec 06 06:20:39 crc kubenswrapper[4958]: I1206 06:20:39.085284 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-npxnw"] Dec 06 06:20:39 crc kubenswrapper[4958]: I1206 06:20:39.201126 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48808921-27ec-4f30-9be8-d6fbdd088e07-catalog-content\") pod \"community-operators-npxnw\" (UID: \"48808921-27ec-4f30-9be8-d6fbdd088e07\") " pod="openshift-marketplace/community-operators-npxnw" Dec 06 06:20:39 crc kubenswrapper[4958]: I1206 06:20:39.201217 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dpxw\" (UniqueName: \"kubernetes.io/projected/48808921-27ec-4f30-9be8-d6fbdd088e07-kube-api-access-9dpxw\") pod \"community-operators-npxnw\" (UID: \"48808921-27ec-4f30-9be8-d6fbdd088e07\") " pod="openshift-marketplace/community-operators-npxnw" Dec 06 06:20:39 crc kubenswrapper[4958]: I1206 06:20:39.201304 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48808921-27ec-4f30-9be8-d6fbdd088e07-utilities\") pod \"community-operators-npxnw\" (UID: \"48808921-27ec-4f30-9be8-d6fbdd088e07\") " pod="openshift-marketplace/community-operators-npxnw" Dec 06 06:20:39 crc kubenswrapper[4958]: I1206 06:20:39.311787 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48808921-27ec-4f30-9be8-d6fbdd088e07-catalog-content\") pod \"community-operators-npxnw\" (UID: \"48808921-27ec-4f30-9be8-d6fbdd088e07\") " pod="openshift-marketplace/community-operators-npxnw" Dec 06 06:20:39 crc kubenswrapper[4958]: I1206 06:20:39.312282 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48808921-27ec-4f30-9be8-d6fbdd088e07-catalog-content\") pod \"community-operators-npxnw\" (UID: \"48808921-27ec-4f30-9be8-d6fbdd088e07\") " pod="openshift-marketplace/community-operators-npxnw" Dec 06 06:20:39 crc kubenswrapper[4958]: I1206 06:20:39.312381 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dpxw\" (UniqueName: \"kubernetes.io/projected/48808921-27ec-4f30-9be8-d6fbdd088e07-kube-api-access-9dpxw\") pod \"community-operators-npxnw\" (UID: \"48808921-27ec-4f30-9be8-d6fbdd088e07\") " pod="openshift-marketplace/community-operators-npxnw" Dec 06 06:20:39 crc kubenswrapper[4958]: I1206 06:20:39.312930 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48808921-27ec-4f30-9be8-d6fbdd088e07-utilities\") pod \"community-operators-npxnw\" (UID: \"48808921-27ec-4f30-9be8-d6fbdd088e07\") " pod="openshift-marketplace/community-operators-npxnw" Dec 06 06:20:39 crc kubenswrapper[4958]: I1206 
06:20:39.314523 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48808921-27ec-4f30-9be8-d6fbdd088e07-utilities\") pod \"community-operators-npxnw\" (UID: \"48808921-27ec-4f30-9be8-d6fbdd088e07\") " pod="openshift-marketplace/community-operators-npxnw" Dec 06 06:20:39 crc kubenswrapper[4958]: I1206 06:20:39.333061 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dpxw\" (UniqueName: \"kubernetes.io/projected/48808921-27ec-4f30-9be8-d6fbdd088e07-kube-api-access-9dpxw\") pod \"community-operators-npxnw\" (UID: \"48808921-27ec-4f30-9be8-d6fbdd088e07\") " pod="openshift-marketplace/community-operators-npxnw" Dec 06 06:20:39 crc kubenswrapper[4958]: I1206 06:20:39.386146 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-npxnw" Dec 06 06:20:39 crc kubenswrapper[4958]: I1206 06:20:39.536730 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sbjj6" Dec 06 06:20:39 crc kubenswrapper[4958]: I1206 06:20:39.729239 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/d85c7ce5-d270-4525-83a5-266d104bcc79-nova-cell1-compute-config-1\") pod \"d85c7ce5-d270-4525-83a5-266d104bcc79\" (UID: \"d85c7ce5-d270-4525-83a5-266d104bcc79\") " Dec 06 06:20:39 crc kubenswrapper[4958]: I1206 06:20:39.729683 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d85c7ce5-d270-4525-83a5-266d104bcc79-ssh-key\") pod \"d85c7ce5-d270-4525-83a5-266d104bcc79\" (UID: \"d85c7ce5-d270-4525-83a5-266d104bcc79\") " Dec 06 06:20:39 crc kubenswrapper[4958]: I1206 06:20:39.729754 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/d85c7ce5-d270-4525-83a5-266d104bcc79-nova-migration-ssh-key-1\") pod \"d85c7ce5-d270-4525-83a5-266d104bcc79\" (UID: \"d85c7ce5-d270-4525-83a5-266d104bcc79\") " Dec 06 06:20:39 crc kubenswrapper[4958]: I1206 06:20:39.729792 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d85c7ce5-d270-4525-83a5-266d104bcc79-inventory\") pod \"d85c7ce5-d270-4525-83a5-266d104bcc79\" (UID: \"d85c7ce5-d270-4525-83a5-266d104bcc79\") " Dec 06 06:20:39 crc kubenswrapper[4958]: I1206 06:20:39.729858 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9b45c\" (UniqueName: \"kubernetes.io/projected/d85c7ce5-d270-4525-83a5-266d104bcc79-kube-api-access-9b45c\") pod \"d85c7ce5-d270-4525-83a5-266d104bcc79\" (UID: \"d85c7ce5-d270-4525-83a5-266d104bcc79\") " Dec 06 06:20:39 crc kubenswrapper[4958]: I1206 06:20:39.729904 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d85c7ce5-d270-4525-83a5-266d104bcc79-nova-combined-ca-bundle\") pod \"d85c7ce5-d270-4525-83a5-266d104bcc79\" (UID: \"d85c7ce5-d270-4525-83a5-266d104bcc79\") " Dec 06 06:20:39 crc kubenswrapper[4958]: I1206 06:20:39.729982 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: 
\"kubernetes.io/secret/d85c7ce5-d270-4525-83a5-266d104bcc79-nova-migration-ssh-key-0\") pod \"d85c7ce5-d270-4525-83a5-266d104bcc79\" (UID: \"d85c7ce5-d270-4525-83a5-266d104bcc79\") " Dec 06 06:20:39 crc kubenswrapper[4958]: I1206 06:20:39.730043 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/d85c7ce5-d270-4525-83a5-266d104bcc79-nova-cell1-compute-config-0\") pod \"d85c7ce5-d270-4525-83a5-266d104bcc79\" (UID: \"d85c7ce5-d270-4525-83a5-266d104bcc79\") " Dec 06 06:20:39 crc kubenswrapper[4958]: I1206 06:20:39.730112 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/d85c7ce5-d270-4525-83a5-266d104bcc79-nova-extra-config-0\") pod \"d85c7ce5-d270-4525-83a5-266d104bcc79\" (UID: \"d85c7ce5-d270-4525-83a5-266d104bcc79\") " Dec 06 06:20:39 crc kubenswrapper[4958]: I1206 06:20:39.796873 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d85c7ce5-d270-4525-83a5-266d104bcc79-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "d85c7ce5-d270-4525-83a5-266d104bcc79" (UID: "d85c7ce5-d270-4525-83a5-266d104bcc79"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:20:39 crc kubenswrapper[4958]: I1206 06:20:39.826997 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d85c7ce5-d270-4525-83a5-266d104bcc79-kube-api-access-9b45c" (OuterVolumeSpecName: "kube-api-access-9b45c") pod "d85c7ce5-d270-4525-83a5-266d104bcc79" (UID: "d85c7ce5-d270-4525-83a5-266d104bcc79"). InnerVolumeSpecName "kube-api-access-9b45c". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:20:39 crc kubenswrapper[4958]: I1206 06:20:39.828135 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d85c7ce5-d270-4525-83a5-266d104bcc79-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "d85c7ce5-d270-4525-83a5-266d104bcc79" (UID: "d85c7ce5-d270-4525-83a5-266d104bcc79"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:20:39 crc kubenswrapper[4958]: I1206 06:20:39.834955 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9b45c\" (UniqueName: \"kubernetes.io/projected/d85c7ce5-d270-4525-83a5-266d104bcc79-kube-api-access-9b45c\") on node \"crc\" DevicePath \"\"" Dec 06 06:20:39 crc kubenswrapper[4958]: I1206 06:20:39.834986 4958 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d85c7ce5-d270-4525-83a5-266d104bcc79-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 06:20:39 crc kubenswrapper[4958]: I1206 06:20:39.834995 4958 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/d85c7ce5-d270-4525-83a5-266d104bcc79-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Dec 06 06:20:39 crc kubenswrapper[4958]: I1206 06:20:39.846414 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d85c7ce5-d270-4525-83a5-266d104bcc79-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "d85c7ce5-d270-4525-83a5-266d104bcc79" (UID: "d85c7ce5-d270-4525-83a5-266d104bcc79"). 
InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:20:39 crc kubenswrapper[4958]: I1206 06:20:39.867196 4958 patch_prober.go:28] interesting pod/machine-config-daemon-5ktnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 06:20:39 crc kubenswrapper[4958]: I1206 06:20:39.867249 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 06:20:39 crc kubenswrapper[4958]: I1206 06:20:39.900139 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d85c7ce5-d270-4525-83a5-266d104bcc79-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "d85c7ce5-d270-4525-83a5-266d104bcc79" (UID: "d85c7ce5-d270-4525-83a5-266d104bcc79"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:20:39 crc kubenswrapper[4958]: I1206 06:20:39.904718 4958 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" Dec 06 06:20:39 crc kubenswrapper[4958]: I1206 06:20:39.905640 4958 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e5f89a3f3ebb3189d7b7afed5d40ed149c6033267fe638ed5096dc97104ae6ad"} pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 06 06:20:39 crc kubenswrapper[4958]: I1206 06:20:39.905710 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" containerName="machine-config-daemon" containerID="cri-o://e5f89a3f3ebb3189d7b7afed5d40ed149c6033267fe638ed5096dc97104ae6ad" gracePeriod=600 Dec 06 06:20:39 crc kubenswrapper[4958]: I1206 06:20:39.915402 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d85c7ce5-d270-4525-83a5-266d104bcc79-inventory" (OuterVolumeSpecName: "inventory") pod "d85c7ce5-d270-4525-83a5-266d104bcc79" (UID: "d85c7ce5-d270-4525-83a5-266d104bcc79"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:20:39 crc kubenswrapper[4958]: I1206 06:20:39.918692 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d85c7ce5-d270-4525-83a5-266d104bcc79-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "d85c7ce5-d270-4525-83a5-266d104bcc79" (UID: "d85c7ce5-d270-4525-83a5-266d104bcc79"). InnerVolumeSpecName "nova-cell1-compute-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:20:39 crc kubenswrapper[4958]: I1206 06:20:39.936610 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d85c7ce5-d270-4525-83a5-266d104bcc79-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "d85c7ce5-d270-4525-83a5-266d104bcc79" (UID: "d85c7ce5-d270-4525-83a5-266d104bcc79"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:20:39 crc kubenswrapper[4958]: I1206 06:20:39.937580 4958 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/d85c7ce5-d270-4525-83a5-266d104bcc79-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Dec 06 06:20:39 crc kubenswrapper[4958]: I1206 06:20:39.937642 4958 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d85c7ce5-d270-4525-83a5-266d104bcc79-inventory\") on node \"crc\" DevicePath \"\"" Dec 06 06:20:39 crc kubenswrapper[4958]: I1206 06:20:39.937655 4958 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/d85c7ce5-d270-4525-83a5-266d104bcc79-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Dec 06 06:20:39 crc kubenswrapper[4958]: I1206 06:20:39.937666 4958 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/d85c7ce5-d270-4525-83a5-266d104bcc79-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Dec 06 06:20:39 crc kubenswrapper[4958]: I1206 06:20:39.937676 4958 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/d85c7ce5-d270-4525-83a5-266d104bcc79-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Dec 06 06:20:39 crc kubenswrapper[4958]: I1206 06:20:39.959566 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d85c7ce5-d270-4525-83a5-266d104bcc79-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "d85c7ce5-d270-4525-83a5-266d104bcc79" (UID: "d85c7ce5-d270-4525-83a5-266d104bcc79"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:20:40 crc kubenswrapper[4958]: I1206 06:20:40.023911 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sbjj6" event={"ID":"d85c7ce5-d270-4525-83a5-266d104bcc79","Type":"ContainerDied","Data":"3f87b71e999ca03ff2676789859ca0af7bbe46e8204312d9682b8470b04015cf"} Dec 06 06:20:40 crc kubenswrapper[4958]: I1206 06:20:40.023955 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f87b71e999ca03ff2676789859ca0af7bbe46e8204312d9682b8470b04015cf" Dec 06 06:20:40 crc kubenswrapper[4958]: I1206 06:20:40.023994 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-sbjj6" Dec 06 06:20:40 crc kubenswrapper[4958]: I1206 06:20:40.039820 4958 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d85c7ce5-d270-4525-83a5-266d104bcc79-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 06 06:20:40 crc kubenswrapper[4958]: E1206 06:20:40.071621 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 06:20:40 crc kubenswrapper[4958]: I1206 06:20:40.112990 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hzf8m"] Dec 06 06:20:40 crc kubenswrapper[4958]: E1206 06:20:40.113422 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d85c7ce5-d270-4525-83a5-266d104bcc79" containerName="nova-edpm-deployment-openstack-edpm-ipam" Dec 06 06:20:40 crc kubenswrapper[4958]: I1206 06:20:40.113439 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="d85c7ce5-d270-4525-83a5-266d104bcc79" containerName="nova-edpm-deployment-openstack-edpm-ipam" Dec 06 06:20:40 crc kubenswrapper[4958]: I1206 06:20:40.113617 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="d85c7ce5-d270-4525-83a5-266d104bcc79" containerName="nova-edpm-deployment-openstack-edpm-ipam" Dec 06 06:20:40 crc kubenswrapper[4958]: I1206 06:20:40.114356 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hzf8m" Dec 06 06:20:40 crc kubenswrapper[4958]: I1206 06:20:40.116892 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 06 06:20:40 crc kubenswrapper[4958]: I1206 06:20:40.117131 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Dec 06 06:20:40 crc kubenswrapper[4958]: I1206 06:20:40.117150 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 06 06:20:40 crc kubenswrapper[4958]: I1206 06:20:40.117179 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dqr5b" Dec 06 06:20:40 crc kubenswrapper[4958]: I1206 06:20:40.117352 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 06 06:20:40 crc kubenswrapper[4958]: I1206 06:20:40.126305 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hzf8m"] Dec 06 06:20:40 crc kubenswrapper[4958]: I1206 06:20:40.191647 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-npxnw"] Dec 06 06:20:40 crc kubenswrapper[4958]: I1206 06:20:40.245247 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fad70a10-a21d-4f57-b3f6-5e2349243973-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hzf8m\" (UID: \"fad70a10-a21d-4f57-b3f6-5e2349243973\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hzf8m" Dec 06 06:20:40 crc kubenswrapper[4958]: I1206 06:20:40.245300 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/fad70a10-a21d-4f57-b3f6-5e2349243973-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hzf8m\" (UID: \"fad70a10-a21d-4f57-b3f6-5e2349243973\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hzf8m" Dec 06 06:20:40 crc kubenswrapper[4958]: I1206 06:20:40.245326 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fad70a10-a21d-4f57-b3f6-5e2349243973-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hzf8m\" (UID: \"fad70a10-a21d-4f57-b3f6-5e2349243973\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hzf8m" Dec 06 06:20:40 crc kubenswrapper[4958]: I1206 06:20:40.246040 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/fad70a10-a21d-4f57-b3f6-5e2349243973-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hzf8m\" (UID: \"fad70a10-a21d-4f57-b3f6-5e2349243973\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hzf8m" Dec 06 06:20:40 crc kubenswrapper[4958]: I1206 06:20:40.246291 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kxpm\" (UniqueName: \"kubernetes.io/projected/fad70a10-a21d-4f57-b3f6-5e2349243973-kube-api-access-8kxpm\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hzf8m\" (UID: 
\"fad70a10-a21d-4f57-b3f6-5e2349243973\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hzf8m" Dec 06 06:20:40 crc kubenswrapper[4958]: I1206 06:20:40.246344 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/fad70a10-a21d-4f57-b3f6-5e2349243973-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hzf8m\" (UID: \"fad70a10-a21d-4f57-b3f6-5e2349243973\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hzf8m" Dec 06 06:20:40 crc kubenswrapper[4958]: I1206 06:20:40.246410 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fad70a10-a21d-4f57-b3f6-5e2349243973-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hzf8m\" (UID: \"fad70a10-a21d-4f57-b3f6-5e2349243973\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hzf8m" Dec 06 06:20:40 crc kubenswrapper[4958]: I1206 06:20:40.348309 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fad70a10-a21d-4f57-b3f6-5e2349243973-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hzf8m\" (UID: \"fad70a10-a21d-4f57-b3f6-5e2349243973\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hzf8m" Dec 06 06:20:40 crc kubenswrapper[4958]: I1206 06:20:40.348370 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/fad70a10-a21d-4f57-b3f6-5e2349243973-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hzf8m\" (UID: \"fad70a10-a21d-4f57-b3f6-5e2349243973\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hzf8m" Dec 06 06:20:40 crc kubenswrapper[4958]: I1206 06:20:40.348401 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fad70a10-a21d-4f57-b3f6-5e2349243973-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hzf8m\" (UID: \"fad70a10-a21d-4f57-b3f6-5e2349243973\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hzf8m" Dec 06 06:20:40 crc kubenswrapper[4958]: I1206 06:20:40.348450 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/fad70a10-a21d-4f57-b3f6-5e2349243973-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hzf8m\" (UID: \"fad70a10-a21d-4f57-b3f6-5e2349243973\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hzf8m" Dec 06 06:20:40 crc kubenswrapper[4958]: I1206 06:20:40.348556 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8kxpm\" (UniqueName: \"kubernetes.io/projected/fad70a10-a21d-4f57-b3f6-5e2349243973-kube-api-access-8kxpm\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hzf8m\" (UID: \"fad70a10-a21d-4f57-b3f6-5e2349243973\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hzf8m" Dec 06 06:20:40 crc kubenswrapper[4958]: I1206 06:20:40.348592 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: 
\"kubernetes.io/secret/fad70a10-a21d-4f57-b3f6-5e2349243973-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hzf8m\" (UID: \"fad70a10-a21d-4f57-b3f6-5e2349243973\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hzf8m" Dec 06 06:20:40 crc kubenswrapper[4958]: I1206 06:20:40.348636 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fad70a10-a21d-4f57-b3f6-5e2349243973-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hzf8m\" (UID: \"fad70a10-a21d-4f57-b3f6-5e2349243973\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hzf8m" Dec 06 06:20:40 crc kubenswrapper[4958]: I1206 06:20:40.353374 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fad70a10-a21d-4f57-b3f6-5e2349243973-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hzf8m\" (UID: \"fad70a10-a21d-4f57-b3f6-5e2349243973\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hzf8m" Dec 06 06:20:40 crc kubenswrapper[4958]: I1206 06:20:40.353813 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/fad70a10-a21d-4f57-b3f6-5e2349243973-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hzf8m\" (UID: \"fad70a10-a21d-4f57-b3f6-5e2349243973\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hzf8m" Dec 06 06:20:40 crc kubenswrapper[4958]: I1206 06:20:40.354184 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fad70a10-a21d-4f57-b3f6-5e2349243973-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hzf8m\" (UID: \"fad70a10-a21d-4f57-b3f6-5e2349243973\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hzf8m" Dec 06 06:20:40 crc kubenswrapper[4958]: I1206 06:20:40.355145 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fad70a10-a21d-4f57-b3f6-5e2349243973-ssh-key\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hzf8m\" (UID: \"fad70a10-a21d-4f57-b3f6-5e2349243973\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hzf8m" Dec 06 06:20:40 crc kubenswrapper[4958]: I1206 06:20:40.357111 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/fad70a10-a21d-4f57-b3f6-5e2349243973-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hzf8m\" (UID: \"fad70a10-a21d-4f57-b3f6-5e2349243973\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hzf8m" Dec 06 06:20:40 crc kubenswrapper[4958]: I1206 06:20:40.361291 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/fad70a10-a21d-4f57-b3f6-5e2349243973-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hzf8m\" (UID: \"fad70a10-a21d-4f57-b3f6-5e2349243973\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hzf8m" Dec 06 06:20:40 crc kubenswrapper[4958]: I1206 06:20:40.370643 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8kxpm\" 
(UniqueName: \"kubernetes.io/projected/fad70a10-a21d-4f57-b3f6-5e2349243973-kube-api-access-8kxpm\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-hzf8m\" (UID: \"fad70a10-a21d-4f57-b3f6-5e2349243973\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hzf8m" Dec 06 06:20:40 crc kubenswrapper[4958]: I1206 06:20:40.437516 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hzf8m" Dec 06 06:20:40 crc kubenswrapper[4958]: I1206 06:20:40.978487 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hzf8m"] Dec 06 06:20:40 crc kubenswrapper[4958]: W1206 06:20:40.979427 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfad70a10_a21d_4f57_b3f6_5e2349243973.slice/crio-7e88198dd4e8be3c9e1540e2e4182f13f1fc819aa410ebef306f04dd43ecee23 WatchSource:0}: Error finding container 7e88198dd4e8be3c9e1540e2e4182f13f1fc819aa410ebef306f04dd43ecee23: Status 404 returned error can't find the container with id 7e88198dd4e8be3c9e1540e2e4182f13f1fc819aa410ebef306f04dd43ecee23 Dec 06 06:20:40 crc kubenswrapper[4958]: I1206 06:20:40.981984 4958 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 06 06:20:41 crc kubenswrapper[4958]: I1206 06:20:41.032813 4958 generic.go:334] "Generic (PLEG): container finished" podID="48808921-27ec-4f30-9be8-d6fbdd088e07" containerID="1684d425efc7ef43a7227b0529d4ce9d253ddab3b6d6f3a421c6988f4217c59b" exitCode=0 Dec 06 06:20:41 crc kubenswrapper[4958]: I1206 06:20:41.032920 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-npxnw" event={"ID":"48808921-27ec-4f30-9be8-d6fbdd088e07","Type":"ContainerDied","Data":"1684d425efc7ef43a7227b0529d4ce9d253ddab3b6d6f3a421c6988f4217c59b"} Dec 06 06:20:41 crc kubenswrapper[4958]: I1206 06:20:41.033067 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-npxnw" event={"ID":"48808921-27ec-4f30-9be8-d6fbdd088e07","Type":"ContainerStarted","Data":"431ed60d534792a2eecb8de0530ea09357a49402899dbaaee2b15c557ff97e90"} Dec 06 06:20:41 crc kubenswrapper[4958]: I1206 06:20:41.035610 4958 generic.go:334] "Generic (PLEG): container finished" podID="c13528c0-da5d-4d55-9155-2c29c33edfc4" containerID="e5f89a3f3ebb3189d7b7afed5d40ed149c6033267fe638ed5096dc97104ae6ad" exitCode=0 Dec 06 06:20:41 crc kubenswrapper[4958]: I1206 06:20:41.035673 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" event={"ID":"c13528c0-da5d-4d55-9155-2c29c33edfc4","Type":"ContainerDied","Data":"e5f89a3f3ebb3189d7b7afed5d40ed149c6033267fe638ed5096dc97104ae6ad"} Dec 06 06:20:41 crc kubenswrapper[4958]: I1206 06:20:41.035703 4958 scope.go:117] "RemoveContainer" containerID="c86c9d4c02c43f332686713b14d9e13d995299506394c0fdebba92a66268415a" Dec 06 06:20:41 crc kubenswrapper[4958]: I1206 06:20:41.036267 4958 scope.go:117] "RemoveContainer" containerID="e5f89a3f3ebb3189d7b7afed5d40ed149c6033267fe638ed5096dc97104ae6ad" Dec 06 06:20:41 crc kubenswrapper[4958]: E1206 06:20:41.036567 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 06:20:41 crc kubenswrapper[4958]: I1206 06:20:41.039089 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hzf8m" event={"ID":"fad70a10-a21d-4f57-b3f6-5e2349243973","Type":"ContainerStarted","Data":"7e88198dd4e8be3c9e1540e2e4182f13f1fc819aa410ebef306f04dd43ecee23"} Dec 06 06:20:42 crc kubenswrapper[4958]: I1206 06:20:42.048875 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-npxnw" event={"ID":"48808921-27ec-4f30-9be8-d6fbdd088e07","Type":"ContainerStarted","Data":"a3f8b923cfd4282e07c4512c14a588e6f69fe3f7d196a4fe2d43bec21a6d72a3"} Dec 06 06:20:42 crc kubenswrapper[4958]: I1206 06:20:42.055347 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hzf8m" event={"ID":"fad70a10-a21d-4f57-b3f6-5e2349243973","Type":"ContainerStarted","Data":"df685d86da97d358e3f0472c8099d50c4283c432852e706707ddb0604849245e"} Dec 06 06:20:42 crc kubenswrapper[4958]: I1206 06:20:42.089449 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hzf8m" podStartSLOduration=1.656498907 podStartE2EDuration="2.089429739s" podCreationTimestamp="2025-12-06 06:20:40 +0000 UTC" firstStartedPulling="2025-12-06 06:20:40.98170441 +0000 UTC m=+3151.515475173" lastFinishedPulling="2025-12-06 06:20:41.414635232 +0000 UTC m=+3151.948406005" observedRunningTime="2025-12-06 06:20:42.082681918 +0000 UTC m=+3152.616452681" watchObservedRunningTime="2025-12-06 06:20:42.089429739 +0000 UTC m=+3152.623200522" Dec 06 06:20:43 crc kubenswrapper[4958]: I1206 06:20:43.069407 4958 generic.go:334] "Generic (PLEG): container finished" podID="48808921-27ec-4f30-9be8-d6fbdd088e07" containerID="a3f8b923cfd4282e07c4512c14a588e6f69fe3f7d196a4fe2d43bec21a6d72a3" exitCode=0 Dec 06 06:20:43 crc kubenswrapper[4958]: I1206 06:20:43.069506 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-npxnw" event={"ID":"48808921-27ec-4f30-9be8-d6fbdd088e07","Type":"ContainerDied","Data":"a3f8b923cfd4282e07c4512c14a588e6f69fe3f7d196a4fe2d43bec21a6d72a3"} Dec 06 06:20:44 crc kubenswrapper[4958]: I1206 06:20:44.082590 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-npxnw" event={"ID":"48808921-27ec-4f30-9be8-d6fbdd088e07","Type":"ContainerStarted","Data":"56988f5387a13251fa1585a93a1c82cb45aaa1edfef2125b1383065a8e2a8f2a"} Dec 06 06:20:44 crc kubenswrapper[4958]: I1206 06:20:44.119848 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-npxnw" podStartSLOduration=2.690460559 podStartE2EDuration="5.119823912s" podCreationTimestamp="2025-12-06 06:20:39 +0000 UTC" firstStartedPulling="2025-12-06 06:20:41.03798072 +0000 UTC m=+3151.571751483" lastFinishedPulling="2025-12-06 06:20:43.467344073 +0000 UTC m=+3154.001114836" observedRunningTime="2025-12-06 06:20:44.111604982 +0000 UTC m=+3154.645375755" watchObservedRunningTime="2025-12-06 06:20:44.119823912 +0000 UTC m=+3154.653594675" Dec 06 06:20:49 crc kubenswrapper[4958]: I1206 06:20:49.387424 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/community-operators-npxnw" Dec 06 06:20:49 crc kubenswrapper[4958]: I1206 06:20:49.388041 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-npxnw" Dec 06 06:20:49 crc kubenswrapper[4958]: I1206 06:20:49.444322 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-npxnw" Dec 06 06:20:50 crc kubenswrapper[4958]: I1206 06:20:50.196433 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-npxnw" Dec 06 06:20:50 crc kubenswrapper[4958]: I1206 06:20:50.242585 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-npxnw"] Dec 06 06:20:52 crc kubenswrapper[4958]: I1206 06:20:52.147852 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-npxnw" podUID="48808921-27ec-4f30-9be8-d6fbdd088e07" containerName="registry-server" containerID="cri-o://56988f5387a13251fa1585a93a1c82cb45aaa1edfef2125b1383065a8e2a8f2a" gracePeriod=2 Dec 06 06:20:53 crc kubenswrapper[4958]: I1206 06:20:53.161009 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-npxnw" Dec 06 06:20:53 crc kubenswrapper[4958]: I1206 06:20:53.161241 4958 generic.go:334] "Generic (PLEG): container finished" podID="48808921-27ec-4f30-9be8-d6fbdd088e07" containerID="56988f5387a13251fa1585a93a1c82cb45aaa1edfef2125b1383065a8e2a8f2a" exitCode=0 Dec 06 06:20:53 crc kubenswrapper[4958]: I1206 06:20:53.161276 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-npxnw" event={"ID":"48808921-27ec-4f30-9be8-d6fbdd088e07","Type":"ContainerDied","Data":"56988f5387a13251fa1585a93a1c82cb45aaa1edfef2125b1383065a8e2a8f2a"} Dec 06 06:20:53 crc kubenswrapper[4958]: I1206 06:20:53.161459 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-npxnw" event={"ID":"48808921-27ec-4f30-9be8-d6fbdd088e07","Type":"ContainerDied","Data":"431ed60d534792a2eecb8de0530ea09357a49402899dbaaee2b15c557ff97e90"} Dec 06 06:20:53 crc kubenswrapper[4958]: I1206 06:20:53.161499 4958 scope.go:117] "RemoveContainer" containerID="56988f5387a13251fa1585a93a1c82cb45aaa1edfef2125b1383065a8e2a8f2a" Dec 06 06:20:53 crc kubenswrapper[4958]: I1206 06:20:53.211987 4958 scope.go:117] "RemoveContainer" containerID="a3f8b923cfd4282e07c4512c14a588e6f69fe3f7d196a4fe2d43bec21a6d72a3" Dec 06 06:20:53 crc kubenswrapper[4958]: I1206 06:20:53.243299 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9dpxw\" (UniqueName: \"kubernetes.io/projected/48808921-27ec-4f30-9be8-d6fbdd088e07-kube-api-access-9dpxw\") pod \"48808921-27ec-4f30-9be8-d6fbdd088e07\" (UID: \"48808921-27ec-4f30-9be8-d6fbdd088e07\") " Dec 06 06:20:53 crc kubenswrapper[4958]: I1206 06:20:53.243382 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48808921-27ec-4f30-9be8-d6fbdd088e07-catalog-content\") pod \"48808921-27ec-4f30-9be8-d6fbdd088e07\" (UID: \"48808921-27ec-4f30-9be8-d6fbdd088e07\") " Dec 06 06:20:53 crc kubenswrapper[4958]: I1206 06:20:53.243442 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/48808921-27ec-4f30-9be8-d6fbdd088e07-utilities\") pod \"48808921-27ec-4f30-9be8-d6fbdd088e07\" (UID: \"48808921-27ec-4f30-9be8-d6fbdd088e07\") " Dec 06 06:20:53 crc kubenswrapper[4958]: I1206 06:20:53.245048 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48808921-27ec-4f30-9be8-d6fbdd088e07-utilities" (OuterVolumeSpecName: "utilities") pod "48808921-27ec-4f30-9be8-d6fbdd088e07" (UID: "48808921-27ec-4f30-9be8-d6fbdd088e07"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:20:53 crc kubenswrapper[4958]: I1206 06:20:53.246552 4958 scope.go:117] "RemoveContainer" containerID="1684d425efc7ef43a7227b0529d4ce9d253ddab3b6d6f3a421c6988f4217c59b" Dec 06 06:20:53 crc kubenswrapper[4958]: I1206 06:20:53.250195 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48808921-27ec-4f30-9be8-d6fbdd088e07-kube-api-access-9dpxw" (OuterVolumeSpecName: "kube-api-access-9dpxw") pod "48808921-27ec-4f30-9be8-d6fbdd088e07" (UID: "48808921-27ec-4f30-9be8-d6fbdd088e07"). InnerVolumeSpecName "kube-api-access-9dpxw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:20:53 crc kubenswrapper[4958]: I1206 06:20:53.304749 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48808921-27ec-4f30-9be8-d6fbdd088e07-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "48808921-27ec-4f30-9be8-d6fbdd088e07" (UID: "48808921-27ec-4f30-9be8-d6fbdd088e07"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:20:53 crc kubenswrapper[4958]: I1206 06:20:53.340429 4958 scope.go:117] "RemoveContainer" containerID="56988f5387a13251fa1585a93a1c82cb45aaa1edfef2125b1383065a8e2a8f2a" Dec 06 06:20:53 crc kubenswrapper[4958]: E1206 06:20:53.341434 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"56988f5387a13251fa1585a93a1c82cb45aaa1edfef2125b1383065a8e2a8f2a\": container with ID starting with 56988f5387a13251fa1585a93a1c82cb45aaa1edfef2125b1383065a8e2a8f2a not found: ID does not exist" containerID="56988f5387a13251fa1585a93a1c82cb45aaa1edfef2125b1383065a8e2a8f2a" Dec 06 06:20:53 crc kubenswrapper[4958]: I1206 06:20:53.341520 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"56988f5387a13251fa1585a93a1c82cb45aaa1edfef2125b1383065a8e2a8f2a"} err="failed to get container status \"56988f5387a13251fa1585a93a1c82cb45aaa1edfef2125b1383065a8e2a8f2a\": rpc error: code = NotFound desc = could not find container \"56988f5387a13251fa1585a93a1c82cb45aaa1edfef2125b1383065a8e2a8f2a\": container with ID starting with 56988f5387a13251fa1585a93a1c82cb45aaa1edfef2125b1383065a8e2a8f2a not found: ID does not exist" Dec 06 06:20:53 crc kubenswrapper[4958]: I1206 06:20:53.341598 4958 scope.go:117] "RemoveContainer" containerID="a3f8b923cfd4282e07c4512c14a588e6f69fe3f7d196a4fe2d43bec21a6d72a3" Dec 06 06:20:53 crc kubenswrapper[4958]: E1206 06:20:53.342907 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3f8b923cfd4282e07c4512c14a588e6f69fe3f7d196a4fe2d43bec21a6d72a3\": container with ID starting with a3f8b923cfd4282e07c4512c14a588e6f69fe3f7d196a4fe2d43bec21a6d72a3 not found: ID does not exist" 
containerID="a3f8b923cfd4282e07c4512c14a588e6f69fe3f7d196a4fe2d43bec21a6d72a3" Dec 06 06:20:53 crc kubenswrapper[4958]: I1206 06:20:53.342932 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3f8b923cfd4282e07c4512c14a588e6f69fe3f7d196a4fe2d43bec21a6d72a3"} err="failed to get container status \"a3f8b923cfd4282e07c4512c14a588e6f69fe3f7d196a4fe2d43bec21a6d72a3\": rpc error: code = NotFound desc = could not find container \"a3f8b923cfd4282e07c4512c14a588e6f69fe3f7d196a4fe2d43bec21a6d72a3\": container with ID starting with a3f8b923cfd4282e07c4512c14a588e6f69fe3f7d196a4fe2d43bec21a6d72a3 not found: ID does not exist" Dec 06 06:20:53 crc kubenswrapper[4958]: I1206 06:20:53.342979 4958 scope.go:117] "RemoveContainer" containerID="1684d425efc7ef43a7227b0529d4ce9d253ddab3b6d6f3a421c6988f4217c59b" Dec 06 06:20:53 crc kubenswrapper[4958]: E1206 06:20:53.343274 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1684d425efc7ef43a7227b0529d4ce9d253ddab3b6d6f3a421c6988f4217c59b\": container with ID starting with 1684d425efc7ef43a7227b0529d4ce9d253ddab3b6d6f3a421c6988f4217c59b not found: ID does not exist" containerID="1684d425efc7ef43a7227b0529d4ce9d253ddab3b6d6f3a421c6988f4217c59b" Dec 06 06:20:53 crc kubenswrapper[4958]: I1206 06:20:53.343305 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1684d425efc7ef43a7227b0529d4ce9d253ddab3b6d6f3a421c6988f4217c59b"} err="failed to get container status \"1684d425efc7ef43a7227b0529d4ce9d253ddab3b6d6f3a421c6988f4217c59b\": rpc error: code = NotFound desc = could not find container \"1684d425efc7ef43a7227b0529d4ce9d253ddab3b6d6f3a421c6988f4217c59b\": container with ID starting with 1684d425efc7ef43a7227b0529d4ce9d253ddab3b6d6f3a421c6988f4217c59b not found: ID does not exist" Dec 06 06:20:53 crc kubenswrapper[4958]: I1206 06:20:53.345871 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9dpxw\" (UniqueName: \"kubernetes.io/projected/48808921-27ec-4f30-9be8-d6fbdd088e07-kube-api-access-9dpxw\") on node \"crc\" DevicePath \"\"" Dec 06 06:20:53 crc kubenswrapper[4958]: I1206 06:20:53.345895 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48808921-27ec-4f30-9be8-d6fbdd088e07-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 06:20:53 crc kubenswrapper[4958]: I1206 06:20:53.345906 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48808921-27ec-4f30-9be8-d6fbdd088e07-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 06:20:54 crc kubenswrapper[4958]: I1206 06:20:54.170582 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-npxnw" Dec 06 06:20:54 crc kubenswrapper[4958]: I1206 06:20:54.200874 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-npxnw"] Dec 06 06:20:54 crc kubenswrapper[4958]: I1206 06:20:54.210509 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-npxnw"] Dec 06 06:20:54 crc kubenswrapper[4958]: I1206 06:20:54.762897 4958 scope.go:117] "RemoveContainer" containerID="e5f89a3f3ebb3189d7b7afed5d40ed149c6033267fe638ed5096dc97104ae6ad" Dec 06 06:20:54 crc kubenswrapper[4958]: E1206 06:20:54.763570 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 06:20:55 crc kubenswrapper[4958]: I1206 06:20:55.773727 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48808921-27ec-4f30-9be8-d6fbdd088e07" path="/var/lib/kubelet/pods/48808921-27ec-4f30-9be8-d6fbdd088e07/volumes" Dec 06 06:21:07 crc kubenswrapper[4958]: I1206 06:21:07.763431 4958 scope.go:117] "RemoveContainer" containerID="e5f89a3f3ebb3189d7b7afed5d40ed149c6033267fe638ed5096dc97104ae6ad" Dec 06 06:21:07 crc kubenswrapper[4958]: E1206 06:21:07.764446 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 06:21:18 crc kubenswrapper[4958]: I1206 06:21:18.762673 4958 scope.go:117] "RemoveContainer" containerID="e5f89a3f3ebb3189d7b7afed5d40ed149c6033267fe638ed5096dc97104ae6ad" Dec 06 06:21:18 crc kubenswrapper[4958]: E1206 06:21:18.763561 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 06:21:33 crc kubenswrapper[4958]: I1206 06:21:33.762827 4958 scope.go:117] "RemoveContainer" containerID="e5f89a3f3ebb3189d7b7afed5d40ed149c6033267fe638ed5096dc97104ae6ad" Dec 06 06:21:33 crc kubenswrapper[4958]: E1206 06:21:33.763492 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 06:21:48 crc kubenswrapper[4958]: I1206 06:21:48.761911 4958 scope.go:117] "RemoveContainer" containerID="e5f89a3f3ebb3189d7b7afed5d40ed149c6033267fe638ed5096dc97104ae6ad" Dec 06 
06:21:48 crc kubenswrapper[4958]: E1206 06:21:48.762938 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 06:22:01 crc kubenswrapper[4958]: I1206 06:22:01.762278 4958 scope.go:117] "RemoveContainer" containerID="e5f89a3f3ebb3189d7b7afed5d40ed149c6033267fe638ed5096dc97104ae6ad" Dec 06 06:22:01 crc kubenswrapper[4958]: E1206 06:22:01.763996 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 06:22:13 crc kubenswrapper[4958]: I1206 06:22:13.762832 4958 scope.go:117] "RemoveContainer" containerID="e5f89a3f3ebb3189d7b7afed5d40ed149c6033267fe638ed5096dc97104ae6ad" Dec 06 06:22:13 crc kubenswrapper[4958]: E1206 06:22:13.763901 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 06:22:26 crc kubenswrapper[4958]: I1206 06:22:26.762293 4958 scope.go:117] "RemoveContainer" containerID="e5f89a3f3ebb3189d7b7afed5d40ed149c6033267fe638ed5096dc97104ae6ad" Dec 06 06:22:26 crc kubenswrapper[4958]: E1206 06:22:26.762974 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 06:22:37 crc kubenswrapper[4958]: I1206 06:22:37.762091 4958 scope.go:117] "RemoveContainer" containerID="e5f89a3f3ebb3189d7b7afed5d40ed149c6033267fe638ed5096dc97104ae6ad" Dec 06 06:22:37 crc kubenswrapper[4958]: E1206 06:22:37.762876 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 06:22:52 crc kubenswrapper[4958]: I1206 06:22:52.762464 4958 scope.go:117] "RemoveContainer" containerID="e5f89a3f3ebb3189d7b7afed5d40ed149c6033267fe638ed5096dc97104ae6ad" Dec 06 06:22:52 crc kubenswrapper[4958]: E1206 06:22:52.763645 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 06:23:03 crc kubenswrapper[4958]: I1206 06:23:03.767228 4958 scope.go:117] "RemoveContainer" containerID="e5f89a3f3ebb3189d7b7afed5d40ed149c6033267fe638ed5096dc97104ae6ad" Dec 06 06:23:03 crc kubenswrapper[4958]: E1206 06:23:03.769419 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 06:23:04 crc kubenswrapper[4958]: I1206 06:23:04.586663 4958 generic.go:334] "Generic (PLEG): container finished" podID="fad70a10-a21d-4f57-b3f6-5e2349243973" containerID="df685d86da97d358e3f0472c8099d50c4283c432852e706707ddb0604849245e" exitCode=0 Dec 06 06:23:04 crc kubenswrapper[4958]: I1206 06:23:04.586780 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hzf8m" event={"ID":"fad70a10-a21d-4f57-b3f6-5e2349243973","Type":"ContainerDied","Data":"df685d86da97d358e3f0472c8099d50c4283c432852e706707ddb0604849245e"} Dec 06 06:23:06 crc kubenswrapper[4958]: I1206 06:23:06.072761 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hzf8m" Dec 06 06:23:06 crc kubenswrapper[4958]: I1206 06:23:06.191142 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fad70a10-a21d-4f57-b3f6-5e2349243973-ssh-key\") pod \"fad70a10-a21d-4f57-b3f6-5e2349243973\" (UID: \"fad70a10-a21d-4f57-b3f6-5e2349243973\") " Dec 06 06:23:06 crc kubenswrapper[4958]: I1206 06:23:06.191268 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/fad70a10-a21d-4f57-b3f6-5e2349243973-ceilometer-compute-config-data-0\") pod \"fad70a10-a21d-4f57-b3f6-5e2349243973\" (UID: \"fad70a10-a21d-4f57-b3f6-5e2349243973\") " Dec 06 06:23:06 crc kubenswrapper[4958]: I1206 06:23:06.191360 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fad70a10-a21d-4f57-b3f6-5e2349243973-telemetry-combined-ca-bundle\") pod \"fad70a10-a21d-4f57-b3f6-5e2349243973\" (UID: \"fad70a10-a21d-4f57-b3f6-5e2349243973\") " Dec 06 06:23:06 crc kubenswrapper[4958]: I1206 06:23:06.191442 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fad70a10-a21d-4f57-b3f6-5e2349243973-inventory\") pod \"fad70a10-a21d-4f57-b3f6-5e2349243973\" (UID: \"fad70a10-a21d-4f57-b3f6-5e2349243973\") " Dec 06 06:23:06 crc kubenswrapper[4958]: I1206 06:23:06.191473 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/fad70a10-a21d-4f57-b3f6-5e2349243973-ceilometer-compute-config-data-1\") pod 
\"fad70a10-a21d-4f57-b3f6-5e2349243973\" (UID: \"fad70a10-a21d-4f57-b3f6-5e2349243973\") " Dec 06 06:23:06 crc kubenswrapper[4958]: I1206 06:23:06.191577 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8kxpm\" (UniqueName: \"kubernetes.io/projected/fad70a10-a21d-4f57-b3f6-5e2349243973-kube-api-access-8kxpm\") pod \"fad70a10-a21d-4f57-b3f6-5e2349243973\" (UID: \"fad70a10-a21d-4f57-b3f6-5e2349243973\") " Dec 06 06:23:06 crc kubenswrapper[4958]: I1206 06:23:06.192072 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/fad70a10-a21d-4f57-b3f6-5e2349243973-ceilometer-compute-config-data-2\") pod \"fad70a10-a21d-4f57-b3f6-5e2349243973\" (UID: \"fad70a10-a21d-4f57-b3f6-5e2349243973\") " Dec 06 06:23:06 crc kubenswrapper[4958]: I1206 06:23:06.196659 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fad70a10-a21d-4f57-b3f6-5e2349243973-kube-api-access-8kxpm" (OuterVolumeSpecName: "kube-api-access-8kxpm") pod "fad70a10-a21d-4f57-b3f6-5e2349243973" (UID: "fad70a10-a21d-4f57-b3f6-5e2349243973"). InnerVolumeSpecName "kube-api-access-8kxpm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:23:06 crc kubenswrapper[4958]: I1206 06:23:06.197253 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fad70a10-a21d-4f57-b3f6-5e2349243973-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "fad70a10-a21d-4f57-b3f6-5e2349243973" (UID: "fad70a10-a21d-4f57-b3f6-5e2349243973"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:23:06 crc kubenswrapper[4958]: I1206 06:23:06.218917 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fad70a10-a21d-4f57-b3f6-5e2349243973-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "fad70a10-a21d-4f57-b3f6-5e2349243973" (UID: "fad70a10-a21d-4f57-b3f6-5e2349243973"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:23:06 crc kubenswrapper[4958]: I1206 06:23:06.220781 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fad70a10-a21d-4f57-b3f6-5e2349243973-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "fad70a10-a21d-4f57-b3f6-5e2349243973" (UID: "fad70a10-a21d-4f57-b3f6-5e2349243973"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:23:06 crc kubenswrapper[4958]: I1206 06:23:06.224028 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fad70a10-a21d-4f57-b3f6-5e2349243973-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "fad70a10-a21d-4f57-b3f6-5e2349243973" (UID: "fad70a10-a21d-4f57-b3f6-5e2349243973"). InnerVolumeSpecName "ceilometer-compute-config-data-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:23:06 crc kubenswrapper[4958]: I1206 06:23:06.234530 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fad70a10-a21d-4f57-b3f6-5e2349243973-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "fad70a10-a21d-4f57-b3f6-5e2349243973" (UID: "fad70a10-a21d-4f57-b3f6-5e2349243973"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:23:06 crc kubenswrapper[4958]: I1206 06:23:06.236849 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fad70a10-a21d-4f57-b3f6-5e2349243973-inventory" (OuterVolumeSpecName: "inventory") pod "fad70a10-a21d-4f57-b3f6-5e2349243973" (UID: "fad70a10-a21d-4f57-b3f6-5e2349243973"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:23:06 crc kubenswrapper[4958]: I1206 06:23:06.295448 4958 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/fad70a10-a21d-4f57-b3f6-5e2349243973-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Dec 06 06:23:06 crc kubenswrapper[4958]: I1206 06:23:06.295495 4958 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fad70a10-a21d-4f57-b3f6-5e2349243973-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 06:23:06 crc kubenswrapper[4958]: I1206 06:23:06.295508 4958 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fad70a10-a21d-4f57-b3f6-5e2349243973-inventory\") on node \"crc\" DevicePath \"\"" Dec 06 06:23:06 crc kubenswrapper[4958]: I1206 06:23:06.295516 4958 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/fad70a10-a21d-4f57-b3f6-5e2349243973-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Dec 06 06:23:06 crc kubenswrapper[4958]: I1206 06:23:06.295526 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8kxpm\" (UniqueName: \"kubernetes.io/projected/fad70a10-a21d-4f57-b3f6-5e2349243973-kube-api-access-8kxpm\") on node \"crc\" DevicePath \"\"" Dec 06 06:23:06 crc kubenswrapper[4958]: I1206 06:23:06.295535 4958 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/fad70a10-a21d-4f57-b3f6-5e2349243973-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Dec 06 06:23:06 crc kubenswrapper[4958]: I1206 06:23:06.295545 4958 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fad70a10-a21d-4f57-b3f6-5e2349243973-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 06 06:23:06 crc kubenswrapper[4958]: I1206 06:23:06.604726 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hzf8m" event={"ID":"fad70a10-a21d-4f57-b3f6-5e2349243973","Type":"ContainerDied","Data":"7e88198dd4e8be3c9e1540e2e4182f13f1fc819aa410ebef306f04dd43ecee23"} Dec 06 06:23:06 crc kubenswrapper[4958]: I1206 06:23:06.604772 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e88198dd4e8be3c9e1540e2e4182f13f1fc819aa410ebef306f04dd43ecee23" Dec 06 06:23:06 crc kubenswrapper[4958]: I1206 
06:23:06.604788 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-hzf8m" Dec 06 06:23:16 crc kubenswrapper[4958]: I1206 06:23:16.762958 4958 scope.go:117] "RemoveContainer" containerID="e5f89a3f3ebb3189d7b7afed5d40ed149c6033267fe638ed5096dc97104ae6ad" Dec 06 06:23:16 crc kubenswrapper[4958]: E1206 06:23:16.763875 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 06:23:29 crc kubenswrapper[4958]: I1206 06:23:29.773388 4958 scope.go:117] "RemoveContainer" containerID="e5f89a3f3ebb3189d7b7afed5d40ed149c6033267fe638ed5096dc97104ae6ad" Dec 06 06:23:29 crc kubenswrapper[4958]: E1206 06:23:29.774314 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 06:23:43 crc kubenswrapper[4958]: I1206 06:23:43.762764 4958 scope.go:117] "RemoveContainer" containerID="e5f89a3f3ebb3189d7b7afed5d40ed149c6033267fe638ed5096dc97104ae6ad" Dec 06 06:23:43 crc kubenswrapper[4958]: E1206 06:23:43.765063 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 06:23:47 crc kubenswrapper[4958]: I1206 06:23:47.186559 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 06 06:23:47 crc kubenswrapper[4958]: I1206 06:23:47.187401 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="e9711939-7159-4a5a-970f-426286de1f36" containerName="prometheus" containerID="cri-o://14c77bc787711132eb1716198943546be558491097489218c137e60554577819" gracePeriod=600 Dec 06 06:23:47 crc kubenswrapper[4958]: I1206 06:23:47.187530 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="e9711939-7159-4a5a-970f-426286de1f36" containerName="config-reloader" containerID="cri-o://4e04d1dda7193b4f1ee2aabc0bf5934ec47da52e69d17c67a8c71a39edf7e20d" gracePeriod=600 Dec 06 06:23:47 crc kubenswrapper[4958]: I1206 06:23:47.187547 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="e9711939-7159-4a5a-970f-426286de1f36" containerName="thanos-sidecar" containerID="cri-o://39bacaa0a59e9ab7af6b7d36f6a73a2b4b83a49e2d4d32eea14fbe77753677cd" gracePeriod=600 Dec 06 06:23:47 crc kubenswrapper[4958]: I1206 06:23:47.863629 4958 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/prometheus-metric-storage-0" podUID="e9711939-7159-4a5a-970f-426286de1f36" containerName="prometheus" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 06 06:23:48 crc kubenswrapper[4958]: I1206 06:23:48.148592 4958 generic.go:334] "Generic (PLEG): container finished" podID="e9711939-7159-4a5a-970f-426286de1f36" containerID="39bacaa0a59e9ab7af6b7d36f6a73a2b4b83a49e2d4d32eea14fbe77753677cd" exitCode=0 Dec 06 06:23:48 crc kubenswrapper[4958]: I1206 06:23:48.148627 4958 generic.go:334] "Generic (PLEG): container finished" podID="e9711939-7159-4a5a-970f-426286de1f36" containerID="4e04d1dda7193b4f1ee2aabc0bf5934ec47da52e69d17c67a8c71a39edf7e20d" exitCode=0 Dec 06 06:23:48 crc kubenswrapper[4958]: I1206 06:23:48.148637 4958 generic.go:334] "Generic (PLEG): container finished" podID="e9711939-7159-4a5a-970f-426286de1f36" containerID="14c77bc787711132eb1716198943546be558491097489218c137e60554577819" exitCode=0 Dec 06 06:23:48 crc kubenswrapper[4958]: I1206 06:23:48.148663 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"e9711939-7159-4a5a-970f-426286de1f36","Type":"ContainerDied","Data":"39bacaa0a59e9ab7af6b7d36f6a73a2b4b83a49e2d4d32eea14fbe77753677cd"} Dec 06 06:23:48 crc kubenswrapper[4958]: I1206 06:23:48.148692 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"e9711939-7159-4a5a-970f-426286de1f36","Type":"ContainerDied","Data":"4e04d1dda7193b4f1ee2aabc0bf5934ec47da52e69d17c67a8c71a39edf7e20d"} Dec 06 06:23:48 crc kubenswrapper[4958]: I1206 06:23:48.148704 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"e9711939-7159-4a5a-970f-426286de1f36","Type":"ContainerDied","Data":"14c77bc787711132eb1716198943546be558491097489218c137e60554577819"} Dec 06 06:23:48 crc kubenswrapper[4958]: I1206 06:23:48.923178 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Dec 06 06:23:49 crc kubenswrapper[4958]: I1206 06:23:49.086378 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/e9711939-7159-4a5a-970f-426286de1f36-tls-assets\") pod \"e9711939-7159-4a5a-970f-426286de1f36\" (UID: \"e9711939-7159-4a5a-970f-426286de1f36\") " Dec 06 06:23:49 crc kubenswrapper[4958]: I1206 06:23:49.086625 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-89ffbb74-1bfb-49be-9c5f-526e0d1d0f82\") pod \"e9711939-7159-4a5a-970f-426286de1f36\" (UID: \"e9711939-7159-4a5a-970f-426286de1f36\") " Dec 06 06:23:49 crc kubenswrapper[4958]: I1206 06:23:49.086748 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/e9711939-7159-4a5a-970f-426286de1f36-prometheus-metric-storage-rulefiles-0\") pod \"e9711939-7159-4a5a-970f-426286de1f36\" (UID: \"e9711939-7159-4a5a-970f-426286de1f36\") " Dec 06 06:23:49 crc kubenswrapper[4958]: I1206 06:23:49.086874 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/e9711939-7159-4a5a-970f-426286de1f36-web-config\") pod \"e9711939-7159-4a5a-970f-426286de1f36\" (UID: \"e9711939-7159-4a5a-970f-426286de1f36\") " Dec 06 06:23:49 crc kubenswrapper[4958]: I1206 06:23:49.086993 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/e9711939-7159-4a5a-970f-426286de1f36-config-out\") pod \"e9711939-7159-4a5a-970f-426286de1f36\" (UID: \"e9711939-7159-4a5a-970f-426286de1f36\") " Dec 06 06:23:49 crc kubenswrapper[4958]: I1206 06:23:49.087319 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9711939-7159-4a5a-970f-426286de1f36-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "e9711939-7159-4a5a-970f-426286de1f36" (UID: "e9711939-7159-4a5a-970f-426286de1f36"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:23:49 crc kubenswrapper[4958]: I1206 06:23:49.087665 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/e9711939-7159-4a5a-970f-426286de1f36-thanos-prometheus-http-client-file\") pod \"e9711939-7159-4a5a-970f-426286de1f36\" (UID: \"e9711939-7159-4a5a-970f-426286de1f36\") " Dec 06 06:23:49 crc kubenswrapper[4958]: I1206 06:23:49.087787 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e9711939-7159-4a5a-970f-426286de1f36-config\") pod \"e9711939-7159-4a5a-970f-426286de1f36\" (UID: \"e9711939-7159-4a5a-970f-426286de1f36\") " Dec 06 06:23:49 crc kubenswrapper[4958]: I1206 06:23:49.087828 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cvmqx\" (UniqueName: \"kubernetes.io/projected/e9711939-7159-4a5a-970f-426286de1f36-kube-api-access-cvmqx\") pod \"e9711939-7159-4a5a-970f-426286de1f36\" (UID: \"e9711939-7159-4a5a-970f-426286de1f36\") " Dec 06 06:23:49 crc kubenswrapper[4958]: I1206 06:23:49.087896 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/e9711939-7159-4a5a-970f-426286de1f36-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"e9711939-7159-4a5a-970f-426286de1f36\" (UID: \"e9711939-7159-4a5a-970f-426286de1f36\") " Dec 06 06:23:49 crc kubenswrapper[4958]: I1206 06:23:49.087930 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/e9711939-7159-4a5a-970f-426286de1f36-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"e9711939-7159-4a5a-970f-426286de1f36\" (UID: \"e9711939-7159-4a5a-970f-426286de1f36\") " Dec 06 06:23:49 crc kubenswrapper[4958]: I1206 06:23:49.087974 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9711939-7159-4a5a-970f-426286de1f36-secret-combined-ca-bundle\") pod \"e9711939-7159-4a5a-970f-426286de1f36\" (UID: \"e9711939-7159-4a5a-970f-426286de1f36\") " Dec 06 06:23:49 crc kubenswrapper[4958]: I1206 06:23:49.088695 4958 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/e9711939-7159-4a5a-970f-426286de1f36-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Dec 06 06:23:49 crc kubenswrapper[4958]: I1206 06:23:49.093222 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9711939-7159-4a5a-970f-426286de1f36-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "e9711939-7159-4a5a-970f-426286de1f36" (UID: "e9711939-7159-4a5a-970f-426286de1f36"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:23:49 crc kubenswrapper[4958]: I1206 06:23:49.102203 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9711939-7159-4a5a-970f-426286de1f36-secret-combined-ca-bundle" (OuterVolumeSpecName: "secret-combined-ca-bundle") pod "e9711939-7159-4a5a-970f-426286de1f36" (UID: "e9711939-7159-4a5a-970f-426286de1f36"). 
InnerVolumeSpecName "secret-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:23:49 crc kubenswrapper[4958]: I1206 06:23:49.110805 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9711939-7159-4a5a-970f-426286de1f36-config-out" (OuterVolumeSpecName: "config-out") pod "e9711939-7159-4a5a-970f-426286de1f36" (UID: "e9711939-7159-4a5a-970f-426286de1f36"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:23:49 crc kubenswrapper[4958]: I1206 06:23:49.110888 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9711939-7159-4a5a-970f-426286de1f36-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d") pod "e9711939-7159-4a5a-970f-426286de1f36" (UID: "e9711939-7159-4a5a-970f-426286de1f36"). InnerVolumeSpecName "web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:23:49 crc kubenswrapper[4958]: I1206 06:23:49.110939 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9711939-7159-4a5a-970f-426286de1f36-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d" (OuterVolumeSpecName: "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d") pod "e9711939-7159-4a5a-970f-426286de1f36" (UID: "e9711939-7159-4a5a-970f-426286de1f36"). InnerVolumeSpecName "web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:23:49 crc kubenswrapper[4958]: I1206 06:23:49.111316 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9711939-7159-4a5a-970f-426286de1f36-config" (OuterVolumeSpecName: "config") pod "e9711939-7159-4a5a-970f-426286de1f36" (UID: "e9711939-7159-4a5a-970f-426286de1f36"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:23:49 crc kubenswrapper[4958]: I1206 06:23:49.116551 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9711939-7159-4a5a-970f-426286de1f36-kube-api-access-cvmqx" (OuterVolumeSpecName: "kube-api-access-cvmqx") pod "e9711939-7159-4a5a-970f-426286de1f36" (UID: "e9711939-7159-4a5a-970f-426286de1f36"). InnerVolumeSpecName "kube-api-access-cvmqx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:23:49 crc kubenswrapper[4958]: I1206 06:23:49.118084 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e9711939-7159-4a5a-970f-426286de1f36-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "e9711939-7159-4a5a-970f-426286de1f36" (UID: "e9711939-7159-4a5a-970f-426286de1f36"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:23:49 crc kubenswrapper[4958]: I1206 06:23:49.130252 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-89ffbb74-1bfb-49be-9c5f-526e0d1d0f82" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "e9711939-7159-4a5a-970f-426286de1f36" (UID: "e9711939-7159-4a5a-970f-426286de1f36"). InnerVolumeSpecName "pvc-89ffbb74-1bfb-49be-9c5f-526e0d1d0f82". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 06 06:23:49 crc kubenswrapper[4958]: I1206 06:23:49.174084 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"e9711939-7159-4a5a-970f-426286de1f36","Type":"ContainerDied","Data":"7487cdadfce1e19f5b0795cb2ddb1764507bc6b05a71d5ffa674378a488f1aff"} Dec 06 06:23:49 crc kubenswrapper[4958]: I1206 06:23:49.174157 4958 scope.go:117] "RemoveContainer" containerID="39bacaa0a59e9ab7af6b7d36f6a73a2b4b83a49e2d4d32eea14fbe77753677cd" Dec 06 06:23:49 crc kubenswrapper[4958]: I1206 06:23:49.174352 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Dec 06 06:23:49 crc kubenswrapper[4958]: I1206 06:23:49.190160 4958 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/e9711939-7159-4a5a-970f-426286de1f36-tls-assets\") on node \"crc\" DevicePath \"\"" Dec 06 06:23:49 crc kubenswrapper[4958]: I1206 06:23:49.190230 4958 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-89ffbb74-1bfb-49be-9c5f-526e0d1d0f82\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-89ffbb74-1bfb-49be-9c5f-526e0d1d0f82\") on node \"crc\" " Dec 06 06:23:49 crc kubenswrapper[4958]: I1206 06:23:49.190248 4958 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/e9711939-7159-4a5a-970f-426286de1f36-config-out\") on node \"crc\" DevicePath \"\"" Dec 06 06:23:49 crc kubenswrapper[4958]: I1206 06:23:49.190264 4958 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/e9711939-7159-4a5a-970f-426286de1f36-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Dec 06 06:23:49 crc kubenswrapper[4958]: I1206 06:23:49.190277 4958 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/e9711939-7159-4a5a-970f-426286de1f36-config\") on node \"crc\" DevicePath \"\"" Dec 06 06:23:49 crc kubenswrapper[4958]: I1206 06:23:49.190290 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cvmqx\" (UniqueName: \"kubernetes.io/projected/e9711939-7159-4a5a-970f-426286de1f36-kube-api-access-cvmqx\") on node \"crc\" DevicePath \"\"" Dec 06 06:23:49 crc kubenswrapper[4958]: I1206 06:23:49.190304 4958 reconciler_common.go:293] "Volume detached for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/e9711939-7159-4a5a-970f-426286de1f36-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") on node \"crc\" DevicePath \"\"" Dec 06 06:23:49 crc kubenswrapper[4958]: I1206 06:23:49.190318 4958 reconciler_common.go:293] "Volume detached for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/e9711939-7159-4a5a-970f-426286de1f36-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") on node \"crc\" DevicePath \"\"" Dec 06 06:23:49 crc kubenswrapper[4958]: I1206 06:23:49.190330 4958 reconciler_common.go:293] "Volume detached for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9711939-7159-4a5a-970f-426286de1f36-secret-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 06:23:49 crc kubenswrapper[4958]: I1206 06:23:49.197022 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/secret/e9711939-7159-4a5a-970f-426286de1f36-web-config" (OuterVolumeSpecName: "web-config") pod "e9711939-7159-4a5a-970f-426286de1f36" (UID: "e9711939-7159-4a5a-970f-426286de1f36"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:23:49 crc kubenswrapper[4958]: I1206 06:23:49.221728 4958 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Dec 06 06:23:49 crc kubenswrapper[4958]: I1206 06:23:49.221912 4958 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-89ffbb74-1bfb-49be-9c5f-526e0d1d0f82" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-89ffbb74-1bfb-49be-9c5f-526e0d1d0f82") on node "crc" Dec 06 06:23:49 crc kubenswrapper[4958]: I1206 06:23:49.280820 4958 scope.go:117] "RemoveContainer" containerID="4e04d1dda7193b4f1ee2aabc0bf5934ec47da52e69d17c67a8c71a39edf7e20d" Dec 06 06:23:49 crc kubenswrapper[4958]: I1206 06:23:49.292395 4958 reconciler_common.go:293] "Volume detached for volume \"pvc-89ffbb74-1bfb-49be-9c5f-526e0d1d0f82\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-89ffbb74-1bfb-49be-9c5f-526e0d1d0f82\") on node \"crc\" DevicePath \"\"" Dec 06 06:23:49 crc kubenswrapper[4958]: I1206 06:23:49.292447 4958 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/e9711939-7159-4a5a-970f-426286de1f36-web-config\") on node \"crc\" DevicePath \"\"" Dec 06 06:23:49 crc kubenswrapper[4958]: I1206 06:23:49.303889 4958 scope.go:117] "RemoveContainer" containerID="14c77bc787711132eb1716198943546be558491097489218c137e60554577819" Dec 06 06:23:49 crc kubenswrapper[4958]: I1206 06:23:49.325813 4958 scope.go:117] "RemoveContainer" containerID="4de2f9851b0cd869a3b1e38b11656b9615a5705b96987fd08d1fb5de2e134abe" Dec 06 06:23:49 crc kubenswrapper[4958]: I1206 06:23:49.513988 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 06 06:23:49 crc kubenswrapper[4958]: I1206 06:23:49.522904 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 06 06:23:49 crc kubenswrapper[4958]: I1206 06:23:49.539304 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 06 06:23:49 crc kubenswrapper[4958]: E1206 06:23:49.539791 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9711939-7159-4a5a-970f-426286de1f36" containerName="prometheus" Dec 06 06:23:49 crc kubenswrapper[4958]: I1206 06:23:49.539813 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9711939-7159-4a5a-970f-426286de1f36" containerName="prometheus" Dec 06 06:23:49 crc kubenswrapper[4958]: E1206 06:23:49.539835 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9711939-7159-4a5a-970f-426286de1f36" containerName="thanos-sidecar" Dec 06 06:23:49 crc kubenswrapper[4958]: I1206 06:23:49.539845 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9711939-7159-4a5a-970f-426286de1f36" containerName="thanos-sidecar" Dec 06 06:23:49 crc kubenswrapper[4958]: E1206 06:23:49.539858 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9711939-7159-4a5a-970f-426286de1f36" containerName="config-reloader" Dec 06 06:23:49 crc kubenswrapper[4958]: I1206 06:23:49.539866 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9711939-7159-4a5a-970f-426286de1f36" containerName="config-reloader" 
Dec 06 06:23:49 crc kubenswrapper[4958]: E1206 06:23:49.539882 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48808921-27ec-4f30-9be8-d6fbdd088e07" containerName="extract-content" Dec 06 06:23:49 crc kubenswrapper[4958]: I1206 06:23:49.539890 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="48808921-27ec-4f30-9be8-d6fbdd088e07" containerName="extract-content" Dec 06 06:23:49 crc kubenswrapper[4958]: E1206 06:23:49.539921 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48808921-27ec-4f30-9be8-d6fbdd088e07" containerName="extract-utilities" Dec 06 06:23:49 crc kubenswrapper[4958]: I1206 06:23:49.539931 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="48808921-27ec-4f30-9be8-d6fbdd088e07" containerName="extract-utilities" Dec 06 06:23:49 crc kubenswrapper[4958]: E1206 06:23:49.539950 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9711939-7159-4a5a-970f-426286de1f36" containerName="init-config-reloader" Dec 06 06:23:49 crc kubenswrapper[4958]: I1206 06:23:49.539957 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9711939-7159-4a5a-970f-426286de1f36" containerName="init-config-reloader" Dec 06 06:23:49 crc kubenswrapper[4958]: E1206 06:23:49.539967 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fad70a10-a21d-4f57-b3f6-5e2349243973" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Dec 06 06:23:49 crc kubenswrapper[4958]: I1206 06:23:49.539976 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="fad70a10-a21d-4f57-b3f6-5e2349243973" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Dec 06 06:23:49 crc kubenswrapper[4958]: E1206 06:23:49.539992 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48808921-27ec-4f30-9be8-d6fbdd088e07" containerName="registry-server" Dec 06 06:23:49 crc kubenswrapper[4958]: I1206 06:23:49.540000 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="48808921-27ec-4f30-9be8-d6fbdd088e07" containerName="registry-server" Dec 06 06:23:49 crc kubenswrapper[4958]: I1206 06:23:49.540250 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9711939-7159-4a5a-970f-426286de1f36" containerName="thanos-sidecar" Dec 06 06:23:49 crc kubenswrapper[4958]: I1206 06:23:49.540271 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="fad70a10-a21d-4f57-b3f6-5e2349243973" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Dec 06 06:23:49 crc kubenswrapper[4958]: I1206 06:23:49.540310 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9711939-7159-4a5a-970f-426286de1f36" containerName="config-reloader" Dec 06 06:23:49 crc kubenswrapper[4958]: I1206 06:23:49.540327 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9711939-7159-4a5a-970f-426286de1f36" containerName="prometheus" Dec 06 06:23:49 crc kubenswrapper[4958]: I1206 06:23:49.540338 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="48808921-27ec-4f30-9be8-d6fbdd088e07" containerName="registry-server" Dec 06 06:23:49 crc kubenswrapper[4958]: I1206 06:23:49.576329 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Dec 06 06:23:49 crc kubenswrapper[4958]: I1206 06:23:49.589782 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Dec 06 06:23:49 crc kubenswrapper[4958]: I1206 06:23:49.590932 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-bvqpz" Dec 06 06:23:49 crc kubenswrapper[4958]: I1206 06:23:49.593167 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Dec 06 06:23:49 crc kubenswrapper[4958]: I1206 06:23:49.594233 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Dec 06 06:23:49 crc kubenswrapper[4958]: I1206 06:23:49.594462 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Dec 06 06:23:49 crc kubenswrapper[4958]: I1206 06:23:49.603558 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Dec 06 06:23:49 crc kubenswrapper[4958]: I1206 06:23:49.630125 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 06 06:23:49 crc kubenswrapper[4958]: I1206 06:23:49.702033 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/e53cc117-7134-4aac-ba1f-3a685b98aa2e-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"e53cc117-7134-4aac-ba1f-3a685b98aa2e\") " pod="openstack/prometheus-metric-storage-0" Dec 06 06:23:49 crc kubenswrapper[4958]: I1206 06:23:49.702616 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/e53cc117-7134-4aac-ba1f-3a685b98aa2e-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"e53cc117-7134-4aac-ba1f-3a685b98aa2e\") " pod="openstack/prometheus-metric-storage-0" Dec 06 06:23:49 crc kubenswrapper[4958]: I1206 06:23:49.702756 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/e53cc117-7134-4aac-ba1f-3a685b98aa2e-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"e53cc117-7134-4aac-ba1f-3a685b98aa2e\") " pod="openstack/prometheus-metric-storage-0" Dec 06 06:23:49 crc kubenswrapper[4958]: I1206 06:23:49.702860 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/e53cc117-7134-4aac-ba1f-3a685b98aa2e-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"e53cc117-7134-4aac-ba1f-3a685b98aa2e\") " pod="openstack/prometheus-metric-storage-0" Dec 06 06:23:49 crc kubenswrapper[4958]: I1206 06:23:49.702997 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-89ffbb74-1bfb-49be-9c5f-526e0d1d0f82\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-89ffbb74-1bfb-49be-9c5f-526e0d1d0f82\") pod \"prometheus-metric-storage-0\" (UID: \"e53cc117-7134-4aac-ba1f-3a685b98aa2e\") " pod="openstack/prometheus-metric-storage-0" Dec 06 06:23:49 
crc kubenswrapper[4958]: I1206 06:23:49.703156 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e53cc117-7134-4aac-ba1f-3a685b98aa2e-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"e53cc117-7134-4aac-ba1f-3a685b98aa2e\") " pod="openstack/prometheus-metric-storage-0" Dec 06 06:23:49 crc kubenswrapper[4958]: I1206 06:23:49.703305 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/e53cc117-7134-4aac-ba1f-3a685b98aa2e-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"e53cc117-7134-4aac-ba1f-3a685b98aa2e\") " pod="openstack/prometheus-metric-storage-0" Dec 06 06:23:49 crc kubenswrapper[4958]: I1206 06:23:49.703403 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrk56\" (UniqueName: \"kubernetes.io/projected/e53cc117-7134-4aac-ba1f-3a685b98aa2e-kube-api-access-hrk56\") pod \"prometheus-metric-storage-0\" (UID: \"e53cc117-7134-4aac-ba1f-3a685b98aa2e\") " pod="openstack/prometheus-metric-storage-0" Dec 06 06:23:49 crc kubenswrapper[4958]: I1206 06:23:49.703539 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/e53cc117-7134-4aac-ba1f-3a685b98aa2e-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"e53cc117-7134-4aac-ba1f-3a685b98aa2e\") " pod="openstack/prometheus-metric-storage-0" Dec 06 06:23:49 crc kubenswrapper[4958]: I1206 06:23:49.703704 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e53cc117-7134-4aac-ba1f-3a685b98aa2e-config\") pod \"prometheus-metric-storage-0\" (UID: \"e53cc117-7134-4aac-ba1f-3a685b98aa2e\") " pod="openstack/prometheus-metric-storage-0" Dec 06 06:23:49 crc kubenswrapper[4958]: I1206 06:23:49.703805 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/e53cc117-7134-4aac-ba1f-3a685b98aa2e-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"e53cc117-7134-4aac-ba1f-3a685b98aa2e\") " pod="openstack/prometheus-metric-storage-0" Dec 06 06:23:49 crc kubenswrapper[4958]: I1206 06:23:49.772882 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9711939-7159-4a5a-970f-426286de1f36" path="/var/lib/kubelet/pods/e9711939-7159-4a5a-970f-426286de1f36/volumes" Dec 06 06:23:49 crc kubenswrapper[4958]: I1206 06:23:49.805430 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e53cc117-7134-4aac-ba1f-3a685b98aa2e-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"e53cc117-7134-4aac-ba1f-3a685b98aa2e\") " pod="openstack/prometheus-metric-storage-0" Dec 06 06:23:49 crc kubenswrapper[4958]: I1206 06:23:49.805511 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: 
\"kubernetes.io/secret/e53cc117-7134-4aac-ba1f-3a685b98aa2e-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"e53cc117-7134-4aac-ba1f-3a685b98aa2e\") " pod="openstack/prometheus-metric-storage-0" Dec 06 06:23:49 crc kubenswrapper[4958]: I1206 06:23:49.805532 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hrk56\" (UniqueName: \"kubernetes.io/projected/e53cc117-7134-4aac-ba1f-3a685b98aa2e-kube-api-access-hrk56\") pod \"prometheus-metric-storage-0\" (UID: \"e53cc117-7134-4aac-ba1f-3a685b98aa2e\") " pod="openstack/prometheus-metric-storage-0" Dec 06 06:23:49 crc kubenswrapper[4958]: I1206 06:23:49.805559 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/e53cc117-7134-4aac-ba1f-3a685b98aa2e-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"e53cc117-7134-4aac-ba1f-3a685b98aa2e\") " pod="openstack/prometheus-metric-storage-0" Dec 06 06:23:49 crc kubenswrapper[4958]: I1206 06:23:49.805603 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e53cc117-7134-4aac-ba1f-3a685b98aa2e-config\") pod \"prometheus-metric-storage-0\" (UID: \"e53cc117-7134-4aac-ba1f-3a685b98aa2e\") " pod="openstack/prometheus-metric-storage-0" Dec 06 06:23:49 crc kubenswrapper[4958]: I1206 06:23:49.805625 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/e53cc117-7134-4aac-ba1f-3a685b98aa2e-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"e53cc117-7134-4aac-ba1f-3a685b98aa2e\") " pod="openstack/prometheus-metric-storage-0" Dec 06 06:23:49 crc kubenswrapper[4958]: I1206 06:23:49.805642 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/e53cc117-7134-4aac-ba1f-3a685b98aa2e-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"e53cc117-7134-4aac-ba1f-3a685b98aa2e\") " pod="openstack/prometheus-metric-storage-0" Dec 06 06:23:49 crc kubenswrapper[4958]: I1206 06:23:49.805669 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/e53cc117-7134-4aac-ba1f-3a685b98aa2e-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"e53cc117-7134-4aac-ba1f-3a685b98aa2e\") " pod="openstack/prometheus-metric-storage-0" Dec 06 06:23:49 crc kubenswrapper[4958]: I1206 06:23:49.805704 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/e53cc117-7134-4aac-ba1f-3a685b98aa2e-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"e53cc117-7134-4aac-ba1f-3a685b98aa2e\") " pod="openstack/prometheus-metric-storage-0" Dec 06 06:23:49 crc kubenswrapper[4958]: I1206 06:23:49.805724 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/e53cc117-7134-4aac-ba1f-3a685b98aa2e-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"e53cc117-7134-4aac-ba1f-3a685b98aa2e\") " pod="openstack/prometheus-metric-storage-0" Dec 06 06:23:49 crc kubenswrapper[4958]: I1206 
06:23:49.805756 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-89ffbb74-1bfb-49be-9c5f-526e0d1d0f82\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-89ffbb74-1bfb-49be-9c5f-526e0d1d0f82\") pod \"prometheus-metric-storage-0\" (UID: \"e53cc117-7134-4aac-ba1f-3a685b98aa2e\") " pod="openstack/prometheus-metric-storage-0" Dec 06 06:23:49 crc kubenswrapper[4958]: I1206 06:23:49.807499 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/e53cc117-7134-4aac-ba1f-3a685b98aa2e-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"e53cc117-7134-4aac-ba1f-3a685b98aa2e\") " pod="openstack/prometheus-metric-storage-0" Dec 06 06:23:49 crc kubenswrapper[4958]: I1206 06:23:49.811645 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/e53cc117-7134-4aac-ba1f-3a685b98aa2e-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"e53cc117-7134-4aac-ba1f-3a685b98aa2e\") " pod="openstack/prometheus-metric-storage-0" Dec 06 06:23:49 crc kubenswrapper[4958]: I1206 06:23:49.813354 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/e53cc117-7134-4aac-ba1f-3a685b98aa2e-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"e53cc117-7134-4aac-ba1f-3a685b98aa2e\") " pod="openstack/prometheus-metric-storage-0" Dec 06 06:23:49 crc kubenswrapper[4958]: I1206 06:23:49.814464 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/e53cc117-7134-4aac-ba1f-3a685b98aa2e-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"e53cc117-7134-4aac-ba1f-3a685b98aa2e\") " pod="openstack/prometheus-metric-storage-0" Dec 06 06:23:49 crc kubenswrapper[4958]: I1206 06:23:49.814576 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/e53cc117-7134-4aac-ba1f-3a685b98aa2e-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"e53cc117-7134-4aac-ba1f-3a685b98aa2e\") " pod="openstack/prometheus-metric-storage-0" Dec 06 06:23:49 crc kubenswrapper[4958]: I1206 06:23:49.815103 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e53cc117-7134-4aac-ba1f-3a685b98aa2e-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"e53cc117-7134-4aac-ba1f-3a685b98aa2e\") " pod="openstack/prometheus-metric-storage-0" Dec 06 06:23:49 crc kubenswrapper[4958]: I1206 06:23:49.815302 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/e53cc117-7134-4aac-ba1f-3a685b98aa2e-config\") pod \"prometheus-metric-storage-0\" (UID: \"e53cc117-7134-4aac-ba1f-3a685b98aa2e\") " pod="openstack/prometheus-metric-storage-0" Dec 06 06:23:49 crc kubenswrapper[4958]: I1206 06:23:49.817667 4958 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 06 06:23:49 crc kubenswrapper[4958]: I1206 06:23:49.817750 4958 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-89ffbb74-1bfb-49be-9c5f-526e0d1d0f82\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-89ffbb74-1bfb-49be-9c5f-526e0d1d0f82\") pod \"prometheus-metric-storage-0\" (UID: \"e53cc117-7134-4aac-ba1f-3a685b98aa2e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/7e3431a5196bc2909b8ac76f6bdd967b074e179639976d5fb690f175bc86873e/globalmount\"" pod="openstack/prometheus-metric-storage-0" Dec 06 06:23:49 crc kubenswrapper[4958]: I1206 06:23:49.818030 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/e53cc117-7134-4aac-ba1f-3a685b98aa2e-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"e53cc117-7134-4aac-ba1f-3a685b98aa2e\") " pod="openstack/prometheus-metric-storage-0" Dec 06 06:23:49 crc kubenswrapper[4958]: I1206 06:23:49.819289 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/e53cc117-7134-4aac-ba1f-3a685b98aa2e-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"e53cc117-7134-4aac-ba1f-3a685b98aa2e\") " pod="openstack/prometheus-metric-storage-0" Dec 06 06:23:49 crc kubenswrapper[4958]: I1206 06:23:49.830957 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrk56\" (UniqueName: \"kubernetes.io/projected/e53cc117-7134-4aac-ba1f-3a685b98aa2e-kube-api-access-hrk56\") pod \"prometheus-metric-storage-0\" (UID: \"e53cc117-7134-4aac-ba1f-3a685b98aa2e\") " pod="openstack/prometheus-metric-storage-0" Dec 06 06:23:49 crc kubenswrapper[4958]: I1206 06:23:49.855615 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-89ffbb74-1bfb-49be-9c5f-526e0d1d0f82\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-89ffbb74-1bfb-49be-9c5f-526e0d1d0f82\") pod \"prometheus-metric-storage-0\" (UID: \"e53cc117-7134-4aac-ba1f-3a685b98aa2e\") " pod="openstack/prometheus-metric-storage-0" Dec 06 06:23:49 crc kubenswrapper[4958]: I1206 06:23:49.923707 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Dec 06 06:23:50 crc kubenswrapper[4958]: I1206 06:23:50.466515 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Dec 06 06:23:51 crc kubenswrapper[4958]: I1206 06:23:51.207707 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"e53cc117-7134-4aac-ba1f-3a685b98aa2e","Type":"ContainerStarted","Data":"a225cac03c9720f88a5cddbc2b939f470d6b3d7690ef014efb760269b9903e64"} Dec 06 06:23:54 crc kubenswrapper[4958]: I1206 06:23:54.238587 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"e53cc117-7134-4aac-ba1f-3a685b98aa2e","Type":"ContainerStarted","Data":"b2c004fab159658807518b261794fe95cfeb40c875e8277f95a970a4eddfb584"} Dec 06 06:23:58 crc kubenswrapper[4958]: I1206 06:23:58.762510 4958 scope.go:117] "RemoveContainer" containerID="e5f89a3f3ebb3189d7b7afed5d40ed149c6033267fe638ed5096dc97104ae6ad" Dec 06 06:23:58 crc kubenswrapper[4958]: E1206 06:23:58.763075 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 06:24:01 crc kubenswrapper[4958]: I1206 06:24:01.309575 4958 generic.go:334] "Generic (PLEG): container finished" podID="e53cc117-7134-4aac-ba1f-3a685b98aa2e" containerID="b2c004fab159658807518b261794fe95cfeb40c875e8277f95a970a4eddfb584" exitCode=0 Dec 06 06:24:01 crc kubenswrapper[4958]: I1206 06:24:01.309637 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"e53cc117-7134-4aac-ba1f-3a685b98aa2e","Type":"ContainerDied","Data":"b2c004fab159658807518b261794fe95cfeb40c875e8277f95a970a4eddfb584"} Dec 06 06:24:02 crc kubenswrapper[4958]: I1206 06:24:02.321142 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"e53cc117-7134-4aac-ba1f-3a685b98aa2e","Type":"ContainerStarted","Data":"d2e13be604a5f36880d2d8eafbad9326191bd2c51171d5610fed71a6b470b791"} Dec 06 06:24:09 crc kubenswrapper[4958]: I1206 06:24:09.401778 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"e53cc117-7134-4aac-ba1f-3a685b98aa2e","Type":"ContainerStarted","Data":"8076edc0dec842c857d9f9913cbb8bc8b3460183543e4f8f0b49cbb6a3658d5b"} Dec 06 06:24:11 crc kubenswrapper[4958]: I1206 06:24:11.455717 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"e53cc117-7134-4aac-ba1f-3a685b98aa2e","Type":"ContainerStarted","Data":"2122fa1200168bfefae0905aabc64ef098dfd72c1c3cf6f7f343ab2e4cd527b2"} Dec 06 06:24:12 crc kubenswrapper[4958]: I1206 06:24:12.532854 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=23.532833032 podStartE2EDuration="23.532833032s" podCreationTimestamp="2025-12-06 06:23:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 06:24:12.528224272 +0000 UTC m=+3363.061995035" 
watchObservedRunningTime="2025-12-06 06:24:12.532833032 +0000 UTC m=+3363.066603785" Dec 06 06:24:13 crc kubenswrapper[4958]: I1206 06:24:13.565506 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-l497b"] Dec 06 06:24:13 crc kubenswrapper[4958]: I1206 06:24:13.568413 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-l497b" Dec 06 06:24:13 crc kubenswrapper[4958]: I1206 06:24:13.586794 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-l497b"] Dec 06 06:24:13 crc kubenswrapper[4958]: I1206 06:24:13.629027 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4n8h\" (UniqueName: \"kubernetes.io/projected/2c0cbeb6-d59b-47a1-a198-d0148b5768d3-kube-api-access-z4n8h\") pod \"certified-operators-l497b\" (UID: \"2c0cbeb6-d59b-47a1-a198-d0148b5768d3\") " pod="openshift-marketplace/certified-operators-l497b" Dec 06 06:24:13 crc kubenswrapper[4958]: I1206 06:24:13.629282 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c0cbeb6-d59b-47a1-a198-d0148b5768d3-utilities\") pod \"certified-operators-l497b\" (UID: \"2c0cbeb6-d59b-47a1-a198-d0148b5768d3\") " pod="openshift-marketplace/certified-operators-l497b" Dec 06 06:24:13 crc kubenswrapper[4958]: I1206 06:24:13.629325 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c0cbeb6-d59b-47a1-a198-d0148b5768d3-catalog-content\") pod \"certified-operators-l497b\" (UID: \"2c0cbeb6-d59b-47a1-a198-d0148b5768d3\") " pod="openshift-marketplace/certified-operators-l497b" Dec 06 06:24:13 crc kubenswrapper[4958]: I1206 06:24:13.731324 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4n8h\" (UniqueName: \"kubernetes.io/projected/2c0cbeb6-d59b-47a1-a198-d0148b5768d3-kube-api-access-z4n8h\") pod \"certified-operators-l497b\" (UID: \"2c0cbeb6-d59b-47a1-a198-d0148b5768d3\") " pod="openshift-marketplace/certified-operators-l497b" Dec 06 06:24:13 crc kubenswrapper[4958]: I1206 06:24:13.731537 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c0cbeb6-d59b-47a1-a198-d0148b5768d3-utilities\") pod \"certified-operators-l497b\" (UID: \"2c0cbeb6-d59b-47a1-a198-d0148b5768d3\") " pod="openshift-marketplace/certified-operators-l497b" Dec 06 06:24:13 crc kubenswrapper[4958]: I1206 06:24:13.731569 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c0cbeb6-d59b-47a1-a198-d0148b5768d3-catalog-content\") pod \"certified-operators-l497b\" (UID: \"2c0cbeb6-d59b-47a1-a198-d0148b5768d3\") " pod="openshift-marketplace/certified-operators-l497b" Dec 06 06:24:13 crc kubenswrapper[4958]: I1206 06:24:13.732154 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c0cbeb6-d59b-47a1-a198-d0148b5768d3-catalog-content\") pod \"certified-operators-l497b\" (UID: \"2c0cbeb6-d59b-47a1-a198-d0148b5768d3\") " pod="openshift-marketplace/certified-operators-l497b" Dec 06 06:24:13 crc kubenswrapper[4958]: I1206 06:24:13.732299 4958 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c0cbeb6-d59b-47a1-a198-d0148b5768d3-utilities\") pod \"certified-operators-l497b\" (UID: \"2c0cbeb6-d59b-47a1-a198-d0148b5768d3\") " pod="openshift-marketplace/certified-operators-l497b" Dec 06 06:24:13 crc kubenswrapper[4958]: I1206 06:24:13.760158 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4n8h\" (UniqueName: \"kubernetes.io/projected/2c0cbeb6-d59b-47a1-a198-d0148b5768d3-kube-api-access-z4n8h\") pod \"certified-operators-l497b\" (UID: \"2c0cbeb6-d59b-47a1-a198-d0148b5768d3\") " pod="openshift-marketplace/certified-operators-l497b" Dec 06 06:24:13 crc kubenswrapper[4958]: I1206 06:24:13.762412 4958 scope.go:117] "RemoveContainer" containerID="e5f89a3f3ebb3189d7b7afed5d40ed149c6033267fe638ed5096dc97104ae6ad" Dec 06 06:24:13 crc kubenswrapper[4958]: E1206 06:24:13.762688 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 06:24:13 crc kubenswrapper[4958]: I1206 06:24:13.896243 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-l497b" Dec 06 06:24:14 crc kubenswrapper[4958]: I1206 06:24:14.488133 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-l497b"] Dec 06 06:24:14 crc kubenswrapper[4958]: I1206 06:24:14.925831 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Dec 06 06:24:15 crc kubenswrapper[4958]: I1206 06:24:15.492941 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l497b" event={"ID":"2c0cbeb6-d59b-47a1-a198-d0148b5768d3","Type":"ContainerStarted","Data":"fe47eaec71d9071750275f90d47935360f317aeb6a53fe76e9554ba7a986faf3"} Dec 06 06:24:18 crc kubenswrapper[4958]: I1206 06:24:18.532367 4958 generic.go:334] "Generic (PLEG): container finished" podID="2c0cbeb6-d59b-47a1-a198-d0148b5768d3" containerID="2506c5b7a93c69565101b079681ddb88401b150f793ccc2aad994dd7781640d3" exitCode=0 Dec 06 06:24:18 crc kubenswrapper[4958]: I1206 06:24:18.533185 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l497b" event={"ID":"2c0cbeb6-d59b-47a1-a198-d0148b5768d3","Type":"ContainerDied","Data":"2506c5b7a93c69565101b079681ddb88401b150f793ccc2aad994dd7781640d3"} Dec 06 06:24:19 crc kubenswrapper[4958]: I1206 06:24:19.544771 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l497b" event={"ID":"2c0cbeb6-d59b-47a1-a198-d0148b5768d3","Type":"ContainerStarted","Data":"04ef8be466ccdc8d3da3964bc6dd10b0d7930047ca3857a08b8bd0686b02230b"} Dec 06 06:24:19 crc kubenswrapper[4958]: I1206 06:24:19.925545 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Dec 06 06:24:19 crc kubenswrapper[4958]: I1206 06:24:19.931295 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Dec 06 06:24:20 crc kubenswrapper[4958]: I1206 06:24:20.559590 4958 
generic.go:334] "Generic (PLEG): container finished" podID="2c0cbeb6-d59b-47a1-a198-d0148b5768d3" containerID="04ef8be466ccdc8d3da3964bc6dd10b0d7930047ca3857a08b8bd0686b02230b" exitCode=0 Dec 06 06:24:20 crc kubenswrapper[4958]: I1206 06:24:20.559725 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l497b" event={"ID":"2c0cbeb6-d59b-47a1-a198-d0148b5768d3","Type":"ContainerDied","Data":"04ef8be466ccdc8d3da3964bc6dd10b0d7930047ca3857a08b8bd0686b02230b"} Dec 06 06:24:20 crc kubenswrapper[4958]: I1206 06:24:20.563800 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Dec 06 06:24:21 crc kubenswrapper[4958]: I1206 06:24:21.572594 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l497b" event={"ID":"2c0cbeb6-d59b-47a1-a198-d0148b5768d3","Type":"ContainerStarted","Data":"4e050a27e4cf11d0f33a314b2b3a81af6f53cf33e1ec3627b591bdce8a3cf8e2"} Dec 06 06:24:21 crc kubenswrapper[4958]: I1206 06:24:21.607985 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-l497b" podStartSLOduration=5.966048194 podStartE2EDuration="8.607960766s" podCreationTimestamp="2025-12-06 06:24:13 +0000 UTC" firstStartedPulling="2025-12-06 06:24:18.535199966 +0000 UTC m=+3369.068970729" lastFinishedPulling="2025-12-06 06:24:21.177112538 +0000 UTC m=+3371.710883301" observedRunningTime="2025-12-06 06:24:21.598203444 +0000 UTC m=+3372.131974217" watchObservedRunningTime="2025-12-06 06:24:21.607960766 +0000 UTC m=+3372.141731529" Dec 06 06:24:23 crc kubenswrapper[4958]: I1206 06:24:23.896992 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-l497b" Dec 06 06:24:23 crc kubenswrapper[4958]: I1206 06:24:23.897306 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-l497b" Dec 06 06:24:23 crc kubenswrapper[4958]: I1206 06:24:23.959010 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-l497b" Dec 06 06:24:28 crc kubenswrapper[4958]: I1206 06:24:28.762431 4958 scope.go:117] "RemoveContainer" containerID="e5f89a3f3ebb3189d7b7afed5d40ed149c6033267fe638ed5096dc97104ae6ad" Dec 06 06:24:28 crc kubenswrapper[4958]: E1206 06:24:28.762967 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 06:24:32 crc kubenswrapper[4958]: I1206 06:24:32.832146 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-q7q9l"] Dec 06 06:24:32 crc kubenswrapper[4958]: I1206 06:24:32.835313 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q7q9l" Dec 06 06:24:32 crc kubenswrapper[4958]: I1206 06:24:32.857513 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q7q9l"] Dec 06 06:24:32 crc kubenswrapper[4958]: I1206 06:24:32.909744 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7e77ec0-c569-40c0-a0e6-88e9e6701d65-catalog-content\") pod \"redhat-marketplace-q7q9l\" (UID: \"d7e77ec0-c569-40c0-a0e6-88e9e6701d65\") " pod="openshift-marketplace/redhat-marketplace-q7q9l" Dec 06 06:24:32 crc kubenswrapper[4958]: I1206 06:24:32.909833 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7e77ec0-c569-40c0-a0e6-88e9e6701d65-utilities\") pod \"redhat-marketplace-q7q9l\" (UID: \"d7e77ec0-c569-40c0-a0e6-88e9e6701d65\") " pod="openshift-marketplace/redhat-marketplace-q7q9l" Dec 06 06:24:32 crc kubenswrapper[4958]: I1206 06:24:32.909878 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rz75f\" (UniqueName: \"kubernetes.io/projected/d7e77ec0-c569-40c0-a0e6-88e9e6701d65-kube-api-access-rz75f\") pod \"redhat-marketplace-q7q9l\" (UID: \"d7e77ec0-c569-40c0-a0e6-88e9e6701d65\") " pod="openshift-marketplace/redhat-marketplace-q7q9l" Dec 06 06:24:33 crc kubenswrapper[4958]: I1206 06:24:33.010611 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7e77ec0-c569-40c0-a0e6-88e9e6701d65-catalog-content\") pod \"redhat-marketplace-q7q9l\" (UID: \"d7e77ec0-c569-40c0-a0e6-88e9e6701d65\") " pod="openshift-marketplace/redhat-marketplace-q7q9l" Dec 06 06:24:33 crc kubenswrapper[4958]: I1206 06:24:33.010673 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7e77ec0-c569-40c0-a0e6-88e9e6701d65-utilities\") pod \"redhat-marketplace-q7q9l\" (UID: \"d7e77ec0-c569-40c0-a0e6-88e9e6701d65\") " pod="openshift-marketplace/redhat-marketplace-q7q9l" Dec 06 06:24:33 crc kubenswrapper[4958]: I1206 06:24:33.010701 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rz75f\" (UniqueName: \"kubernetes.io/projected/d7e77ec0-c569-40c0-a0e6-88e9e6701d65-kube-api-access-rz75f\") pod \"redhat-marketplace-q7q9l\" (UID: \"d7e77ec0-c569-40c0-a0e6-88e9e6701d65\") " pod="openshift-marketplace/redhat-marketplace-q7q9l" Dec 06 06:24:33 crc kubenswrapper[4958]: I1206 06:24:33.011399 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7e77ec0-c569-40c0-a0e6-88e9e6701d65-catalog-content\") pod \"redhat-marketplace-q7q9l\" (UID: \"d7e77ec0-c569-40c0-a0e6-88e9e6701d65\") " pod="openshift-marketplace/redhat-marketplace-q7q9l" Dec 06 06:24:33 crc kubenswrapper[4958]: I1206 06:24:33.011507 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7e77ec0-c569-40c0-a0e6-88e9e6701d65-utilities\") pod \"redhat-marketplace-q7q9l\" (UID: \"d7e77ec0-c569-40c0-a0e6-88e9e6701d65\") " pod="openshift-marketplace/redhat-marketplace-q7q9l" Dec 06 06:24:33 crc kubenswrapper[4958]: I1206 06:24:33.032008 4958 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-rz75f\" (UniqueName: \"kubernetes.io/projected/d7e77ec0-c569-40c0-a0e6-88e9e6701d65-kube-api-access-rz75f\") pod \"redhat-marketplace-q7q9l\" (UID: \"d7e77ec0-c569-40c0-a0e6-88e9e6701d65\") " pod="openshift-marketplace/redhat-marketplace-q7q9l" Dec 06 06:24:33 crc kubenswrapper[4958]: I1206 06:24:33.170916 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q7q9l" Dec 06 06:24:33 crc kubenswrapper[4958]: I1206 06:24:33.701593 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q7q9l"] Dec 06 06:24:33 crc kubenswrapper[4958]: I1206 06:24:33.944340 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-l497b" Dec 06 06:24:34 crc kubenswrapper[4958]: I1206 06:24:34.687692 4958 generic.go:334] "Generic (PLEG): container finished" podID="d7e77ec0-c569-40c0-a0e6-88e9e6701d65" containerID="0c8bdaab612d6544f72b9c3f55e2a2a5411c330356a509f9743b96fe84eefe7a" exitCode=0 Dec 06 06:24:34 crc kubenswrapper[4958]: I1206 06:24:34.687819 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q7q9l" event={"ID":"d7e77ec0-c569-40c0-a0e6-88e9e6701d65","Type":"ContainerDied","Data":"0c8bdaab612d6544f72b9c3f55e2a2a5411c330356a509f9743b96fe84eefe7a"} Dec 06 06:24:34 crc kubenswrapper[4958]: I1206 06:24:34.687877 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q7q9l" event={"ID":"d7e77ec0-c569-40c0-a0e6-88e9e6701d65","Type":"ContainerStarted","Data":"1939bc1ae6d89988df8048f5b6b76be58a7f9172d00b749a5db47628daeb4f83"} Dec 06 06:24:36 crc kubenswrapper[4958]: I1206 06:24:36.191249 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-l497b"] Dec 06 06:24:36 crc kubenswrapper[4958]: I1206 06:24:36.192055 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-l497b" podUID="2c0cbeb6-d59b-47a1-a198-d0148b5768d3" containerName="registry-server" containerID="cri-o://4e050a27e4cf11d0f33a314b2b3a81af6f53cf33e1ec3627b591bdce8a3cf8e2" gracePeriod=2 Dec 06 06:24:37 crc kubenswrapper[4958]: I1206 06:24:37.672397 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-l497b" Dec 06 06:24:37 crc kubenswrapper[4958]: I1206 06:24:37.725336 4958 generic.go:334] "Generic (PLEG): container finished" podID="2c0cbeb6-d59b-47a1-a198-d0148b5768d3" containerID="4e050a27e4cf11d0f33a314b2b3a81af6f53cf33e1ec3627b591bdce8a3cf8e2" exitCode=0 Dec 06 06:24:37 crc kubenswrapper[4958]: I1206 06:24:37.725377 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l497b" event={"ID":"2c0cbeb6-d59b-47a1-a198-d0148b5768d3","Type":"ContainerDied","Data":"4e050a27e4cf11d0f33a314b2b3a81af6f53cf33e1ec3627b591bdce8a3cf8e2"} Dec 06 06:24:37 crc kubenswrapper[4958]: I1206 06:24:37.725408 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-l497b" event={"ID":"2c0cbeb6-d59b-47a1-a198-d0148b5768d3","Type":"ContainerDied","Data":"fe47eaec71d9071750275f90d47935360f317aeb6a53fe76e9554ba7a986faf3"} Dec 06 06:24:37 crc kubenswrapper[4958]: I1206 06:24:37.725424 4958 scope.go:117] "RemoveContainer" containerID="4e050a27e4cf11d0f33a314b2b3a81af6f53cf33e1ec3627b591bdce8a3cf8e2" Dec 06 06:24:37 crc kubenswrapper[4958]: I1206 06:24:37.725432 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-l497b" Dec 06 06:24:37 crc kubenswrapper[4958]: I1206 06:24:37.801239 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c0cbeb6-d59b-47a1-a198-d0148b5768d3-catalog-content\") pod \"2c0cbeb6-d59b-47a1-a198-d0148b5768d3\" (UID: \"2c0cbeb6-d59b-47a1-a198-d0148b5768d3\") " Dec 06 06:24:37 crc kubenswrapper[4958]: I1206 06:24:37.801304 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c0cbeb6-d59b-47a1-a198-d0148b5768d3-utilities\") pod \"2c0cbeb6-d59b-47a1-a198-d0148b5768d3\" (UID: \"2c0cbeb6-d59b-47a1-a198-d0148b5768d3\") " Dec 06 06:24:37 crc kubenswrapper[4958]: I1206 06:24:37.801417 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z4n8h\" (UniqueName: \"kubernetes.io/projected/2c0cbeb6-d59b-47a1-a198-d0148b5768d3-kube-api-access-z4n8h\") pod \"2c0cbeb6-d59b-47a1-a198-d0148b5768d3\" (UID: \"2c0cbeb6-d59b-47a1-a198-d0148b5768d3\") " Dec 06 06:24:37 crc kubenswrapper[4958]: I1206 06:24:37.803222 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c0cbeb6-d59b-47a1-a198-d0148b5768d3-utilities" (OuterVolumeSpecName: "utilities") pod "2c0cbeb6-d59b-47a1-a198-d0148b5768d3" (UID: "2c0cbeb6-d59b-47a1-a198-d0148b5768d3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:24:37 crc kubenswrapper[4958]: I1206 06:24:37.809669 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c0cbeb6-d59b-47a1-a198-d0148b5768d3-kube-api-access-z4n8h" (OuterVolumeSpecName: "kube-api-access-z4n8h") pod "2c0cbeb6-d59b-47a1-a198-d0148b5768d3" (UID: "2c0cbeb6-d59b-47a1-a198-d0148b5768d3"). InnerVolumeSpecName "kube-api-access-z4n8h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:24:37 crc kubenswrapper[4958]: I1206 06:24:37.837659 4958 scope.go:117] "RemoveContainer" containerID="04ef8be466ccdc8d3da3964bc6dd10b0d7930047ca3857a08b8bd0686b02230b" Dec 06 06:24:37 crc kubenswrapper[4958]: I1206 06:24:37.874302 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2c0cbeb6-d59b-47a1-a198-d0148b5768d3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2c0cbeb6-d59b-47a1-a198-d0148b5768d3" (UID: "2c0cbeb6-d59b-47a1-a198-d0148b5768d3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:24:37 crc kubenswrapper[4958]: I1206 06:24:37.886996 4958 scope.go:117] "RemoveContainer" containerID="2506c5b7a93c69565101b079681ddb88401b150f793ccc2aad994dd7781640d3" Dec 06 06:24:37 crc kubenswrapper[4958]: I1206 06:24:37.904420 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c0cbeb6-d59b-47a1-a198-d0148b5768d3-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 06:24:37 crc kubenswrapper[4958]: I1206 06:24:37.904455 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c0cbeb6-d59b-47a1-a198-d0148b5768d3-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 06:24:37 crc kubenswrapper[4958]: I1206 06:24:37.904479 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z4n8h\" (UniqueName: \"kubernetes.io/projected/2c0cbeb6-d59b-47a1-a198-d0148b5768d3-kube-api-access-z4n8h\") on node \"crc\" DevicePath \"\"" Dec 06 06:24:37 crc kubenswrapper[4958]: I1206 06:24:37.934085 4958 scope.go:117] "RemoveContainer" containerID="4e050a27e4cf11d0f33a314b2b3a81af6f53cf33e1ec3627b591bdce8a3cf8e2" Dec 06 06:24:37 crc kubenswrapper[4958]: E1206 06:24:37.934752 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e050a27e4cf11d0f33a314b2b3a81af6f53cf33e1ec3627b591bdce8a3cf8e2\": container with ID starting with 4e050a27e4cf11d0f33a314b2b3a81af6f53cf33e1ec3627b591bdce8a3cf8e2 not found: ID does not exist" containerID="4e050a27e4cf11d0f33a314b2b3a81af6f53cf33e1ec3627b591bdce8a3cf8e2" Dec 06 06:24:37 crc kubenswrapper[4958]: I1206 06:24:37.934804 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e050a27e4cf11d0f33a314b2b3a81af6f53cf33e1ec3627b591bdce8a3cf8e2"} err="failed to get container status \"4e050a27e4cf11d0f33a314b2b3a81af6f53cf33e1ec3627b591bdce8a3cf8e2\": rpc error: code = NotFound desc = could not find container \"4e050a27e4cf11d0f33a314b2b3a81af6f53cf33e1ec3627b591bdce8a3cf8e2\": container with ID starting with 4e050a27e4cf11d0f33a314b2b3a81af6f53cf33e1ec3627b591bdce8a3cf8e2 not found: ID does not exist" Dec 06 06:24:37 crc kubenswrapper[4958]: I1206 06:24:37.934827 4958 scope.go:117] "RemoveContainer" containerID="04ef8be466ccdc8d3da3964bc6dd10b0d7930047ca3857a08b8bd0686b02230b" Dec 06 06:24:37 crc kubenswrapper[4958]: E1206 06:24:37.935093 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04ef8be466ccdc8d3da3964bc6dd10b0d7930047ca3857a08b8bd0686b02230b\": container with ID starting with 04ef8be466ccdc8d3da3964bc6dd10b0d7930047ca3857a08b8bd0686b02230b not found: ID does not exist" containerID="04ef8be466ccdc8d3da3964bc6dd10b0d7930047ca3857a08b8bd0686b02230b" Dec 
06 06:24:37 crc kubenswrapper[4958]: I1206 06:24:37.935126 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04ef8be466ccdc8d3da3964bc6dd10b0d7930047ca3857a08b8bd0686b02230b"} err="failed to get container status \"04ef8be466ccdc8d3da3964bc6dd10b0d7930047ca3857a08b8bd0686b02230b\": rpc error: code = NotFound desc = could not find container \"04ef8be466ccdc8d3da3964bc6dd10b0d7930047ca3857a08b8bd0686b02230b\": container with ID starting with 04ef8be466ccdc8d3da3964bc6dd10b0d7930047ca3857a08b8bd0686b02230b not found: ID does not exist" Dec 06 06:24:37 crc kubenswrapper[4958]: I1206 06:24:37.935140 4958 scope.go:117] "RemoveContainer" containerID="2506c5b7a93c69565101b079681ddb88401b150f793ccc2aad994dd7781640d3" Dec 06 06:24:37 crc kubenswrapper[4958]: E1206 06:24:37.935427 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2506c5b7a93c69565101b079681ddb88401b150f793ccc2aad994dd7781640d3\": container with ID starting with 2506c5b7a93c69565101b079681ddb88401b150f793ccc2aad994dd7781640d3 not found: ID does not exist" containerID="2506c5b7a93c69565101b079681ddb88401b150f793ccc2aad994dd7781640d3" Dec 06 06:24:37 crc kubenswrapper[4958]: I1206 06:24:37.935484 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2506c5b7a93c69565101b079681ddb88401b150f793ccc2aad994dd7781640d3"} err="failed to get container status \"2506c5b7a93c69565101b079681ddb88401b150f793ccc2aad994dd7781640d3\": rpc error: code = NotFound desc = could not find container \"2506c5b7a93c69565101b079681ddb88401b150f793ccc2aad994dd7781640d3\": container with ID starting with 2506c5b7a93c69565101b079681ddb88401b150f793ccc2aad994dd7781640d3 not found: ID does not exist" Dec 06 06:24:38 crc kubenswrapper[4958]: I1206 06:24:38.065214 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-l497b"] Dec 06 06:24:38 crc kubenswrapper[4958]: I1206 06:24:38.072637 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-l497b"] Dec 06 06:24:39 crc kubenswrapper[4958]: I1206 06:24:39.760937 4958 generic.go:334] "Generic (PLEG): container finished" podID="d7e77ec0-c569-40c0-a0e6-88e9e6701d65" containerID="66f71a26197c6b7f1bbb2c0d7cf69c95166f49cb3b68de7f53b20462c4e33285" exitCode=0 Dec 06 06:24:39 crc kubenswrapper[4958]: I1206 06:24:39.761153 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q7q9l" event={"ID":"d7e77ec0-c569-40c0-a0e6-88e9e6701d65","Type":"ContainerDied","Data":"66f71a26197c6b7f1bbb2c0d7cf69c95166f49cb3b68de7f53b20462c4e33285"} Dec 06 06:24:39 crc kubenswrapper[4958]: I1206 06:24:39.777120 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c0cbeb6-d59b-47a1-a198-d0148b5768d3" path="/var/lib/kubelet/pods/2c0cbeb6-d59b-47a1-a198-d0148b5768d3/volumes" Dec 06 06:24:40 crc kubenswrapper[4958]: I1206 06:24:40.770551 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q7q9l" event={"ID":"d7e77ec0-c569-40c0-a0e6-88e9e6701d65","Type":"ContainerStarted","Data":"1022c2f56f53ca63b4e21d2282c7156a4bbc2411077fe13eb2c8e4cd6949e132"} Dec 06 06:24:40 crc kubenswrapper[4958]: I1206 06:24:40.795221 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-q7q9l" podStartSLOduration=3.30446239 
podStartE2EDuration="8.795200111s" podCreationTimestamp="2025-12-06 06:24:32 +0000 UTC" firstStartedPulling="2025-12-06 06:24:34.689677426 +0000 UTC m=+3385.223448189" lastFinishedPulling="2025-12-06 06:24:40.180415157 +0000 UTC m=+3390.714185910" observedRunningTime="2025-12-06 06:24:40.788101553 +0000 UTC m=+3391.321872326" watchObservedRunningTime="2025-12-06 06:24:40.795200111 +0000 UTC m=+3391.328970874" Dec 06 06:24:43 crc kubenswrapper[4958]: I1206 06:24:43.171809 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-q7q9l" Dec 06 06:24:43 crc kubenswrapper[4958]: I1206 06:24:43.172533 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-q7q9l" Dec 06 06:24:43 crc kubenswrapper[4958]: I1206 06:24:43.243553 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-q7q9l" Dec 06 06:24:43 crc kubenswrapper[4958]: I1206 06:24:43.765743 4958 scope.go:117] "RemoveContainer" containerID="e5f89a3f3ebb3189d7b7afed5d40ed149c6033267fe638ed5096dc97104ae6ad" Dec 06 06:24:43 crc kubenswrapper[4958]: E1206 06:24:43.766000 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 06:24:49 crc kubenswrapper[4958]: I1206 06:24:49.610961 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Dec 06 06:24:49 crc kubenswrapper[4958]: E1206 06:24:49.611905 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c0cbeb6-d59b-47a1-a198-d0148b5768d3" containerName="extract-content" Dec 06 06:24:49 crc kubenswrapper[4958]: I1206 06:24:49.611925 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c0cbeb6-d59b-47a1-a198-d0148b5768d3" containerName="extract-content" Dec 06 06:24:49 crc kubenswrapper[4958]: E1206 06:24:49.611937 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c0cbeb6-d59b-47a1-a198-d0148b5768d3" containerName="extract-utilities" Dec 06 06:24:49 crc kubenswrapper[4958]: I1206 06:24:49.611944 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c0cbeb6-d59b-47a1-a198-d0148b5768d3" containerName="extract-utilities" Dec 06 06:24:49 crc kubenswrapper[4958]: E1206 06:24:49.611990 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c0cbeb6-d59b-47a1-a198-d0148b5768d3" containerName="registry-server" Dec 06 06:24:49 crc kubenswrapper[4958]: I1206 06:24:49.611998 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c0cbeb6-d59b-47a1-a198-d0148b5768d3" containerName="registry-server" Dec 06 06:24:49 crc kubenswrapper[4958]: I1206 06:24:49.612177 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c0cbeb6-d59b-47a1-a198-d0148b5768d3" containerName="registry-server" Dec 06 06:24:49 crc kubenswrapper[4958]: I1206 06:24:49.614267 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Dec 06 06:24:49 crc kubenswrapper[4958]: I1206 06:24:49.616400 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Dec 06 06:24:49 crc kubenswrapper[4958]: I1206 06:24:49.616793 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-mmddf" Dec 06 06:24:49 crc kubenswrapper[4958]: I1206 06:24:49.618581 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Dec 06 06:24:49 crc kubenswrapper[4958]: I1206 06:24:49.620224 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Dec 06 06:24:49 crc kubenswrapper[4958]: I1206 06:24:49.622233 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Dec 06 06:24:49 crc kubenswrapper[4958]: I1206 06:24:49.739361 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/333ab9e6-feb4-4ebe-8bb6-75987c261085-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"333ab9e6-feb4-4ebe-8bb6-75987c261085\") " pod="openstack/tempest-tests-tempest" Dec 06 06:24:49 crc kubenswrapper[4958]: I1206 06:24:49.739424 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/333ab9e6-feb4-4ebe-8bb6-75987c261085-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"333ab9e6-feb4-4ebe-8bb6-75987c261085\") " pod="openstack/tempest-tests-tempest" Dec 06 06:24:49 crc kubenswrapper[4958]: I1206 06:24:49.739464 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/333ab9e6-feb4-4ebe-8bb6-75987c261085-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"333ab9e6-feb4-4ebe-8bb6-75987c261085\") " pod="openstack/tempest-tests-tempest" Dec 06 06:24:49 crc kubenswrapper[4958]: I1206 06:24:49.739559 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/333ab9e6-feb4-4ebe-8bb6-75987c261085-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"333ab9e6-feb4-4ebe-8bb6-75987c261085\") " pod="openstack/tempest-tests-tempest" Dec 06 06:24:49 crc kubenswrapper[4958]: I1206 06:24:49.739588 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/333ab9e6-feb4-4ebe-8bb6-75987c261085-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"333ab9e6-feb4-4ebe-8bb6-75987c261085\") " pod="openstack/tempest-tests-tempest" Dec 06 06:24:49 crc kubenswrapper[4958]: I1206 06:24:49.739773 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sk96n\" (UniqueName: \"kubernetes.io/projected/333ab9e6-feb4-4ebe-8bb6-75987c261085-kube-api-access-sk96n\") pod \"tempest-tests-tempest\" (UID: \"333ab9e6-feb4-4ebe-8bb6-75987c261085\") " pod="openstack/tempest-tests-tempest" Dec 06 06:24:49 crc kubenswrapper[4958]: I1206 06:24:49.739992 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: 
\"kubernetes.io/empty-dir/333ab9e6-feb4-4ebe-8bb6-75987c261085-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"333ab9e6-feb4-4ebe-8bb6-75987c261085\") " pod="openstack/tempest-tests-tempest" Dec 06 06:24:49 crc kubenswrapper[4958]: I1206 06:24:49.740043 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"tempest-tests-tempest\" (UID: \"333ab9e6-feb4-4ebe-8bb6-75987c261085\") " pod="openstack/tempest-tests-tempest" Dec 06 06:24:49 crc kubenswrapper[4958]: I1206 06:24:49.740100 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/333ab9e6-feb4-4ebe-8bb6-75987c261085-config-data\") pod \"tempest-tests-tempest\" (UID: \"333ab9e6-feb4-4ebe-8bb6-75987c261085\") " pod="openstack/tempest-tests-tempest" Dec 06 06:24:49 crc kubenswrapper[4958]: I1206 06:24:49.842384 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/333ab9e6-feb4-4ebe-8bb6-75987c261085-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"333ab9e6-feb4-4ebe-8bb6-75987c261085\") " pod="openstack/tempest-tests-tempest" Dec 06 06:24:49 crc kubenswrapper[4958]: I1206 06:24:49.842434 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/333ab9e6-feb4-4ebe-8bb6-75987c261085-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"333ab9e6-feb4-4ebe-8bb6-75987c261085\") " pod="openstack/tempest-tests-tempest" Dec 06 06:24:49 crc kubenswrapper[4958]: I1206 06:24:49.842458 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/333ab9e6-feb4-4ebe-8bb6-75987c261085-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"333ab9e6-feb4-4ebe-8bb6-75987c261085\") " pod="openstack/tempest-tests-tempest" Dec 06 06:24:49 crc kubenswrapper[4958]: I1206 06:24:49.842497 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/333ab9e6-feb4-4ebe-8bb6-75987c261085-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"333ab9e6-feb4-4ebe-8bb6-75987c261085\") " pod="openstack/tempest-tests-tempest" Dec 06 06:24:49 crc kubenswrapper[4958]: I1206 06:24:49.842539 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sk96n\" (UniqueName: \"kubernetes.io/projected/333ab9e6-feb4-4ebe-8bb6-75987c261085-kube-api-access-sk96n\") pod \"tempest-tests-tempest\" (UID: \"333ab9e6-feb4-4ebe-8bb6-75987c261085\") " pod="openstack/tempest-tests-tempest" Dec 06 06:24:49 crc kubenswrapper[4958]: I1206 06:24:49.842613 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/333ab9e6-feb4-4ebe-8bb6-75987c261085-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"333ab9e6-feb4-4ebe-8bb6-75987c261085\") " pod="openstack/tempest-tests-tempest" Dec 06 06:24:49 crc kubenswrapper[4958]: I1206 06:24:49.842634 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod 
\"tempest-tests-tempest\" (UID: \"333ab9e6-feb4-4ebe-8bb6-75987c261085\") " pod="openstack/tempest-tests-tempest" Dec 06 06:24:49 crc kubenswrapper[4958]: I1206 06:24:49.842662 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/333ab9e6-feb4-4ebe-8bb6-75987c261085-config-data\") pod \"tempest-tests-tempest\" (UID: \"333ab9e6-feb4-4ebe-8bb6-75987c261085\") " pod="openstack/tempest-tests-tempest" Dec 06 06:24:49 crc kubenswrapper[4958]: I1206 06:24:49.842743 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/333ab9e6-feb4-4ebe-8bb6-75987c261085-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"333ab9e6-feb4-4ebe-8bb6-75987c261085\") " pod="openstack/tempest-tests-tempest" Dec 06 06:24:49 crc kubenswrapper[4958]: I1206 06:24:49.843536 4958 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"tempest-tests-tempest\" (UID: \"333ab9e6-feb4-4ebe-8bb6-75987c261085\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/tempest-tests-tempest" Dec 06 06:24:49 crc kubenswrapper[4958]: I1206 06:24:49.843772 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/333ab9e6-feb4-4ebe-8bb6-75987c261085-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"333ab9e6-feb4-4ebe-8bb6-75987c261085\") " pod="openstack/tempest-tests-tempest" Dec 06 06:24:49 crc kubenswrapper[4958]: I1206 06:24:49.844711 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/333ab9e6-feb4-4ebe-8bb6-75987c261085-config-data\") pod \"tempest-tests-tempest\" (UID: \"333ab9e6-feb4-4ebe-8bb6-75987c261085\") " pod="openstack/tempest-tests-tempest" Dec 06 06:24:49 crc kubenswrapper[4958]: I1206 06:24:49.844839 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/333ab9e6-feb4-4ebe-8bb6-75987c261085-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"333ab9e6-feb4-4ebe-8bb6-75987c261085\") " pod="openstack/tempest-tests-tempest" Dec 06 06:24:49 crc kubenswrapper[4958]: I1206 06:24:49.845228 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/333ab9e6-feb4-4ebe-8bb6-75987c261085-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"333ab9e6-feb4-4ebe-8bb6-75987c261085\") " pod="openstack/tempest-tests-tempest" Dec 06 06:24:49 crc kubenswrapper[4958]: I1206 06:24:49.856184 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/333ab9e6-feb4-4ebe-8bb6-75987c261085-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"333ab9e6-feb4-4ebe-8bb6-75987c261085\") " pod="openstack/tempest-tests-tempest" Dec 06 06:24:49 crc kubenswrapper[4958]: I1206 06:24:49.857258 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/333ab9e6-feb4-4ebe-8bb6-75987c261085-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"333ab9e6-feb4-4ebe-8bb6-75987c261085\") " pod="openstack/tempest-tests-tempest" Dec 06 06:24:49 crc 
kubenswrapper[4958]: I1206 06:24:49.857643 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/333ab9e6-feb4-4ebe-8bb6-75987c261085-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"333ab9e6-feb4-4ebe-8bb6-75987c261085\") " pod="openstack/tempest-tests-tempest" Dec 06 06:24:49 crc kubenswrapper[4958]: I1206 06:24:49.860935 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sk96n\" (UniqueName: \"kubernetes.io/projected/333ab9e6-feb4-4ebe-8bb6-75987c261085-kube-api-access-sk96n\") pod \"tempest-tests-tempest\" (UID: \"333ab9e6-feb4-4ebe-8bb6-75987c261085\") " pod="openstack/tempest-tests-tempest" Dec 06 06:24:49 crc kubenswrapper[4958]: I1206 06:24:49.873115 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"tempest-tests-tempest\" (UID: \"333ab9e6-feb4-4ebe-8bb6-75987c261085\") " pod="openstack/tempest-tests-tempest" Dec 06 06:24:49 crc kubenswrapper[4958]: I1206 06:24:49.945391 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Dec 06 06:24:50 crc kubenswrapper[4958]: I1206 06:24:50.450728 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Dec 06 06:24:50 crc kubenswrapper[4958]: W1206 06:24:50.461160 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod333ab9e6_feb4_4ebe_8bb6_75987c261085.slice/crio-527472b02191a7093df6187167cdd2d830464ed514c03432da2814689ccafceb WatchSource:0}: Error finding container 527472b02191a7093df6187167cdd2d830464ed514c03432da2814689ccafceb: Status 404 returned error can't find the container with id 527472b02191a7093df6187167cdd2d830464ed514c03432da2814689ccafceb Dec 06 06:24:50 crc kubenswrapper[4958]: I1206 06:24:50.880619 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"333ab9e6-feb4-4ebe-8bb6-75987c261085","Type":"ContainerStarted","Data":"527472b02191a7093df6187167cdd2d830464ed514c03432da2814689ccafceb"} Dec 06 06:24:53 crc kubenswrapper[4958]: I1206 06:24:53.223222 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-q7q9l" Dec 06 06:24:53 crc kubenswrapper[4958]: I1206 06:24:53.282288 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-q7q9l"] Dec 06 06:24:53 crc kubenswrapper[4958]: I1206 06:24:53.934921 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-q7q9l" podUID="d7e77ec0-c569-40c0-a0e6-88e9e6701d65" containerName="registry-server" containerID="cri-o://1022c2f56f53ca63b4e21d2282c7156a4bbc2411077fe13eb2c8e4cd6949e132" gracePeriod=2 Dec 06 06:24:55 crc kubenswrapper[4958]: I1206 06:24:55.762243 4958 scope.go:117] "RemoveContainer" containerID="e5f89a3f3ebb3189d7b7afed5d40ed149c6033267fe638ed5096dc97104ae6ad" Dec 06 06:24:55 crc kubenswrapper[4958]: E1206 06:24:55.762828 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 06:24:57 crc kubenswrapper[4958]: I1206 06:24:57.973819 4958 generic.go:334] "Generic (PLEG): container finished" podID="d7e77ec0-c569-40c0-a0e6-88e9e6701d65" containerID="1022c2f56f53ca63b4e21d2282c7156a4bbc2411077fe13eb2c8e4cd6949e132" exitCode=0 Dec 06 06:24:57 crc kubenswrapper[4958]: I1206 06:24:57.974322 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q7q9l" event={"ID":"d7e77ec0-c569-40c0-a0e6-88e9e6701d65","Type":"ContainerDied","Data":"1022c2f56f53ca63b4e21d2282c7156a4bbc2411077fe13eb2c8e4cd6949e132"} Dec 06 06:25:03 crc kubenswrapper[4958]: E1206 06:25:03.171885 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1022c2f56f53ca63b4e21d2282c7156a4bbc2411077fe13eb2c8e4cd6949e132 is running failed: container process not found" containerID="1022c2f56f53ca63b4e21d2282c7156a4bbc2411077fe13eb2c8e4cd6949e132" cmd=["grpc_health_probe","-addr=:50051"] Dec 06 06:25:03 crc kubenswrapper[4958]: E1206 06:25:03.172977 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1022c2f56f53ca63b4e21d2282c7156a4bbc2411077fe13eb2c8e4cd6949e132 is running failed: container process not found" containerID="1022c2f56f53ca63b4e21d2282c7156a4bbc2411077fe13eb2c8e4cd6949e132" cmd=["grpc_health_probe","-addr=:50051"] Dec 06 06:25:03 crc kubenswrapper[4958]: E1206 06:25:03.173371 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1022c2f56f53ca63b4e21d2282c7156a4bbc2411077fe13eb2c8e4cd6949e132 is running failed: container process not found" containerID="1022c2f56f53ca63b4e21d2282c7156a4bbc2411077fe13eb2c8e4cd6949e132" cmd=["grpc_health_probe","-addr=:50051"] Dec 06 06:25:03 crc kubenswrapper[4958]: E1206 06:25:03.173406 4958 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1022c2f56f53ca63b4e21d2282c7156a4bbc2411077fe13eb2c8e4cd6949e132 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-q7q9l" podUID="d7e77ec0-c569-40c0-a0e6-88e9e6701d65" containerName="registry-server" Dec 06 06:25:10 crc kubenswrapper[4958]: I1206 06:25:10.302058 4958 scope.go:117] "RemoveContainer" containerID="e5f89a3f3ebb3189d7b7afed5d40ed149c6033267fe638ed5096dc97104ae6ad" Dec 06 06:25:10 crc kubenswrapper[4958]: E1206 06:25:10.303921 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 06:25:11 crc kubenswrapper[4958]: E1206 06:25:11.708761 4958 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-tempest-all:current" Dec 06 06:25:11 crc kubenswrapper[4958]: E1206 06:25:11.709172 4958 kuberuntime_image.go:55] "Failed to pull 
image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-tempest-all:current" Dec 06 06:25:11 crc kubenswrapper[4958]: E1206 06:25:11.709398 4958 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.rdoproject.org/podified-master-centos10/openstack-tempest-all:current,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sk96n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(333ab9e6-feb4-4ebe-8bb6-75987c261085): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 06:25:11 crc kubenswrapper[4958]: E1206 06:25:11.710630 4958 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="333ab9e6-feb4-4ebe-8bb6-75987c261085" Dec 06 06:25:11 crc kubenswrapper[4958]: I1206 06:25:11.771123 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q7q9l" Dec 06 06:25:11 crc kubenswrapper[4958]: I1206 06:25:11.883837 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7e77ec0-c569-40c0-a0e6-88e9e6701d65-catalog-content\") pod \"d7e77ec0-c569-40c0-a0e6-88e9e6701d65\" (UID: \"d7e77ec0-c569-40c0-a0e6-88e9e6701d65\") " Dec 06 06:25:11 crc kubenswrapper[4958]: I1206 06:25:11.884036 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rz75f\" (UniqueName: \"kubernetes.io/projected/d7e77ec0-c569-40c0-a0e6-88e9e6701d65-kube-api-access-rz75f\") pod \"d7e77ec0-c569-40c0-a0e6-88e9e6701d65\" (UID: \"d7e77ec0-c569-40c0-a0e6-88e9e6701d65\") " Dec 06 06:25:11 crc kubenswrapper[4958]: I1206 06:25:11.884081 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7e77ec0-c569-40c0-a0e6-88e9e6701d65-utilities\") pod \"d7e77ec0-c569-40c0-a0e6-88e9e6701d65\" (UID: \"d7e77ec0-c569-40c0-a0e6-88e9e6701d65\") " Dec 06 06:25:11 crc kubenswrapper[4958]: I1206 06:25:11.884884 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7e77ec0-c569-40c0-a0e6-88e9e6701d65-utilities" (OuterVolumeSpecName: "utilities") pod "d7e77ec0-c569-40c0-a0e6-88e9e6701d65" (UID: "d7e77ec0-c569-40c0-a0e6-88e9e6701d65"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:25:11 crc kubenswrapper[4958]: I1206 06:25:11.890231 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7e77ec0-c569-40c0-a0e6-88e9e6701d65-kube-api-access-rz75f" (OuterVolumeSpecName: "kube-api-access-rz75f") pod "d7e77ec0-c569-40c0-a0e6-88e9e6701d65" (UID: "d7e77ec0-c569-40c0-a0e6-88e9e6701d65"). InnerVolumeSpecName "kube-api-access-rz75f". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:25:11 crc kubenswrapper[4958]: I1206 06:25:11.895823 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d7e77ec0-c569-40c0-a0e6-88e9e6701d65-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d7e77ec0-c569-40c0-a0e6-88e9e6701d65" (UID: "d7e77ec0-c569-40c0-a0e6-88e9e6701d65"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:25:11 crc kubenswrapper[4958]: I1206 06:25:11.985516 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rz75f\" (UniqueName: \"kubernetes.io/projected/d7e77ec0-c569-40c0-a0e6-88e9e6701d65-kube-api-access-rz75f\") on node \"crc\" DevicePath \"\"" Dec 06 06:25:11 crc kubenswrapper[4958]: I1206 06:25:11.985546 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d7e77ec0-c569-40c0-a0e6-88e9e6701d65-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 06:25:11 crc kubenswrapper[4958]: I1206 06:25:11.985558 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d7e77ec0-c569-40c0-a0e6-88e9e6701d65-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 06:25:12 crc kubenswrapper[4958]: I1206 06:25:12.335177 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q7q9l" event={"ID":"d7e77ec0-c569-40c0-a0e6-88e9e6701d65","Type":"ContainerDied","Data":"1939bc1ae6d89988df8048f5b6b76be58a7f9172d00b749a5db47628daeb4f83"} Dec 06 06:25:12 crc kubenswrapper[4958]: I1206 06:25:12.335592 4958 scope.go:117] "RemoveContainer" containerID="1022c2f56f53ca63b4e21d2282c7156a4bbc2411077fe13eb2c8e4cd6949e132" Dec 06 06:25:12 crc kubenswrapper[4958]: I1206 06:25:12.335194 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q7q9l" Dec 06 06:25:12 crc kubenswrapper[4958]: E1206 06:25:12.336656 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-tempest-all:current\\\"\"" pod="openstack/tempest-tests-tempest" podUID="333ab9e6-feb4-4ebe-8bb6-75987c261085" Dec 06 06:25:12 crc kubenswrapper[4958]: I1206 06:25:12.379811 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-q7q9l"] Dec 06 06:25:12 crc kubenswrapper[4958]: I1206 06:25:12.386081 4958 scope.go:117] "RemoveContainer" containerID="66f71a26197c6b7f1bbb2c0d7cf69c95166f49cb3b68de7f53b20462c4e33285" Dec 06 06:25:12 crc kubenswrapper[4958]: I1206 06:25:12.393799 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-q7q9l"] Dec 06 06:25:12 crc kubenswrapper[4958]: I1206 06:25:12.407034 4958 scope.go:117] "RemoveContainer" containerID="0c8bdaab612d6544f72b9c3f55e2a2a5411c330356a509f9743b96fe84eefe7a" Dec 06 06:25:13 crc kubenswrapper[4958]: I1206 06:25:13.779073 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7e77ec0-c569-40c0-a0e6-88e9e6701d65" path="/var/lib/kubelet/pods/d7e77ec0-c569-40c0-a0e6-88e9e6701d65/volumes" Dec 06 06:25:20 crc kubenswrapper[4958]: I1206 06:25:20.964031 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vg5bz"] Dec 06 06:25:20 crc kubenswrapper[4958]: E1206 06:25:20.965019 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7e77ec0-c569-40c0-a0e6-88e9e6701d65" containerName="registry-server" Dec 06 06:25:20 crc kubenswrapper[4958]: I1206 06:25:20.965032 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7e77ec0-c569-40c0-a0e6-88e9e6701d65" containerName="registry-server" Dec 06 06:25:20 crc kubenswrapper[4958]: E1206 
06:25:20.965069 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7e77ec0-c569-40c0-a0e6-88e9e6701d65" containerName="extract-utilities" Dec 06 06:25:20 crc kubenswrapper[4958]: I1206 06:25:20.965076 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7e77ec0-c569-40c0-a0e6-88e9e6701d65" containerName="extract-utilities" Dec 06 06:25:20 crc kubenswrapper[4958]: E1206 06:25:20.965091 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7e77ec0-c569-40c0-a0e6-88e9e6701d65" containerName="extract-content" Dec 06 06:25:20 crc kubenswrapper[4958]: I1206 06:25:20.965096 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7e77ec0-c569-40c0-a0e6-88e9e6701d65" containerName="extract-content" Dec 06 06:25:20 crc kubenswrapper[4958]: I1206 06:25:20.965296 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7e77ec0-c569-40c0-a0e6-88e9e6701d65" containerName="registry-server" Dec 06 06:25:20 crc kubenswrapper[4958]: I1206 06:25:20.966681 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vg5bz" Dec 06 06:25:20 crc kubenswrapper[4958]: I1206 06:25:20.986793 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7r4r4\" (UniqueName: \"kubernetes.io/projected/c008ee5b-dd48-4dd8-a528-3fd3844314ca-kube-api-access-7r4r4\") pod \"redhat-operators-vg5bz\" (UID: \"c008ee5b-dd48-4dd8-a528-3fd3844314ca\") " pod="openshift-marketplace/redhat-operators-vg5bz" Dec 06 06:25:20 crc kubenswrapper[4958]: I1206 06:25:20.988402 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c008ee5b-dd48-4dd8-a528-3fd3844314ca-utilities\") pod \"redhat-operators-vg5bz\" (UID: \"c008ee5b-dd48-4dd8-a528-3fd3844314ca\") " pod="openshift-marketplace/redhat-operators-vg5bz" Dec 06 06:25:20 crc kubenswrapper[4958]: I1206 06:25:20.988776 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c008ee5b-dd48-4dd8-a528-3fd3844314ca-catalog-content\") pod \"redhat-operators-vg5bz\" (UID: \"c008ee5b-dd48-4dd8-a528-3fd3844314ca\") " pod="openshift-marketplace/redhat-operators-vg5bz" Dec 06 06:25:20 crc kubenswrapper[4958]: I1206 06:25:20.987346 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vg5bz"] Dec 06 06:25:21 crc kubenswrapper[4958]: I1206 06:25:21.090377 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7r4r4\" (UniqueName: \"kubernetes.io/projected/c008ee5b-dd48-4dd8-a528-3fd3844314ca-kube-api-access-7r4r4\") pod \"redhat-operators-vg5bz\" (UID: \"c008ee5b-dd48-4dd8-a528-3fd3844314ca\") " pod="openshift-marketplace/redhat-operators-vg5bz" Dec 06 06:25:21 crc kubenswrapper[4958]: I1206 06:25:21.091119 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c008ee5b-dd48-4dd8-a528-3fd3844314ca-utilities\") pod \"redhat-operators-vg5bz\" (UID: \"c008ee5b-dd48-4dd8-a528-3fd3844314ca\") " pod="openshift-marketplace/redhat-operators-vg5bz" Dec 06 06:25:21 crc kubenswrapper[4958]: I1206 06:25:21.091789 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/c008ee5b-dd48-4dd8-a528-3fd3844314ca-utilities\") pod \"redhat-operators-vg5bz\" (UID: \"c008ee5b-dd48-4dd8-a528-3fd3844314ca\") " pod="openshift-marketplace/redhat-operators-vg5bz" Dec 06 06:25:21 crc kubenswrapper[4958]: I1206 06:25:21.091906 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c008ee5b-dd48-4dd8-a528-3fd3844314ca-catalog-content\") pod \"redhat-operators-vg5bz\" (UID: \"c008ee5b-dd48-4dd8-a528-3fd3844314ca\") " pod="openshift-marketplace/redhat-operators-vg5bz" Dec 06 06:25:21 crc kubenswrapper[4958]: I1206 06:25:21.092231 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c008ee5b-dd48-4dd8-a528-3fd3844314ca-catalog-content\") pod \"redhat-operators-vg5bz\" (UID: \"c008ee5b-dd48-4dd8-a528-3fd3844314ca\") " pod="openshift-marketplace/redhat-operators-vg5bz" Dec 06 06:25:21 crc kubenswrapper[4958]: I1206 06:25:21.125135 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7r4r4\" (UniqueName: \"kubernetes.io/projected/c008ee5b-dd48-4dd8-a528-3fd3844314ca-kube-api-access-7r4r4\") pod \"redhat-operators-vg5bz\" (UID: \"c008ee5b-dd48-4dd8-a528-3fd3844314ca\") " pod="openshift-marketplace/redhat-operators-vg5bz" Dec 06 06:25:21 crc kubenswrapper[4958]: I1206 06:25:21.299589 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vg5bz" Dec 06 06:25:21 crc kubenswrapper[4958]: I1206 06:25:21.752438 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vg5bz"] Dec 06 06:25:22 crc kubenswrapper[4958]: I1206 06:25:22.448351 4958 generic.go:334] "Generic (PLEG): container finished" podID="c008ee5b-dd48-4dd8-a528-3fd3844314ca" containerID="08b0dbdba073a01f05eabbd73f089564f8f15e07f8ace8c7ed95b8b62719529c" exitCode=0 Dec 06 06:25:22 crc kubenswrapper[4958]: I1206 06:25:22.448402 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vg5bz" event={"ID":"c008ee5b-dd48-4dd8-a528-3fd3844314ca","Type":"ContainerDied","Data":"08b0dbdba073a01f05eabbd73f089564f8f15e07f8ace8c7ed95b8b62719529c"} Dec 06 06:25:22 crc kubenswrapper[4958]: I1206 06:25:22.448692 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vg5bz" event={"ID":"c008ee5b-dd48-4dd8-a528-3fd3844314ca","Type":"ContainerStarted","Data":"06b016d7698940e28deaa7af2c2352dded0caf7291c11a67668abe55b58abb1c"} Dec 06 06:25:22 crc kubenswrapper[4958]: I1206 06:25:22.762395 4958 scope.go:117] "RemoveContainer" containerID="e5f89a3f3ebb3189d7b7afed5d40ed149c6033267fe638ed5096dc97104ae6ad" Dec 06 06:25:22 crc kubenswrapper[4958]: E1206 06:25:22.763176 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 06:25:24 crc kubenswrapper[4958]: I1206 06:25:24.474278 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vg5bz" 
event={"ID":"c008ee5b-dd48-4dd8-a528-3fd3844314ca","Type":"ContainerStarted","Data":"27d1fd1fd4cb90ffe2e079a0286574b025eee6a472b5a856d9a2a559ba76cf57"} Dec 06 06:25:30 crc kubenswrapper[4958]: I1206 06:25:30.244671 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Dec 06 06:25:30 crc kubenswrapper[4958]: I1206 06:25:30.563850 4958 generic.go:334] "Generic (PLEG): container finished" podID="c008ee5b-dd48-4dd8-a528-3fd3844314ca" containerID="27d1fd1fd4cb90ffe2e079a0286574b025eee6a472b5a856d9a2a559ba76cf57" exitCode=0 Dec 06 06:25:30 crc kubenswrapper[4958]: I1206 06:25:30.564172 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vg5bz" event={"ID":"c008ee5b-dd48-4dd8-a528-3fd3844314ca","Type":"ContainerDied","Data":"27d1fd1fd4cb90ffe2e079a0286574b025eee6a472b5a856d9a2a559ba76cf57"} Dec 06 06:25:31 crc kubenswrapper[4958]: I1206 06:25:31.576186 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"333ab9e6-feb4-4ebe-8bb6-75987c261085","Type":"ContainerStarted","Data":"467e130fc7ac62261477e86dcc16fce19a8c9c04fbacb57f6afa58fc4906cd97"} Dec 06 06:25:31 crc kubenswrapper[4958]: I1206 06:25:31.610020 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=3.831484152 podStartE2EDuration="43.60994597s" podCreationTimestamp="2025-12-06 06:24:48 +0000 UTC" firstStartedPulling="2025-12-06 06:24:50.46358973 +0000 UTC m=+3400.997360503" lastFinishedPulling="2025-12-06 06:25:30.242051568 +0000 UTC m=+3440.775822321" observedRunningTime="2025-12-06 06:25:31.606589585 +0000 UTC m=+3442.140360348" watchObservedRunningTime="2025-12-06 06:25:31.60994597 +0000 UTC m=+3442.143716733" Dec 06 06:25:34 crc kubenswrapper[4958]: I1206 06:25:34.761921 4958 scope.go:117] "RemoveContainer" containerID="e5f89a3f3ebb3189d7b7afed5d40ed149c6033267fe638ed5096dc97104ae6ad" Dec 06 06:25:34 crc kubenswrapper[4958]: E1206 06:25:34.763004 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 06:25:35 crc kubenswrapper[4958]: I1206 06:25:35.637316 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vg5bz" event={"ID":"c008ee5b-dd48-4dd8-a528-3fd3844314ca","Type":"ContainerStarted","Data":"325a9a6ea0389952bda9e836f8ec1ae7bd3c10e9f3e2dee8f0b2e69b674faadb"} Dec 06 06:25:35 crc kubenswrapper[4958]: I1206 06:25:35.657913 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vg5bz" podStartSLOduration=3.096642879 podStartE2EDuration="15.657893847s" podCreationTimestamp="2025-12-06 06:25:20 +0000 UTC" firstStartedPulling="2025-12-06 06:25:22.45017528 +0000 UTC m=+3432.983946083" lastFinishedPulling="2025-12-06 06:25:35.011426288 +0000 UTC m=+3445.545197051" observedRunningTime="2025-12-06 06:25:35.652897677 +0000 UTC m=+3446.186668530" watchObservedRunningTime="2025-12-06 06:25:35.657893847 +0000 UTC m=+3446.191664610" Dec 06 06:25:41 crc kubenswrapper[4958]: I1206 06:25:41.300113 4958 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vg5bz" Dec 06 06:25:41 crc kubenswrapper[4958]: I1206 06:25:41.300662 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vg5bz" Dec 06 06:25:42 crc kubenswrapper[4958]: I1206 06:25:42.361375 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vg5bz" podUID="c008ee5b-dd48-4dd8-a528-3fd3844314ca" containerName="registry-server" probeResult="failure" output=< Dec 06 06:25:42 crc kubenswrapper[4958]: timeout: failed to connect service ":50051" within 1s Dec 06 06:25:42 crc kubenswrapper[4958]: > Dec 06 06:25:48 crc kubenswrapper[4958]: I1206 06:25:48.762268 4958 scope.go:117] "RemoveContainer" containerID="e5f89a3f3ebb3189d7b7afed5d40ed149c6033267fe638ed5096dc97104ae6ad" Dec 06 06:25:50 crc kubenswrapper[4958]: I1206 06:25:50.790243 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" event={"ID":"c13528c0-da5d-4d55-9155-2c29c33edfc4","Type":"ContainerStarted","Data":"ecd78cbbb51041d906a6569c001623f2d46419c071cc8bef423041ee3d754b4e"} Dec 06 06:25:51 crc kubenswrapper[4958]: I1206 06:25:51.351604 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vg5bz" Dec 06 06:25:51 crc kubenswrapper[4958]: I1206 06:25:51.412845 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vg5bz" Dec 06 06:25:52 crc kubenswrapper[4958]: I1206 06:25:52.155285 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vg5bz"] Dec 06 06:25:52 crc kubenswrapper[4958]: I1206 06:25:52.810331 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vg5bz" podUID="c008ee5b-dd48-4dd8-a528-3fd3844314ca" containerName="registry-server" containerID="cri-o://325a9a6ea0389952bda9e836f8ec1ae7bd3c10e9f3e2dee8f0b2e69b674faadb" gracePeriod=2 Dec 06 06:25:53 crc kubenswrapper[4958]: I1206 06:25:53.340902 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vg5bz" Dec 06 06:25:53 crc kubenswrapper[4958]: I1206 06:25:53.384544 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7r4r4\" (UniqueName: \"kubernetes.io/projected/c008ee5b-dd48-4dd8-a528-3fd3844314ca-kube-api-access-7r4r4\") pod \"c008ee5b-dd48-4dd8-a528-3fd3844314ca\" (UID: \"c008ee5b-dd48-4dd8-a528-3fd3844314ca\") " Dec 06 06:25:53 crc kubenswrapper[4958]: I1206 06:25:53.384611 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c008ee5b-dd48-4dd8-a528-3fd3844314ca-catalog-content\") pod \"c008ee5b-dd48-4dd8-a528-3fd3844314ca\" (UID: \"c008ee5b-dd48-4dd8-a528-3fd3844314ca\") " Dec 06 06:25:53 crc kubenswrapper[4958]: I1206 06:25:53.384738 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c008ee5b-dd48-4dd8-a528-3fd3844314ca-utilities\") pod \"c008ee5b-dd48-4dd8-a528-3fd3844314ca\" (UID: \"c008ee5b-dd48-4dd8-a528-3fd3844314ca\") " Dec 06 06:25:53 crc kubenswrapper[4958]: I1206 06:25:53.385747 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c008ee5b-dd48-4dd8-a528-3fd3844314ca-utilities" (OuterVolumeSpecName: "utilities") pod "c008ee5b-dd48-4dd8-a528-3fd3844314ca" (UID: "c008ee5b-dd48-4dd8-a528-3fd3844314ca"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:25:53 crc kubenswrapper[4958]: I1206 06:25:53.391344 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c008ee5b-dd48-4dd8-a528-3fd3844314ca-kube-api-access-7r4r4" (OuterVolumeSpecName: "kube-api-access-7r4r4") pod "c008ee5b-dd48-4dd8-a528-3fd3844314ca" (UID: "c008ee5b-dd48-4dd8-a528-3fd3844314ca"). InnerVolumeSpecName "kube-api-access-7r4r4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:25:53 crc kubenswrapper[4958]: I1206 06:25:53.477253 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c008ee5b-dd48-4dd8-a528-3fd3844314ca-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c008ee5b-dd48-4dd8-a528-3fd3844314ca" (UID: "c008ee5b-dd48-4dd8-a528-3fd3844314ca"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:25:53 crc kubenswrapper[4958]: I1206 06:25:53.487024 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7r4r4\" (UniqueName: \"kubernetes.io/projected/c008ee5b-dd48-4dd8-a528-3fd3844314ca-kube-api-access-7r4r4\") on node \"crc\" DevicePath \"\"" Dec 06 06:25:53 crc kubenswrapper[4958]: I1206 06:25:53.487067 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c008ee5b-dd48-4dd8-a528-3fd3844314ca-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 06:25:53 crc kubenswrapper[4958]: I1206 06:25:53.487078 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c008ee5b-dd48-4dd8-a528-3fd3844314ca-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 06:25:53 crc kubenswrapper[4958]: I1206 06:25:53.823096 4958 generic.go:334] "Generic (PLEG): container finished" podID="c008ee5b-dd48-4dd8-a528-3fd3844314ca" containerID="325a9a6ea0389952bda9e836f8ec1ae7bd3c10e9f3e2dee8f0b2e69b674faadb" exitCode=0 Dec 06 06:25:53 crc kubenswrapper[4958]: I1206 06:25:53.823147 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vg5bz" event={"ID":"c008ee5b-dd48-4dd8-a528-3fd3844314ca","Type":"ContainerDied","Data":"325a9a6ea0389952bda9e836f8ec1ae7bd3c10e9f3e2dee8f0b2e69b674faadb"} Dec 06 06:25:53 crc kubenswrapper[4958]: I1206 06:25:53.823208 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vg5bz" event={"ID":"c008ee5b-dd48-4dd8-a528-3fd3844314ca","Type":"ContainerDied","Data":"06b016d7698940e28deaa7af2c2352dded0caf7291c11a67668abe55b58abb1c"} Dec 06 06:25:53 crc kubenswrapper[4958]: I1206 06:25:53.823230 4958 scope.go:117] "RemoveContainer" containerID="325a9a6ea0389952bda9e836f8ec1ae7bd3c10e9f3e2dee8f0b2e69b674faadb" Dec 06 06:25:53 crc kubenswrapper[4958]: I1206 06:25:53.823381 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vg5bz" Dec 06 06:25:53 crc kubenswrapper[4958]: I1206 06:25:53.852009 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vg5bz"] Dec 06 06:25:53 crc kubenswrapper[4958]: I1206 06:25:53.855135 4958 scope.go:117] "RemoveContainer" containerID="27d1fd1fd4cb90ffe2e079a0286574b025eee6a472b5a856d9a2a559ba76cf57" Dec 06 06:25:53 crc kubenswrapper[4958]: I1206 06:25:53.863284 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vg5bz"] Dec 06 06:25:53 crc kubenswrapper[4958]: I1206 06:25:53.878365 4958 scope.go:117] "RemoveContainer" containerID="08b0dbdba073a01f05eabbd73f089564f8f15e07f8ace8c7ed95b8b62719529c" Dec 06 06:25:53 crc kubenswrapper[4958]: I1206 06:25:53.923984 4958 scope.go:117] "RemoveContainer" containerID="325a9a6ea0389952bda9e836f8ec1ae7bd3c10e9f3e2dee8f0b2e69b674faadb" Dec 06 06:25:53 crc kubenswrapper[4958]: E1206 06:25:53.924648 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"325a9a6ea0389952bda9e836f8ec1ae7bd3c10e9f3e2dee8f0b2e69b674faadb\": container with ID starting with 325a9a6ea0389952bda9e836f8ec1ae7bd3c10e9f3e2dee8f0b2e69b674faadb not found: ID does not exist" containerID="325a9a6ea0389952bda9e836f8ec1ae7bd3c10e9f3e2dee8f0b2e69b674faadb" Dec 06 06:25:53 crc kubenswrapper[4958]: I1206 06:25:53.924701 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"325a9a6ea0389952bda9e836f8ec1ae7bd3c10e9f3e2dee8f0b2e69b674faadb"} err="failed to get container status \"325a9a6ea0389952bda9e836f8ec1ae7bd3c10e9f3e2dee8f0b2e69b674faadb\": rpc error: code = NotFound desc = could not find container \"325a9a6ea0389952bda9e836f8ec1ae7bd3c10e9f3e2dee8f0b2e69b674faadb\": container with ID starting with 325a9a6ea0389952bda9e836f8ec1ae7bd3c10e9f3e2dee8f0b2e69b674faadb not found: ID does not exist" Dec 06 06:25:53 crc kubenswrapper[4958]: I1206 06:25:53.924735 4958 scope.go:117] "RemoveContainer" containerID="27d1fd1fd4cb90ffe2e079a0286574b025eee6a472b5a856d9a2a559ba76cf57" Dec 06 06:25:53 crc kubenswrapper[4958]: E1206 06:25:53.924995 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27d1fd1fd4cb90ffe2e079a0286574b025eee6a472b5a856d9a2a559ba76cf57\": container with ID starting with 27d1fd1fd4cb90ffe2e079a0286574b025eee6a472b5a856d9a2a559ba76cf57 not found: ID does not exist" containerID="27d1fd1fd4cb90ffe2e079a0286574b025eee6a472b5a856d9a2a559ba76cf57" Dec 06 06:25:53 crc kubenswrapper[4958]: I1206 06:25:53.925021 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27d1fd1fd4cb90ffe2e079a0286574b025eee6a472b5a856d9a2a559ba76cf57"} err="failed to get container status \"27d1fd1fd4cb90ffe2e079a0286574b025eee6a472b5a856d9a2a559ba76cf57\": rpc error: code = NotFound desc = could not find container \"27d1fd1fd4cb90ffe2e079a0286574b025eee6a472b5a856d9a2a559ba76cf57\": container with ID starting with 27d1fd1fd4cb90ffe2e079a0286574b025eee6a472b5a856d9a2a559ba76cf57 not found: ID does not exist" Dec 06 06:25:53 crc kubenswrapper[4958]: I1206 06:25:53.925041 4958 scope.go:117] "RemoveContainer" containerID="08b0dbdba073a01f05eabbd73f089564f8f15e07f8ace8c7ed95b8b62719529c" Dec 06 06:25:53 crc kubenswrapper[4958]: E1206 06:25:53.925405 4958 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"08b0dbdba073a01f05eabbd73f089564f8f15e07f8ace8c7ed95b8b62719529c\": container with ID starting with 08b0dbdba073a01f05eabbd73f089564f8f15e07f8ace8c7ed95b8b62719529c not found: ID does not exist" containerID="08b0dbdba073a01f05eabbd73f089564f8f15e07f8ace8c7ed95b8b62719529c" Dec 06 06:25:53 crc kubenswrapper[4958]: I1206 06:25:53.925437 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08b0dbdba073a01f05eabbd73f089564f8f15e07f8ace8c7ed95b8b62719529c"} err="failed to get container status \"08b0dbdba073a01f05eabbd73f089564f8f15e07f8ace8c7ed95b8b62719529c\": rpc error: code = NotFound desc = could not find container \"08b0dbdba073a01f05eabbd73f089564f8f15e07f8ace8c7ed95b8b62719529c\": container with ID starting with 08b0dbdba073a01f05eabbd73f089564f8f15e07f8ace8c7ed95b8b62719529c not found: ID does not exist" Dec 06 06:25:55 crc kubenswrapper[4958]: I1206 06:25:55.775895 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c008ee5b-dd48-4dd8-a528-3fd3844314ca" path="/var/lib/kubelet/pods/c008ee5b-dd48-4dd8-a528-3fd3844314ca/volumes" Dec 06 06:28:09 crc kubenswrapper[4958]: I1206 06:28:09.866751 4958 patch_prober.go:28] interesting pod/machine-config-daemon-5ktnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 06:28:09 crc kubenswrapper[4958]: I1206 06:28:09.868443 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 06:28:39 crc kubenswrapper[4958]: I1206 06:28:39.866631 4958 patch_prober.go:28] interesting pod/machine-config-daemon-5ktnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 06:28:39 crc kubenswrapper[4958]: I1206 06:28:39.867278 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 06:29:09 crc kubenswrapper[4958]: I1206 06:29:09.866538 4958 patch_prober.go:28] interesting pod/machine-config-daemon-5ktnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 06:29:09 crc kubenswrapper[4958]: I1206 06:29:09.867080 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 06:29:09 crc kubenswrapper[4958]: I1206 06:29:09.867131 4958 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" Dec 06 06:29:09 crc kubenswrapper[4958]: I1206 06:29:09.867940 4958 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ecd78cbbb51041d906a6569c001623f2d46419c071cc8bef423041ee3d754b4e"} pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 06 06:29:09 crc kubenswrapper[4958]: I1206 06:29:09.867998 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" containerName="machine-config-daemon" containerID="cri-o://ecd78cbbb51041d906a6569c001623f2d46419c071cc8bef423041ee3d754b4e" gracePeriod=600 Dec 06 06:29:10 crc kubenswrapper[4958]: I1206 06:29:10.399350 4958 generic.go:334] "Generic (PLEG): container finished" podID="c13528c0-da5d-4d55-9155-2c29c33edfc4" containerID="ecd78cbbb51041d906a6569c001623f2d46419c071cc8bef423041ee3d754b4e" exitCode=0 Dec 06 06:29:10 crc kubenswrapper[4958]: I1206 06:29:10.399399 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" event={"ID":"c13528c0-da5d-4d55-9155-2c29c33edfc4","Type":"ContainerDied","Data":"ecd78cbbb51041d906a6569c001623f2d46419c071cc8bef423041ee3d754b4e"} Dec 06 06:29:10 crc kubenswrapper[4958]: I1206 06:29:10.399440 4958 scope.go:117] "RemoveContainer" containerID="e5f89a3f3ebb3189d7b7afed5d40ed149c6033267fe638ed5096dc97104ae6ad" Dec 06 06:29:13 crc kubenswrapper[4958]: I1206 06:29:13.438018 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" event={"ID":"c13528c0-da5d-4d55-9155-2c29c33edfc4","Type":"ContainerStarted","Data":"f265ce05f57a8068cc94a7942a1527aa2a691debe6890a7a9a9a78e3e4f27cf3"} Dec 06 06:30:00 crc kubenswrapper[4958]: I1206 06:30:00.150828 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416710-4t8hz"] Dec 06 06:30:00 crc kubenswrapper[4958]: E1206 06:30:00.152963 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c008ee5b-dd48-4dd8-a528-3fd3844314ca" containerName="extract-utilities" Dec 06 06:30:00 crc kubenswrapper[4958]: I1206 06:30:00.153066 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="c008ee5b-dd48-4dd8-a528-3fd3844314ca" containerName="extract-utilities" Dec 06 06:30:00 crc kubenswrapper[4958]: E1206 06:30:00.153165 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c008ee5b-dd48-4dd8-a528-3fd3844314ca" containerName="registry-server" Dec 06 06:30:00 crc kubenswrapper[4958]: I1206 06:30:00.153235 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="c008ee5b-dd48-4dd8-a528-3fd3844314ca" containerName="registry-server" Dec 06 06:30:00 crc kubenswrapper[4958]: E1206 06:30:00.153307 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c008ee5b-dd48-4dd8-a528-3fd3844314ca" containerName="extract-content" Dec 06 06:30:00 crc kubenswrapper[4958]: I1206 06:30:00.153372 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="c008ee5b-dd48-4dd8-a528-3fd3844314ca" containerName="extract-content" Dec 06 06:30:00 crc kubenswrapper[4958]: I1206 06:30:00.153681 4958 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="c008ee5b-dd48-4dd8-a528-3fd3844314ca" containerName="registry-server" Dec 06 06:30:00 crc kubenswrapper[4958]: I1206 06:30:00.154675 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416710-4t8hz" Dec 06 06:30:00 crc kubenswrapper[4958]: I1206 06:30:00.156946 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 06 06:30:00 crc kubenswrapper[4958]: I1206 06:30:00.157023 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 06 06:30:00 crc kubenswrapper[4958]: I1206 06:30:00.164800 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416710-4t8hz"] Dec 06 06:30:00 crc kubenswrapper[4958]: I1206 06:30:00.255764 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/26460c44-cd78-4d4c-b9fe-8dace0fba04b-secret-volume\") pod \"collect-profiles-29416710-4t8hz\" (UID: \"26460c44-cd78-4d4c-b9fe-8dace0fba04b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416710-4t8hz" Dec 06 06:30:00 crc kubenswrapper[4958]: I1206 06:30:00.256028 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/26460c44-cd78-4d4c-b9fe-8dace0fba04b-config-volume\") pod \"collect-profiles-29416710-4t8hz\" (UID: \"26460c44-cd78-4d4c-b9fe-8dace0fba04b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416710-4t8hz" Dec 06 06:30:00 crc kubenswrapper[4958]: I1206 06:30:00.256115 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xwxk\" (UniqueName: \"kubernetes.io/projected/26460c44-cd78-4d4c-b9fe-8dace0fba04b-kube-api-access-9xwxk\") pod \"collect-profiles-29416710-4t8hz\" (UID: \"26460c44-cd78-4d4c-b9fe-8dace0fba04b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416710-4t8hz" Dec 06 06:30:00 crc kubenswrapper[4958]: I1206 06:30:00.358910 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/26460c44-cd78-4d4c-b9fe-8dace0fba04b-config-volume\") pod \"collect-profiles-29416710-4t8hz\" (UID: \"26460c44-cd78-4d4c-b9fe-8dace0fba04b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416710-4t8hz" Dec 06 06:30:00 crc kubenswrapper[4958]: I1206 06:30:00.359060 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xwxk\" (UniqueName: \"kubernetes.io/projected/26460c44-cd78-4d4c-b9fe-8dace0fba04b-kube-api-access-9xwxk\") pod \"collect-profiles-29416710-4t8hz\" (UID: \"26460c44-cd78-4d4c-b9fe-8dace0fba04b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416710-4t8hz" Dec 06 06:30:00 crc kubenswrapper[4958]: I1206 06:30:00.359501 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/26460c44-cd78-4d4c-b9fe-8dace0fba04b-secret-volume\") pod \"collect-profiles-29416710-4t8hz\" (UID: \"26460c44-cd78-4d4c-b9fe-8dace0fba04b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416710-4t8hz" Dec 06 06:30:00 crc kubenswrapper[4958]: I1206 06:30:00.359920 
4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/26460c44-cd78-4d4c-b9fe-8dace0fba04b-config-volume\") pod \"collect-profiles-29416710-4t8hz\" (UID: \"26460c44-cd78-4d4c-b9fe-8dace0fba04b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416710-4t8hz" Dec 06 06:30:00 crc kubenswrapper[4958]: I1206 06:30:00.366934 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/26460c44-cd78-4d4c-b9fe-8dace0fba04b-secret-volume\") pod \"collect-profiles-29416710-4t8hz\" (UID: \"26460c44-cd78-4d4c-b9fe-8dace0fba04b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416710-4t8hz" Dec 06 06:30:00 crc kubenswrapper[4958]: I1206 06:30:00.379586 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xwxk\" (UniqueName: \"kubernetes.io/projected/26460c44-cd78-4d4c-b9fe-8dace0fba04b-kube-api-access-9xwxk\") pod \"collect-profiles-29416710-4t8hz\" (UID: \"26460c44-cd78-4d4c-b9fe-8dace0fba04b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416710-4t8hz" Dec 06 06:30:00 crc kubenswrapper[4958]: I1206 06:30:00.490573 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416710-4t8hz" Dec 06 06:30:00 crc kubenswrapper[4958]: I1206 06:30:00.907198 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416710-4t8hz"] Dec 06 06:30:01 crc kubenswrapper[4958]: I1206 06:30:01.934317 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416710-4t8hz" event={"ID":"26460c44-cd78-4d4c-b9fe-8dace0fba04b","Type":"ContainerStarted","Data":"72871d25b4f6e686f46373e08de5eb9e382a8d0dcb8e59df4be0c7eb6cb533ab"} Dec 06 06:30:01 crc kubenswrapper[4958]: I1206 06:30:01.934937 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416710-4t8hz" event={"ID":"26460c44-cd78-4d4c-b9fe-8dace0fba04b","Type":"ContainerStarted","Data":"f1e9db978f8cb6caec445f38dfd683e9f65dbfda3994350ff3f9f5d653e61ac4"} Dec 06 06:30:02 crc kubenswrapper[4958]: I1206 06:30:02.945561 4958 generic.go:334] "Generic (PLEG): container finished" podID="26460c44-cd78-4d4c-b9fe-8dace0fba04b" containerID="72871d25b4f6e686f46373e08de5eb9e382a8d0dcb8e59df4be0c7eb6cb533ab" exitCode=0 Dec 06 06:30:02 crc kubenswrapper[4958]: I1206 06:30:02.945626 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416710-4t8hz" event={"ID":"26460c44-cd78-4d4c-b9fe-8dace0fba04b","Type":"ContainerDied","Data":"72871d25b4f6e686f46373e08de5eb9e382a8d0dcb8e59df4be0c7eb6cb533ab"} Dec 06 06:30:04 crc kubenswrapper[4958]: I1206 06:30:04.393981 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416710-4t8hz" Dec 06 06:30:04 crc kubenswrapper[4958]: I1206 06:30:04.447402 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xwxk\" (UniqueName: \"kubernetes.io/projected/26460c44-cd78-4d4c-b9fe-8dace0fba04b-kube-api-access-9xwxk\") pod \"26460c44-cd78-4d4c-b9fe-8dace0fba04b\" (UID: \"26460c44-cd78-4d4c-b9fe-8dace0fba04b\") " Dec 06 06:30:04 crc kubenswrapper[4958]: I1206 06:30:04.447511 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/26460c44-cd78-4d4c-b9fe-8dace0fba04b-config-volume\") pod \"26460c44-cd78-4d4c-b9fe-8dace0fba04b\" (UID: \"26460c44-cd78-4d4c-b9fe-8dace0fba04b\") " Dec 06 06:30:04 crc kubenswrapper[4958]: I1206 06:30:04.447796 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/26460c44-cd78-4d4c-b9fe-8dace0fba04b-secret-volume\") pod \"26460c44-cd78-4d4c-b9fe-8dace0fba04b\" (UID: \"26460c44-cd78-4d4c-b9fe-8dace0fba04b\") " Dec 06 06:30:04 crc kubenswrapper[4958]: I1206 06:30:04.448469 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/26460c44-cd78-4d4c-b9fe-8dace0fba04b-config-volume" (OuterVolumeSpecName: "config-volume") pod "26460c44-cd78-4d4c-b9fe-8dace0fba04b" (UID: "26460c44-cd78-4d4c-b9fe-8dace0fba04b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:30:04 crc kubenswrapper[4958]: I1206 06:30:04.454899 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26460c44-cd78-4d4c-b9fe-8dace0fba04b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "26460c44-cd78-4d4c-b9fe-8dace0fba04b" (UID: "26460c44-cd78-4d4c-b9fe-8dace0fba04b"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:30:04 crc kubenswrapper[4958]: I1206 06:30:04.455770 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26460c44-cd78-4d4c-b9fe-8dace0fba04b-kube-api-access-9xwxk" (OuterVolumeSpecName: "kube-api-access-9xwxk") pod "26460c44-cd78-4d4c-b9fe-8dace0fba04b" (UID: "26460c44-cd78-4d4c-b9fe-8dace0fba04b"). InnerVolumeSpecName "kube-api-access-9xwxk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:30:04 crc kubenswrapper[4958]: I1206 06:30:04.550626 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xwxk\" (UniqueName: \"kubernetes.io/projected/26460c44-cd78-4d4c-b9fe-8dace0fba04b-kube-api-access-9xwxk\") on node \"crc\" DevicePath \"\"" Dec 06 06:30:04 crc kubenswrapper[4958]: I1206 06:30:04.550682 4958 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/26460c44-cd78-4d4c-b9fe-8dace0fba04b-config-volume\") on node \"crc\" DevicePath \"\"" Dec 06 06:30:04 crc kubenswrapper[4958]: I1206 06:30:04.550706 4958 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/26460c44-cd78-4d4c-b9fe-8dace0fba04b-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 06 06:30:04 crc kubenswrapper[4958]: I1206 06:30:04.970146 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416710-4t8hz" event={"ID":"26460c44-cd78-4d4c-b9fe-8dace0fba04b","Type":"ContainerDied","Data":"f1e9db978f8cb6caec445f38dfd683e9f65dbfda3994350ff3f9f5d653e61ac4"} Dec 06 06:30:04 crc kubenswrapper[4958]: I1206 06:30:04.970532 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1e9db978f8cb6caec445f38dfd683e9f65dbfda3994350ff3f9f5d653e61ac4" Dec 06 06:30:04 crc kubenswrapper[4958]: I1206 06:30:04.970214 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416710-4t8hz" Dec 06 06:30:05 crc kubenswrapper[4958]: I1206 06:30:05.481165 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416665-smrht"] Dec 06 06:30:05 crc kubenswrapper[4958]: I1206 06:30:05.490971 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416665-smrht"] Dec 06 06:30:05 crc kubenswrapper[4958]: I1206 06:30:05.778757 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab4d38c-3380-4f4f-90d6-286ad61d6067" path="/var/lib/kubelet/pods/3ab4d38c-3380-4f4f-90d6-286ad61d6067/volumes" Dec 06 06:30:32 crc kubenswrapper[4958]: I1206 06:30:32.097024 4958 scope.go:117] "RemoveContainer" containerID="846fac533291e6ed06a1bea832f228340b17d1442d0ad64e44d93ca99f719d20" Dec 06 06:30:55 crc kubenswrapper[4958]: I1206 06:30:55.226033 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-cb2qc"] Dec 06 06:30:55 crc kubenswrapper[4958]: E1206 06:30:55.227697 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26460c44-cd78-4d4c-b9fe-8dace0fba04b" containerName="collect-profiles" Dec 06 06:30:55 crc kubenswrapper[4958]: I1206 06:30:55.227731 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="26460c44-cd78-4d4c-b9fe-8dace0fba04b" containerName="collect-profiles" Dec 06 06:30:55 crc kubenswrapper[4958]: I1206 06:30:55.228246 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="26460c44-cd78-4d4c-b9fe-8dace0fba04b" containerName="collect-profiles" Dec 06 06:30:55 crc kubenswrapper[4958]: I1206 06:30:55.231123 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cb2qc" Dec 06 06:30:55 crc kubenswrapper[4958]: I1206 06:30:55.241660 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cb2qc"] Dec 06 06:30:55 crc kubenswrapper[4958]: I1206 06:30:55.386834 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea876b1c-e37f-4d45-8925-908e013acb62-catalog-content\") pod \"community-operators-cb2qc\" (UID: \"ea876b1c-e37f-4d45-8925-908e013acb62\") " pod="openshift-marketplace/community-operators-cb2qc" Dec 06 06:30:55 crc kubenswrapper[4958]: I1206 06:30:55.387025 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbb5r\" (UniqueName: \"kubernetes.io/projected/ea876b1c-e37f-4d45-8925-908e013acb62-kube-api-access-fbb5r\") pod \"community-operators-cb2qc\" (UID: \"ea876b1c-e37f-4d45-8925-908e013acb62\") " pod="openshift-marketplace/community-operators-cb2qc" Dec 06 06:30:55 crc kubenswrapper[4958]: I1206 06:30:55.387077 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea876b1c-e37f-4d45-8925-908e013acb62-utilities\") pod \"community-operators-cb2qc\" (UID: \"ea876b1c-e37f-4d45-8925-908e013acb62\") " pod="openshift-marketplace/community-operators-cb2qc" Dec 06 06:30:55 crc kubenswrapper[4958]: I1206 06:30:55.495703 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbb5r\" (UniqueName: \"kubernetes.io/projected/ea876b1c-e37f-4d45-8925-908e013acb62-kube-api-access-fbb5r\") pod \"community-operators-cb2qc\" (UID: \"ea876b1c-e37f-4d45-8925-908e013acb62\") " pod="openshift-marketplace/community-operators-cb2qc" Dec 06 06:30:55 crc kubenswrapper[4958]: I1206 06:30:55.495813 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea876b1c-e37f-4d45-8925-908e013acb62-utilities\") pod \"community-operators-cb2qc\" (UID: \"ea876b1c-e37f-4d45-8925-908e013acb62\") " pod="openshift-marketplace/community-operators-cb2qc" Dec 06 06:30:55 crc kubenswrapper[4958]: I1206 06:30:55.496282 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea876b1c-e37f-4d45-8925-908e013acb62-utilities\") pod \"community-operators-cb2qc\" (UID: \"ea876b1c-e37f-4d45-8925-908e013acb62\") " pod="openshift-marketplace/community-operators-cb2qc" Dec 06 06:30:55 crc kubenswrapper[4958]: I1206 06:30:55.496406 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea876b1c-e37f-4d45-8925-908e013acb62-catalog-content\") pod \"community-operators-cb2qc\" (UID: \"ea876b1c-e37f-4d45-8925-908e013acb62\") " pod="openshift-marketplace/community-operators-cb2qc" Dec 06 06:30:55 crc kubenswrapper[4958]: I1206 06:30:55.496701 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea876b1c-e37f-4d45-8925-908e013acb62-catalog-content\") pod \"community-operators-cb2qc\" (UID: \"ea876b1c-e37f-4d45-8925-908e013acb62\") " pod="openshift-marketplace/community-operators-cb2qc" Dec 06 06:30:55 crc kubenswrapper[4958]: I1206 06:30:55.522707 4958 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-fbb5r\" (UniqueName: \"kubernetes.io/projected/ea876b1c-e37f-4d45-8925-908e013acb62-kube-api-access-fbb5r\") pod \"community-operators-cb2qc\" (UID: \"ea876b1c-e37f-4d45-8925-908e013acb62\") " pod="openshift-marketplace/community-operators-cb2qc" Dec 06 06:30:55 crc kubenswrapper[4958]: I1206 06:30:55.561025 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cb2qc" Dec 06 06:30:56 crc kubenswrapper[4958]: I1206 06:30:56.100283 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cb2qc"] Dec 06 06:30:56 crc kubenswrapper[4958]: W1206 06:30:56.110977 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podea876b1c_e37f_4d45_8925_908e013acb62.slice/crio-b798ef061aae00df8ac167dda679211ef8de4696ae419bcfa6cd8b349482ba43 WatchSource:0}: Error finding container b798ef061aae00df8ac167dda679211ef8de4696ae419bcfa6cd8b349482ba43: Status 404 returned error can't find the container with id b798ef061aae00df8ac167dda679211ef8de4696ae419bcfa6cd8b349482ba43 Dec 06 06:30:56 crc kubenswrapper[4958]: I1206 06:30:56.527509 4958 generic.go:334] "Generic (PLEG): container finished" podID="ea876b1c-e37f-4d45-8925-908e013acb62" containerID="2079adc90d55eb2eff81e3c625aa3c1b0f8fa0fc673f7bc63eb4ab9ac08ef21e" exitCode=0 Dec 06 06:30:56 crc kubenswrapper[4958]: I1206 06:30:56.527561 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cb2qc" event={"ID":"ea876b1c-e37f-4d45-8925-908e013acb62","Type":"ContainerDied","Data":"2079adc90d55eb2eff81e3c625aa3c1b0f8fa0fc673f7bc63eb4ab9ac08ef21e"} Dec 06 06:30:56 crc kubenswrapper[4958]: I1206 06:30:56.527591 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cb2qc" event={"ID":"ea876b1c-e37f-4d45-8925-908e013acb62","Type":"ContainerStarted","Data":"b798ef061aae00df8ac167dda679211ef8de4696ae419bcfa6cd8b349482ba43"} Dec 06 06:30:56 crc kubenswrapper[4958]: I1206 06:30:56.531152 4958 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 06 06:30:58 crc kubenswrapper[4958]: I1206 06:30:58.553079 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cb2qc" event={"ID":"ea876b1c-e37f-4d45-8925-908e013acb62","Type":"ContainerStarted","Data":"f8c92f2ebd238d85429f39bf28e99bf6f5e1a35dbbb1b5a0bf1b140b8fdf2f98"} Dec 06 06:30:59 crc kubenswrapper[4958]: I1206 06:30:59.564346 4958 generic.go:334] "Generic (PLEG): container finished" podID="ea876b1c-e37f-4d45-8925-908e013acb62" containerID="f8c92f2ebd238d85429f39bf28e99bf6f5e1a35dbbb1b5a0bf1b140b8fdf2f98" exitCode=0 Dec 06 06:30:59 crc kubenswrapper[4958]: I1206 06:30:59.564393 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cb2qc" event={"ID":"ea876b1c-e37f-4d45-8925-908e013acb62","Type":"ContainerDied","Data":"f8c92f2ebd238d85429f39bf28e99bf6f5e1a35dbbb1b5a0bf1b140b8fdf2f98"} Dec 06 06:31:00 crc kubenswrapper[4958]: I1206 06:31:00.578059 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cb2qc" event={"ID":"ea876b1c-e37f-4d45-8925-908e013acb62","Type":"ContainerStarted","Data":"c0e667b0f75367900ef91737482ac03f38771a8017d6f661c34e90804e640b3f"} Dec 06 06:31:00 crc kubenswrapper[4958]: I1206 
06:31:00.608518 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-cb2qc" podStartSLOduration=2.066533107 podStartE2EDuration="5.60849639s" podCreationTimestamp="2025-12-06 06:30:55 +0000 UTC" firstStartedPulling="2025-12-06 06:30:56.529226307 +0000 UTC m=+3767.062997070" lastFinishedPulling="2025-12-06 06:31:00.07118959 +0000 UTC m=+3770.604960353" observedRunningTime="2025-12-06 06:31:00.598341526 +0000 UTC m=+3771.132112329" watchObservedRunningTime="2025-12-06 06:31:00.60849639 +0000 UTC m=+3771.142267153" Dec 06 06:31:05 crc kubenswrapper[4958]: I1206 06:31:05.561627 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-cb2qc" Dec 06 06:31:05 crc kubenswrapper[4958]: I1206 06:31:05.561973 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-cb2qc" Dec 06 06:31:05 crc kubenswrapper[4958]: I1206 06:31:05.610631 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-cb2qc" Dec 06 06:31:05 crc kubenswrapper[4958]: I1206 06:31:05.683221 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-cb2qc" Dec 06 06:31:05 crc kubenswrapper[4958]: I1206 06:31:05.852324 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cb2qc"] Dec 06 06:31:07 crc kubenswrapper[4958]: I1206 06:31:07.637317 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-cb2qc" podUID="ea876b1c-e37f-4d45-8925-908e013acb62" containerName="registry-server" containerID="cri-o://c0e667b0f75367900ef91737482ac03f38771a8017d6f661c34e90804e640b3f" gracePeriod=2 Dec 06 06:31:08 crc kubenswrapper[4958]: I1206 06:31:08.143659 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cb2qc" Dec 06 06:31:08 crc kubenswrapper[4958]: I1206 06:31:08.273337 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea876b1c-e37f-4d45-8925-908e013acb62-utilities\") pod \"ea876b1c-e37f-4d45-8925-908e013acb62\" (UID: \"ea876b1c-e37f-4d45-8925-908e013acb62\") " Dec 06 06:31:08 crc kubenswrapper[4958]: I1206 06:31:08.273437 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea876b1c-e37f-4d45-8925-908e013acb62-catalog-content\") pod \"ea876b1c-e37f-4d45-8925-908e013acb62\" (UID: \"ea876b1c-e37f-4d45-8925-908e013acb62\") " Dec 06 06:31:08 crc kubenswrapper[4958]: I1206 06:31:08.273557 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fbb5r\" (UniqueName: \"kubernetes.io/projected/ea876b1c-e37f-4d45-8925-908e013acb62-kube-api-access-fbb5r\") pod \"ea876b1c-e37f-4d45-8925-908e013acb62\" (UID: \"ea876b1c-e37f-4d45-8925-908e013acb62\") " Dec 06 06:31:08 crc kubenswrapper[4958]: I1206 06:31:08.274451 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea876b1c-e37f-4d45-8925-908e013acb62-utilities" (OuterVolumeSpecName: "utilities") pod "ea876b1c-e37f-4d45-8925-908e013acb62" (UID: "ea876b1c-e37f-4d45-8925-908e013acb62"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:31:08 crc kubenswrapper[4958]: I1206 06:31:08.279369 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea876b1c-e37f-4d45-8925-908e013acb62-kube-api-access-fbb5r" (OuterVolumeSpecName: "kube-api-access-fbb5r") pod "ea876b1c-e37f-4d45-8925-908e013acb62" (UID: "ea876b1c-e37f-4d45-8925-908e013acb62"). InnerVolumeSpecName "kube-api-access-fbb5r". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:31:08 crc kubenswrapper[4958]: I1206 06:31:08.325257 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea876b1c-e37f-4d45-8925-908e013acb62-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ea876b1c-e37f-4d45-8925-908e013acb62" (UID: "ea876b1c-e37f-4d45-8925-908e013acb62"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:31:08 crc kubenswrapper[4958]: I1206 06:31:08.376134 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ea876b1c-e37f-4d45-8925-908e013acb62-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 06:31:08 crc kubenswrapper[4958]: I1206 06:31:08.376177 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ea876b1c-e37f-4d45-8925-908e013acb62-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 06:31:08 crc kubenswrapper[4958]: I1206 06:31:08.376190 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fbb5r\" (UniqueName: \"kubernetes.io/projected/ea876b1c-e37f-4d45-8925-908e013acb62-kube-api-access-fbb5r\") on node \"crc\" DevicePath \"\"" Dec 06 06:31:08 crc kubenswrapper[4958]: I1206 06:31:08.656011 4958 generic.go:334] "Generic (PLEG): container finished" podID="ea876b1c-e37f-4d45-8925-908e013acb62" containerID="c0e667b0f75367900ef91737482ac03f38771a8017d6f661c34e90804e640b3f" exitCode=0 Dec 06 06:31:08 crc kubenswrapper[4958]: I1206 06:31:08.656068 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cb2qc" event={"ID":"ea876b1c-e37f-4d45-8925-908e013acb62","Type":"ContainerDied","Data":"c0e667b0f75367900ef91737482ac03f38771a8017d6f661c34e90804e640b3f"} Dec 06 06:31:08 crc kubenswrapper[4958]: I1206 06:31:08.656114 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cb2qc" event={"ID":"ea876b1c-e37f-4d45-8925-908e013acb62","Type":"ContainerDied","Data":"b798ef061aae00df8ac167dda679211ef8de4696ae419bcfa6cd8b349482ba43"} Dec 06 06:31:08 crc kubenswrapper[4958]: I1206 06:31:08.656145 4958 scope.go:117] "RemoveContainer" containerID="c0e667b0f75367900ef91737482ac03f38771a8017d6f661c34e90804e640b3f" Dec 06 06:31:08 crc kubenswrapper[4958]: I1206 06:31:08.656657 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cb2qc" Dec 06 06:31:08 crc kubenswrapper[4958]: I1206 06:31:08.692801 4958 scope.go:117] "RemoveContainer" containerID="f8c92f2ebd238d85429f39bf28e99bf6f5e1a35dbbb1b5a0bf1b140b8fdf2f98" Dec 06 06:31:08 crc kubenswrapper[4958]: I1206 06:31:08.703460 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cb2qc"] Dec 06 06:31:08 crc kubenswrapper[4958]: I1206 06:31:08.715162 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-cb2qc"] Dec 06 06:31:08 crc kubenswrapper[4958]: I1206 06:31:08.726203 4958 scope.go:117] "RemoveContainer" containerID="2079adc90d55eb2eff81e3c625aa3c1b0f8fa0fc673f7bc63eb4ab9ac08ef21e" Dec 06 06:31:08 crc kubenswrapper[4958]: I1206 06:31:08.781570 4958 scope.go:117] "RemoveContainer" containerID="c0e667b0f75367900ef91737482ac03f38771a8017d6f661c34e90804e640b3f" Dec 06 06:31:08 crc kubenswrapper[4958]: E1206 06:31:08.782133 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0e667b0f75367900ef91737482ac03f38771a8017d6f661c34e90804e640b3f\": container with ID starting with c0e667b0f75367900ef91737482ac03f38771a8017d6f661c34e90804e640b3f not found: ID does not exist" containerID="c0e667b0f75367900ef91737482ac03f38771a8017d6f661c34e90804e640b3f" Dec 06 06:31:08 crc kubenswrapper[4958]: I1206 06:31:08.782195 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0e667b0f75367900ef91737482ac03f38771a8017d6f661c34e90804e640b3f"} err="failed to get container status \"c0e667b0f75367900ef91737482ac03f38771a8017d6f661c34e90804e640b3f\": rpc error: code = NotFound desc = could not find container \"c0e667b0f75367900ef91737482ac03f38771a8017d6f661c34e90804e640b3f\": container with ID starting with c0e667b0f75367900ef91737482ac03f38771a8017d6f661c34e90804e640b3f not found: ID does not exist" Dec 06 06:31:08 crc kubenswrapper[4958]: I1206 06:31:08.782227 4958 scope.go:117] "RemoveContainer" containerID="f8c92f2ebd238d85429f39bf28e99bf6f5e1a35dbbb1b5a0bf1b140b8fdf2f98" Dec 06 06:31:08 crc kubenswrapper[4958]: E1206 06:31:08.782672 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8c92f2ebd238d85429f39bf28e99bf6f5e1a35dbbb1b5a0bf1b140b8fdf2f98\": container with ID starting with f8c92f2ebd238d85429f39bf28e99bf6f5e1a35dbbb1b5a0bf1b140b8fdf2f98 not found: ID does not exist" containerID="f8c92f2ebd238d85429f39bf28e99bf6f5e1a35dbbb1b5a0bf1b140b8fdf2f98" Dec 06 06:31:08 crc kubenswrapper[4958]: I1206 06:31:08.782741 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8c92f2ebd238d85429f39bf28e99bf6f5e1a35dbbb1b5a0bf1b140b8fdf2f98"} err="failed to get container status \"f8c92f2ebd238d85429f39bf28e99bf6f5e1a35dbbb1b5a0bf1b140b8fdf2f98\": rpc error: code = NotFound desc = could not find container \"f8c92f2ebd238d85429f39bf28e99bf6f5e1a35dbbb1b5a0bf1b140b8fdf2f98\": container with ID starting with f8c92f2ebd238d85429f39bf28e99bf6f5e1a35dbbb1b5a0bf1b140b8fdf2f98 not found: ID does not exist" Dec 06 06:31:08 crc kubenswrapper[4958]: I1206 06:31:08.782783 4958 scope.go:117] "RemoveContainer" containerID="2079adc90d55eb2eff81e3c625aa3c1b0f8fa0fc673f7bc63eb4ab9ac08ef21e" Dec 06 06:31:08 crc kubenswrapper[4958]: E1206 06:31:08.783126 4958 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"2079adc90d55eb2eff81e3c625aa3c1b0f8fa0fc673f7bc63eb4ab9ac08ef21e\": container with ID starting with 2079adc90d55eb2eff81e3c625aa3c1b0f8fa0fc673f7bc63eb4ab9ac08ef21e not found: ID does not exist" containerID="2079adc90d55eb2eff81e3c625aa3c1b0f8fa0fc673f7bc63eb4ab9ac08ef21e" Dec 06 06:31:08 crc kubenswrapper[4958]: I1206 06:31:08.783172 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2079adc90d55eb2eff81e3c625aa3c1b0f8fa0fc673f7bc63eb4ab9ac08ef21e"} err="failed to get container status \"2079adc90d55eb2eff81e3c625aa3c1b0f8fa0fc673f7bc63eb4ab9ac08ef21e\": rpc error: code = NotFound desc = could not find container \"2079adc90d55eb2eff81e3c625aa3c1b0f8fa0fc673f7bc63eb4ab9ac08ef21e\": container with ID starting with 2079adc90d55eb2eff81e3c625aa3c1b0f8fa0fc673f7bc63eb4ab9ac08ef21e not found: ID does not exist" Dec 06 06:31:09 crc kubenswrapper[4958]: I1206 06:31:09.775910 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea876b1c-e37f-4d45-8925-908e013acb62" path="/var/lib/kubelet/pods/ea876b1c-e37f-4d45-8925-908e013acb62/volumes" Dec 06 06:31:39 crc kubenswrapper[4958]: I1206 06:31:39.866038 4958 patch_prober.go:28] interesting pod/machine-config-daemon-5ktnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 06:31:39 crc kubenswrapper[4958]: I1206 06:31:39.866431 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 06:32:09 crc kubenswrapper[4958]: I1206 06:32:09.866031 4958 patch_prober.go:28] interesting pod/machine-config-daemon-5ktnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 06:32:09 crc kubenswrapper[4958]: I1206 06:32:09.867765 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 06:32:39 crc kubenswrapper[4958]: I1206 06:32:39.865901 4958 patch_prober.go:28] interesting pod/machine-config-daemon-5ktnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 06:32:39 crc kubenswrapper[4958]: I1206 06:32:39.866353 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 06:32:39 crc kubenswrapper[4958]: I1206 06:32:39.866391 4958 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" Dec 06 06:32:39 crc kubenswrapper[4958]: I1206 06:32:39.867136 4958 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f265ce05f57a8068cc94a7942a1527aa2a691debe6890a7a9a9a78e3e4f27cf3"} pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 06 06:32:39 crc kubenswrapper[4958]: I1206 06:32:39.867191 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" containerName="machine-config-daemon" containerID="cri-o://f265ce05f57a8068cc94a7942a1527aa2a691debe6890a7a9a9a78e3e4f27cf3" gracePeriod=600 Dec 06 06:32:39 crc kubenswrapper[4958]: E1206 06:32:39.988277 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 06:32:40 crc kubenswrapper[4958]: I1206 06:32:40.579812 4958 generic.go:334] "Generic (PLEG): container finished" podID="c13528c0-da5d-4d55-9155-2c29c33edfc4" containerID="f265ce05f57a8068cc94a7942a1527aa2a691debe6890a7a9a9a78e3e4f27cf3" exitCode=0 Dec 06 06:32:40 crc kubenswrapper[4958]: I1206 06:32:40.579900 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" event={"ID":"c13528c0-da5d-4d55-9155-2c29c33edfc4","Type":"ContainerDied","Data":"f265ce05f57a8068cc94a7942a1527aa2a691debe6890a7a9a9a78e3e4f27cf3"} Dec 06 06:32:40 crc kubenswrapper[4958]: I1206 06:32:40.580152 4958 scope.go:117] "RemoveContainer" containerID="ecd78cbbb51041d906a6569c001623f2d46419c071cc8bef423041ee3d754b4e" Dec 06 06:32:40 crc kubenswrapper[4958]: I1206 06:32:40.580799 4958 scope.go:117] "RemoveContainer" containerID="f265ce05f57a8068cc94a7942a1527aa2a691debe6890a7a9a9a78e3e4f27cf3" Dec 06 06:32:40 crc kubenswrapper[4958]: E1206 06:32:40.581030 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 06:32:50 crc kubenswrapper[4958]: I1206 06:32:50.762701 4958 scope.go:117] "RemoveContainer" containerID="f265ce05f57a8068cc94a7942a1527aa2a691debe6890a7a9a9a78e3e4f27cf3" Dec 06 06:32:50 crc kubenswrapper[4958]: E1206 06:32:50.763971 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 06:33:05 crc 
kubenswrapper[4958]: I1206 06:33:05.762939 4958 scope.go:117] "RemoveContainer" containerID="f265ce05f57a8068cc94a7942a1527aa2a691debe6890a7a9a9a78e3e4f27cf3" Dec 06 06:33:05 crc kubenswrapper[4958]: E1206 06:33:05.765864 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 06:33:16 crc kubenswrapper[4958]: I1206 06:33:16.127738 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/swift-proxy-588cbd45c9-xblwx" podUID="aae69e62-83f7-47d4-aecd-e883ed84a6ac" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Dec 06 06:33:19 crc kubenswrapper[4958]: I1206 06:33:19.768870 4958 scope.go:117] "RemoveContainer" containerID="f265ce05f57a8068cc94a7942a1527aa2a691debe6890a7a9a9a78e3e4f27cf3" Dec 06 06:33:19 crc kubenswrapper[4958]: E1206 06:33:19.769631 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 06:33:33 crc kubenswrapper[4958]: I1206 06:33:33.762236 4958 scope.go:117] "RemoveContainer" containerID="f265ce05f57a8068cc94a7942a1527aa2a691debe6890a7a9a9a78e3e4f27cf3" Dec 06 06:33:33 crc kubenswrapper[4958]: E1206 06:33:33.763578 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 06:33:44 crc kubenswrapper[4958]: I1206 06:33:44.761801 4958 scope.go:117] "RemoveContainer" containerID="f265ce05f57a8068cc94a7942a1527aa2a691debe6890a7a9a9a78e3e4f27cf3" Dec 06 06:33:44 crc kubenswrapper[4958]: E1206 06:33:44.762817 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 06:33:58 crc kubenswrapper[4958]: I1206 06:33:58.762359 4958 scope.go:117] "RemoveContainer" containerID="f265ce05f57a8068cc94a7942a1527aa2a691debe6890a7a9a9a78e3e4f27cf3" Dec 06 06:33:58 crc kubenswrapper[4958]: E1206 06:33:58.763077 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 06:34:09 crc kubenswrapper[4958]: I1206 06:34:09.773204 4958 scope.go:117] "RemoveContainer" containerID="f265ce05f57a8068cc94a7942a1527aa2a691debe6890a7a9a9a78e3e4f27cf3" Dec 06 06:34:09 crc kubenswrapper[4958]: E1206 06:34:09.774305 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 06:34:12 crc kubenswrapper[4958]: I1206 06:34:12.734927 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="428c09d2-3c2a-4562-9295-3cf3da179f40" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Dec 06 06:34:14 crc kubenswrapper[4958]: I1206 06:34:14.247759 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-pvdb2" podUID="f44e552e-a8cb-4abf-bb5c-cfbde43b518b" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 06 06:34:20 crc kubenswrapper[4958]: I1206 06:34:20.762993 4958 scope.go:117] "RemoveContainer" containerID="f265ce05f57a8068cc94a7942a1527aa2a691debe6890a7a9a9a78e3e4f27cf3" Dec 06 06:34:20 crc kubenswrapper[4958]: E1206 06:34:20.764422 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 06:34:34 crc kubenswrapper[4958]: I1206 06:34:34.762744 4958 scope.go:117] "RemoveContainer" containerID="f265ce05f57a8068cc94a7942a1527aa2a691debe6890a7a9a9a78e3e4f27cf3" Dec 06 06:34:34 crc kubenswrapper[4958]: E1206 06:34:34.763917 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 06:34:49 crc kubenswrapper[4958]: I1206 06:34:49.772281 4958 scope.go:117] "RemoveContainer" containerID="f265ce05f57a8068cc94a7942a1527aa2a691debe6890a7a9a9a78e3e4f27cf3" Dec 06 06:34:49 crc kubenswrapper[4958]: E1206 06:34:49.773062 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 06:34:55 crc kubenswrapper[4958]: I1206 06:34:55.421038 4958 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-marketplace/redhat-marketplace-5mchd"] Dec 06 06:34:55 crc kubenswrapper[4958]: E1206 06:34:55.422035 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea876b1c-e37f-4d45-8925-908e013acb62" containerName="registry-server" Dec 06 06:34:55 crc kubenswrapper[4958]: I1206 06:34:55.422052 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea876b1c-e37f-4d45-8925-908e013acb62" containerName="registry-server" Dec 06 06:34:55 crc kubenswrapper[4958]: E1206 06:34:55.422070 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea876b1c-e37f-4d45-8925-908e013acb62" containerName="extract-content" Dec 06 06:34:55 crc kubenswrapper[4958]: I1206 06:34:55.422078 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea876b1c-e37f-4d45-8925-908e013acb62" containerName="extract-content" Dec 06 06:34:55 crc kubenswrapper[4958]: E1206 06:34:55.422096 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea876b1c-e37f-4d45-8925-908e013acb62" containerName="extract-utilities" Dec 06 06:34:55 crc kubenswrapper[4958]: I1206 06:34:55.422103 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea876b1c-e37f-4d45-8925-908e013acb62" containerName="extract-utilities" Dec 06 06:34:55 crc kubenswrapper[4958]: I1206 06:34:55.422293 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea876b1c-e37f-4d45-8925-908e013acb62" containerName="registry-server" Dec 06 06:34:55 crc kubenswrapper[4958]: I1206 06:34:55.423990 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5mchd" Dec 06 06:34:55 crc kubenswrapper[4958]: I1206 06:34:55.436159 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5mchd"] Dec 06 06:34:55 crc kubenswrapper[4958]: I1206 06:34:55.555573 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ck795\" (UniqueName: \"kubernetes.io/projected/21183ab7-45f6-43f6-9f09-51dc2c7f2964-kube-api-access-ck795\") pod \"redhat-marketplace-5mchd\" (UID: \"21183ab7-45f6-43f6-9f09-51dc2c7f2964\") " pod="openshift-marketplace/redhat-marketplace-5mchd" Dec 06 06:34:55 crc kubenswrapper[4958]: I1206 06:34:55.555653 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21183ab7-45f6-43f6-9f09-51dc2c7f2964-utilities\") pod \"redhat-marketplace-5mchd\" (UID: \"21183ab7-45f6-43f6-9f09-51dc2c7f2964\") " pod="openshift-marketplace/redhat-marketplace-5mchd" Dec 06 06:34:55 crc kubenswrapper[4958]: I1206 06:34:55.556197 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21183ab7-45f6-43f6-9f09-51dc2c7f2964-catalog-content\") pod \"redhat-marketplace-5mchd\" (UID: \"21183ab7-45f6-43f6-9f09-51dc2c7f2964\") " pod="openshift-marketplace/redhat-marketplace-5mchd" Dec 06 06:34:55 crc kubenswrapper[4958]: I1206 06:34:55.658864 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21183ab7-45f6-43f6-9f09-51dc2c7f2964-catalog-content\") pod \"redhat-marketplace-5mchd\" (UID: \"21183ab7-45f6-43f6-9f09-51dc2c7f2964\") " pod="openshift-marketplace/redhat-marketplace-5mchd" Dec 06 06:34:55 crc kubenswrapper[4958]: I1206 06:34:55.658985 4958 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ck795\" (UniqueName: \"kubernetes.io/projected/21183ab7-45f6-43f6-9f09-51dc2c7f2964-kube-api-access-ck795\") pod \"redhat-marketplace-5mchd\" (UID: \"21183ab7-45f6-43f6-9f09-51dc2c7f2964\") " pod="openshift-marketplace/redhat-marketplace-5mchd" Dec 06 06:34:55 crc kubenswrapper[4958]: I1206 06:34:55.659014 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21183ab7-45f6-43f6-9f09-51dc2c7f2964-utilities\") pod \"redhat-marketplace-5mchd\" (UID: \"21183ab7-45f6-43f6-9f09-51dc2c7f2964\") " pod="openshift-marketplace/redhat-marketplace-5mchd" Dec 06 06:34:55 crc kubenswrapper[4958]: I1206 06:34:55.659465 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21183ab7-45f6-43f6-9f09-51dc2c7f2964-catalog-content\") pod \"redhat-marketplace-5mchd\" (UID: \"21183ab7-45f6-43f6-9f09-51dc2c7f2964\") " pod="openshift-marketplace/redhat-marketplace-5mchd" Dec 06 06:34:55 crc kubenswrapper[4958]: I1206 06:34:55.659596 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21183ab7-45f6-43f6-9f09-51dc2c7f2964-utilities\") pod \"redhat-marketplace-5mchd\" (UID: \"21183ab7-45f6-43f6-9f09-51dc2c7f2964\") " pod="openshift-marketplace/redhat-marketplace-5mchd" Dec 06 06:34:55 crc kubenswrapper[4958]: I1206 06:34:55.681264 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ck795\" (UniqueName: \"kubernetes.io/projected/21183ab7-45f6-43f6-9f09-51dc2c7f2964-kube-api-access-ck795\") pod \"redhat-marketplace-5mchd\" (UID: \"21183ab7-45f6-43f6-9f09-51dc2c7f2964\") " pod="openshift-marketplace/redhat-marketplace-5mchd" Dec 06 06:34:55 crc kubenswrapper[4958]: I1206 06:34:55.750896 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5mchd" Dec 06 06:34:56 crc kubenswrapper[4958]: I1206 06:34:56.311315 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5mchd"] Dec 06 06:34:56 crc kubenswrapper[4958]: I1206 06:34:56.996442 4958 generic.go:334] "Generic (PLEG): container finished" podID="21183ab7-45f6-43f6-9f09-51dc2c7f2964" containerID="a84bc77bb13bc89fdf4505c1afe83097fcaa0b4d6f92f836fc0833b280e4e7a2" exitCode=0 Dec 06 06:34:56 crc kubenswrapper[4958]: I1206 06:34:56.996510 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5mchd" event={"ID":"21183ab7-45f6-43f6-9f09-51dc2c7f2964","Type":"ContainerDied","Data":"a84bc77bb13bc89fdf4505c1afe83097fcaa0b4d6f92f836fc0833b280e4e7a2"} Dec 06 06:34:56 crc kubenswrapper[4958]: I1206 06:34:56.996743 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5mchd" event={"ID":"21183ab7-45f6-43f6-9f09-51dc2c7f2964","Type":"ContainerStarted","Data":"9be8d0cfecdfc11cb24d8b17aeeccbeee8a3953092493bcbb7777c1abd45bedd"} Dec 06 06:34:59 crc kubenswrapper[4958]: I1206 06:34:59.019938 4958 generic.go:334] "Generic (PLEG): container finished" podID="21183ab7-45f6-43f6-9f09-51dc2c7f2964" containerID="50e1f97e10990abd4c9fdb5a59ac7aa8f8b4c548455170c94ad70501601524d0" exitCode=0 Dec 06 06:34:59 crc kubenswrapper[4958]: I1206 06:34:59.020032 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5mchd" event={"ID":"21183ab7-45f6-43f6-9f09-51dc2c7f2964","Type":"ContainerDied","Data":"50e1f97e10990abd4c9fdb5a59ac7aa8f8b4c548455170c94ad70501601524d0"} Dec 06 06:35:01 crc kubenswrapper[4958]: I1206 06:35:01.044892 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5mchd" event={"ID":"21183ab7-45f6-43f6-9f09-51dc2c7f2964","Type":"ContainerStarted","Data":"7220eef2cdbfb209bf405f56c0eefe5b3d4e6ed9760fe9530aebe77f11fe3c98"} Dec 06 06:35:01 crc kubenswrapper[4958]: I1206 06:35:01.074047 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5mchd" podStartSLOduration=3.187203784 podStartE2EDuration="6.074010091s" podCreationTimestamp="2025-12-06 06:34:55 +0000 UTC" firstStartedPulling="2025-12-06 06:34:56.998215491 +0000 UTC m=+4007.531986254" lastFinishedPulling="2025-12-06 06:34:59.885021758 +0000 UTC m=+4010.418792561" observedRunningTime="2025-12-06 06:35:01.060692232 +0000 UTC m=+4011.594463005" watchObservedRunningTime="2025-12-06 06:35:01.074010091 +0000 UTC m=+4011.607780864" Dec 06 06:35:02 crc kubenswrapper[4958]: I1206 06:35:02.762453 4958 scope.go:117] "RemoveContainer" containerID="f265ce05f57a8068cc94a7942a1527aa2a691debe6890a7a9a9a78e3e4f27cf3" Dec 06 06:35:02 crc kubenswrapper[4958]: E1206 06:35:02.763155 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 06:35:05 crc kubenswrapper[4958]: I1206 06:35:05.751317 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5mchd" Dec 06 
06:35:05 crc kubenswrapper[4958]: I1206 06:35:05.752002 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5mchd" Dec 06 06:35:05 crc kubenswrapper[4958]: I1206 06:35:05.816718 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5mchd" Dec 06 06:35:06 crc kubenswrapper[4958]: I1206 06:35:06.137627 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5mchd" Dec 06 06:35:06 crc kubenswrapper[4958]: I1206 06:35:06.179333 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5mchd"] Dec 06 06:35:08 crc kubenswrapper[4958]: I1206 06:35:08.107963 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5mchd" podUID="21183ab7-45f6-43f6-9f09-51dc2c7f2964" containerName="registry-server" containerID="cri-o://7220eef2cdbfb209bf405f56c0eefe5b3d4e6ed9760fe9530aebe77f11fe3c98" gracePeriod=2 Dec 06 06:35:09 crc kubenswrapper[4958]: I1206 06:35:09.118778 4958 generic.go:334] "Generic (PLEG): container finished" podID="21183ab7-45f6-43f6-9f09-51dc2c7f2964" containerID="7220eef2cdbfb209bf405f56c0eefe5b3d4e6ed9760fe9530aebe77f11fe3c98" exitCode=0 Dec 06 06:35:09 crc kubenswrapper[4958]: I1206 06:35:09.118829 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5mchd" event={"ID":"21183ab7-45f6-43f6-9f09-51dc2c7f2964","Type":"ContainerDied","Data":"7220eef2cdbfb209bf405f56c0eefe5b3d4e6ed9760fe9530aebe77f11fe3c98"} Dec 06 06:35:09 crc kubenswrapper[4958]: I1206 06:35:09.464702 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5mchd" Dec 06 06:35:09 crc kubenswrapper[4958]: I1206 06:35:09.556442 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21183ab7-45f6-43f6-9f09-51dc2c7f2964-utilities\") pod \"21183ab7-45f6-43f6-9f09-51dc2c7f2964\" (UID: \"21183ab7-45f6-43f6-9f09-51dc2c7f2964\") " Dec 06 06:35:09 crc kubenswrapper[4958]: I1206 06:35:09.556917 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ck795\" (UniqueName: \"kubernetes.io/projected/21183ab7-45f6-43f6-9f09-51dc2c7f2964-kube-api-access-ck795\") pod \"21183ab7-45f6-43f6-9f09-51dc2c7f2964\" (UID: \"21183ab7-45f6-43f6-9f09-51dc2c7f2964\") " Dec 06 06:35:09 crc kubenswrapper[4958]: I1206 06:35:09.556969 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21183ab7-45f6-43f6-9f09-51dc2c7f2964-catalog-content\") pod \"21183ab7-45f6-43f6-9f09-51dc2c7f2964\" (UID: \"21183ab7-45f6-43f6-9f09-51dc2c7f2964\") " Dec 06 06:35:09 crc kubenswrapper[4958]: I1206 06:35:09.557590 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21183ab7-45f6-43f6-9f09-51dc2c7f2964-utilities" (OuterVolumeSpecName: "utilities") pod "21183ab7-45f6-43f6-9f09-51dc2c7f2964" (UID: "21183ab7-45f6-43f6-9f09-51dc2c7f2964"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:35:09 crc kubenswrapper[4958]: I1206 06:35:09.558597 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21183ab7-45f6-43f6-9f09-51dc2c7f2964-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 06:35:09 crc kubenswrapper[4958]: I1206 06:35:09.565809 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21183ab7-45f6-43f6-9f09-51dc2c7f2964-kube-api-access-ck795" (OuterVolumeSpecName: "kube-api-access-ck795") pod "21183ab7-45f6-43f6-9f09-51dc2c7f2964" (UID: "21183ab7-45f6-43f6-9f09-51dc2c7f2964"). InnerVolumeSpecName "kube-api-access-ck795". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:35:09 crc kubenswrapper[4958]: I1206 06:35:09.587075 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21183ab7-45f6-43f6-9f09-51dc2c7f2964-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "21183ab7-45f6-43f6-9f09-51dc2c7f2964" (UID: "21183ab7-45f6-43f6-9f09-51dc2c7f2964"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:35:09 crc kubenswrapper[4958]: I1206 06:35:09.660362 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ck795\" (UniqueName: \"kubernetes.io/projected/21183ab7-45f6-43f6-9f09-51dc2c7f2964-kube-api-access-ck795\") on node \"crc\" DevicePath \"\"" Dec 06 06:35:09 crc kubenswrapper[4958]: I1206 06:35:09.660432 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21183ab7-45f6-43f6-9f09-51dc2c7f2964-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 06:35:10 crc kubenswrapper[4958]: I1206 06:35:10.131451 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5mchd" event={"ID":"21183ab7-45f6-43f6-9f09-51dc2c7f2964","Type":"ContainerDied","Data":"9be8d0cfecdfc11cb24d8b17aeeccbeee8a3953092493bcbb7777c1abd45bedd"} Dec 06 06:35:10 crc kubenswrapper[4958]: I1206 06:35:10.131574 4958 scope.go:117] "RemoveContainer" containerID="7220eef2cdbfb209bf405f56c0eefe5b3d4e6ed9760fe9530aebe77f11fe3c98" Dec 06 06:35:10 crc kubenswrapper[4958]: I1206 06:35:10.131646 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5mchd" Dec 06 06:35:10 crc kubenswrapper[4958]: I1206 06:35:10.158822 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5mchd"] Dec 06 06:35:10 crc kubenswrapper[4958]: I1206 06:35:10.166932 4958 scope.go:117] "RemoveContainer" containerID="50e1f97e10990abd4c9fdb5a59ac7aa8f8b4c548455170c94ad70501601524d0" Dec 06 06:35:10 crc kubenswrapper[4958]: I1206 06:35:10.169169 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5mchd"] Dec 06 06:35:10 crc kubenswrapper[4958]: I1206 06:35:10.191297 4958 scope.go:117] "RemoveContainer" containerID="a84bc77bb13bc89fdf4505c1afe83097fcaa0b4d6f92f836fc0833b280e4e7a2" Dec 06 06:35:11 crc kubenswrapper[4958]: I1206 06:35:11.772958 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21183ab7-45f6-43f6-9f09-51dc2c7f2964" path="/var/lib/kubelet/pods/21183ab7-45f6-43f6-9f09-51dc2c7f2964/volumes" Dec 06 06:35:13 crc kubenswrapper[4958]: I1206 06:35:13.563464 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-2bgl9"] Dec 06 06:35:13 crc kubenswrapper[4958]: E1206 06:35:13.565167 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21183ab7-45f6-43f6-9f09-51dc2c7f2964" containerName="extract-utilities" Dec 06 06:35:13 crc kubenswrapper[4958]: I1206 06:35:13.565214 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="21183ab7-45f6-43f6-9f09-51dc2c7f2964" containerName="extract-utilities" Dec 06 06:35:13 crc kubenswrapper[4958]: E1206 06:35:13.565281 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21183ab7-45f6-43f6-9f09-51dc2c7f2964" containerName="registry-server" Dec 06 06:35:13 crc kubenswrapper[4958]: I1206 06:35:13.565290 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="21183ab7-45f6-43f6-9f09-51dc2c7f2964" containerName="registry-server" Dec 06 06:35:13 crc kubenswrapper[4958]: E1206 06:35:13.565310 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21183ab7-45f6-43f6-9f09-51dc2c7f2964" containerName="extract-content" Dec 06 06:35:13 crc kubenswrapper[4958]: I1206 06:35:13.565319 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="21183ab7-45f6-43f6-9f09-51dc2c7f2964" containerName="extract-content" Dec 06 06:35:13 crc kubenswrapper[4958]: I1206 06:35:13.565681 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="21183ab7-45f6-43f6-9f09-51dc2c7f2964" containerName="registry-server" Dec 06 06:35:13 crc kubenswrapper[4958]: I1206 06:35:13.569442 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2bgl9" Dec 06 06:35:13 crc kubenswrapper[4958]: I1206 06:35:13.592440 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2bgl9"] Dec 06 06:35:13 crc kubenswrapper[4958]: I1206 06:35:13.639344 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d6cbaef-fe9d-425e-9923-44ff5c84163e-utilities\") pod \"certified-operators-2bgl9\" (UID: \"8d6cbaef-fe9d-425e-9923-44ff5c84163e\") " pod="openshift-marketplace/certified-operators-2bgl9" Dec 06 06:35:13 crc kubenswrapper[4958]: I1206 06:35:13.639442 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jg2fw\" (UniqueName: \"kubernetes.io/projected/8d6cbaef-fe9d-425e-9923-44ff5c84163e-kube-api-access-jg2fw\") pod \"certified-operators-2bgl9\" (UID: \"8d6cbaef-fe9d-425e-9923-44ff5c84163e\") " pod="openshift-marketplace/certified-operators-2bgl9" Dec 06 06:35:13 crc kubenswrapper[4958]: I1206 06:35:13.639497 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d6cbaef-fe9d-425e-9923-44ff5c84163e-catalog-content\") pod \"certified-operators-2bgl9\" (UID: \"8d6cbaef-fe9d-425e-9923-44ff5c84163e\") " pod="openshift-marketplace/certified-operators-2bgl9" Dec 06 06:35:13 crc kubenswrapper[4958]: I1206 06:35:13.740989 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d6cbaef-fe9d-425e-9923-44ff5c84163e-utilities\") pod \"certified-operators-2bgl9\" (UID: \"8d6cbaef-fe9d-425e-9923-44ff5c84163e\") " pod="openshift-marketplace/certified-operators-2bgl9" Dec 06 06:35:13 crc kubenswrapper[4958]: I1206 06:35:13.741092 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jg2fw\" (UniqueName: \"kubernetes.io/projected/8d6cbaef-fe9d-425e-9923-44ff5c84163e-kube-api-access-jg2fw\") pod \"certified-operators-2bgl9\" (UID: \"8d6cbaef-fe9d-425e-9923-44ff5c84163e\") " pod="openshift-marketplace/certified-operators-2bgl9" Dec 06 06:35:13 crc kubenswrapper[4958]: I1206 06:35:13.741111 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d6cbaef-fe9d-425e-9923-44ff5c84163e-catalog-content\") pod \"certified-operators-2bgl9\" (UID: \"8d6cbaef-fe9d-425e-9923-44ff5c84163e\") " pod="openshift-marketplace/certified-operators-2bgl9" Dec 06 06:35:13 crc kubenswrapper[4958]: I1206 06:35:13.741487 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d6cbaef-fe9d-425e-9923-44ff5c84163e-utilities\") pod \"certified-operators-2bgl9\" (UID: \"8d6cbaef-fe9d-425e-9923-44ff5c84163e\") " pod="openshift-marketplace/certified-operators-2bgl9" Dec 06 06:35:13 crc kubenswrapper[4958]: I1206 06:35:13.741521 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d6cbaef-fe9d-425e-9923-44ff5c84163e-catalog-content\") pod \"certified-operators-2bgl9\" (UID: \"8d6cbaef-fe9d-425e-9923-44ff5c84163e\") " pod="openshift-marketplace/certified-operators-2bgl9" Dec 06 06:35:13 crc kubenswrapper[4958]: I1206 06:35:13.778099 4958 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-jg2fw\" (UniqueName: \"kubernetes.io/projected/8d6cbaef-fe9d-425e-9923-44ff5c84163e-kube-api-access-jg2fw\") pod \"certified-operators-2bgl9\" (UID: \"8d6cbaef-fe9d-425e-9923-44ff5c84163e\") " pod="openshift-marketplace/certified-operators-2bgl9" Dec 06 06:35:13 crc kubenswrapper[4958]: I1206 06:35:13.904766 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2bgl9" Dec 06 06:35:14 crc kubenswrapper[4958]: I1206 06:35:14.403262 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2bgl9"] Dec 06 06:35:15 crc kubenswrapper[4958]: I1206 06:35:15.192236 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2bgl9" event={"ID":"8d6cbaef-fe9d-425e-9923-44ff5c84163e","Type":"ContainerStarted","Data":"bd830efb39455a772d8e8f0d5c24e51752f9d64b0fe1d9172c55d42fa87a9b56"} Dec 06 06:35:16 crc kubenswrapper[4958]: I1206 06:35:16.207826 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2bgl9" event={"ID":"8d6cbaef-fe9d-425e-9923-44ff5c84163e","Type":"ContainerStarted","Data":"c6186149833414bf96797ec60c9ec152bdd7c86bec4cff2c18f5ef82c96db55e"} Dec 06 06:35:17 crc kubenswrapper[4958]: I1206 06:35:17.219455 4958 generic.go:334] "Generic (PLEG): container finished" podID="8d6cbaef-fe9d-425e-9923-44ff5c84163e" containerID="c6186149833414bf96797ec60c9ec152bdd7c86bec4cff2c18f5ef82c96db55e" exitCode=0 Dec 06 06:35:17 crc kubenswrapper[4958]: I1206 06:35:17.219526 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2bgl9" event={"ID":"8d6cbaef-fe9d-425e-9923-44ff5c84163e","Type":"ContainerDied","Data":"c6186149833414bf96797ec60c9ec152bdd7c86bec4cff2c18f5ef82c96db55e"} Dec 06 06:35:17 crc kubenswrapper[4958]: I1206 06:35:17.762196 4958 scope.go:117] "RemoveContainer" containerID="f265ce05f57a8068cc94a7942a1527aa2a691debe6890a7a9a9a78e3e4f27cf3" Dec 06 06:35:17 crc kubenswrapper[4958]: E1206 06:35:17.762823 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 06:35:19 crc kubenswrapper[4958]: I1206 06:35:19.241685 4958 generic.go:334] "Generic (PLEG): container finished" podID="8d6cbaef-fe9d-425e-9923-44ff5c84163e" containerID="10bcc9009c89686015d579407fb4f4e69cd5ef034f0b1765bca1e964f16f47be" exitCode=0 Dec 06 06:35:19 crc kubenswrapper[4958]: I1206 06:35:19.241802 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2bgl9" event={"ID":"8d6cbaef-fe9d-425e-9923-44ff5c84163e","Type":"ContainerDied","Data":"10bcc9009c89686015d579407fb4f4e69cd5ef034f0b1765bca1e964f16f47be"} Dec 06 06:35:20 crc kubenswrapper[4958]: I1206 06:35:20.255025 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2bgl9" event={"ID":"8d6cbaef-fe9d-425e-9923-44ff5c84163e","Type":"ContainerStarted","Data":"0c24d22bec2df4e8c76a3420d1a07f4f75735c676fd555aa5c4aeb5ae22bae8c"} Dec 06 06:35:20 crc kubenswrapper[4958]: I1206 06:35:20.284855 4958 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-2bgl9" podStartSLOduration=4.861100842 podStartE2EDuration="7.28483632s" podCreationTimestamp="2025-12-06 06:35:13 +0000 UTC" firstStartedPulling="2025-12-06 06:35:17.221455748 +0000 UTC m=+4027.755226521" lastFinishedPulling="2025-12-06 06:35:19.645191226 +0000 UTC m=+4030.178961999" observedRunningTime="2025-12-06 06:35:20.275376225 +0000 UTC m=+4030.809146988" watchObservedRunningTime="2025-12-06 06:35:20.28483632 +0000 UTC m=+4030.818607073" Dec 06 06:35:23 crc kubenswrapper[4958]: I1206 06:35:23.905301 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-2bgl9" Dec 06 06:35:23 crc kubenswrapper[4958]: I1206 06:35:23.905724 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-2bgl9" Dec 06 06:35:23 crc kubenswrapper[4958]: I1206 06:35:23.948540 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-2bgl9" Dec 06 06:35:24 crc kubenswrapper[4958]: I1206 06:35:24.350852 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-2bgl9" Dec 06 06:35:24 crc kubenswrapper[4958]: I1206 06:35:24.419041 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2bgl9"] Dec 06 06:35:26 crc kubenswrapper[4958]: I1206 06:35:26.313608 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-2bgl9" podUID="8d6cbaef-fe9d-425e-9923-44ff5c84163e" containerName="registry-server" containerID="cri-o://0c24d22bec2df4e8c76a3420d1a07f4f75735c676fd555aa5c4aeb5ae22bae8c" gracePeriod=2 Dec 06 06:35:27 crc kubenswrapper[4958]: I1206 06:35:27.279110 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2bgl9" Dec 06 06:35:27 crc kubenswrapper[4958]: I1206 06:35:27.322041 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jg2fw\" (UniqueName: \"kubernetes.io/projected/8d6cbaef-fe9d-425e-9923-44ff5c84163e-kube-api-access-jg2fw\") pod \"8d6cbaef-fe9d-425e-9923-44ff5c84163e\" (UID: \"8d6cbaef-fe9d-425e-9923-44ff5c84163e\") " Dec 06 06:35:27 crc kubenswrapper[4958]: I1206 06:35:27.322087 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d6cbaef-fe9d-425e-9923-44ff5c84163e-catalog-content\") pod \"8d6cbaef-fe9d-425e-9923-44ff5c84163e\" (UID: \"8d6cbaef-fe9d-425e-9923-44ff5c84163e\") " Dec 06 06:35:27 crc kubenswrapper[4958]: I1206 06:35:27.322205 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d6cbaef-fe9d-425e-9923-44ff5c84163e-utilities\") pod \"8d6cbaef-fe9d-425e-9923-44ff5c84163e\" (UID: \"8d6cbaef-fe9d-425e-9923-44ff5c84163e\") " Dec 06 06:35:27 crc kubenswrapper[4958]: I1206 06:35:27.324145 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d6cbaef-fe9d-425e-9923-44ff5c84163e-utilities" (OuterVolumeSpecName: "utilities") pod "8d6cbaef-fe9d-425e-9923-44ff5c84163e" (UID: "8d6cbaef-fe9d-425e-9923-44ff5c84163e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:35:27 crc kubenswrapper[4958]: I1206 06:35:27.332497 4958 generic.go:334] "Generic (PLEG): container finished" podID="8d6cbaef-fe9d-425e-9923-44ff5c84163e" containerID="0c24d22bec2df4e8c76a3420d1a07f4f75735c676fd555aa5c4aeb5ae22bae8c" exitCode=0 Dec 06 06:35:27 crc kubenswrapper[4958]: I1206 06:35:27.332523 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d6cbaef-fe9d-425e-9923-44ff5c84163e-kube-api-access-jg2fw" (OuterVolumeSpecName: "kube-api-access-jg2fw") pod "8d6cbaef-fe9d-425e-9923-44ff5c84163e" (UID: "8d6cbaef-fe9d-425e-9923-44ff5c84163e"). InnerVolumeSpecName "kube-api-access-jg2fw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:35:27 crc kubenswrapper[4958]: I1206 06:35:27.332558 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2bgl9" event={"ID":"8d6cbaef-fe9d-425e-9923-44ff5c84163e","Type":"ContainerDied","Data":"0c24d22bec2df4e8c76a3420d1a07f4f75735c676fd555aa5c4aeb5ae22bae8c"} Dec 06 06:35:27 crc kubenswrapper[4958]: I1206 06:35:27.332585 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2bgl9" Dec 06 06:35:27 crc kubenswrapper[4958]: I1206 06:35:27.332596 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2bgl9" event={"ID":"8d6cbaef-fe9d-425e-9923-44ff5c84163e","Type":"ContainerDied","Data":"bd830efb39455a772d8e8f0d5c24e51752f9d64b0fe1d9172c55d42fa87a9b56"} Dec 06 06:35:27 crc kubenswrapper[4958]: I1206 06:35:27.332633 4958 scope.go:117] "RemoveContainer" containerID="0c24d22bec2df4e8c76a3420d1a07f4f75735c676fd555aa5c4aeb5ae22bae8c" Dec 06 06:35:27 crc kubenswrapper[4958]: I1206 06:35:27.384172 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d6cbaef-fe9d-425e-9923-44ff5c84163e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8d6cbaef-fe9d-425e-9923-44ff5c84163e" (UID: "8d6cbaef-fe9d-425e-9923-44ff5c84163e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:35:27 crc kubenswrapper[4958]: I1206 06:35:27.393152 4958 scope.go:117] "RemoveContainer" containerID="10bcc9009c89686015d579407fb4f4e69cd5ef034f0b1765bca1e964f16f47be" Dec 06 06:35:27 crc kubenswrapper[4958]: I1206 06:35:27.416976 4958 scope.go:117] "RemoveContainer" containerID="c6186149833414bf96797ec60c9ec152bdd7c86bec4cff2c18f5ef82c96db55e" Dec 06 06:35:27 crc kubenswrapper[4958]: I1206 06:35:27.424564 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d6cbaef-fe9d-425e-9923-44ff5c84163e-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 06:35:27 crc kubenswrapper[4958]: I1206 06:35:27.424591 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d6cbaef-fe9d-425e-9923-44ff5c84163e-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 06:35:27 crc kubenswrapper[4958]: I1206 06:35:27.424601 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jg2fw\" (UniqueName: \"kubernetes.io/projected/8d6cbaef-fe9d-425e-9923-44ff5c84163e-kube-api-access-jg2fw\") on node \"crc\" DevicePath \"\"" Dec 06 06:35:27 crc kubenswrapper[4958]: I1206 06:35:27.457016 4958 scope.go:117] "RemoveContainer" containerID="0c24d22bec2df4e8c76a3420d1a07f4f75735c676fd555aa5c4aeb5ae22bae8c" Dec 06 06:35:27 crc kubenswrapper[4958]: E1206 06:35:27.457417 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c24d22bec2df4e8c76a3420d1a07f4f75735c676fd555aa5c4aeb5ae22bae8c\": container with ID starting with 0c24d22bec2df4e8c76a3420d1a07f4f75735c676fd555aa5c4aeb5ae22bae8c not found: ID does not exist" containerID="0c24d22bec2df4e8c76a3420d1a07f4f75735c676fd555aa5c4aeb5ae22bae8c" Dec 06 06:35:27 crc kubenswrapper[4958]: I1206 06:35:27.457463 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c24d22bec2df4e8c76a3420d1a07f4f75735c676fd555aa5c4aeb5ae22bae8c"} err="failed to get container status \"0c24d22bec2df4e8c76a3420d1a07f4f75735c676fd555aa5c4aeb5ae22bae8c\": rpc error: code = NotFound desc = could not find container \"0c24d22bec2df4e8c76a3420d1a07f4f75735c676fd555aa5c4aeb5ae22bae8c\": container with ID starting with 0c24d22bec2df4e8c76a3420d1a07f4f75735c676fd555aa5c4aeb5ae22bae8c not found: ID does not exist" Dec 06 06:35:27 crc kubenswrapper[4958]: I1206 06:35:27.457525 4958 scope.go:117] "RemoveContainer" containerID="10bcc9009c89686015d579407fb4f4e69cd5ef034f0b1765bca1e964f16f47be" Dec 06 06:35:27 crc kubenswrapper[4958]: E1206 06:35:27.458026 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10bcc9009c89686015d579407fb4f4e69cd5ef034f0b1765bca1e964f16f47be\": container with ID starting with 10bcc9009c89686015d579407fb4f4e69cd5ef034f0b1765bca1e964f16f47be not found: ID does not exist" containerID="10bcc9009c89686015d579407fb4f4e69cd5ef034f0b1765bca1e964f16f47be" Dec 06 06:35:27 crc kubenswrapper[4958]: I1206 06:35:27.458059 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10bcc9009c89686015d579407fb4f4e69cd5ef034f0b1765bca1e964f16f47be"} err="failed to get container status \"10bcc9009c89686015d579407fb4f4e69cd5ef034f0b1765bca1e964f16f47be\": rpc error: code = NotFound desc = could not find container 
\"10bcc9009c89686015d579407fb4f4e69cd5ef034f0b1765bca1e964f16f47be\": container with ID starting with 10bcc9009c89686015d579407fb4f4e69cd5ef034f0b1765bca1e964f16f47be not found: ID does not exist" Dec 06 06:35:27 crc kubenswrapper[4958]: I1206 06:35:27.458098 4958 scope.go:117] "RemoveContainer" containerID="c6186149833414bf96797ec60c9ec152bdd7c86bec4cff2c18f5ef82c96db55e" Dec 06 06:35:27 crc kubenswrapper[4958]: E1206 06:35:27.458467 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6186149833414bf96797ec60c9ec152bdd7c86bec4cff2c18f5ef82c96db55e\": container with ID starting with c6186149833414bf96797ec60c9ec152bdd7c86bec4cff2c18f5ef82c96db55e not found: ID does not exist" containerID="c6186149833414bf96797ec60c9ec152bdd7c86bec4cff2c18f5ef82c96db55e" Dec 06 06:35:27 crc kubenswrapper[4958]: I1206 06:35:27.458513 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6186149833414bf96797ec60c9ec152bdd7c86bec4cff2c18f5ef82c96db55e"} err="failed to get container status \"c6186149833414bf96797ec60c9ec152bdd7c86bec4cff2c18f5ef82c96db55e\": rpc error: code = NotFound desc = could not find container \"c6186149833414bf96797ec60c9ec152bdd7c86bec4cff2c18f5ef82c96db55e\": container with ID starting with c6186149833414bf96797ec60c9ec152bdd7c86bec4cff2c18f5ef82c96db55e not found: ID does not exist" Dec 06 06:35:27 crc kubenswrapper[4958]: I1206 06:35:27.672044 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2bgl9"] Dec 06 06:35:27 crc kubenswrapper[4958]: I1206 06:35:27.680543 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-2bgl9"] Dec 06 06:35:27 crc kubenswrapper[4958]: I1206 06:35:27.778586 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d6cbaef-fe9d-425e-9923-44ff5c84163e" path="/var/lib/kubelet/pods/8d6cbaef-fe9d-425e-9923-44ff5c84163e/volumes" Dec 06 06:35:32 crc kubenswrapper[4958]: I1206 06:35:32.762769 4958 scope.go:117] "RemoveContainer" containerID="f265ce05f57a8068cc94a7942a1527aa2a691debe6890a7a9a9a78e3e4f27cf3" Dec 06 06:35:32 crc kubenswrapper[4958]: E1206 06:35:32.764057 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 06:35:45 crc kubenswrapper[4958]: I1206 06:35:45.763150 4958 scope.go:117] "RemoveContainer" containerID="f265ce05f57a8068cc94a7942a1527aa2a691debe6890a7a9a9a78e3e4f27cf3" Dec 06 06:35:45 crc kubenswrapper[4958]: E1206 06:35:45.764550 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 06:35:58 crc kubenswrapper[4958]: I1206 06:35:58.762258 4958 scope.go:117] "RemoveContainer" containerID="f265ce05f57a8068cc94a7942a1527aa2a691debe6890a7a9a9a78e3e4f27cf3" 
Dec 06 06:35:58 crc kubenswrapper[4958]: E1206 06:35:58.763216 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 06:36:12 crc kubenswrapper[4958]: I1206 06:36:12.761630 4958 scope.go:117] "RemoveContainer" containerID="f265ce05f57a8068cc94a7942a1527aa2a691debe6890a7a9a9a78e3e4f27cf3" Dec 06 06:36:12 crc kubenswrapper[4958]: E1206 06:36:12.762590 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 06:36:20 crc kubenswrapper[4958]: I1206 06:36:20.253806 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-s9zkg"] Dec 06 06:36:20 crc kubenswrapper[4958]: E1206 06:36:20.254781 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d6cbaef-fe9d-425e-9923-44ff5c84163e" containerName="extract-content" Dec 06 06:36:20 crc kubenswrapper[4958]: I1206 06:36:20.254796 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d6cbaef-fe9d-425e-9923-44ff5c84163e" containerName="extract-content" Dec 06 06:36:20 crc kubenswrapper[4958]: E1206 06:36:20.254815 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d6cbaef-fe9d-425e-9923-44ff5c84163e" containerName="extract-utilities" Dec 06 06:36:20 crc kubenswrapper[4958]: I1206 06:36:20.254821 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d6cbaef-fe9d-425e-9923-44ff5c84163e" containerName="extract-utilities" Dec 06 06:36:20 crc kubenswrapper[4958]: E1206 06:36:20.254835 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d6cbaef-fe9d-425e-9923-44ff5c84163e" containerName="registry-server" Dec 06 06:36:20 crc kubenswrapper[4958]: I1206 06:36:20.254842 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d6cbaef-fe9d-425e-9923-44ff5c84163e" containerName="registry-server" Dec 06 06:36:20 crc kubenswrapper[4958]: I1206 06:36:20.255055 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d6cbaef-fe9d-425e-9923-44ff5c84163e" containerName="registry-server" Dec 06 06:36:20 crc kubenswrapper[4958]: I1206 06:36:20.256630 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-s9zkg" Dec 06 06:36:20 crc kubenswrapper[4958]: I1206 06:36:20.274696 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s9zkg"] Dec 06 06:36:20 crc kubenswrapper[4958]: I1206 06:36:20.409124 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmr4v\" (UniqueName: \"kubernetes.io/projected/27ecde86-2025-408d-84e7-d4a81a86f4dd-kube-api-access-lmr4v\") pod \"redhat-operators-s9zkg\" (UID: \"27ecde86-2025-408d-84e7-d4a81a86f4dd\") " pod="openshift-marketplace/redhat-operators-s9zkg" Dec 06 06:36:20 crc kubenswrapper[4958]: I1206 06:36:20.409219 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/27ecde86-2025-408d-84e7-d4a81a86f4dd-catalog-content\") pod \"redhat-operators-s9zkg\" (UID: \"27ecde86-2025-408d-84e7-d4a81a86f4dd\") " pod="openshift-marketplace/redhat-operators-s9zkg" Dec 06 06:36:20 crc kubenswrapper[4958]: I1206 06:36:20.409350 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/27ecde86-2025-408d-84e7-d4a81a86f4dd-utilities\") pod \"redhat-operators-s9zkg\" (UID: \"27ecde86-2025-408d-84e7-d4a81a86f4dd\") " pod="openshift-marketplace/redhat-operators-s9zkg" Dec 06 06:36:20 crc kubenswrapper[4958]: I1206 06:36:20.511536 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/27ecde86-2025-408d-84e7-d4a81a86f4dd-catalog-content\") pod \"redhat-operators-s9zkg\" (UID: \"27ecde86-2025-408d-84e7-d4a81a86f4dd\") " pod="openshift-marketplace/redhat-operators-s9zkg" Dec 06 06:36:20 crc kubenswrapper[4958]: I1206 06:36:20.511989 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/27ecde86-2025-408d-84e7-d4a81a86f4dd-utilities\") pod \"redhat-operators-s9zkg\" (UID: \"27ecde86-2025-408d-84e7-d4a81a86f4dd\") " pod="openshift-marketplace/redhat-operators-s9zkg" Dec 06 06:36:20 crc kubenswrapper[4958]: I1206 06:36:20.512029 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/27ecde86-2025-408d-84e7-d4a81a86f4dd-catalog-content\") pod \"redhat-operators-s9zkg\" (UID: \"27ecde86-2025-408d-84e7-d4a81a86f4dd\") " pod="openshift-marketplace/redhat-operators-s9zkg" Dec 06 06:36:20 crc kubenswrapper[4958]: I1206 06:36:20.512129 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmr4v\" (UniqueName: \"kubernetes.io/projected/27ecde86-2025-408d-84e7-d4a81a86f4dd-kube-api-access-lmr4v\") pod \"redhat-operators-s9zkg\" (UID: \"27ecde86-2025-408d-84e7-d4a81a86f4dd\") " pod="openshift-marketplace/redhat-operators-s9zkg" Dec 06 06:36:20 crc kubenswrapper[4958]: I1206 06:36:20.512274 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/27ecde86-2025-408d-84e7-d4a81a86f4dd-utilities\") pod \"redhat-operators-s9zkg\" (UID: \"27ecde86-2025-408d-84e7-d4a81a86f4dd\") " pod="openshift-marketplace/redhat-operators-s9zkg" Dec 06 06:36:21 crc kubenswrapper[4958]: I1206 06:36:21.129603 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-lmr4v\" (UniqueName: \"kubernetes.io/projected/27ecde86-2025-408d-84e7-d4a81a86f4dd-kube-api-access-lmr4v\") pod \"redhat-operators-s9zkg\" (UID: \"27ecde86-2025-408d-84e7-d4a81a86f4dd\") " pod="openshift-marketplace/redhat-operators-s9zkg" Dec 06 06:36:21 crc kubenswrapper[4958]: I1206 06:36:21.176699 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s9zkg" Dec 06 06:36:21 crc kubenswrapper[4958]: I1206 06:36:21.615400 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s9zkg"] Dec 06 06:36:21 crc kubenswrapper[4958]: I1206 06:36:21.882153 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s9zkg" event={"ID":"27ecde86-2025-408d-84e7-d4a81a86f4dd","Type":"ContainerStarted","Data":"088f4546611d3d10ff92e8c1e98c75bbfa2397db72c104288f1160c759d8987d"} Dec 06 06:36:22 crc kubenswrapper[4958]: I1206 06:36:22.893418 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s9zkg" event={"ID":"27ecde86-2025-408d-84e7-d4a81a86f4dd","Type":"ContainerDied","Data":"783f4e2a3a27160e6a97c18d2f0731924512d9861d552b623bd792b300dcfe18"} Dec 06 06:36:22 crc kubenswrapper[4958]: I1206 06:36:22.894012 4958 generic.go:334] "Generic (PLEG): container finished" podID="27ecde86-2025-408d-84e7-d4a81a86f4dd" containerID="783f4e2a3a27160e6a97c18d2f0731924512d9861d552b623bd792b300dcfe18" exitCode=0 Dec 06 06:36:22 crc kubenswrapper[4958]: I1206 06:36:22.895986 4958 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 06 06:36:23 crc kubenswrapper[4958]: I1206 06:36:23.904697 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s9zkg" event={"ID":"27ecde86-2025-408d-84e7-d4a81a86f4dd","Type":"ContainerStarted","Data":"5c2b6e29372efad69c7ec8f53e59598e478702a7a7365f2fc1ddddd05e709cdf"} Dec 06 06:36:25 crc kubenswrapper[4958]: I1206 06:36:25.763116 4958 scope.go:117] "RemoveContainer" containerID="f265ce05f57a8068cc94a7942a1527aa2a691debe6890a7a9a9a78e3e4f27cf3" Dec 06 06:36:25 crc kubenswrapper[4958]: E1206 06:36:25.763769 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 06:36:27 crc kubenswrapper[4958]: I1206 06:36:27.946929 4958 generic.go:334] "Generic (PLEG): container finished" podID="27ecde86-2025-408d-84e7-d4a81a86f4dd" containerID="5c2b6e29372efad69c7ec8f53e59598e478702a7a7365f2fc1ddddd05e709cdf" exitCode=0 Dec 06 06:36:27 crc kubenswrapper[4958]: I1206 06:36:27.947058 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s9zkg" event={"ID":"27ecde86-2025-408d-84e7-d4a81a86f4dd","Type":"ContainerDied","Data":"5c2b6e29372efad69c7ec8f53e59598e478702a7a7365f2fc1ddddd05e709cdf"} Dec 06 06:36:28 crc kubenswrapper[4958]: I1206 06:36:28.958074 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s9zkg" 
event={"ID":"27ecde86-2025-408d-84e7-d4a81a86f4dd","Type":"ContainerStarted","Data":"cbbec490e89807de53bbe08368628a462a81c6b190e12afcb8068803bc343b4f"} Dec 06 06:36:28 crc kubenswrapper[4958]: I1206 06:36:28.984661 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-s9zkg" podStartSLOduration=3.510077291 podStartE2EDuration="8.984638574s" podCreationTimestamp="2025-12-06 06:36:20 +0000 UTC" firstStartedPulling="2025-12-06 06:36:22.895660838 +0000 UTC m=+4093.429431611" lastFinishedPulling="2025-12-06 06:36:28.370222141 +0000 UTC m=+4098.903992894" observedRunningTime="2025-12-06 06:36:28.976934057 +0000 UTC m=+4099.510704830" watchObservedRunningTime="2025-12-06 06:36:28.984638574 +0000 UTC m=+4099.518409337" Dec 06 06:36:31 crc kubenswrapper[4958]: I1206 06:36:31.186442 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-s9zkg" Dec 06 06:36:31 crc kubenswrapper[4958]: I1206 06:36:31.187749 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-s9zkg" Dec 06 06:36:32 crc kubenswrapper[4958]: I1206 06:36:32.236084 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-s9zkg" podUID="27ecde86-2025-408d-84e7-d4a81a86f4dd" containerName="registry-server" probeResult="failure" output=< Dec 06 06:36:32 crc kubenswrapper[4958]: timeout: failed to connect service ":50051" within 1s Dec 06 06:36:32 crc kubenswrapper[4958]: > Dec 06 06:36:39 crc kubenswrapper[4958]: I1206 06:36:39.772230 4958 scope.go:117] "RemoveContainer" containerID="f265ce05f57a8068cc94a7942a1527aa2a691debe6890a7a9a9a78e3e4f27cf3" Dec 06 06:36:39 crc kubenswrapper[4958]: E1206 06:36:39.774932 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 06:36:41 crc kubenswrapper[4958]: I1206 06:36:41.227247 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-s9zkg" Dec 06 06:36:41 crc kubenswrapper[4958]: I1206 06:36:41.278525 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-s9zkg" Dec 06 06:36:42 crc kubenswrapper[4958]: I1206 06:36:42.441016 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-s9zkg"] Dec 06 06:36:43 crc kubenswrapper[4958]: I1206 06:36:43.082484 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-s9zkg" podUID="27ecde86-2025-408d-84e7-d4a81a86f4dd" containerName="registry-server" containerID="cri-o://cbbec490e89807de53bbe08368628a462a81c6b190e12afcb8068803bc343b4f" gracePeriod=2 Dec 06 06:36:44 crc kubenswrapper[4958]: I1206 06:36:44.093572 4958 generic.go:334] "Generic (PLEG): container finished" podID="27ecde86-2025-408d-84e7-d4a81a86f4dd" containerID="cbbec490e89807de53bbe08368628a462a81c6b190e12afcb8068803bc343b4f" exitCode=0 Dec 06 06:36:44 crc kubenswrapper[4958]: I1206 06:36:44.093643 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-s9zkg" event={"ID":"27ecde86-2025-408d-84e7-d4a81a86f4dd","Type":"ContainerDied","Data":"cbbec490e89807de53bbe08368628a462a81c6b190e12afcb8068803bc343b4f"} Dec 06 06:36:44 crc kubenswrapper[4958]: I1206 06:36:44.834232 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s9zkg" Dec 06 06:36:44 crc kubenswrapper[4958]: I1206 06:36:44.920888 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/27ecde86-2025-408d-84e7-d4a81a86f4dd-catalog-content\") pod \"27ecde86-2025-408d-84e7-d4a81a86f4dd\" (UID: \"27ecde86-2025-408d-84e7-d4a81a86f4dd\") " Dec 06 06:36:44 crc kubenswrapper[4958]: I1206 06:36:44.920975 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/27ecde86-2025-408d-84e7-d4a81a86f4dd-utilities\") pod \"27ecde86-2025-408d-84e7-d4a81a86f4dd\" (UID: \"27ecde86-2025-408d-84e7-d4a81a86f4dd\") " Dec 06 06:36:44 crc kubenswrapper[4958]: I1206 06:36:44.920993 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lmr4v\" (UniqueName: \"kubernetes.io/projected/27ecde86-2025-408d-84e7-d4a81a86f4dd-kube-api-access-lmr4v\") pod \"27ecde86-2025-408d-84e7-d4a81a86f4dd\" (UID: \"27ecde86-2025-408d-84e7-d4a81a86f4dd\") " Dec 06 06:36:44 crc kubenswrapper[4958]: I1206 06:36:44.921850 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/27ecde86-2025-408d-84e7-d4a81a86f4dd-utilities" (OuterVolumeSpecName: "utilities") pod "27ecde86-2025-408d-84e7-d4a81a86f4dd" (UID: "27ecde86-2025-408d-84e7-d4a81a86f4dd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:36:44 crc kubenswrapper[4958]: I1206 06:36:44.926727 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27ecde86-2025-408d-84e7-d4a81a86f4dd-kube-api-access-lmr4v" (OuterVolumeSpecName: "kube-api-access-lmr4v") pod "27ecde86-2025-408d-84e7-d4a81a86f4dd" (UID: "27ecde86-2025-408d-84e7-d4a81a86f4dd"). InnerVolumeSpecName "kube-api-access-lmr4v". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:36:45 crc kubenswrapper[4958]: I1206 06:36:45.023385 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/27ecde86-2025-408d-84e7-d4a81a86f4dd-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 06:36:45 crc kubenswrapper[4958]: I1206 06:36:45.023417 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lmr4v\" (UniqueName: \"kubernetes.io/projected/27ecde86-2025-408d-84e7-d4a81a86f4dd-kube-api-access-lmr4v\") on node \"crc\" DevicePath \"\"" Dec 06 06:36:45 crc kubenswrapper[4958]: I1206 06:36:45.032357 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/27ecde86-2025-408d-84e7-d4a81a86f4dd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "27ecde86-2025-408d-84e7-d4a81a86f4dd" (UID: "27ecde86-2025-408d-84e7-d4a81a86f4dd"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:36:45 crc kubenswrapper[4958]: I1206 06:36:45.106076 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s9zkg" event={"ID":"27ecde86-2025-408d-84e7-d4a81a86f4dd","Type":"ContainerDied","Data":"088f4546611d3d10ff92e8c1e98c75bbfa2397db72c104288f1160c759d8987d"} Dec 06 06:36:45 crc kubenswrapper[4958]: I1206 06:36:45.106128 4958 scope.go:117] "RemoveContainer" containerID="cbbec490e89807de53bbe08368628a462a81c6b190e12afcb8068803bc343b4f" Dec 06 06:36:45 crc kubenswrapper[4958]: I1206 06:36:45.106220 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s9zkg" Dec 06 06:36:45 crc kubenswrapper[4958]: I1206 06:36:45.125622 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/27ecde86-2025-408d-84e7-d4a81a86f4dd-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 06:36:45 crc kubenswrapper[4958]: I1206 06:36:45.132425 4958 scope.go:117] "RemoveContainer" containerID="5c2b6e29372efad69c7ec8f53e59598e478702a7a7365f2fc1ddddd05e709cdf" Dec 06 06:36:45 crc kubenswrapper[4958]: I1206 06:36:45.156506 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-s9zkg"] Dec 06 06:36:45 crc kubenswrapper[4958]: I1206 06:36:45.166426 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-s9zkg"] Dec 06 06:36:45 crc kubenswrapper[4958]: I1206 06:36:45.167832 4958 scope.go:117] "RemoveContainer" containerID="783f4e2a3a27160e6a97c18d2f0731924512d9861d552b623bd792b300dcfe18" Dec 06 06:36:45 crc kubenswrapper[4958]: I1206 06:36:45.775981 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27ecde86-2025-408d-84e7-d4a81a86f4dd" path="/var/lib/kubelet/pods/27ecde86-2025-408d-84e7-d4a81a86f4dd/volumes" Dec 06 06:36:52 crc kubenswrapper[4958]: I1206 06:36:52.762714 4958 scope.go:117] "RemoveContainer" containerID="f265ce05f57a8068cc94a7942a1527aa2a691debe6890a7a9a9a78e3e4f27cf3" Dec 06 06:36:52 crc kubenswrapper[4958]: E1206 06:36:52.763372 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 06:37:02 crc kubenswrapper[4958]: I1206 06:37:02.154165 4958 patch_prober.go:28] interesting pod/nmstate-webhook-5f6d4c5ccb-wm7wx container/nmstate-webhook namespace/openshift-nmstate: Readiness probe status=failure output="Get \"https://10.217.0.57:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 06 06:37:02 crc kubenswrapper[4958]: I1206 06:37:02.154720 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-wm7wx" podUID="5a48e60d-cf21-4b2f-bc99-eebce42f8832" containerName="nmstate-webhook" probeResult="failure" output="Get \"https://10.217.0.57:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 06 06:37:06 crc kubenswrapper[4958]: I1206 06:37:06.378030 4958 patch_prober.go:28] 
Dec 06 06:37:06 crc kubenswrapper[4958]: I1206 06:37:06.378524 4958 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-66df7c8f76-48mm6" podUID="f0def05c-b821-4b02-b11a-05934c06c36f" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.68:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Dec 06 06:37:08 crc kubenswrapper[4958]: I1206 06:37:08.734503 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="428c09d2-3c2a-4562-9295-3cf3da179f40" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out"
Dec 06 06:37:11 crc kubenswrapper[4958]: E1206 06:37:11.937214 4958 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="12.176s"
Dec 06 06:37:11 crc kubenswrapper[4958]: I1206 06:37:11.942683 4958 scope.go:117] "RemoveContainer" containerID="f265ce05f57a8068cc94a7942a1527aa2a691debe6890a7a9a9a78e3e4f27cf3"
Dec 06 06:37:11 crc kubenswrapper[4958]: E1206 06:37:11.943050 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4"
Dec 06 06:37:11 crc kubenswrapper[4958]: I1206 06:37:11.950424 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="428c09d2-3c2a-4562-9295-3cf3da179f40" containerName="ceilometer-central-agent" probeResult="failure" output=<
Dec 06 06:37:11 crc kubenswrapper[4958]: Unkown error: Expecting value: line 1 column 1 (char 0)
Dec 06 06:37:11 crc kubenswrapper[4958]: >
Dec 06 06:37:12 crc kubenswrapper[4958]: I1206 06:37:12.831676 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="428c09d2-3c2a-4562-9295-3cf3da179f40" containerName="ceilometer-central-agent" probeResult="failure" output=<
Dec 06 06:37:12 crc kubenswrapper[4958]: Unkown error: Expecting value: line 1 column 1 (char 0)
Dec 06 06:37:12 crc kubenswrapper[4958]: >
Dec 06 06:37:12 crc kubenswrapper[4958]: I1206 06:37:12.831978 4958 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ceilometer-0"
Dec 06 06:37:12 crc kubenswrapper[4958]: I1206 06:37:12.832848 4958 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="ceilometer-central-agent" containerStatusID={"Type":"cri-o","ID":"1998659182b691acde6a9b03c06c875ccf927d44d0bd8145e06aff139abcf6c2"} pod="openstack/ceilometer-0" containerMessage="Container ceilometer-central-agent failed liveness probe, will be restarted"
Dec 06 06:37:12 crc kubenswrapper[4958]: I1206 06:37:12.832954 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="428c09d2-3c2a-4562-9295-3cf3da179f40" containerName="ceilometer-central-agent"
containerID="cri-o://1998659182b691acde6a9b03c06c875ccf927d44d0bd8145e06aff139abcf6c2" gracePeriod=30 Dec 06 06:37:14 crc kubenswrapper[4958]: I1206 06:37:14.427012 4958 generic.go:334] "Generic (PLEG): container finished" podID="428c09d2-3c2a-4562-9295-3cf3da179f40" containerID="1998659182b691acde6a9b03c06c875ccf927d44d0bd8145e06aff139abcf6c2" exitCode=0 Dec 06 06:37:14 crc kubenswrapper[4958]: I1206 06:37:14.427109 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"428c09d2-3c2a-4562-9295-3cf3da179f40","Type":"ContainerDied","Data":"1998659182b691acde6a9b03c06c875ccf927d44d0bd8145e06aff139abcf6c2"} Dec 06 06:37:21 crc kubenswrapper[4958]: I1206 06:37:21.498661 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"428c09d2-3c2a-4562-9295-3cf3da179f40","Type":"ContainerStarted","Data":"a3460c200835cfd31d5d01a0c360e68d352cd988dd2faaef90f406b741e5985e"} Dec 06 06:37:24 crc kubenswrapper[4958]: I1206 06:37:24.763065 4958 scope.go:117] "RemoveContainer" containerID="f265ce05f57a8068cc94a7942a1527aa2a691debe6890a7a9a9a78e3e4f27cf3" Dec 06 06:37:24 crc kubenswrapper[4958]: E1206 06:37:24.764551 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 06:37:33 crc kubenswrapper[4958]: I1206 06:37:33.332459 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="8322abc0-31a3-4770-856d-e23e4d428204" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.0.218:8080/livez\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 06 06:37:36 crc kubenswrapper[4958]: I1206 06:37:36.166778 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="73c40f99-3a46-43d5-bab4-475cd389ea2c" containerName="cinder-scheduler" probeResult="failure" output="Get \"http://10.217.0.200:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 06 06:37:37 crc kubenswrapper[4958]: I1206 06:37:37.762195 4958 scope.go:117] "RemoveContainer" containerID="f265ce05f57a8068cc94a7942a1527aa2a691debe6890a7a9a9a78e3e4f27cf3" Dec 06 06:37:37 crc kubenswrapper[4958]: E1206 06:37:37.762702 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 06:37:51 crc kubenswrapper[4958]: I1206 06:37:51.762214 4958 scope.go:117] "RemoveContainer" containerID="f265ce05f57a8068cc94a7942a1527aa2a691debe6890a7a9a9a78e3e4f27cf3" Dec 06 06:37:52 crc kubenswrapper[4958]: I1206 06:37:52.866401 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" 
event={"ID":"c13528c0-da5d-4d55-9155-2c29c33edfc4","Type":"ContainerStarted","Data":"bcbae82c76f21c6f13ba52fa79677661a3ca8215ccc632e979448eae960c6a90"} Dec 06 06:40:09 crc kubenswrapper[4958]: I1206 06:40:09.866380 4958 patch_prober.go:28] interesting pod/machine-config-daemon-5ktnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 06:40:09 crc kubenswrapper[4958]: I1206 06:40:09.866914 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 06:40:39 crc kubenswrapper[4958]: I1206 06:40:39.866753 4958 patch_prober.go:28] interesting pod/machine-config-daemon-5ktnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 06:40:39 crc kubenswrapper[4958]: I1206 06:40:39.867278 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 06:41:09 crc kubenswrapper[4958]: I1206 06:41:09.866439 4958 patch_prober.go:28] interesting pod/machine-config-daemon-5ktnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 06:41:09 crc kubenswrapper[4958]: I1206 06:41:09.867172 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 06:41:09 crc kubenswrapper[4958]: I1206 06:41:09.867223 4958 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" Dec 06 06:41:09 crc kubenswrapper[4958]: I1206 06:41:09.868192 4958 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bcbae82c76f21c6f13ba52fa79677661a3ca8215ccc632e979448eae960c6a90"} pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 06 06:41:09 crc kubenswrapper[4958]: I1206 06:41:09.868267 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" containerName="machine-config-daemon" containerID="cri-o://bcbae82c76f21c6f13ba52fa79677661a3ca8215ccc632e979448eae960c6a90" gracePeriod=600 Dec 06 06:41:10 crc kubenswrapper[4958]: I1206 06:41:10.130625 4958 generic.go:334] "Generic (PLEG): container finished" 
podID="c13528c0-da5d-4d55-9155-2c29c33edfc4" containerID="bcbae82c76f21c6f13ba52fa79677661a3ca8215ccc632e979448eae960c6a90" exitCode=0 Dec 06 06:41:10 crc kubenswrapper[4958]: I1206 06:41:10.130734 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" event={"ID":"c13528c0-da5d-4d55-9155-2c29c33edfc4","Type":"ContainerDied","Data":"bcbae82c76f21c6f13ba52fa79677661a3ca8215ccc632e979448eae960c6a90"} Dec 06 06:41:10 crc kubenswrapper[4958]: I1206 06:41:10.131152 4958 scope.go:117] "RemoveContainer" containerID="f265ce05f57a8068cc94a7942a1527aa2a691debe6890a7a9a9a78e3e4f27cf3" Dec 06 06:41:12 crc kubenswrapper[4958]: I1206 06:41:12.154702 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" event={"ID":"c13528c0-da5d-4d55-9155-2c29c33edfc4","Type":"ContainerStarted","Data":"5d6c31737106f2594603f3f7564d76a1ec493993acbac9ea617bdb25ef714e65"} Dec 06 06:42:33 crc kubenswrapper[4958]: I1206 06:42:33.483642 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-fnp5q"] Dec 06 06:42:33 crc kubenswrapper[4958]: E1206 06:42:33.485426 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27ecde86-2025-408d-84e7-d4a81a86f4dd" containerName="extract-utilities" Dec 06 06:42:33 crc kubenswrapper[4958]: I1206 06:42:33.485451 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="27ecde86-2025-408d-84e7-d4a81a86f4dd" containerName="extract-utilities" Dec 06 06:42:33 crc kubenswrapper[4958]: E1206 06:42:33.485521 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27ecde86-2025-408d-84e7-d4a81a86f4dd" containerName="registry-server" Dec 06 06:42:33 crc kubenswrapper[4958]: I1206 06:42:33.485534 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="27ecde86-2025-408d-84e7-d4a81a86f4dd" containerName="registry-server" Dec 06 06:42:33 crc kubenswrapper[4958]: E1206 06:42:33.485580 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="27ecde86-2025-408d-84e7-d4a81a86f4dd" containerName="extract-content" Dec 06 06:42:33 crc kubenswrapper[4958]: I1206 06:42:33.485594 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="27ecde86-2025-408d-84e7-d4a81a86f4dd" containerName="extract-content" Dec 06 06:42:33 crc kubenswrapper[4958]: I1206 06:42:33.486791 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="27ecde86-2025-408d-84e7-d4a81a86f4dd" containerName="registry-server" Dec 06 06:42:33 crc kubenswrapper[4958]: I1206 06:42:33.494339 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fnp5q" Dec 06 06:42:33 crc kubenswrapper[4958]: I1206 06:42:33.513569 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fnp5q"] Dec 06 06:42:33 crc kubenswrapper[4958]: I1206 06:42:33.516508 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pr4l8\" (UniqueName: \"kubernetes.io/projected/4036fc28-addf-485a-9656-7399f7e1e364-kube-api-access-pr4l8\") pod \"community-operators-fnp5q\" (UID: \"4036fc28-addf-485a-9656-7399f7e1e364\") " pod="openshift-marketplace/community-operators-fnp5q" Dec 06 06:42:33 crc kubenswrapper[4958]: I1206 06:42:33.517569 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4036fc28-addf-485a-9656-7399f7e1e364-utilities\") pod \"community-operators-fnp5q\" (UID: \"4036fc28-addf-485a-9656-7399f7e1e364\") " pod="openshift-marketplace/community-operators-fnp5q" Dec 06 06:42:33 crc kubenswrapper[4958]: I1206 06:42:33.517673 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4036fc28-addf-485a-9656-7399f7e1e364-catalog-content\") pod \"community-operators-fnp5q\" (UID: \"4036fc28-addf-485a-9656-7399f7e1e364\") " pod="openshift-marketplace/community-operators-fnp5q" Dec 06 06:42:33 crc kubenswrapper[4958]: I1206 06:42:33.619113 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4036fc28-addf-485a-9656-7399f7e1e364-utilities\") pod \"community-operators-fnp5q\" (UID: \"4036fc28-addf-485a-9656-7399f7e1e364\") " pod="openshift-marketplace/community-operators-fnp5q" Dec 06 06:42:33 crc kubenswrapper[4958]: I1206 06:42:33.619174 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4036fc28-addf-485a-9656-7399f7e1e364-catalog-content\") pod \"community-operators-fnp5q\" (UID: \"4036fc28-addf-485a-9656-7399f7e1e364\") " pod="openshift-marketplace/community-operators-fnp5q" Dec 06 06:42:33 crc kubenswrapper[4958]: I1206 06:42:33.619204 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pr4l8\" (UniqueName: \"kubernetes.io/projected/4036fc28-addf-485a-9656-7399f7e1e364-kube-api-access-pr4l8\") pod \"community-operators-fnp5q\" (UID: \"4036fc28-addf-485a-9656-7399f7e1e364\") " pod="openshift-marketplace/community-operators-fnp5q" Dec 06 06:42:33 crc kubenswrapper[4958]: I1206 06:42:33.619959 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4036fc28-addf-485a-9656-7399f7e1e364-utilities\") pod \"community-operators-fnp5q\" (UID: \"4036fc28-addf-485a-9656-7399f7e1e364\") " pod="openshift-marketplace/community-operators-fnp5q" Dec 06 06:42:33 crc kubenswrapper[4958]: I1206 06:42:33.620162 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4036fc28-addf-485a-9656-7399f7e1e364-catalog-content\") pod \"community-operators-fnp5q\" (UID: \"4036fc28-addf-485a-9656-7399f7e1e364\") " pod="openshift-marketplace/community-operators-fnp5q" Dec 06 06:42:33 crc kubenswrapper[4958]: I1206 06:42:33.643222 4958 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-pr4l8\" (UniqueName: \"kubernetes.io/projected/4036fc28-addf-485a-9656-7399f7e1e364-kube-api-access-pr4l8\") pod \"community-operators-fnp5q\" (UID: \"4036fc28-addf-485a-9656-7399f7e1e364\") " pod="openshift-marketplace/community-operators-fnp5q" Dec 06 06:42:33 crc kubenswrapper[4958]: I1206 06:42:33.829787 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fnp5q" Dec 06 06:42:34 crc kubenswrapper[4958]: I1206 06:42:34.315394 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fnp5q"] Dec 06 06:42:34 crc kubenswrapper[4958]: I1206 06:42:34.985409 4958 generic.go:334] "Generic (PLEG): container finished" podID="4036fc28-addf-485a-9656-7399f7e1e364" containerID="6cf9a585dbcc6eb2e9cb3552fdd9f372a7f7534925eece389a9701d4f1db4331" exitCode=0 Dec 06 06:42:34 crc kubenswrapper[4958]: I1206 06:42:34.985522 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fnp5q" event={"ID":"4036fc28-addf-485a-9656-7399f7e1e364","Type":"ContainerDied","Data":"6cf9a585dbcc6eb2e9cb3552fdd9f372a7f7534925eece389a9701d4f1db4331"} Dec 06 06:42:34 crc kubenswrapper[4958]: I1206 06:42:34.985805 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fnp5q" event={"ID":"4036fc28-addf-485a-9656-7399f7e1e364","Type":"ContainerStarted","Data":"2c0374b7e26e650fcf2814d798a5d8a545afee9e2294c1c05013a618364dec15"} Dec 06 06:42:34 crc kubenswrapper[4958]: I1206 06:42:34.988201 4958 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 06 06:42:36 crc kubenswrapper[4958]: I1206 06:42:36.001341 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fnp5q" event={"ID":"4036fc28-addf-485a-9656-7399f7e1e364","Type":"ContainerStarted","Data":"67aba5f29e19cc576d28b47d38e8f9cf750baa2978f0c220297a7a44135b6f39"} Dec 06 06:42:38 crc kubenswrapper[4958]: I1206 06:42:38.045863 4958 generic.go:334] "Generic (PLEG): container finished" podID="4036fc28-addf-485a-9656-7399f7e1e364" containerID="67aba5f29e19cc576d28b47d38e8f9cf750baa2978f0c220297a7a44135b6f39" exitCode=0 Dec 06 06:42:38 crc kubenswrapper[4958]: I1206 06:42:38.046500 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fnp5q" event={"ID":"4036fc28-addf-485a-9656-7399f7e1e364","Type":"ContainerDied","Data":"67aba5f29e19cc576d28b47d38e8f9cf750baa2978f0c220297a7a44135b6f39"} Dec 06 06:42:39 crc kubenswrapper[4958]: I1206 06:42:39.062699 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fnp5q" event={"ID":"4036fc28-addf-485a-9656-7399f7e1e364","Type":"ContainerStarted","Data":"0a3d06bec727c17485970fbb86f9832cb0e6e6fccff0da0c5d9de5cb29af014d"} Dec 06 06:42:39 crc kubenswrapper[4958]: I1206 06:42:39.091317 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-fnp5q" podStartSLOduration=2.619087104 podStartE2EDuration="6.091283637s" podCreationTimestamp="2025-12-06 06:42:33 +0000 UTC" firstStartedPulling="2025-12-06 06:42:34.988015374 +0000 UTC m=+4465.521786137" lastFinishedPulling="2025-12-06 06:42:38.460211917 +0000 UTC m=+4468.993982670" observedRunningTime="2025-12-06 06:42:39.088556693 +0000 UTC m=+4469.622327456" watchObservedRunningTime="2025-12-06 
Dec 06 06:42:43 crc kubenswrapper[4958]: I1206 06:42:43.830244 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-fnp5q"
Dec 06 06:42:43 crc kubenswrapper[4958]: I1206 06:42:43.830728 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-fnp5q"
Dec 06 06:42:43 crc kubenswrapper[4958]: I1206 06:42:43.898165 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fnp5q"
Dec 06 06:42:44 crc kubenswrapper[4958]: I1206 06:42:44.180241 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-fnp5q"
Dec 06 06:42:44 crc kubenswrapper[4958]: I1206 06:42:44.240806 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fnp5q"]
Dec 06 06:42:46 crc kubenswrapper[4958]: I1206 06:42:46.148197 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-fnp5q" podUID="4036fc28-addf-485a-9656-7399f7e1e364" containerName="registry-server" containerID="cri-o://0a3d06bec727c17485970fbb86f9832cb0e6e6fccff0da0c5d9de5cb29af014d" gracePeriod=2
Dec 06 06:42:47 crc kubenswrapper[4958]: I1206 06:42:47.158189 4958 generic.go:334] "Generic (PLEG): container finished" podID="4036fc28-addf-485a-9656-7399f7e1e364" containerID="0a3d06bec727c17485970fbb86f9832cb0e6e6fccff0da0c5d9de5cb29af014d" exitCode=0
Dec 06 06:42:47 crc kubenswrapper[4958]: I1206 06:42:47.158258 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fnp5q" event={"ID":"4036fc28-addf-485a-9656-7399f7e1e364","Type":"ContainerDied","Data":"0a3d06bec727c17485970fbb86f9832cb0e6e6fccff0da0c5d9de5cb29af014d"}
Dec 06 06:42:48 crc kubenswrapper[4958]: I1206 06:42:48.015219 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fnp5q"
Dec 06 06:42:48 crc kubenswrapper[4958]: I1206 06:42:48.169731 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fnp5q" event={"ID":"4036fc28-addf-485a-9656-7399f7e1e364","Type":"ContainerDied","Data":"2c0374b7e26e650fcf2814d798a5d8a545afee9e2294c1c05013a618364dec15"}
Dec 06 06:42:48 crc kubenswrapper[4958]: I1206 06:42:48.169804 4958 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/community-operators-fnp5q" Dec 06 06:42:48 crc kubenswrapper[4958]: I1206 06:42:48.170145 4958 scope.go:117] "RemoveContainer" containerID="0a3d06bec727c17485970fbb86f9832cb0e6e6fccff0da0c5d9de5cb29af014d" Dec 06 06:42:48 crc kubenswrapper[4958]: I1206 06:42:48.170460 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4036fc28-addf-485a-9656-7399f7e1e364-utilities\") pod \"4036fc28-addf-485a-9656-7399f7e1e364\" (UID: \"4036fc28-addf-485a-9656-7399f7e1e364\") " Dec 06 06:42:48 crc kubenswrapper[4958]: I1206 06:42:48.170987 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pr4l8\" (UniqueName: \"kubernetes.io/projected/4036fc28-addf-485a-9656-7399f7e1e364-kube-api-access-pr4l8\") pod \"4036fc28-addf-485a-9656-7399f7e1e364\" (UID: \"4036fc28-addf-485a-9656-7399f7e1e364\") " Dec 06 06:42:48 crc kubenswrapper[4958]: I1206 06:42:48.171332 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4036fc28-addf-485a-9656-7399f7e1e364-utilities" (OuterVolumeSpecName: "utilities") pod "4036fc28-addf-485a-9656-7399f7e1e364" (UID: "4036fc28-addf-485a-9656-7399f7e1e364"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:42:48 crc kubenswrapper[4958]: I1206 06:42:48.171370 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4036fc28-addf-485a-9656-7399f7e1e364-catalog-content\") pod \"4036fc28-addf-485a-9656-7399f7e1e364\" (UID: \"4036fc28-addf-485a-9656-7399f7e1e364\") " Dec 06 06:42:48 crc kubenswrapper[4958]: I1206 06:42:48.174262 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4036fc28-addf-485a-9656-7399f7e1e364-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 06:42:48 crc kubenswrapper[4958]: I1206 06:42:48.179733 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4036fc28-addf-485a-9656-7399f7e1e364-kube-api-access-pr4l8" (OuterVolumeSpecName: "kube-api-access-pr4l8") pod "4036fc28-addf-485a-9656-7399f7e1e364" (UID: "4036fc28-addf-485a-9656-7399f7e1e364"). InnerVolumeSpecName "kube-api-access-pr4l8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:42:48 crc kubenswrapper[4958]: I1206 06:42:48.196328 4958 scope.go:117] "RemoveContainer" containerID="67aba5f29e19cc576d28b47d38e8f9cf750baa2978f0c220297a7a44135b6f39" Dec 06 06:42:48 crc kubenswrapper[4958]: I1206 06:42:48.223024 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4036fc28-addf-485a-9656-7399f7e1e364-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4036fc28-addf-485a-9656-7399f7e1e364" (UID: "4036fc28-addf-485a-9656-7399f7e1e364"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:42:48 crc kubenswrapper[4958]: I1206 06:42:48.245061 4958 scope.go:117] "RemoveContainer" containerID="6cf9a585dbcc6eb2e9cb3552fdd9f372a7f7534925eece389a9701d4f1db4331" Dec 06 06:42:48 crc kubenswrapper[4958]: I1206 06:42:48.277850 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pr4l8\" (UniqueName: \"kubernetes.io/projected/4036fc28-addf-485a-9656-7399f7e1e364-kube-api-access-pr4l8\") on node \"crc\" DevicePath \"\"" Dec 06 06:42:48 crc kubenswrapper[4958]: I1206 06:42:48.277924 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4036fc28-addf-485a-9656-7399f7e1e364-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 06:42:48 crc kubenswrapper[4958]: I1206 06:42:48.535090 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fnp5q"] Dec 06 06:42:48 crc kubenswrapper[4958]: I1206 06:42:48.549294 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-fnp5q"] Dec 06 06:42:49 crc kubenswrapper[4958]: I1206 06:42:49.773453 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4036fc28-addf-485a-9656-7399f7e1e364" path="/var/lib/kubelet/pods/4036fc28-addf-485a-9656-7399f7e1e364/volumes" Dec 06 06:43:39 crc kubenswrapper[4958]: I1206 06:43:39.866029 4958 patch_prober.go:28] interesting pod/machine-config-daemon-5ktnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 06:43:39 crc kubenswrapper[4958]: I1206 06:43:39.866661 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 06:44:09 crc kubenswrapper[4958]: I1206 06:44:09.866259 4958 patch_prober.go:28] interesting pod/machine-config-daemon-5ktnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 06:44:09 crc kubenswrapper[4958]: I1206 06:44:09.866810 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 06:44:39 crc kubenswrapper[4958]: I1206 06:44:39.866245 4958 patch_prober.go:28] interesting pod/machine-config-daemon-5ktnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 06:44:39 crc kubenswrapper[4958]: I1206 06:44:39.866870 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 06:44:39 crc kubenswrapper[4958]: I1206 06:44:39.866915 4958 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" Dec 06 06:44:39 crc kubenswrapper[4958]: I1206 06:44:39.867652 4958 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5d6c31737106f2594603f3f7564d76a1ec493993acbac9ea617bdb25ef714e65"} pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 06 06:44:39 crc kubenswrapper[4958]: I1206 06:44:39.867712 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" containerName="machine-config-daemon" containerID="cri-o://5d6c31737106f2594603f3f7564d76a1ec493993acbac9ea617bdb25ef714e65" gracePeriod=600 Dec 06 06:44:40 crc kubenswrapper[4958]: I1206 06:44:40.287607 4958 generic.go:334] "Generic (PLEG): container finished" podID="c13528c0-da5d-4d55-9155-2c29c33edfc4" containerID="5d6c31737106f2594603f3f7564d76a1ec493993acbac9ea617bdb25ef714e65" exitCode=0 Dec 06 06:44:40 crc kubenswrapper[4958]: I1206 06:44:40.287653 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" event={"ID":"c13528c0-da5d-4d55-9155-2c29c33edfc4","Type":"ContainerDied","Data":"5d6c31737106f2594603f3f7564d76a1ec493993acbac9ea617bdb25ef714e65"} Dec 06 06:44:40 crc kubenswrapper[4958]: I1206 06:44:40.287686 4958 scope.go:117] "RemoveContainer" containerID="bcbae82c76f21c6f13ba52fa79677661a3ca8215ccc632e979448eae960c6a90" Dec 06 06:44:40 crc kubenswrapper[4958]: E1206 06:44:40.764069 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 06:44:41 crc kubenswrapper[4958]: I1206 06:44:41.299045 4958 scope.go:117] "RemoveContainer" containerID="5d6c31737106f2594603f3f7564d76a1ec493993acbac9ea617bdb25ef714e65" Dec 06 06:44:41 crc kubenswrapper[4958]: E1206 06:44:41.300366 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 06:44:53 crc kubenswrapper[4958]: I1206 06:44:53.762764 4958 scope.go:117] "RemoveContainer" containerID="5d6c31737106f2594603f3f7564d76a1ec493993acbac9ea617bdb25ef714e65" Dec 06 06:44:53 crc kubenswrapper[4958]: E1206 06:44:53.765166 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 06:44:58 crc kubenswrapper[4958]: I1206 06:44:58.378438 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2256j"] Dec 06 06:44:58 crc kubenswrapper[4958]: E1206 06:44:58.379319 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4036fc28-addf-485a-9656-7399f7e1e364" containerName="registry-server" Dec 06 06:44:58 crc kubenswrapper[4958]: I1206 06:44:58.379336 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="4036fc28-addf-485a-9656-7399f7e1e364" containerName="registry-server" Dec 06 06:44:58 crc kubenswrapper[4958]: E1206 06:44:58.379363 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4036fc28-addf-485a-9656-7399f7e1e364" containerName="extract-utilities" Dec 06 06:44:58 crc kubenswrapper[4958]: I1206 06:44:58.379372 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="4036fc28-addf-485a-9656-7399f7e1e364" containerName="extract-utilities" Dec 06 06:44:58 crc kubenswrapper[4958]: E1206 06:44:58.379395 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4036fc28-addf-485a-9656-7399f7e1e364" containerName="extract-content" Dec 06 06:44:58 crc kubenswrapper[4958]: I1206 06:44:58.379402 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="4036fc28-addf-485a-9656-7399f7e1e364" containerName="extract-content" Dec 06 06:44:58 crc kubenswrapper[4958]: I1206 06:44:58.379703 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="4036fc28-addf-485a-9656-7399f7e1e364" containerName="registry-server" Dec 06 06:44:58 crc kubenswrapper[4958]: I1206 06:44:58.381346 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2256j" Dec 06 06:44:58 crc kubenswrapper[4958]: I1206 06:44:58.398463 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2256j"] Dec 06 06:44:58 crc kubenswrapper[4958]: I1206 06:44:58.479824 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40418e6f-3340-43cd-8073-e54eabe96340-utilities\") pod \"redhat-marketplace-2256j\" (UID: \"40418e6f-3340-43cd-8073-e54eabe96340\") " pod="openshift-marketplace/redhat-marketplace-2256j" Dec 06 06:44:58 crc kubenswrapper[4958]: I1206 06:44:58.479898 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40418e6f-3340-43cd-8073-e54eabe96340-catalog-content\") pod \"redhat-marketplace-2256j\" (UID: \"40418e6f-3340-43cd-8073-e54eabe96340\") " pod="openshift-marketplace/redhat-marketplace-2256j" Dec 06 06:44:58 crc kubenswrapper[4958]: I1206 06:44:58.480034 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mqtf\" (UniqueName: \"kubernetes.io/projected/40418e6f-3340-43cd-8073-e54eabe96340-kube-api-access-9mqtf\") pod \"redhat-marketplace-2256j\" (UID: \"40418e6f-3340-43cd-8073-e54eabe96340\") " pod="openshift-marketplace/redhat-marketplace-2256j" Dec 06 06:44:58 crc kubenswrapper[4958]: I1206 06:44:58.581711 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40418e6f-3340-43cd-8073-e54eabe96340-utilities\") pod \"redhat-marketplace-2256j\" (UID: \"40418e6f-3340-43cd-8073-e54eabe96340\") " pod="openshift-marketplace/redhat-marketplace-2256j" Dec 06 06:44:58 crc kubenswrapper[4958]: I1206 06:44:58.581761 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40418e6f-3340-43cd-8073-e54eabe96340-catalog-content\") pod \"redhat-marketplace-2256j\" (UID: \"40418e6f-3340-43cd-8073-e54eabe96340\") " pod="openshift-marketplace/redhat-marketplace-2256j" Dec 06 06:44:58 crc kubenswrapper[4958]: I1206 06:44:58.581801 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mqtf\" (UniqueName: \"kubernetes.io/projected/40418e6f-3340-43cd-8073-e54eabe96340-kube-api-access-9mqtf\") pod \"redhat-marketplace-2256j\" (UID: \"40418e6f-3340-43cd-8073-e54eabe96340\") " pod="openshift-marketplace/redhat-marketplace-2256j" Dec 06 06:44:58 crc kubenswrapper[4958]: I1206 06:44:58.582388 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40418e6f-3340-43cd-8073-e54eabe96340-utilities\") pod \"redhat-marketplace-2256j\" (UID: \"40418e6f-3340-43cd-8073-e54eabe96340\") " pod="openshift-marketplace/redhat-marketplace-2256j" Dec 06 06:44:58 crc kubenswrapper[4958]: I1206 06:44:58.582431 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40418e6f-3340-43cd-8073-e54eabe96340-catalog-content\") pod \"redhat-marketplace-2256j\" (UID: \"40418e6f-3340-43cd-8073-e54eabe96340\") " pod="openshift-marketplace/redhat-marketplace-2256j" Dec 06 06:44:58 crc kubenswrapper[4958]: I1206 06:44:58.603236 4958 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-9mqtf\" (UniqueName: \"kubernetes.io/projected/40418e6f-3340-43cd-8073-e54eabe96340-kube-api-access-9mqtf\") pod \"redhat-marketplace-2256j\" (UID: \"40418e6f-3340-43cd-8073-e54eabe96340\") " pod="openshift-marketplace/redhat-marketplace-2256j" Dec 06 06:44:58 crc kubenswrapper[4958]: I1206 06:44:58.709526 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2256j" Dec 06 06:44:59 crc kubenswrapper[4958]: I1206 06:44:59.196332 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2256j"] Dec 06 06:44:59 crc kubenswrapper[4958]: I1206 06:44:59.466408 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2256j" event={"ID":"40418e6f-3340-43cd-8073-e54eabe96340","Type":"ContainerStarted","Data":"9c57f7aa1b7e0a4551d48bd48472f1ef67460d3a7e822f16ad1c52b218a54810"} Dec 06 06:45:00 crc kubenswrapper[4958]: I1206 06:45:00.192695 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416725-mxfd7"] Dec 06 06:45:00 crc kubenswrapper[4958]: I1206 06:45:00.194859 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416725-mxfd7" Dec 06 06:45:00 crc kubenswrapper[4958]: I1206 06:45:00.198655 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 06 06:45:00 crc kubenswrapper[4958]: I1206 06:45:00.198688 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 06 06:45:00 crc kubenswrapper[4958]: I1206 06:45:00.218085 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzvcq\" (UniqueName: \"kubernetes.io/projected/254444a5-8159-441c-922e-7a6751e0f1d1-kube-api-access-pzvcq\") pod \"collect-profiles-29416725-mxfd7\" (UID: \"254444a5-8159-441c-922e-7a6751e0f1d1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416725-mxfd7" Dec 06 06:45:00 crc kubenswrapper[4958]: I1206 06:45:00.218515 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/254444a5-8159-441c-922e-7a6751e0f1d1-config-volume\") pod \"collect-profiles-29416725-mxfd7\" (UID: \"254444a5-8159-441c-922e-7a6751e0f1d1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416725-mxfd7" Dec 06 06:45:00 crc kubenswrapper[4958]: I1206 06:45:00.218596 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/254444a5-8159-441c-922e-7a6751e0f1d1-secret-volume\") pod \"collect-profiles-29416725-mxfd7\" (UID: \"254444a5-8159-441c-922e-7a6751e0f1d1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416725-mxfd7" Dec 06 06:45:00 crc kubenswrapper[4958]: I1206 06:45:00.228065 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416725-mxfd7"] Dec 06 06:45:00 crc kubenswrapper[4958]: I1206 06:45:00.320618 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/254444a5-8159-441c-922e-7a6751e0f1d1-config-volume\") pod \"collect-profiles-29416725-mxfd7\" (UID: \"254444a5-8159-441c-922e-7a6751e0f1d1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416725-mxfd7" Dec 06 06:45:00 crc kubenswrapper[4958]: I1206 06:45:00.320696 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/254444a5-8159-441c-922e-7a6751e0f1d1-secret-volume\") pod \"collect-profiles-29416725-mxfd7\" (UID: \"254444a5-8159-441c-922e-7a6751e0f1d1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416725-mxfd7" Dec 06 06:45:00 crc kubenswrapper[4958]: I1206 06:45:00.320760 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzvcq\" (UniqueName: \"kubernetes.io/projected/254444a5-8159-441c-922e-7a6751e0f1d1-kube-api-access-pzvcq\") pod \"collect-profiles-29416725-mxfd7\" (UID: \"254444a5-8159-441c-922e-7a6751e0f1d1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416725-mxfd7" Dec 06 06:45:00 crc kubenswrapper[4958]: I1206 06:45:00.321733 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/254444a5-8159-441c-922e-7a6751e0f1d1-config-volume\") pod \"collect-profiles-29416725-mxfd7\" (UID: \"254444a5-8159-441c-922e-7a6751e0f1d1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416725-mxfd7" Dec 06 06:45:00 crc kubenswrapper[4958]: I1206 06:45:00.329771 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/254444a5-8159-441c-922e-7a6751e0f1d1-secret-volume\") pod \"collect-profiles-29416725-mxfd7\" (UID: \"254444a5-8159-441c-922e-7a6751e0f1d1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416725-mxfd7" Dec 06 06:45:00 crc kubenswrapper[4958]: I1206 06:45:00.338106 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzvcq\" (UniqueName: \"kubernetes.io/projected/254444a5-8159-441c-922e-7a6751e0f1d1-kube-api-access-pzvcq\") pod \"collect-profiles-29416725-mxfd7\" (UID: \"254444a5-8159-441c-922e-7a6751e0f1d1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416725-mxfd7" Dec 06 06:45:00 crc kubenswrapper[4958]: I1206 06:45:00.483595 4958 generic.go:334] "Generic (PLEG): container finished" podID="40418e6f-3340-43cd-8073-e54eabe96340" containerID="8e5fe15600c442c13ae9a37a4ce5532979c264c352c991555c61916c251b6946" exitCode=0 Dec 06 06:45:00 crc kubenswrapper[4958]: I1206 06:45:00.483639 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2256j" event={"ID":"40418e6f-3340-43cd-8073-e54eabe96340","Type":"ContainerDied","Data":"8e5fe15600c442c13ae9a37a4ce5532979c264c352c991555c61916c251b6946"} Dec 06 06:45:00 crc kubenswrapper[4958]: I1206 06:45:00.527826 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416725-mxfd7" Dec 06 06:45:01 crc kubenswrapper[4958]: I1206 06:45:01.000682 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416725-mxfd7"] Dec 06 06:45:01 crc kubenswrapper[4958]: W1206 06:45:01.002086 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod254444a5_8159_441c_922e_7a6751e0f1d1.slice/crio-0766916af7e0d3d293b5268af668f20e6f164a113e8a05f4ceb5b80d40b45cf5 WatchSource:0}: Error finding container 0766916af7e0d3d293b5268af668f20e6f164a113e8a05f4ceb5b80d40b45cf5: Status 404 returned error can't find the container with id 0766916af7e0d3d293b5268af668f20e6f164a113e8a05f4ceb5b80d40b45cf5 Dec 06 06:45:01 crc kubenswrapper[4958]: I1206 06:45:01.492968 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416725-mxfd7" event={"ID":"254444a5-8159-441c-922e-7a6751e0f1d1","Type":"ContainerStarted","Data":"0766916af7e0d3d293b5268af668f20e6f164a113e8a05f4ceb5b80d40b45cf5"} Dec 06 06:45:03 crc kubenswrapper[4958]: I1206 06:45:03.514595 4958 generic.go:334] "Generic (PLEG): container finished" podID="254444a5-8159-441c-922e-7a6751e0f1d1" containerID="1a4b4813652ad6b8bfb60aa2eec11d327ca2373e00d860705dd6f06e9eb2b536" exitCode=0 Dec 06 06:45:03 crc kubenswrapper[4958]: I1206 06:45:03.514678 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416725-mxfd7" event={"ID":"254444a5-8159-441c-922e-7a6751e0f1d1","Type":"ContainerDied","Data":"1a4b4813652ad6b8bfb60aa2eec11d327ca2373e00d860705dd6f06e9eb2b536"} Dec 06 06:45:04 crc kubenswrapper[4958]: I1206 06:45:04.525266 4958 generic.go:334] "Generic (PLEG): container finished" podID="40418e6f-3340-43cd-8073-e54eabe96340" containerID="b499559ea06d0f2e7fb0cdf5dfd846dc293e0d9c19bcd6334aa3e0b56cacacc2" exitCode=0 Dec 06 06:45:04 crc kubenswrapper[4958]: I1206 06:45:04.525322 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2256j" event={"ID":"40418e6f-3340-43cd-8073-e54eabe96340","Type":"ContainerDied","Data":"b499559ea06d0f2e7fb0cdf5dfd846dc293e0d9c19bcd6334aa3e0b56cacacc2"} Dec 06 06:45:05 crc kubenswrapper[4958]: I1206 06:45:05.348890 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416725-mxfd7" Dec 06 06:45:05 crc kubenswrapper[4958]: I1206 06:45:05.535016 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pzvcq\" (UniqueName: \"kubernetes.io/projected/254444a5-8159-441c-922e-7a6751e0f1d1-kube-api-access-pzvcq\") pod \"254444a5-8159-441c-922e-7a6751e0f1d1\" (UID: \"254444a5-8159-441c-922e-7a6751e0f1d1\") " Dec 06 06:45:05 crc kubenswrapper[4958]: I1206 06:45:05.535149 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/254444a5-8159-441c-922e-7a6751e0f1d1-config-volume\") pod \"254444a5-8159-441c-922e-7a6751e0f1d1\" (UID: \"254444a5-8159-441c-922e-7a6751e0f1d1\") " Dec 06 06:45:05 crc kubenswrapper[4958]: I1206 06:45:05.535388 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/254444a5-8159-441c-922e-7a6751e0f1d1-secret-volume\") pod \"254444a5-8159-441c-922e-7a6751e0f1d1\" (UID: \"254444a5-8159-441c-922e-7a6751e0f1d1\") " Dec 06 06:45:05 crc kubenswrapper[4958]: I1206 06:45:05.535899 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/254444a5-8159-441c-922e-7a6751e0f1d1-config-volume" (OuterVolumeSpecName: "config-volume") pod "254444a5-8159-441c-922e-7a6751e0f1d1" (UID: "254444a5-8159-441c-922e-7a6751e0f1d1"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 06:45:05 crc kubenswrapper[4958]: I1206 06:45:05.538909 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416725-mxfd7" event={"ID":"254444a5-8159-441c-922e-7a6751e0f1d1","Type":"ContainerDied","Data":"0766916af7e0d3d293b5268af668f20e6f164a113e8a05f4ceb5b80d40b45cf5"} Dec 06 06:45:05 crc kubenswrapper[4958]: I1206 06:45:05.538954 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0766916af7e0d3d293b5268af668f20e6f164a113e8a05f4ceb5b80d40b45cf5" Dec 06 06:45:05 crc kubenswrapper[4958]: I1206 06:45:05.538989 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416725-mxfd7" Dec 06 06:45:05 crc kubenswrapper[4958]: I1206 06:45:05.541495 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/254444a5-8159-441c-922e-7a6751e0f1d1-kube-api-access-pzvcq" (OuterVolumeSpecName: "kube-api-access-pzvcq") pod "254444a5-8159-441c-922e-7a6751e0f1d1" (UID: "254444a5-8159-441c-922e-7a6751e0f1d1"). InnerVolumeSpecName "kube-api-access-pzvcq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:45:05 crc kubenswrapper[4958]: I1206 06:45:05.542028 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/254444a5-8159-441c-922e-7a6751e0f1d1-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "254444a5-8159-441c-922e-7a6751e0f1d1" (UID: "254444a5-8159-441c-922e-7a6751e0f1d1"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 06:45:05 crc kubenswrapper[4958]: I1206 06:45:05.638264 4958 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/254444a5-8159-441c-922e-7a6751e0f1d1-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 06 06:45:05 crc kubenswrapper[4958]: I1206 06:45:05.638316 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pzvcq\" (UniqueName: \"kubernetes.io/projected/254444a5-8159-441c-922e-7a6751e0f1d1-kube-api-access-pzvcq\") on node \"crc\" DevicePath \"\"" Dec 06 06:45:05 crc kubenswrapper[4958]: I1206 06:45:05.638329 4958 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/254444a5-8159-441c-922e-7a6751e0f1d1-config-volume\") on node \"crc\" DevicePath \"\"" Dec 06 06:45:05 crc kubenswrapper[4958]: I1206 06:45:05.762241 4958 scope.go:117] "RemoveContainer" containerID="5d6c31737106f2594603f3f7564d76a1ec493993acbac9ea617bdb25ef714e65" Dec 06 06:45:05 crc kubenswrapper[4958]: E1206 06:45:05.762774 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 06:45:06 crc kubenswrapper[4958]: I1206 06:45:06.492793 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416680-9lz9p"] Dec 06 06:45:06 crc kubenswrapper[4958]: I1206 06:45:06.514957 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416680-9lz9p"] Dec 06 06:45:06 crc kubenswrapper[4958]: I1206 06:45:06.565592 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2256j" event={"ID":"40418e6f-3340-43cd-8073-e54eabe96340","Type":"ContainerStarted","Data":"7f838fb117b7821acb4433bb96d53d58235a6839304122e81002c233cb69e480"} Dec 06 06:45:06 crc kubenswrapper[4958]: I1206 06:45:06.604475 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2256j" podStartSLOduration=3.464094984 podStartE2EDuration="8.604451621s" podCreationTimestamp="2025-12-06 06:44:58 +0000 UTC" firstStartedPulling="2025-12-06 06:45:00.488313277 +0000 UTC m=+4611.022084040" lastFinishedPulling="2025-12-06 06:45:05.628669914 +0000 UTC m=+4616.162440677" observedRunningTime="2025-12-06 06:45:06.598425399 +0000 UTC m=+4617.132196162" watchObservedRunningTime="2025-12-06 06:45:06.604451621 +0000 UTC m=+4617.138222384" Dec 06 06:45:07 crc kubenswrapper[4958]: I1206 06:45:07.774434 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bafcdafc-d17c-4271-ad12-0a8fed0a33cc" path="/var/lib/kubelet/pods/bafcdafc-d17c-4271-ad12-0a8fed0a33cc/volumes" Dec 06 06:45:08 crc kubenswrapper[4958]: I1206 06:45:08.710060 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2256j" Dec 06 06:45:08 crc kubenswrapper[4958]: I1206 06:45:08.710573 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2256j" Dec 06 06:45:08 crc kubenswrapper[4958]: I1206 
06:45:08.757153 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2256j" Dec 06 06:45:10 crc kubenswrapper[4958]: I1206 06:45:10.651224 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2256j" Dec 06 06:45:10 crc kubenswrapper[4958]: I1206 06:45:10.702784 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2256j"] Dec 06 06:45:12 crc kubenswrapper[4958]: I1206 06:45:12.616085 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-2256j" podUID="40418e6f-3340-43cd-8073-e54eabe96340" containerName="registry-server" containerID="cri-o://7f838fb117b7821acb4433bb96d53d58235a6839304122e81002c233cb69e480" gracePeriod=2 Dec 06 06:45:13 crc kubenswrapper[4958]: I1206 06:45:13.627606 4958 generic.go:334] "Generic (PLEG): container finished" podID="40418e6f-3340-43cd-8073-e54eabe96340" containerID="7f838fb117b7821acb4433bb96d53d58235a6839304122e81002c233cb69e480" exitCode=0 Dec 06 06:45:13 crc kubenswrapper[4958]: I1206 06:45:13.627719 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2256j" event={"ID":"40418e6f-3340-43cd-8073-e54eabe96340","Type":"ContainerDied","Data":"7f838fb117b7821acb4433bb96d53d58235a6839304122e81002c233cb69e480"} Dec 06 06:45:15 crc kubenswrapper[4958]: I1206 06:45:15.644504 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2256j" event={"ID":"40418e6f-3340-43cd-8073-e54eabe96340","Type":"ContainerDied","Data":"9c57f7aa1b7e0a4551d48bd48472f1ef67460d3a7e822f16ad1c52b218a54810"} Dec 06 06:45:15 crc kubenswrapper[4958]: I1206 06:45:15.645070 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9c57f7aa1b7e0a4551d48bd48472f1ef67460d3a7e822f16ad1c52b218a54810" Dec 06 06:45:16 crc kubenswrapper[4958]: I1206 06:45:16.080939 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2256j" Dec 06 06:45:16 crc kubenswrapper[4958]: I1206 06:45:16.259314 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40418e6f-3340-43cd-8073-e54eabe96340-utilities\") pod \"40418e6f-3340-43cd-8073-e54eabe96340\" (UID: \"40418e6f-3340-43cd-8073-e54eabe96340\") " Dec 06 06:45:16 crc kubenswrapper[4958]: I1206 06:45:16.259743 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40418e6f-3340-43cd-8073-e54eabe96340-catalog-content\") pod \"40418e6f-3340-43cd-8073-e54eabe96340\" (UID: \"40418e6f-3340-43cd-8073-e54eabe96340\") " Dec 06 06:45:16 crc kubenswrapper[4958]: I1206 06:45:16.259877 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9mqtf\" (UniqueName: \"kubernetes.io/projected/40418e6f-3340-43cd-8073-e54eabe96340-kube-api-access-9mqtf\") pod \"40418e6f-3340-43cd-8073-e54eabe96340\" (UID: \"40418e6f-3340-43cd-8073-e54eabe96340\") " Dec 06 06:45:16 crc kubenswrapper[4958]: I1206 06:45:16.260720 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40418e6f-3340-43cd-8073-e54eabe96340-utilities" (OuterVolumeSpecName: "utilities") pod "40418e6f-3340-43cd-8073-e54eabe96340" (UID: "40418e6f-3340-43cd-8073-e54eabe96340"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:45:16 crc kubenswrapper[4958]: I1206 06:45:16.272796 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40418e6f-3340-43cd-8073-e54eabe96340-kube-api-access-9mqtf" (OuterVolumeSpecName: "kube-api-access-9mqtf") pod "40418e6f-3340-43cd-8073-e54eabe96340" (UID: "40418e6f-3340-43cd-8073-e54eabe96340"). InnerVolumeSpecName "kube-api-access-9mqtf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:45:16 crc kubenswrapper[4958]: I1206 06:45:16.287263 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40418e6f-3340-43cd-8073-e54eabe96340-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "40418e6f-3340-43cd-8073-e54eabe96340" (UID: "40418e6f-3340-43cd-8073-e54eabe96340"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:45:16 crc kubenswrapper[4958]: I1206 06:45:16.361966 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9mqtf\" (UniqueName: \"kubernetes.io/projected/40418e6f-3340-43cd-8073-e54eabe96340-kube-api-access-9mqtf\") on node \"crc\" DevicePath \"\"" Dec 06 06:45:16 crc kubenswrapper[4958]: I1206 06:45:16.362002 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40418e6f-3340-43cd-8073-e54eabe96340-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 06:45:16 crc kubenswrapper[4958]: I1206 06:45:16.362012 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40418e6f-3340-43cd-8073-e54eabe96340-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 06:45:16 crc kubenswrapper[4958]: I1206 06:45:16.654466 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2256j" Dec 06 06:45:16 crc kubenswrapper[4958]: I1206 06:45:16.698291 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2256j"] Dec 06 06:45:16 crc kubenswrapper[4958]: I1206 06:45:16.715751 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-2256j"] Dec 06 06:45:16 crc kubenswrapper[4958]: I1206 06:45:16.762672 4958 scope.go:117] "RemoveContainer" containerID="5d6c31737106f2594603f3f7564d76a1ec493993acbac9ea617bdb25ef714e65" Dec 06 06:45:16 crc kubenswrapper[4958]: E1206 06:45:16.763118 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 06:45:17 crc kubenswrapper[4958]: I1206 06:45:17.772748 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40418e6f-3340-43cd-8073-e54eabe96340" path="/var/lib/kubelet/pods/40418e6f-3340-43cd-8073-e54eabe96340/volumes" Dec 06 06:45:24 crc kubenswrapper[4958]: I1206 06:45:24.309745 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qwzhh"] Dec 06 06:45:24 crc kubenswrapper[4958]: E1206 06:45:24.310680 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="254444a5-8159-441c-922e-7a6751e0f1d1" containerName="collect-profiles" Dec 06 06:45:24 crc kubenswrapper[4958]: I1206 06:45:24.310693 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="254444a5-8159-441c-922e-7a6751e0f1d1" containerName="collect-profiles" Dec 06 06:45:24 crc kubenswrapper[4958]: E1206 06:45:24.310723 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40418e6f-3340-43cd-8073-e54eabe96340" containerName="registry-server" Dec 06 06:45:24 crc kubenswrapper[4958]: I1206 06:45:24.310729 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="40418e6f-3340-43cd-8073-e54eabe96340" containerName="registry-server" Dec 06 06:45:24 crc kubenswrapper[4958]: E1206 06:45:24.310745 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40418e6f-3340-43cd-8073-e54eabe96340" containerName="extract-content" Dec 06 06:45:24 crc kubenswrapper[4958]: I1206 06:45:24.310751 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="40418e6f-3340-43cd-8073-e54eabe96340" containerName="extract-content" Dec 06 06:45:24 crc kubenswrapper[4958]: E1206 06:45:24.310763 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40418e6f-3340-43cd-8073-e54eabe96340" containerName="extract-utilities" Dec 06 06:45:24 crc kubenswrapper[4958]: I1206 06:45:24.310768 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="40418e6f-3340-43cd-8073-e54eabe96340" containerName="extract-utilities" Dec 06 06:45:24 crc kubenswrapper[4958]: I1206 06:45:24.310950 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="254444a5-8159-441c-922e-7a6751e0f1d1" containerName="collect-profiles" Dec 06 06:45:24 crc kubenswrapper[4958]: I1206 06:45:24.310976 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="40418e6f-3340-43cd-8073-e54eabe96340" containerName="registry-server" Dec 06 06:45:24 crc 
kubenswrapper[4958]: I1206 06:45:24.312293 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qwzhh" Dec 06 06:45:24 crc kubenswrapper[4958]: I1206 06:45:24.322823 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qwzhh"] Dec 06 06:45:24 crc kubenswrapper[4958]: I1206 06:45:24.447896 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a7a0f54-59bd-42a8-bc49-e7d93b012493-utilities\") pod \"certified-operators-qwzhh\" (UID: \"9a7a0f54-59bd-42a8-bc49-e7d93b012493\") " pod="openshift-marketplace/certified-operators-qwzhh" Dec 06 06:45:24 crc kubenswrapper[4958]: I1206 06:45:24.447969 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a7a0f54-59bd-42a8-bc49-e7d93b012493-catalog-content\") pod \"certified-operators-qwzhh\" (UID: \"9a7a0f54-59bd-42a8-bc49-e7d93b012493\") " pod="openshift-marketplace/certified-operators-qwzhh" Dec 06 06:45:24 crc kubenswrapper[4958]: I1206 06:45:24.448045 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqz2p\" (UniqueName: \"kubernetes.io/projected/9a7a0f54-59bd-42a8-bc49-e7d93b012493-kube-api-access-fqz2p\") pod \"certified-operators-qwzhh\" (UID: \"9a7a0f54-59bd-42a8-bc49-e7d93b012493\") " pod="openshift-marketplace/certified-operators-qwzhh" Dec 06 06:45:24 crc kubenswrapper[4958]: I1206 06:45:24.549981 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqz2p\" (UniqueName: \"kubernetes.io/projected/9a7a0f54-59bd-42a8-bc49-e7d93b012493-kube-api-access-fqz2p\") pod \"certified-operators-qwzhh\" (UID: \"9a7a0f54-59bd-42a8-bc49-e7d93b012493\") " pod="openshift-marketplace/certified-operators-qwzhh" Dec 06 06:45:24 crc kubenswrapper[4958]: I1206 06:45:24.550224 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a7a0f54-59bd-42a8-bc49-e7d93b012493-utilities\") pod \"certified-operators-qwzhh\" (UID: \"9a7a0f54-59bd-42a8-bc49-e7d93b012493\") " pod="openshift-marketplace/certified-operators-qwzhh" Dec 06 06:45:24 crc kubenswrapper[4958]: I1206 06:45:24.550272 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a7a0f54-59bd-42a8-bc49-e7d93b012493-catalog-content\") pod \"certified-operators-qwzhh\" (UID: \"9a7a0f54-59bd-42a8-bc49-e7d93b012493\") " pod="openshift-marketplace/certified-operators-qwzhh" Dec 06 06:45:24 crc kubenswrapper[4958]: I1206 06:45:24.550878 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a7a0f54-59bd-42a8-bc49-e7d93b012493-catalog-content\") pod \"certified-operators-qwzhh\" (UID: \"9a7a0f54-59bd-42a8-bc49-e7d93b012493\") " pod="openshift-marketplace/certified-operators-qwzhh" Dec 06 06:45:24 crc kubenswrapper[4958]: I1206 06:45:24.551013 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a7a0f54-59bd-42a8-bc49-e7d93b012493-utilities\") pod \"certified-operators-qwzhh\" (UID: \"9a7a0f54-59bd-42a8-bc49-e7d93b012493\") " pod="openshift-marketplace/certified-operators-qwzhh" Dec 
06 06:45:24 crc kubenswrapper[4958]: I1206 06:45:24.573547 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqz2p\" (UniqueName: \"kubernetes.io/projected/9a7a0f54-59bd-42a8-bc49-e7d93b012493-kube-api-access-fqz2p\") pod \"certified-operators-qwzhh\" (UID: \"9a7a0f54-59bd-42a8-bc49-e7d93b012493\") " pod="openshift-marketplace/certified-operators-qwzhh" Dec 06 06:45:24 crc kubenswrapper[4958]: I1206 06:45:24.642682 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qwzhh" Dec 06 06:45:25 crc kubenswrapper[4958]: I1206 06:45:25.212425 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qwzhh"] Dec 06 06:45:25 crc kubenswrapper[4958]: I1206 06:45:25.740004 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qwzhh" event={"ID":"9a7a0f54-59bd-42a8-bc49-e7d93b012493","Type":"ContainerStarted","Data":"6be22bc5e37cd3954c3d2ee1dba5d21b14e33712a38de35e3bf48cabbc82c645"} Dec 06 06:45:26 crc kubenswrapper[4958]: I1206 06:45:26.751931 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qwzhh" event={"ID":"9a7a0f54-59bd-42a8-bc49-e7d93b012493","Type":"ContainerStarted","Data":"d7e7004cbcbd208da1a50d518c171bc9371242da0551ee5cdf0d33bec52a5e77"} Dec 06 06:45:27 crc kubenswrapper[4958]: I1206 06:45:27.763190 4958 generic.go:334] "Generic (PLEG): container finished" podID="9a7a0f54-59bd-42a8-bc49-e7d93b012493" containerID="d7e7004cbcbd208da1a50d518c171bc9371242da0551ee5cdf0d33bec52a5e77" exitCode=0 Dec 06 06:45:27 crc kubenswrapper[4958]: I1206 06:45:27.782596 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qwzhh" event={"ID":"9a7a0f54-59bd-42a8-bc49-e7d93b012493","Type":"ContainerDied","Data":"d7e7004cbcbd208da1a50d518c171bc9371242da0551ee5cdf0d33bec52a5e77"} Dec 06 06:45:30 crc kubenswrapper[4958]: I1206 06:45:30.762404 4958 scope.go:117] "RemoveContainer" containerID="5d6c31737106f2594603f3f7564d76a1ec493993acbac9ea617bdb25ef714e65" Dec 06 06:45:30 crc kubenswrapper[4958]: E1206 06:45:30.762665 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 06:45:32 crc kubenswrapper[4958]: I1206 06:45:32.634020 4958 scope.go:117] "RemoveContainer" containerID="59527b2596c4b6f68867b7891527a8d54a6151d269013e0563b724d2d390ee52" Dec 06 06:45:38 crc kubenswrapper[4958]: I1206 06:45:38.879878 4958 generic.go:334] "Generic (PLEG): container finished" podID="9a7a0f54-59bd-42a8-bc49-e7d93b012493" containerID="d9c51b317de1b7af5163c977f828afb9f55d5b156718d9be34ba84989ad6f878" exitCode=0 Dec 06 06:45:38 crc kubenswrapper[4958]: I1206 06:45:38.879941 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qwzhh" event={"ID":"9a7a0f54-59bd-42a8-bc49-e7d93b012493","Type":"ContainerDied","Data":"d9c51b317de1b7af5163c977f828afb9f55d5b156718d9be34ba84989ad6f878"} Dec 06 06:45:41 crc kubenswrapper[4958]: I1206 06:45:41.762888 4958 scope.go:117] "RemoveContainer" 
containerID="5d6c31737106f2594603f3f7564d76a1ec493993acbac9ea617bdb25ef714e65" Dec 06 06:45:41 crc kubenswrapper[4958]: E1206 06:45:41.763515 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 06:45:48 crc kubenswrapper[4958]: I1206 06:45:48.995666 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qwzhh" event={"ID":"9a7a0f54-59bd-42a8-bc49-e7d93b012493","Type":"ContainerStarted","Data":"1547b2678a77e58fccf682207ea9832e64dba7c298dbad913c41dc76040635c8"} Dec 06 06:45:50 crc kubenswrapper[4958]: I1206 06:45:50.025754 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qwzhh" podStartSLOduration=14.441733995 podStartE2EDuration="26.025733353s" podCreationTimestamp="2025-12-06 06:45:24 +0000 UTC" firstStartedPulling="2025-12-06 06:45:27.765514579 +0000 UTC m=+4638.299285342" lastFinishedPulling="2025-12-06 06:45:39.349513937 +0000 UTC m=+4649.883284700" observedRunningTime="2025-12-06 06:45:50.02412101 +0000 UTC m=+4660.557891793" watchObservedRunningTime="2025-12-06 06:45:50.025733353 +0000 UTC m=+4660.559504116" Dec 06 06:45:54 crc kubenswrapper[4958]: I1206 06:45:54.642764 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qwzhh" Dec 06 06:45:54 crc kubenswrapper[4958]: I1206 06:45:54.644488 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qwzhh" Dec 06 06:45:54 crc kubenswrapper[4958]: I1206 06:45:54.713709 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-qwzhh" Dec 06 06:45:54 crc kubenswrapper[4958]: I1206 06:45:54.763178 4958 scope.go:117] "RemoveContainer" containerID="5d6c31737106f2594603f3f7564d76a1ec493993acbac9ea617bdb25ef714e65" Dec 06 06:45:54 crc kubenswrapper[4958]: E1206 06:45:54.763464 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 06:45:55 crc kubenswrapper[4958]: I1206 06:45:55.099163 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qwzhh" Dec 06 06:45:55 crc kubenswrapper[4958]: I1206 06:45:55.515071 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qwzhh"] Dec 06 06:45:57 crc kubenswrapper[4958]: I1206 06:45:57.085937 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-qwzhh" podUID="9a7a0f54-59bd-42a8-bc49-e7d93b012493" containerName="registry-server" containerID="cri-o://1547b2678a77e58fccf682207ea9832e64dba7c298dbad913c41dc76040635c8" gracePeriod=2 Dec 06 06:45:59 crc kubenswrapper[4958]: 
I1206 06:45:59.115461 4958 generic.go:334] "Generic (PLEG): container finished" podID="9a7a0f54-59bd-42a8-bc49-e7d93b012493" containerID="1547b2678a77e58fccf682207ea9832e64dba7c298dbad913c41dc76040635c8" exitCode=0 Dec 06 06:45:59 crc kubenswrapper[4958]: I1206 06:45:59.115551 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qwzhh" event={"ID":"9a7a0f54-59bd-42a8-bc49-e7d93b012493","Type":"ContainerDied","Data":"1547b2678a77e58fccf682207ea9832e64dba7c298dbad913c41dc76040635c8"} Dec 06 06:45:59 crc kubenswrapper[4958]: I1206 06:45:59.598716 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qwzhh" Dec 06 06:45:59 crc kubenswrapper[4958]: I1206 06:45:59.690374 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqz2p\" (UniqueName: \"kubernetes.io/projected/9a7a0f54-59bd-42a8-bc49-e7d93b012493-kube-api-access-fqz2p\") pod \"9a7a0f54-59bd-42a8-bc49-e7d93b012493\" (UID: \"9a7a0f54-59bd-42a8-bc49-e7d93b012493\") " Dec 06 06:45:59 crc kubenswrapper[4958]: I1206 06:45:59.690502 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a7a0f54-59bd-42a8-bc49-e7d93b012493-catalog-content\") pod \"9a7a0f54-59bd-42a8-bc49-e7d93b012493\" (UID: \"9a7a0f54-59bd-42a8-bc49-e7d93b012493\") " Dec 06 06:45:59 crc kubenswrapper[4958]: I1206 06:45:59.690687 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a7a0f54-59bd-42a8-bc49-e7d93b012493-utilities\") pod \"9a7a0f54-59bd-42a8-bc49-e7d93b012493\" (UID: \"9a7a0f54-59bd-42a8-bc49-e7d93b012493\") " Dec 06 06:45:59 crc kubenswrapper[4958]: I1206 06:45:59.691999 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a7a0f54-59bd-42a8-bc49-e7d93b012493-utilities" (OuterVolumeSpecName: "utilities") pod "9a7a0f54-59bd-42a8-bc49-e7d93b012493" (UID: "9a7a0f54-59bd-42a8-bc49-e7d93b012493"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:45:59 crc kubenswrapper[4958]: I1206 06:45:59.697237 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a7a0f54-59bd-42a8-bc49-e7d93b012493-kube-api-access-fqz2p" (OuterVolumeSpecName: "kube-api-access-fqz2p") pod "9a7a0f54-59bd-42a8-bc49-e7d93b012493" (UID: "9a7a0f54-59bd-42a8-bc49-e7d93b012493"). InnerVolumeSpecName "kube-api-access-fqz2p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:45:59 crc kubenswrapper[4958]: I1206 06:45:59.794440 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a7a0f54-59bd-42a8-bc49-e7d93b012493-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 06:45:59 crc kubenswrapper[4958]: I1206 06:45:59.794487 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqz2p\" (UniqueName: \"kubernetes.io/projected/9a7a0f54-59bd-42a8-bc49-e7d93b012493-kube-api-access-fqz2p\") on node \"crc\" DevicePath \"\"" Dec 06 06:45:59 crc kubenswrapper[4958]: I1206 06:45:59.804418 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a7a0f54-59bd-42a8-bc49-e7d93b012493-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9a7a0f54-59bd-42a8-bc49-e7d93b012493" (UID: "9a7a0f54-59bd-42a8-bc49-e7d93b012493"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:45:59 crc kubenswrapper[4958]: I1206 06:45:59.897663 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a7a0f54-59bd-42a8-bc49-e7d93b012493-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 06:46:00 crc kubenswrapper[4958]: I1206 06:46:00.130269 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qwzhh" event={"ID":"9a7a0f54-59bd-42a8-bc49-e7d93b012493","Type":"ContainerDied","Data":"6be22bc5e37cd3954c3d2ee1dba5d21b14e33712a38de35e3bf48cabbc82c645"} Dec 06 06:46:00 crc kubenswrapper[4958]: I1206 06:46:00.130335 4958 scope.go:117] "RemoveContainer" containerID="1547b2678a77e58fccf682207ea9832e64dba7c298dbad913c41dc76040635c8" Dec 06 06:46:00 crc kubenswrapper[4958]: I1206 06:46:00.130433 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-qwzhh" Dec 06 06:46:00 crc kubenswrapper[4958]: I1206 06:46:00.151881 4958 scope.go:117] "RemoveContainer" containerID="d9c51b317de1b7af5163c977f828afb9f55d5b156718d9be34ba84989ad6f878" Dec 06 06:46:00 crc kubenswrapper[4958]: I1206 06:46:00.202041 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qwzhh"] Dec 06 06:46:00 crc kubenswrapper[4958]: I1206 06:46:00.213721 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-qwzhh"] Dec 06 06:46:00 crc kubenswrapper[4958]: I1206 06:46:00.650599 4958 scope.go:117] "RemoveContainer" containerID="d7e7004cbcbd208da1a50d518c171bc9371242da0551ee5cdf0d33bec52a5e77" Dec 06 06:46:01 crc kubenswrapper[4958]: I1206 06:46:01.786765 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a7a0f54-59bd-42a8-bc49-e7d93b012493" path="/var/lib/kubelet/pods/9a7a0f54-59bd-42a8-bc49-e7d93b012493/volumes" Dec 06 06:46:07 crc kubenswrapper[4958]: I1206 06:46:07.762983 4958 scope.go:117] "RemoveContainer" containerID="5d6c31737106f2594603f3f7564d76a1ec493993acbac9ea617bdb25ef714e65" Dec 06 06:46:07 crc kubenswrapper[4958]: E1206 06:46:07.764015 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 06:46:19 crc kubenswrapper[4958]: I1206 06:46:19.774181 4958 scope.go:117] "RemoveContainer" containerID="5d6c31737106f2594603f3f7564d76a1ec493993acbac9ea617bdb25ef714e65" Dec 06 06:46:19 crc kubenswrapper[4958]: E1206 06:46:19.775105 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 06:46:33 crc kubenswrapper[4958]: I1206 06:46:33.762200 4958 scope.go:117] "RemoveContainer" containerID="5d6c31737106f2594603f3f7564d76a1ec493993acbac9ea617bdb25ef714e65" Dec 06 06:46:33 crc kubenswrapper[4958]: E1206 06:46:33.763077 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 06:46:48 crc kubenswrapper[4958]: I1206 06:46:48.762789 4958 scope.go:117] "RemoveContainer" containerID="5d6c31737106f2594603f3f7564d76a1ec493993acbac9ea617bdb25ef714e65" Dec 06 06:46:48 crc kubenswrapper[4958]: E1206 06:46:48.763532 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 06:46:59 crc kubenswrapper[4958]: I1206 06:46:59.762795 4958 scope.go:117] "RemoveContainer" containerID="5d6c31737106f2594603f3f7564d76a1ec493993acbac9ea617bdb25ef714e65" Dec 06 06:46:59 crc kubenswrapper[4958]: E1206 06:46:59.763751 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 06:47:13 crc kubenswrapper[4958]: I1206 06:47:13.763321 4958 scope.go:117] "RemoveContainer" containerID="5d6c31737106f2594603f3f7564d76a1ec493993acbac9ea617bdb25ef714e65" Dec 06 06:47:13 crc kubenswrapper[4958]: E1206 06:47:13.766797 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 06:47:24 crc kubenswrapper[4958]: I1206 06:47:24.762263 4958 scope.go:117] "RemoveContainer" containerID="5d6c31737106f2594603f3f7564d76a1ec493993acbac9ea617bdb25ef714e65" Dec 06 06:47:24 crc kubenswrapper[4958]: E1206 06:47:24.763353 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 06:47:35 crc kubenswrapper[4958]: I1206 06:47:35.762159 4958 scope.go:117] "RemoveContainer" containerID="5d6c31737106f2594603f3f7564d76a1ec493993acbac9ea617bdb25ef714e65" Dec 06 06:47:35 crc kubenswrapper[4958]: E1206 06:47:35.763098 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 06:47:46 crc kubenswrapper[4958]: I1206 06:47:46.762312 4958 scope.go:117] "RemoveContainer" containerID="5d6c31737106f2594603f3f7564d76a1ec493993acbac9ea617bdb25ef714e65" Dec 06 06:47:46 crc kubenswrapper[4958]: E1206 06:47:46.763247 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" 
podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 06:48:01 crc kubenswrapper[4958]: I1206 06:48:01.761746 4958 scope.go:117] "RemoveContainer" containerID="5d6c31737106f2594603f3f7564d76a1ec493993acbac9ea617bdb25ef714e65" Dec 06 06:48:01 crc kubenswrapper[4958]: E1206 06:48:01.762396 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 06:48:16 crc kubenswrapper[4958]: I1206 06:48:16.763468 4958 scope.go:117] "RemoveContainer" containerID="5d6c31737106f2594603f3f7564d76a1ec493993acbac9ea617bdb25ef714e65" Dec 06 06:48:16 crc kubenswrapper[4958]: E1206 06:48:16.764720 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 06:48:31 crc kubenswrapper[4958]: I1206 06:48:31.763987 4958 scope.go:117] "RemoveContainer" containerID="5d6c31737106f2594603f3f7564d76a1ec493993acbac9ea617bdb25ef714e65" Dec 06 06:48:31 crc kubenswrapper[4958]: E1206 06:48:31.765636 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 06:48:43 crc kubenswrapper[4958]: I1206 06:48:43.762219 4958 scope.go:117] "RemoveContainer" containerID="5d6c31737106f2594603f3f7564d76a1ec493993acbac9ea617bdb25ef714e65" Dec 06 06:48:43 crc kubenswrapper[4958]: E1206 06:48:43.763260 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 06:48:57 crc kubenswrapper[4958]: I1206 06:48:57.770258 4958 scope.go:117] "RemoveContainer" containerID="5d6c31737106f2594603f3f7564d76a1ec493993acbac9ea617bdb25ef714e65" Dec 06 06:48:57 crc kubenswrapper[4958]: E1206 06:48:57.771976 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 06:49:09 crc kubenswrapper[4958]: I1206 06:49:09.768348 4958 scope.go:117] "RemoveContainer" 
containerID="5d6c31737106f2594603f3f7564d76a1ec493993acbac9ea617bdb25ef714e65" Dec 06 06:49:09 crc kubenswrapper[4958]: E1206 06:49:09.769429 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 06:49:23 crc kubenswrapper[4958]: I1206 06:49:23.762432 4958 scope.go:117] "RemoveContainer" containerID="5d6c31737106f2594603f3f7564d76a1ec493993acbac9ea617bdb25ef714e65" Dec 06 06:49:23 crc kubenswrapper[4958]: E1206 06:49:23.763337 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 06:49:38 crc kubenswrapper[4958]: I1206 06:49:38.762701 4958 scope.go:117] "RemoveContainer" containerID="5d6c31737106f2594603f3f7564d76a1ec493993acbac9ea617bdb25ef714e65" Dec 06 06:49:38 crc kubenswrapper[4958]: E1206 06:49:38.763534 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 06:49:50 crc kubenswrapper[4958]: I1206 06:49:50.762405 4958 scope.go:117] "RemoveContainer" containerID="5d6c31737106f2594603f3f7564d76a1ec493993acbac9ea617bdb25ef714e65" Dec 06 06:49:51 crc kubenswrapper[4958]: I1206 06:49:51.451271 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" event={"ID":"c13528c0-da5d-4d55-9155-2c29c33edfc4","Type":"ContainerStarted","Data":"5f5e2752cba5a5e4cf7daeb51d89293c28ce60c80c5b7b92efd3e0a9f1943fa6"} Dec 06 06:51:32 crc kubenswrapper[4958]: I1206 06:51:32.811256 4958 scope.go:117] "RemoveContainer" containerID="b499559ea06d0f2e7fb0cdf5dfd846dc293e0d9c19bcd6334aa3e0b56cacacc2" Dec 06 06:51:32 crc kubenswrapper[4958]: I1206 06:51:32.855275 4958 scope.go:117] "RemoveContainer" containerID="8e5fe15600c442c13ae9a37a4ce5532979c264c352c991555c61916c251b6946" Dec 06 06:51:32 crc kubenswrapper[4958]: I1206 06:51:32.904022 4958 scope.go:117] "RemoveContainer" containerID="7f838fb117b7821acb4433bb96d53d58235a6839304122e81002c233cb69e480" Dec 06 06:52:03 crc kubenswrapper[4958]: I1206 06:52:03.275761 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dxnzn"] Dec 06 06:52:03 crc kubenswrapper[4958]: E1206 06:52:03.276721 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a7a0f54-59bd-42a8-bc49-e7d93b012493" containerName="extract-utilities" Dec 06 06:52:03 crc kubenswrapper[4958]: I1206 06:52:03.276735 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a7a0f54-59bd-42a8-bc49-e7d93b012493" containerName="extract-utilities" Dec 06 06:52:03 
crc kubenswrapper[4958]: E1206 06:52:03.276745 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a7a0f54-59bd-42a8-bc49-e7d93b012493" containerName="registry-server" Dec 06 06:52:03 crc kubenswrapper[4958]: I1206 06:52:03.276751 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a7a0f54-59bd-42a8-bc49-e7d93b012493" containerName="registry-server" Dec 06 06:52:03 crc kubenswrapper[4958]: E1206 06:52:03.276762 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a7a0f54-59bd-42a8-bc49-e7d93b012493" containerName="extract-content" Dec 06 06:52:03 crc kubenswrapper[4958]: I1206 06:52:03.276769 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a7a0f54-59bd-42a8-bc49-e7d93b012493" containerName="extract-content" Dec 06 06:52:03 crc kubenswrapper[4958]: I1206 06:52:03.277020 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a7a0f54-59bd-42a8-bc49-e7d93b012493" containerName="registry-server" Dec 06 06:52:03 crc kubenswrapper[4958]: I1206 06:52:03.279061 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dxnzn" Dec 06 06:52:03 crc kubenswrapper[4958]: I1206 06:52:03.305914 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dxnzn"] Dec 06 06:52:03 crc kubenswrapper[4958]: I1206 06:52:03.464938 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80c8e34b-51ab-4070-aeb2-8e551730b883-utilities\") pod \"redhat-operators-dxnzn\" (UID: \"80c8e34b-51ab-4070-aeb2-8e551730b883\") " pod="openshift-marketplace/redhat-operators-dxnzn" Dec 06 06:52:03 crc kubenswrapper[4958]: I1206 06:52:03.465021 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfpc9\" (UniqueName: \"kubernetes.io/projected/80c8e34b-51ab-4070-aeb2-8e551730b883-kube-api-access-kfpc9\") pod \"redhat-operators-dxnzn\" (UID: \"80c8e34b-51ab-4070-aeb2-8e551730b883\") " pod="openshift-marketplace/redhat-operators-dxnzn" Dec 06 06:52:03 crc kubenswrapper[4958]: I1206 06:52:03.465139 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80c8e34b-51ab-4070-aeb2-8e551730b883-catalog-content\") pod \"redhat-operators-dxnzn\" (UID: \"80c8e34b-51ab-4070-aeb2-8e551730b883\") " pod="openshift-marketplace/redhat-operators-dxnzn" Dec 06 06:52:03 crc kubenswrapper[4958]: I1206 06:52:03.566867 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80c8e34b-51ab-4070-aeb2-8e551730b883-catalog-content\") pod \"redhat-operators-dxnzn\" (UID: \"80c8e34b-51ab-4070-aeb2-8e551730b883\") " pod="openshift-marketplace/redhat-operators-dxnzn" Dec 06 06:52:03 crc kubenswrapper[4958]: I1206 06:52:03.566972 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80c8e34b-51ab-4070-aeb2-8e551730b883-utilities\") pod \"redhat-operators-dxnzn\" (UID: \"80c8e34b-51ab-4070-aeb2-8e551730b883\") " pod="openshift-marketplace/redhat-operators-dxnzn" Dec 06 06:52:03 crc kubenswrapper[4958]: I1206 06:52:03.567027 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfpc9\" (UniqueName: 
\"kubernetes.io/projected/80c8e34b-51ab-4070-aeb2-8e551730b883-kube-api-access-kfpc9\") pod \"redhat-operators-dxnzn\" (UID: \"80c8e34b-51ab-4070-aeb2-8e551730b883\") " pod="openshift-marketplace/redhat-operators-dxnzn" Dec 06 06:52:03 crc kubenswrapper[4958]: I1206 06:52:03.567403 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80c8e34b-51ab-4070-aeb2-8e551730b883-catalog-content\") pod \"redhat-operators-dxnzn\" (UID: \"80c8e34b-51ab-4070-aeb2-8e551730b883\") " pod="openshift-marketplace/redhat-operators-dxnzn" Dec 06 06:52:03 crc kubenswrapper[4958]: I1206 06:52:03.567724 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80c8e34b-51ab-4070-aeb2-8e551730b883-utilities\") pod \"redhat-operators-dxnzn\" (UID: \"80c8e34b-51ab-4070-aeb2-8e551730b883\") " pod="openshift-marketplace/redhat-operators-dxnzn" Dec 06 06:52:03 crc kubenswrapper[4958]: I1206 06:52:03.592308 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfpc9\" (UniqueName: \"kubernetes.io/projected/80c8e34b-51ab-4070-aeb2-8e551730b883-kube-api-access-kfpc9\") pod \"redhat-operators-dxnzn\" (UID: \"80c8e34b-51ab-4070-aeb2-8e551730b883\") " pod="openshift-marketplace/redhat-operators-dxnzn" Dec 06 06:52:03 crc kubenswrapper[4958]: I1206 06:52:03.600932 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dxnzn" Dec 06 06:52:04 crc kubenswrapper[4958]: I1206 06:52:04.070108 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dxnzn"] Dec 06 06:52:04 crc kubenswrapper[4958]: I1206 06:52:04.721990 4958 generic.go:334] "Generic (PLEG): container finished" podID="80c8e34b-51ab-4070-aeb2-8e551730b883" containerID="61a107dcfcb9ab57b56eb87df99eea13bbb17ddcaba5f1339a69c0221cfb1704" exitCode=0 Dec 06 06:52:04 crc kubenswrapper[4958]: I1206 06:52:04.722078 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dxnzn" event={"ID":"80c8e34b-51ab-4070-aeb2-8e551730b883","Type":"ContainerDied","Data":"61a107dcfcb9ab57b56eb87df99eea13bbb17ddcaba5f1339a69c0221cfb1704"} Dec 06 06:52:04 crc kubenswrapper[4958]: I1206 06:52:04.722167 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dxnzn" event={"ID":"80c8e34b-51ab-4070-aeb2-8e551730b883","Type":"ContainerStarted","Data":"ee14ecd006f3b09f3b381dc5ca226a327fdbd99964484154d4047d736affee1a"} Dec 06 06:52:04 crc kubenswrapper[4958]: I1206 06:52:04.724203 4958 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 06 06:52:06 crc kubenswrapper[4958]: I1206 06:52:06.743898 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dxnzn" event={"ID":"80c8e34b-51ab-4070-aeb2-8e551730b883","Type":"ContainerStarted","Data":"10bddc7b060e3134e48f6af0352a902456b8e9243c211f0c2892262af0898872"} Dec 06 06:52:09 crc kubenswrapper[4958]: I1206 06:52:09.781399 4958 generic.go:334] "Generic (PLEG): container finished" podID="80c8e34b-51ab-4070-aeb2-8e551730b883" containerID="10bddc7b060e3134e48f6af0352a902456b8e9243c211f0c2892262af0898872" exitCode=0 Dec 06 06:52:09 crc kubenswrapper[4958]: I1206 06:52:09.782867 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dxnzn" 
event={"ID":"80c8e34b-51ab-4070-aeb2-8e551730b883","Type":"ContainerDied","Data":"10bddc7b060e3134e48f6af0352a902456b8e9243c211f0c2892262af0898872"} Dec 06 06:52:09 crc kubenswrapper[4958]: I1206 06:52:09.866040 4958 patch_prober.go:28] interesting pod/machine-config-daemon-5ktnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 06:52:09 crc kubenswrapper[4958]: I1206 06:52:09.866298 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 06:52:13 crc kubenswrapper[4958]: I1206 06:52:13.822608 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dxnzn" event={"ID":"80c8e34b-51ab-4070-aeb2-8e551730b883","Type":"ContainerStarted","Data":"bd5d3976cd2c733fdac6e23333ba306ddfc8146118e069d4b3c04c3c5c002973"} Dec 06 06:52:13 crc kubenswrapper[4958]: I1206 06:52:13.851829 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dxnzn" podStartSLOduration=2.5686543410000002 podStartE2EDuration="10.851806321s" podCreationTimestamp="2025-12-06 06:52:03 +0000 UTC" firstStartedPulling="2025-12-06 06:52:04.723764938 +0000 UTC m=+5035.257535751" lastFinishedPulling="2025-12-06 06:52:13.006916968 +0000 UTC m=+5043.540687731" observedRunningTime="2025-12-06 06:52:13.846605521 +0000 UTC m=+5044.380376304" watchObservedRunningTime="2025-12-06 06:52:13.851806321 +0000 UTC m=+5044.385577084" Dec 06 06:52:23 crc kubenswrapper[4958]: I1206 06:52:23.602703 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dxnzn" Dec 06 06:52:23 crc kubenswrapper[4958]: I1206 06:52:23.603307 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dxnzn" Dec 06 06:52:23 crc kubenswrapper[4958]: I1206 06:52:23.671994 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dxnzn" Dec 06 06:52:23 crc kubenswrapper[4958]: I1206 06:52:23.955904 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dxnzn" Dec 06 06:52:24 crc kubenswrapper[4958]: I1206 06:52:24.673169 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dxnzn"] Dec 06 06:52:25 crc kubenswrapper[4958]: I1206 06:52:25.932617 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dxnzn" podUID="80c8e34b-51ab-4070-aeb2-8e551730b883" containerName="registry-server" containerID="cri-o://bd5d3976cd2c733fdac6e23333ba306ddfc8146118e069d4b3c04c3c5c002973" gracePeriod=2 Dec 06 06:52:26 crc kubenswrapper[4958]: I1206 06:52:26.387287 4958 util.go:48] "No ready sandbox for pod can be found. 
Dec 06 06:52:26 crc kubenswrapper[4958]: I1206 06:52:26.387287 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dxnzn"
Dec 06 06:52:26 crc kubenswrapper[4958]: I1206 06:52:26.554268 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80c8e34b-51ab-4070-aeb2-8e551730b883-catalog-content\") pod \"80c8e34b-51ab-4070-aeb2-8e551730b883\" (UID: \"80c8e34b-51ab-4070-aeb2-8e551730b883\") "
Dec 06 06:52:26 crc kubenswrapper[4958]: I1206 06:52:26.554534 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80c8e34b-51ab-4070-aeb2-8e551730b883-utilities\") pod \"80c8e34b-51ab-4070-aeb2-8e551730b883\" (UID: \"80c8e34b-51ab-4070-aeb2-8e551730b883\") "
Dec 06 06:52:26 crc kubenswrapper[4958]: I1206 06:52:26.554688 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfpc9\" (UniqueName: \"kubernetes.io/projected/80c8e34b-51ab-4070-aeb2-8e551730b883-kube-api-access-kfpc9\") pod \"80c8e34b-51ab-4070-aeb2-8e551730b883\" (UID: \"80c8e34b-51ab-4070-aeb2-8e551730b883\") "
Dec 06 06:52:26 crc kubenswrapper[4958]: I1206 06:52:26.557053 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80c8e34b-51ab-4070-aeb2-8e551730b883-utilities" (OuterVolumeSpecName: "utilities") pod "80c8e34b-51ab-4070-aeb2-8e551730b883" (UID: "80c8e34b-51ab-4070-aeb2-8e551730b883"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 06 06:52:26 crc kubenswrapper[4958]: I1206 06:52:26.562634 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80c8e34b-51ab-4070-aeb2-8e551730b883-kube-api-access-kfpc9" (OuterVolumeSpecName: "kube-api-access-kfpc9") pod "80c8e34b-51ab-4070-aeb2-8e551730b883" (UID: "80c8e34b-51ab-4070-aeb2-8e551730b883"). InnerVolumeSpecName "kube-api-access-kfpc9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 06:52:26 crc kubenswrapper[4958]: I1206 06:52:26.657088 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/80c8e34b-51ab-4070-aeb2-8e551730b883-utilities\") on node \"crc\" DevicePath \"\""
Dec 06 06:52:26 crc kubenswrapper[4958]: I1206 06:52:26.657130 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfpc9\" (UniqueName: \"kubernetes.io/projected/80c8e34b-51ab-4070-aeb2-8e551730b883-kube-api-access-kfpc9\") on node \"crc\" DevicePath \"\""
Dec 06 06:52:26 crc kubenswrapper[4958]: I1206 06:52:26.670693 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80c8e34b-51ab-4070-aeb2-8e551730b883-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "80c8e34b-51ab-4070-aeb2-8e551730b883" (UID: "80c8e34b-51ab-4070-aeb2-8e551730b883"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 06 06:52:26 crc kubenswrapper[4958]: I1206 06:52:26.759431 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/80c8e34b-51ab-4070-aeb2-8e551730b883-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 06 06:52:26 crc kubenswrapper[4958]: I1206 06:52:26.947330 4958 generic.go:334] "Generic (PLEG): container finished" podID="80c8e34b-51ab-4070-aeb2-8e551730b883" containerID="bd5d3976cd2c733fdac6e23333ba306ddfc8146118e069d4b3c04c3c5c002973" exitCode=0
Dec 06 06:52:26 crc kubenswrapper[4958]: I1206 06:52:26.947387 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dxnzn" event={"ID":"80c8e34b-51ab-4070-aeb2-8e551730b883","Type":"ContainerDied","Data":"bd5d3976cd2c733fdac6e23333ba306ddfc8146118e069d4b3c04c3c5c002973"}
Dec 06 06:52:26 crc kubenswrapper[4958]: I1206 06:52:26.947693 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dxnzn" event={"ID":"80c8e34b-51ab-4070-aeb2-8e551730b883","Type":"ContainerDied","Data":"ee14ecd006f3b09f3b381dc5ca226a327fdbd99964484154d4047d736affee1a"}
Dec 06 06:52:26 crc kubenswrapper[4958]: I1206 06:52:26.947724 4958 scope.go:117] "RemoveContainer" containerID="bd5d3976cd2c733fdac6e23333ba306ddfc8146118e069d4b3c04c3c5c002973"
Dec 06 06:52:26 crc kubenswrapper[4958]: I1206 06:52:26.947434 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dxnzn"
Dec 06 06:52:26 crc kubenswrapper[4958]: I1206 06:52:26.973700 4958 scope.go:117] "RemoveContainer" containerID="10bddc7b060e3134e48f6af0352a902456b8e9243c211f0c2892262af0898872"
Dec 06 06:52:26 crc kubenswrapper[4958]: I1206 06:52:26.985948 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dxnzn"]
Dec 06 06:52:27 crc kubenswrapper[4958]: I1206 06:52:27.005424 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dxnzn"]
Dec 06 06:52:27 crc kubenswrapper[4958]: I1206 06:52:27.012696 4958 scope.go:117] "RemoveContainer" containerID="61a107dcfcb9ab57b56eb87df99eea13bbb17ddcaba5f1339a69c0221cfb1704"
Dec 06 06:52:27 crc kubenswrapper[4958]: I1206 06:52:27.069221 4958 scope.go:117] "RemoveContainer" containerID="bd5d3976cd2c733fdac6e23333ba306ddfc8146118e069d4b3c04c3c5c002973"
Dec 06 06:52:27 crc kubenswrapper[4958]: E1206 06:52:27.070687 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd5d3976cd2c733fdac6e23333ba306ddfc8146118e069d4b3c04c3c5c002973\": container with ID starting with bd5d3976cd2c733fdac6e23333ba306ddfc8146118e069d4b3c04c3c5c002973 not found: ID does not exist" containerID="bd5d3976cd2c733fdac6e23333ba306ddfc8146118e069d4b3c04c3c5c002973"
Dec 06 06:52:27 crc kubenswrapper[4958]: I1206 06:52:27.070731 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd5d3976cd2c733fdac6e23333ba306ddfc8146118e069d4b3c04c3c5c002973"} err="failed to get container status \"bd5d3976cd2c733fdac6e23333ba306ddfc8146118e069d4b3c04c3c5c002973\": rpc error: code = NotFound desc = could not find container \"bd5d3976cd2c733fdac6e23333ba306ddfc8146118e069d4b3c04c3c5c002973\": container with ID starting with bd5d3976cd2c733fdac6e23333ba306ddfc8146118e069d4b3c04c3c5c002973 not found: ID does not exist"
Dec 06 06:52:27 crc kubenswrapper[4958]: I1206 06:52:27.070762 4958 scope.go:117] "RemoveContainer" containerID="10bddc7b060e3134e48f6af0352a902456b8e9243c211f0c2892262af0898872"
Dec 06 06:52:27 crc kubenswrapper[4958]: E1206 06:52:27.071070 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10bddc7b060e3134e48f6af0352a902456b8e9243c211f0c2892262af0898872\": container with ID starting with 10bddc7b060e3134e48f6af0352a902456b8e9243c211f0c2892262af0898872 not found: ID does not exist" containerID="10bddc7b060e3134e48f6af0352a902456b8e9243c211f0c2892262af0898872"
Dec 06 06:52:27 crc kubenswrapper[4958]: I1206 06:52:27.071099 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10bddc7b060e3134e48f6af0352a902456b8e9243c211f0c2892262af0898872"} err="failed to get container status \"10bddc7b060e3134e48f6af0352a902456b8e9243c211f0c2892262af0898872\": rpc error: code = NotFound desc = could not find container \"10bddc7b060e3134e48f6af0352a902456b8e9243c211f0c2892262af0898872\": container with ID starting with 10bddc7b060e3134e48f6af0352a902456b8e9243c211f0c2892262af0898872 not found: ID does not exist"
Dec 06 06:52:27 crc kubenswrapper[4958]: I1206 06:52:27.071117 4958 scope.go:117] "RemoveContainer" containerID="61a107dcfcb9ab57b56eb87df99eea13bbb17ddcaba5f1339a69c0221cfb1704"
Dec 06 06:52:27 crc kubenswrapper[4958]: E1206 06:52:27.071487 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61a107dcfcb9ab57b56eb87df99eea13bbb17ddcaba5f1339a69c0221cfb1704\": container with ID starting with 61a107dcfcb9ab57b56eb87df99eea13bbb17ddcaba5f1339a69c0221cfb1704 not found: ID does not exist" containerID="61a107dcfcb9ab57b56eb87df99eea13bbb17ddcaba5f1339a69c0221cfb1704"
Dec 06 06:52:27 crc kubenswrapper[4958]: I1206 06:52:27.071540 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61a107dcfcb9ab57b56eb87df99eea13bbb17ddcaba5f1339a69c0221cfb1704"} err="failed to get container status \"61a107dcfcb9ab57b56eb87df99eea13bbb17ddcaba5f1339a69c0221cfb1704\": rpc error: code = NotFound desc = could not find container \"61a107dcfcb9ab57b56eb87df99eea13bbb17ddcaba5f1339a69c0221cfb1704\": container with ID starting with 61a107dcfcb9ab57b56eb87df99eea13bbb17ddcaba5f1339a69c0221cfb1704 not found: ID does not exist"
Dec 06 06:52:27 crc kubenswrapper[4958]: I1206 06:52:27.805776 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80c8e34b-51ab-4070-aeb2-8e551730b883" path="/var/lib/kubelet/pods/80c8e34b-51ab-4070-aeb2-8e551730b883/volumes"
Dec 06 06:52:37 crc kubenswrapper[4958]: I1206 06:52:37.083001 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jjlnh"]
Dec 06 06:52:37 crc kubenswrapper[4958]: E1206 06:52:37.084729 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80c8e34b-51ab-4070-aeb2-8e551730b883" containerName="registry-server"
Dec 06 06:52:37 crc kubenswrapper[4958]: I1206 06:52:37.084810 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="80c8e34b-51ab-4070-aeb2-8e551730b883" containerName="registry-server"
Dec 06 06:52:37 crc kubenswrapper[4958]: E1206 06:52:37.084883 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80c8e34b-51ab-4070-aeb2-8e551730b883" containerName="extract-utilities"
Dec 06 06:52:37 crc kubenswrapper[4958]: I1206 06:52:37.084945 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="80c8e34b-51ab-4070-aeb2-8e551730b883" containerName="extract-utilities"
Dec 06 06:52:37 crc kubenswrapper[4958]: E1206 06:52:37.085018 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80c8e34b-51ab-4070-aeb2-8e551730b883" containerName="extract-content"
Dec 06 06:52:37 crc kubenswrapper[4958]: I1206 06:52:37.085072 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="80c8e34b-51ab-4070-aeb2-8e551730b883" containerName="extract-content"
Dec 06 06:52:37 crc kubenswrapper[4958]: I1206 06:52:37.085336 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="80c8e34b-51ab-4070-aeb2-8e551730b883" containerName="registry-server"
Dec 06 06:52:37 crc kubenswrapper[4958]: I1206 06:52:37.087270 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jjlnh"
Dec 06 06:52:37 crc kubenswrapper[4958]: I1206 06:52:37.096823 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jjlnh"]
Dec 06 06:52:37 crc kubenswrapper[4958]: I1206 06:52:37.272177 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8b7c9ac-09fd-4ffb-9959-997d7a17a903-utilities\") pod \"community-operators-jjlnh\" (UID: \"c8b7c9ac-09fd-4ffb-9959-997d7a17a903\") " pod="openshift-marketplace/community-operators-jjlnh"
Dec 06 06:52:37 crc kubenswrapper[4958]: I1206 06:52:37.272593 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8lhk\" (UniqueName: \"kubernetes.io/projected/c8b7c9ac-09fd-4ffb-9959-997d7a17a903-kube-api-access-h8lhk\") pod \"community-operators-jjlnh\" (UID: \"c8b7c9ac-09fd-4ffb-9959-997d7a17a903\") " pod="openshift-marketplace/community-operators-jjlnh"
Dec 06 06:52:37 crc kubenswrapper[4958]: I1206 06:52:37.272802 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8b7c9ac-09fd-4ffb-9959-997d7a17a903-catalog-content\") pod \"community-operators-jjlnh\" (UID: \"c8b7c9ac-09fd-4ffb-9959-997d7a17a903\") " pod="openshift-marketplace/community-operators-jjlnh"
Dec 06 06:52:37 crc kubenswrapper[4958]: I1206 06:52:37.374914 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8b7c9ac-09fd-4ffb-9959-997d7a17a903-utilities\") pod \"community-operators-jjlnh\" (UID: \"c8b7c9ac-09fd-4ffb-9959-997d7a17a903\") " pod="openshift-marketplace/community-operators-jjlnh"
Dec 06 06:52:37 crc kubenswrapper[4958]: I1206 06:52:37.374988 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8lhk\" (UniqueName: \"kubernetes.io/projected/c8b7c9ac-09fd-4ffb-9959-997d7a17a903-kube-api-access-h8lhk\") pod \"community-operators-jjlnh\" (UID: \"c8b7c9ac-09fd-4ffb-9959-997d7a17a903\") " pod="openshift-marketplace/community-operators-jjlnh"
Dec 06 06:52:37 crc kubenswrapper[4958]: I1206 06:52:37.375121 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8b7c9ac-09fd-4ffb-9959-997d7a17a903-catalog-content\") pod \"community-operators-jjlnh\" (UID: \"c8b7c9ac-09fd-4ffb-9959-997d7a17a903\") " pod="openshift-marketplace/community-operators-jjlnh"
Dec 06 06:52:37 crc kubenswrapper[4958]: I1206 06:52:37.375363 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8b7c9ac-09fd-4ffb-9959-997d7a17a903-utilities\") pod \"community-operators-jjlnh\" (UID: \"c8b7c9ac-09fd-4ffb-9959-997d7a17a903\") " pod="openshift-marketplace/community-operators-jjlnh"
Dec 06 06:52:37 crc kubenswrapper[4958]: I1206 06:52:37.375772 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8b7c9ac-09fd-4ffb-9959-997d7a17a903-catalog-content\") pod \"community-operators-jjlnh\" (UID: \"c8b7c9ac-09fd-4ffb-9959-997d7a17a903\") " pod="openshift-marketplace/community-operators-jjlnh"
Dec 06 06:52:37 crc kubenswrapper[4958]: I1206 06:52:37.404275 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8lhk\" (UniqueName: \"kubernetes.io/projected/c8b7c9ac-09fd-4ffb-9959-997d7a17a903-kube-api-access-h8lhk\") pod \"community-operators-jjlnh\" (UID: \"c8b7c9ac-09fd-4ffb-9959-997d7a17a903\") " pod="openshift-marketplace/community-operators-jjlnh"
Dec 06 06:52:37 crc kubenswrapper[4958]: I1206 06:52:37.417149 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jjlnh"
Dec 06 06:52:37 crc kubenswrapper[4958]: I1206 06:52:37.923152 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jjlnh"]
Dec 06 06:52:38 crc kubenswrapper[4958]: I1206 06:52:38.052782 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jjlnh" event={"ID":"c8b7c9ac-09fd-4ffb-9959-997d7a17a903","Type":"ContainerStarted","Data":"767d45c4c2d29e1c6370fb37784b452b00f8a8f03bc724beea13847e8624d269"}
Dec 06 06:52:39 crc kubenswrapper[4958]: I1206 06:52:39.062790 4958 generic.go:334] "Generic (PLEG): container finished" podID="c8b7c9ac-09fd-4ffb-9959-997d7a17a903" containerID="dd38d0b612ab05dea63e7771dfb7bd96300eab6dae44ad6eef7e33b2a80a27a3" exitCode=0
Dec 06 06:52:39 crc kubenswrapper[4958]: I1206 06:52:39.062852 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jjlnh" event={"ID":"c8b7c9ac-09fd-4ffb-9959-997d7a17a903","Type":"ContainerDied","Data":"dd38d0b612ab05dea63e7771dfb7bd96300eab6dae44ad6eef7e33b2a80a27a3"}
Dec 06 06:52:39 crc kubenswrapper[4958]: I1206 06:52:39.865973 4958 patch_prober.go:28] interesting pod/machine-config-daemon-5ktnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 06 06:52:39 crc kubenswrapper[4958]: I1206 06:52:39.866668 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 06 06:52:40 crc kubenswrapper[4958]: I1206 06:52:40.072890 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jjlnh" event={"ID":"c8b7c9ac-09fd-4ffb-9959-997d7a17a903","Type":"ContainerStarted","Data":"091fe812c760e9d82a825d0442dcf9a553df57b1a433d514a00db0a53af339d2"}
Dec 06 06:52:41 crc kubenswrapper[4958]: I1206 06:52:41.088641 4958 generic.go:334] "Generic (PLEG): container finished" podID="c8b7c9ac-09fd-4ffb-9959-997d7a17a903" containerID="091fe812c760e9d82a825d0442dcf9a553df57b1a433d514a00db0a53af339d2" exitCode=0
Dec 06 06:52:41 crc kubenswrapper[4958]: I1206 06:52:41.088708 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jjlnh" event={"ID":"c8b7c9ac-09fd-4ffb-9959-997d7a17a903","Type":"ContainerDied","Data":"091fe812c760e9d82a825d0442dcf9a553df57b1a433d514a00db0a53af339d2"}
Dec 06 06:52:42 crc kubenswrapper[4958]: I1206 06:52:42.100672 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jjlnh" event={"ID":"c8b7c9ac-09fd-4ffb-9959-997d7a17a903","Type":"ContainerStarted","Data":"ab7ea84d2e15bfe40719c487b1c633859ffc87a0dda8ea37cea861fec1a37635"}
Dec 06 06:52:42 crc kubenswrapper[4958]: I1206 06:52:42.131659 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jjlnh" podStartSLOduration=2.603158905 podStartE2EDuration="5.131642s" podCreationTimestamp="2025-12-06 06:52:37 +0000 UTC" firstStartedPulling="2025-12-06 06:52:39.066488356 +0000 UTC m=+5069.600259119" lastFinishedPulling="2025-12-06 06:52:41.594971431 +0000 UTC m=+5072.128742214" observedRunningTime="2025-12-06 06:52:42.117717984 +0000 UTC m=+5072.651488767" watchObservedRunningTime="2025-12-06 06:52:42.131642 +0000 UTC m=+5072.665412763"
Dec 06 06:52:47 crc kubenswrapper[4958]: I1206 06:52:47.418143 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jjlnh"
Dec 06 06:52:47 crc kubenswrapper[4958]: I1206 06:52:47.418738 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jjlnh"
Dec 06 06:52:47 crc kubenswrapper[4958]: I1206 06:52:47.474025 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jjlnh"
Dec 06 06:52:48 crc kubenswrapper[4958]: I1206 06:52:48.199223 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jjlnh"
Dec 06 06:52:48 crc kubenswrapper[4958]: I1206 06:52:48.264687 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jjlnh"]
Dec 06 06:52:50 crc kubenswrapper[4958]: I1206 06:52:50.178829 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-jjlnh" podUID="c8b7c9ac-09fd-4ffb-9959-997d7a17a903" containerName="registry-server" containerID="cri-o://ab7ea84d2e15bfe40719c487b1c633859ffc87a0dda8ea37cea861fec1a37635" gracePeriod=2
Dec 06 06:52:50 crc kubenswrapper[4958]: I1206 06:52:50.633545 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jjlnh"
Dec 06 06:52:50 crc kubenswrapper[4958]: I1206 06:52:50.669837 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8b7c9ac-09fd-4ffb-9959-997d7a17a903-utilities\") pod \"c8b7c9ac-09fd-4ffb-9959-997d7a17a903\" (UID: \"c8b7c9ac-09fd-4ffb-9959-997d7a17a903\") "
Dec 06 06:52:50 crc kubenswrapper[4958]: I1206 06:52:50.670023 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8b7c9ac-09fd-4ffb-9959-997d7a17a903-catalog-content\") pod \"c8b7c9ac-09fd-4ffb-9959-997d7a17a903\" (UID: \"c8b7c9ac-09fd-4ffb-9959-997d7a17a903\") "
Dec 06 06:52:50 crc kubenswrapper[4958]: I1206 06:52:50.670094 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h8lhk\" (UniqueName: \"kubernetes.io/projected/c8b7c9ac-09fd-4ffb-9959-997d7a17a903-kube-api-access-h8lhk\") pod \"c8b7c9ac-09fd-4ffb-9959-997d7a17a903\" (UID: \"c8b7c9ac-09fd-4ffb-9959-997d7a17a903\") "
Dec 06 06:52:50 crc kubenswrapper[4958]: I1206 06:52:50.671461 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8b7c9ac-09fd-4ffb-9959-997d7a17a903-utilities" (OuterVolumeSpecName: "utilities") pod "c8b7c9ac-09fd-4ffb-9959-997d7a17a903" (UID: "c8b7c9ac-09fd-4ffb-9959-997d7a17a903"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 06 06:52:50 crc kubenswrapper[4958]: I1206 06:52:50.677823 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8b7c9ac-09fd-4ffb-9959-997d7a17a903-kube-api-access-h8lhk" (OuterVolumeSpecName: "kube-api-access-h8lhk") pod "c8b7c9ac-09fd-4ffb-9959-997d7a17a903" (UID: "c8b7c9ac-09fd-4ffb-9959-997d7a17a903"). InnerVolumeSpecName "kube-api-access-h8lhk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 06:52:50 crc kubenswrapper[4958]: I1206 06:52:50.772913 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8b7c9ac-09fd-4ffb-9959-997d7a17a903-utilities\") on node \"crc\" DevicePath \"\""
Dec 06 06:52:50 crc kubenswrapper[4958]: I1206 06:52:50.772950 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h8lhk\" (UniqueName: \"kubernetes.io/projected/c8b7c9ac-09fd-4ffb-9959-997d7a17a903-kube-api-access-h8lhk\") on node \"crc\" DevicePath \"\""
Dec 06 06:52:50 crc kubenswrapper[4958]: I1206 06:52:50.800646 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8b7c9ac-09fd-4ffb-9959-997d7a17a903-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c8b7c9ac-09fd-4ffb-9959-997d7a17a903" (UID: "c8b7c9ac-09fd-4ffb-9959-997d7a17a903"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 06 06:52:50 crc kubenswrapper[4958]: I1206 06:52:50.875041 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8b7c9ac-09fd-4ffb-9959-997d7a17a903-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 06 06:52:51 crc kubenswrapper[4958]: I1206 06:52:51.193778 4958 generic.go:334] "Generic (PLEG): container finished" podID="c8b7c9ac-09fd-4ffb-9959-997d7a17a903" containerID="ab7ea84d2e15bfe40719c487b1c633859ffc87a0dda8ea37cea861fec1a37635" exitCode=0
Dec 06 06:52:51 crc kubenswrapper[4958]: I1206 06:52:51.193822 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jjlnh" event={"ID":"c8b7c9ac-09fd-4ffb-9959-997d7a17a903","Type":"ContainerDied","Data":"ab7ea84d2e15bfe40719c487b1c633859ffc87a0dda8ea37cea861fec1a37635"}
Dec 06 06:52:51 crc kubenswrapper[4958]: I1206 06:52:51.193850 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jjlnh" event={"ID":"c8b7c9ac-09fd-4ffb-9959-997d7a17a903","Type":"ContainerDied","Data":"767d45c4c2d29e1c6370fb37784b452b00f8a8f03bc724beea13847e8624d269"}
Dec 06 06:52:51 crc kubenswrapper[4958]: I1206 06:52:51.193871 4958 scope.go:117] "RemoveContainer" containerID="ab7ea84d2e15bfe40719c487b1c633859ffc87a0dda8ea37cea861fec1a37635"
Dec 06 06:52:51 crc kubenswrapper[4958]: I1206 06:52:51.193871 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jjlnh"
Dec 06 06:52:51 crc kubenswrapper[4958]: I1206 06:52:51.238215 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jjlnh"]
Dec 06 06:52:51 crc kubenswrapper[4958]: I1206 06:52:51.239217 4958 scope.go:117] "RemoveContainer" containerID="091fe812c760e9d82a825d0442dcf9a553df57b1a433d514a00db0a53af339d2"
Dec 06 06:52:51 crc kubenswrapper[4958]: I1206 06:52:51.250665 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jjlnh"]
Dec 06 06:52:51 crc kubenswrapper[4958]: I1206 06:52:51.276275 4958 scope.go:117] "RemoveContainer" containerID="dd38d0b612ab05dea63e7771dfb7bd96300eab6dae44ad6eef7e33b2a80a27a3"
Dec 06 06:52:51 crc kubenswrapper[4958]: I1206 06:52:51.318922 4958 scope.go:117] "RemoveContainer" containerID="ab7ea84d2e15bfe40719c487b1c633859ffc87a0dda8ea37cea861fec1a37635"
Dec 06 06:52:51 crc kubenswrapper[4958]: E1206 06:52:51.319319 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab7ea84d2e15bfe40719c487b1c633859ffc87a0dda8ea37cea861fec1a37635\": container with ID starting with ab7ea84d2e15bfe40719c487b1c633859ffc87a0dda8ea37cea861fec1a37635 not found: ID does not exist" containerID="ab7ea84d2e15bfe40719c487b1c633859ffc87a0dda8ea37cea861fec1a37635"
Dec 06 06:52:51 crc kubenswrapper[4958]: I1206 06:52:51.319352 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab7ea84d2e15bfe40719c487b1c633859ffc87a0dda8ea37cea861fec1a37635"} err="failed to get container status \"ab7ea84d2e15bfe40719c487b1c633859ffc87a0dda8ea37cea861fec1a37635\": rpc error: code = NotFound desc = could not find container \"ab7ea84d2e15bfe40719c487b1c633859ffc87a0dda8ea37cea861fec1a37635\": container with ID starting with ab7ea84d2e15bfe40719c487b1c633859ffc87a0dda8ea37cea861fec1a37635 not found: ID does not exist"
Dec 06 06:52:51 crc kubenswrapper[4958]: I1206 06:52:51.319373 4958 scope.go:117] "RemoveContainer" containerID="091fe812c760e9d82a825d0442dcf9a553df57b1a433d514a00db0a53af339d2"
Dec 06 06:52:51 crc kubenswrapper[4958]: E1206 06:52:51.319574 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"091fe812c760e9d82a825d0442dcf9a553df57b1a433d514a00db0a53af339d2\": container with ID starting with 091fe812c760e9d82a825d0442dcf9a553df57b1a433d514a00db0a53af339d2 not found: ID does not exist" containerID="091fe812c760e9d82a825d0442dcf9a553df57b1a433d514a00db0a53af339d2"
Dec 06 06:52:51 crc kubenswrapper[4958]: I1206 06:52:51.319597 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"091fe812c760e9d82a825d0442dcf9a553df57b1a433d514a00db0a53af339d2"} err="failed to get container status \"091fe812c760e9d82a825d0442dcf9a553df57b1a433d514a00db0a53af339d2\": rpc error: code = NotFound desc = could not find container \"091fe812c760e9d82a825d0442dcf9a553df57b1a433d514a00db0a53af339d2\": container with ID starting with 091fe812c760e9d82a825d0442dcf9a553df57b1a433d514a00db0a53af339d2 not found: ID does not exist"
Dec 06 06:52:51 crc kubenswrapper[4958]: I1206 06:52:51.319612 4958 scope.go:117] "RemoveContainer" containerID="dd38d0b612ab05dea63e7771dfb7bd96300eab6dae44ad6eef7e33b2a80a27a3"
Dec 06 06:52:51 crc kubenswrapper[4958]: E1206 06:52:51.320113 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd38d0b612ab05dea63e7771dfb7bd96300eab6dae44ad6eef7e33b2a80a27a3\": container with ID starting with dd38d0b612ab05dea63e7771dfb7bd96300eab6dae44ad6eef7e33b2a80a27a3 not found: ID does not exist" containerID="dd38d0b612ab05dea63e7771dfb7bd96300eab6dae44ad6eef7e33b2a80a27a3"
Dec 06 06:52:51 crc kubenswrapper[4958]: I1206 06:52:51.320137 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd38d0b612ab05dea63e7771dfb7bd96300eab6dae44ad6eef7e33b2a80a27a3"} err="failed to get container status \"dd38d0b612ab05dea63e7771dfb7bd96300eab6dae44ad6eef7e33b2a80a27a3\": rpc error: code = NotFound desc = could not find container \"dd38d0b612ab05dea63e7771dfb7bd96300eab6dae44ad6eef7e33b2a80a27a3\": container with ID starting with dd38d0b612ab05dea63e7771dfb7bd96300eab6dae44ad6eef7e33b2a80a27a3 not found: ID does not exist"
Dec 06 06:52:51 crc kubenswrapper[4958]: I1206 06:52:51.773961 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8b7c9ac-09fd-4ffb-9959-997d7a17a903" path="/var/lib/kubelet/pods/c8b7c9ac-09fd-4ffb-9959-997d7a17a903/volumes"
Dec 06 06:53:09 crc kubenswrapper[4958]: I1206 06:53:09.866551 4958 patch_prober.go:28] interesting pod/machine-config-daemon-5ktnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 06 06:53:09 crc kubenswrapper[4958]: I1206 06:53:09.867018 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 06 06:53:09 crc kubenswrapper[4958]: I1206 06:53:09.867065 4958 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh"
Dec 06 06:53:09 crc kubenswrapper[4958]: I1206 06:53:09.867893 4958 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5f5e2752cba5a5e4cf7daeb51d89293c28ce60c80c5b7b92efd3e0a9f1943fa6"} pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 06 06:53:09 crc kubenswrapper[4958]: I1206 06:53:09.867977 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" containerName="machine-config-daemon" containerID="cri-o://5f5e2752cba5a5e4cf7daeb51d89293c28ce60c80c5b7b92efd3e0a9f1943fa6" gracePeriod=600
Dec 06 06:53:10 crc kubenswrapper[4958]: I1206 06:53:10.382491 4958 generic.go:334] "Generic (PLEG): container finished" podID="c13528c0-da5d-4d55-9155-2c29c33edfc4" containerID="5f5e2752cba5a5e4cf7daeb51d89293c28ce60c80c5b7b92efd3e0a9f1943fa6" exitCode=0
Dec 06 06:53:10 crc kubenswrapper[4958]: I1206 06:53:10.382506 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" event={"ID":"c13528c0-da5d-4d55-9155-2c29c33edfc4","Type":"ContainerDied","Data":"5f5e2752cba5a5e4cf7daeb51d89293c28ce60c80c5b7b92efd3e0a9f1943fa6"}
Dec 06 06:53:10 crc kubenswrapper[4958]: I1206 06:53:10.382830 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" event={"ID":"c13528c0-da5d-4d55-9155-2c29c33edfc4","Type":"ContainerStarted","Data":"92e2d6a3a90b9ec515381fa1b95d3a975374f18f172fa5132eb011229fbb96c3"}
Dec 06 06:53:10 crc kubenswrapper[4958]: I1206 06:53:10.382857 4958 scope.go:117] "RemoveContainer" containerID="5d6c31737106f2594603f3f7564d76a1ec493993acbac9ea617bdb25ef714e65"
Dec 06 06:55:32 crc kubenswrapper[4958]: I1206 06:55:32.874356 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jk8sf"]
Dec 06 06:55:32 crc kubenswrapper[4958]: E1206 06:55:32.875410 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8b7c9ac-09fd-4ffb-9959-997d7a17a903" containerName="registry-server"
Dec 06 06:55:32 crc kubenswrapper[4958]: I1206 06:55:32.875424 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8b7c9ac-09fd-4ffb-9959-997d7a17a903" containerName="registry-server"
Dec 06 06:55:32 crc kubenswrapper[4958]: E1206 06:55:32.875460 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8b7c9ac-09fd-4ffb-9959-997d7a17a903" containerName="extract-utilities"
Dec 06 06:55:32 crc kubenswrapper[4958]: I1206 06:55:32.875484 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8b7c9ac-09fd-4ffb-9959-997d7a17a903" containerName="extract-utilities"
Dec 06 06:55:32 crc kubenswrapper[4958]: E1206 06:55:32.875496 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8b7c9ac-09fd-4ffb-9959-997d7a17a903" containerName="extract-content"
Dec 06 06:55:32 crc kubenswrapper[4958]: I1206 06:55:32.875504 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8b7c9ac-09fd-4ffb-9959-997d7a17a903" containerName="extract-content"
Dec 06 06:55:32 crc kubenswrapper[4958]: I1206 06:55:32.875746 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8b7c9ac-09fd-4ffb-9959-997d7a17a903" containerName="registry-server"
Dec 06 06:55:32 crc kubenswrapper[4958]: I1206 06:55:32.877446 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jk8sf"
Dec 06 06:55:32 crc kubenswrapper[4958]: I1206 06:55:32.908576 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jk8sf"]
Dec 06 06:55:32 crc kubenswrapper[4958]: I1206 06:55:32.955061 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12552959-db18-4f51-be52-d619359ebf3d-catalog-content\") pod \"certified-operators-jk8sf\" (UID: \"12552959-db18-4f51-be52-d619359ebf3d\") " pod="openshift-marketplace/certified-operators-jk8sf"
Dec 06 06:55:32 crc kubenswrapper[4958]: I1206 06:55:32.955314 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t65ng\" (UniqueName: \"kubernetes.io/projected/12552959-db18-4f51-be52-d619359ebf3d-kube-api-access-t65ng\") pod \"certified-operators-jk8sf\" (UID: \"12552959-db18-4f51-be52-d619359ebf3d\") " pod="openshift-marketplace/certified-operators-jk8sf"
Dec 06 06:55:32 crc kubenswrapper[4958]: I1206 06:55:32.955428 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12552959-db18-4f51-be52-d619359ebf3d-utilities\") pod \"certified-operators-jk8sf\" (UID: \"12552959-db18-4f51-be52-d619359ebf3d\") " pod="openshift-marketplace/certified-operators-jk8sf"
Dec 06 06:55:33 crc kubenswrapper[4958]: I1206 06:55:33.057170 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12552959-db18-4f51-be52-d619359ebf3d-utilities\") pod \"certified-operators-jk8sf\" (UID: \"12552959-db18-4f51-be52-d619359ebf3d\") " pod="openshift-marketplace/certified-operators-jk8sf"
Dec 06 06:55:33 crc kubenswrapper[4958]: I1206 06:55:33.057321 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12552959-db18-4f51-be52-d619359ebf3d-catalog-content\") pod \"certified-operators-jk8sf\" (UID: \"12552959-db18-4f51-be52-d619359ebf3d\") " pod="openshift-marketplace/certified-operators-jk8sf"
Dec 06 06:55:33 crc kubenswrapper[4958]: I1206 06:55:33.057393 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t65ng\" (UniqueName: \"kubernetes.io/projected/12552959-db18-4f51-be52-d619359ebf3d-kube-api-access-t65ng\") pod \"certified-operators-jk8sf\" (UID: \"12552959-db18-4f51-be52-d619359ebf3d\") " pod="openshift-marketplace/certified-operators-jk8sf"
Dec 06 06:55:33 crc kubenswrapper[4958]: I1206 06:55:33.057997 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12552959-db18-4f51-be52-d619359ebf3d-utilities\") pod \"certified-operators-jk8sf\" (UID: \"12552959-db18-4f51-be52-d619359ebf3d\") " pod="openshift-marketplace/certified-operators-jk8sf"
Dec 06 06:55:33 crc kubenswrapper[4958]: I1206 06:55:33.058106 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12552959-db18-4f51-be52-d619359ebf3d-catalog-content\") pod \"certified-operators-jk8sf\" (UID: \"12552959-db18-4f51-be52-d619359ebf3d\") " pod="openshift-marketplace/certified-operators-jk8sf"
Dec 06 06:55:33 crc kubenswrapper[4958]: I1206 06:55:33.076071 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t65ng\" (UniqueName: \"kubernetes.io/projected/12552959-db18-4f51-be52-d619359ebf3d-kube-api-access-t65ng\") pod \"certified-operators-jk8sf\" (UID: \"12552959-db18-4f51-be52-d619359ebf3d\") " pod="openshift-marketplace/certified-operators-jk8sf"
Dec 06 06:55:33 crc kubenswrapper[4958]: I1206 06:55:33.209702 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jk8sf"
Dec 06 06:55:33 crc kubenswrapper[4958]: I1206 06:55:33.746942 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jk8sf"]
Dec 06 06:55:33 crc kubenswrapper[4958]: W1206 06:55:33.752112 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod12552959_db18_4f51_be52_d619359ebf3d.slice/crio-e59b4d450f80c5523efd6b92b7b72f62cb7a1fa410fd1b80afa24d995c7e370f WatchSource:0}: Error finding container e59b4d450f80c5523efd6b92b7b72f62cb7a1fa410fd1b80afa24d995c7e370f: Status 404 returned error can't find the container with id e59b4d450f80c5523efd6b92b7b72f62cb7a1fa410fd1b80afa24d995c7e370f
Dec 06 06:55:33 crc kubenswrapper[4958]: I1206 06:55:33.808417 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jk8sf" event={"ID":"12552959-db18-4f51-be52-d619359ebf3d","Type":"ContainerStarted","Data":"e59b4d450f80c5523efd6b92b7b72f62cb7a1fa410fd1b80afa24d995c7e370f"}
Dec 06 06:55:34 crc kubenswrapper[4958]: I1206 06:55:34.836405 4958 generic.go:334] "Generic (PLEG): container finished" podID="12552959-db18-4f51-be52-d619359ebf3d" containerID="5fc7c32488d83fcb9f8a558c5f3df1e40519c206e1039fd5bab05c1abaa2fd1e" exitCode=0
Dec 06 06:55:34 crc kubenswrapper[4958]: I1206 06:55:34.836884 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jk8sf" event={"ID":"12552959-db18-4f51-be52-d619359ebf3d","Type":"ContainerDied","Data":"5fc7c32488d83fcb9f8a558c5f3df1e40519c206e1039fd5bab05c1abaa2fd1e"}
Dec 06 06:55:35 crc kubenswrapper[4958]: I1206 06:55:35.848769 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jk8sf" event={"ID":"12552959-db18-4f51-be52-d619359ebf3d","Type":"ContainerStarted","Data":"37f36d0324bb8750c50283279e84c51067db165cca2522865e777dbb2fe5a049"}
Dec 06 06:55:36 crc kubenswrapper[4958]: I1206 06:55:36.857947 4958 generic.go:334] "Generic (PLEG): container finished" podID="12552959-db18-4f51-be52-d619359ebf3d" containerID="37f36d0324bb8750c50283279e84c51067db165cca2522865e777dbb2fe5a049" exitCode=0
Dec 06 06:55:36 crc kubenswrapper[4958]: I1206 06:55:36.857996 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jk8sf" event={"ID":"12552959-db18-4f51-be52-d619359ebf3d","Type":"ContainerDied","Data":"37f36d0324bb8750c50283279e84c51067db165cca2522865e777dbb2fe5a049"}
Dec 06 06:55:37 crc kubenswrapper[4958]: I1206 06:55:37.871614 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jk8sf" event={"ID":"12552959-db18-4f51-be52-d619359ebf3d","Type":"ContainerStarted","Data":"6122bb994a6c1783eaa93647b46dfdc385f2981fd21e806298931c7026390383"}
Dec 06 06:55:37 crc kubenswrapper[4958]: I1206 06:55:37.899879 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jk8sf" podStartSLOduration=3.462422628 podStartE2EDuration="5.899852331s" podCreationTimestamp="2025-12-06 06:55:32 +0000 UTC" firstStartedPulling="2025-12-06 06:55:34.839356274 +0000 UTC m=+5245.373127077" lastFinishedPulling="2025-12-06 06:55:37.276786007 +0000 UTC m=+5247.810556780" observedRunningTime="2025-12-06 06:55:37.896055688 +0000 UTC m=+5248.429826461" watchObservedRunningTime="2025-12-06 06:55:37.899852331 +0000 UTC m=+5248.433623094"
Dec 06 06:55:39 crc kubenswrapper[4958]: I1206 06:55:39.865959 4958 patch_prober.go:28] interesting pod/machine-config-daemon-5ktnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 06 06:55:39 crc kubenswrapper[4958]: I1206 06:55:39.866338 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 06 06:55:40 crc kubenswrapper[4958]: I1206 06:55:40.324793 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6xzwn"]
Dec 06 06:55:40 crc kubenswrapper[4958]: I1206 06:55:40.327051 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6xzwn"
Dec 06 06:55:40 crc kubenswrapper[4958]: I1206 06:55:40.342824 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6xzwn"]
Dec 06 06:55:40 crc kubenswrapper[4958]: I1206 06:55:40.421922 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6927575-77c1-4d6e-a68d-276b0750ae28-utilities\") pod \"redhat-marketplace-6xzwn\" (UID: \"c6927575-77c1-4d6e-a68d-276b0750ae28\") " pod="openshift-marketplace/redhat-marketplace-6xzwn"
Dec 06 06:55:40 crc kubenswrapper[4958]: I1206 06:55:40.422140 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6927575-77c1-4d6e-a68d-276b0750ae28-catalog-content\") pod \"redhat-marketplace-6xzwn\" (UID: \"c6927575-77c1-4d6e-a68d-276b0750ae28\") " pod="openshift-marketplace/redhat-marketplace-6xzwn"
Dec 06 06:55:40 crc kubenswrapper[4958]: I1206 06:55:40.422246 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9szn\" (UniqueName: \"kubernetes.io/projected/c6927575-77c1-4d6e-a68d-276b0750ae28-kube-api-access-f9szn\") pod \"redhat-marketplace-6xzwn\" (UID: \"c6927575-77c1-4d6e-a68d-276b0750ae28\") " pod="openshift-marketplace/redhat-marketplace-6xzwn"
Dec 06 06:55:40 crc kubenswrapper[4958]: I1206 06:55:40.524493 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f9szn\" (UniqueName: \"kubernetes.io/projected/c6927575-77c1-4d6e-a68d-276b0750ae28-kube-api-access-f9szn\") pod \"redhat-marketplace-6xzwn\" (UID: \"c6927575-77c1-4d6e-a68d-276b0750ae28\") " pod="openshift-marketplace/redhat-marketplace-6xzwn"
Dec 06 06:55:40 crc kubenswrapper[4958]: I1206 06:55:40.524844 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6927575-77c1-4d6e-a68d-276b0750ae28-utilities\") pod \"redhat-marketplace-6xzwn\" (UID: \"c6927575-77c1-4d6e-a68d-276b0750ae28\") " pod="openshift-marketplace/redhat-marketplace-6xzwn"
Dec 06 06:55:40 crc kubenswrapper[4958]: I1206 06:55:40.525014 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6927575-77c1-4d6e-a68d-276b0750ae28-catalog-content\") pod \"redhat-marketplace-6xzwn\" (UID: \"c6927575-77c1-4d6e-a68d-276b0750ae28\") " pod="openshift-marketplace/redhat-marketplace-6xzwn"
Dec 06 06:55:40 crc kubenswrapper[4958]: I1206 06:55:40.525289 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6927575-77c1-4d6e-a68d-276b0750ae28-utilities\") pod \"redhat-marketplace-6xzwn\" (UID: \"c6927575-77c1-4d6e-a68d-276b0750ae28\") " pod="openshift-marketplace/redhat-marketplace-6xzwn"
Dec 06 06:55:40 crc kubenswrapper[4958]: I1206 06:55:40.525430 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6927575-77c1-4d6e-a68d-276b0750ae28-catalog-content\") pod \"redhat-marketplace-6xzwn\" (UID: \"c6927575-77c1-4d6e-a68d-276b0750ae28\") " pod="openshift-marketplace/redhat-marketplace-6xzwn"
Dec 06 06:55:40 crc kubenswrapper[4958]: I1206 06:55:40.545648 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9szn\" (UniqueName: \"kubernetes.io/projected/c6927575-77c1-4d6e-a68d-276b0750ae28-kube-api-access-f9szn\") pod \"redhat-marketplace-6xzwn\" (UID: \"c6927575-77c1-4d6e-a68d-276b0750ae28\") " pod="openshift-marketplace/redhat-marketplace-6xzwn"
Dec 06 06:55:40 crc kubenswrapper[4958]: I1206 06:55:40.649192 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6xzwn"
Dec 06 06:55:41 crc kubenswrapper[4958]: I1206 06:55:41.151443 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6xzwn"]
Dec 06 06:55:41 crc kubenswrapper[4958]: I1206 06:55:41.917934 4958 generic.go:334] "Generic (PLEG): container finished" podID="c6927575-77c1-4d6e-a68d-276b0750ae28" containerID="4e5f49a365f2597f559b322b67361faed3cf929d0a79f43b1f9a6dcf233585d9" exitCode=0
Dec 06 06:55:41 crc kubenswrapper[4958]: I1206 06:55:41.918261 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6xzwn" event={"ID":"c6927575-77c1-4d6e-a68d-276b0750ae28","Type":"ContainerDied","Data":"4e5f49a365f2597f559b322b67361faed3cf929d0a79f43b1f9a6dcf233585d9"}
Dec 06 06:55:41 crc kubenswrapper[4958]: I1206 06:55:41.918294 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6xzwn" event={"ID":"c6927575-77c1-4d6e-a68d-276b0750ae28","Type":"ContainerStarted","Data":"22e28e4daac05b9cb376d68929858c7e969c4aa5da8cdfc08d2b01d1fe5db766"}
Dec 06 06:55:42 crc kubenswrapper[4958]: I1206 06:55:42.929092 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6xzwn" event={"ID":"c6927575-77c1-4d6e-a68d-276b0750ae28","Type":"ContainerStarted","Data":"ac1ba7335732a5a19a8348a36ed88ea537d766edfdf09e9d89439a252b94fead"}
Dec 06 06:55:43 crc kubenswrapper[4958]: I1206 06:55:43.210225 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jk8sf"
Dec 06 06:55:43 crc kubenswrapper[4958]: I1206 06:55:43.210539 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jk8sf"
Dec 06 06:55:43 crc kubenswrapper[4958]: I1206 06:55:43.264298 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jk8sf"
Dec 06 06:55:43 crc kubenswrapper[4958]: I1206 06:55:43.940862 4958 generic.go:334] "Generic (PLEG): container finished" podID="c6927575-77c1-4d6e-a68d-276b0750ae28" containerID="ac1ba7335732a5a19a8348a36ed88ea537d766edfdf09e9d89439a252b94fead" exitCode=0
Dec 06 06:55:43 crc kubenswrapper[4958]: I1206 06:55:43.941016 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6xzwn" event={"ID":"c6927575-77c1-4d6e-a68d-276b0750ae28","Type":"ContainerDied","Data":"ac1ba7335732a5a19a8348a36ed88ea537d766edfdf09e9d89439a252b94fead"}
Dec 06 06:55:44 crc kubenswrapper[4958]: I1206 06:55:44.010487 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jk8sf"
Dec 06 06:55:44 crc kubenswrapper[4958]: I1206 06:55:44.956259 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6xzwn" event={"ID":"c6927575-77c1-4d6e-a68d-276b0750ae28","Type":"ContainerStarted","Data":"7559f2566b3004389001c77257a2a02130486ad75f25cb896cabb938e4bcfd06"}
Dec 06 06:55:44 crc kubenswrapper[4958]: I1206 06:55:44.984584 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6xzwn" podStartSLOduration=2.551883027 podStartE2EDuration="4.984554354s" podCreationTimestamp="2025-12-06 06:55:40 +0000 UTC" firstStartedPulling="2025-12-06 06:55:41.923928712 +0000 UTC m=+5252.457699475" lastFinishedPulling="2025-12-06 06:55:44.356600039 +0000 UTC m=+5254.890370802" observedRunningTime="2025-12-06 06:55:44.977597106 +0000 UTC m=+5255.511367919" watchObservedRunningTime="2025-12-06 06:55:44.984554354 +0000 UTC m=+5255.518325137"
Dec 06 06:55:45 crc kubenswrapper[4958]: I1206 06:55:45.634045 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jk8sf"]
Dec 06 06:55:45 crc kubenswrapper[4958]: I1206 06:55:45.968389 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jk8sf" podUID="12552959-db18-4f51-be52-d619359ebf3d" containerName="registry-server" containerID="cri-o://6122bb994a6c1783eaa93647b46dfdc385f2981fd21e806298931c7026390383" gracePeriod=2
Dec 06 06:55:46 crc kubenswrapper[4958]: I1206 06:55:46.932401 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jk8sf"
Dec 06 06:55:47 crc kubenswrapper[4958]: I1206 06:55:47.006790 4958 generic.go:334] "Generic (PLEG): container finished" podID="12552959-db18-4f51-be52-d619359ebf3d" containerID="6122bb994a6c1783eaa93647b46dfdc385f2981fd21e806298931c7026390383" exitCode=0
Dec 06 06:55:47 crc kubenswrapper[4958]: I1206 06:55:47.006866 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jk8sf" event={"ID":"12552959-db18-4f51-be52-d619359ebf3d","Type":"ContainerDied","Data":"6122bb994a6c1783eaa93647b46dfdc385f2981fd21e806298931c7026390383"}
Dec 06 06:55:47 crc kubenswrapper[4958]: I1206 06:55:47.006906 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jk8sf" event={"ID":"12552959-db18-4f51-be52-d619359ebf3d","Type":"ContainerDied","Data":"e59b4d450f80c5523efd6b92b7b72f62cb7a1fa410fd1b80afa24d995c7e370f"}
Dec 06 06:55:47 crc kubenswrapper[4958]: I1206 06:55:47.006929 4958 scope.go:117] "RemoveContainer" containerID="6122bb994a6c1783eaa93647b46dfdc385f2981fd21e806298931c7026390383"
Dec 06 06:55:47 crc kubenswrapper[4958]: I1206 06:55:47.007199 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jk8sf"
Dec 06 06:55:47 crc kubenswrapper[4958]: I1206 06:55:47.034490 4958 scope.go:117] "RemoveContainer" containerID="37f36d0324bb8750c50283279e84c51067db165cca2522865e777dbb2fe5a049"
Dec 06 06:55:47 crc kubenswrapper[4958]: I1206 06:55:47.055762 4958 scope.go:117] "RemoveContainer" containerID="5fc7c32488d83fcb9f8a558c5f3df1e40519c206e1039fd5bab05c1abaa2fd1e"
Dec 06 06:55:47 crc kubenswrapper[4958]: I1206 06:55:47.076951 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12552959-db18-4f51-be52-d619359ebf3d-catalog-content\") pod \"12552959-db18-4f51-be52-d619359ebf3d\" (UID: \"12552959-db18-4f51-be52-d619359ebf3d\") "
Dec 06 06:55:47 crc kubenswrapper[4958]: I1206 06:55:47.077152 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t65ng\" (UniqueName: \"kubernetes.io/projected/12552959-db18-4f51-be52-d619359ebf3d-kube-api-access-t65ng\") pod \"12552959-db18-4f51-be52-d619359ebf3d\" (UID: \"12552959-db18-4f51-be52-d619359ebf3d\") "
Dec 06 06:55:47 crc kubenswrapper[4958]: I1206 06:55:47.077429 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12552959-db18-4f51-be52-d619359ebf3d-utilities\") pod \"12552959-db18-4f51-be52-d619359ebf3d\" (UID: \"12552959-db18-4f51-be52-d619359ebf3d\") "
Dec 06 06:55:47 crc kubenswrapper[4958]: I1206 06:55:47.078375 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12552959-db18-4f51-be52-d619359ebf3d-utilities" (OuterVolumeSpecName: "utilities") pod "12552959-db18-4f51-be52-d619359ebf3d" (UID: "12552959-db18-4f51-be52-d619359ebf3d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 06 06:55:47 crc kubenswrapper[4958]: I1206 06:55:47.078972 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12552959-db18-4f51-be52-d619359ebf3d-utilities\") on node \"crc\" DevicePath \"\""
Dec 06 06:55:47 crc kubenswrapper[4958]: I1206 06:55:47.083435 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12552959-db18-4f51-be52-d619359ebf3d-kube-api-access-t65ng" (OuterVolumeSpecName: "kube-api-access-t65ng") pod "12552959-db18-4f51-be52-d619359ebf3d" (UID: "12552959-db18-4f51-be52-d619359ebf3d"). InnerVolumeSpecName "kube-api-access-t65ng". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 06:55:47 crc kubenswrapper[4958]: I1206 06:55:47.126145 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12552959-db18-4f51-be52-d619359ebf3d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "12552959-db18-4f51-be52-d619359ebf3d" (UID: "12552959-db18-4f51-be52-d619359ebf3d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:55:47 crc kubenswrapper[4958]: I1206 06:55:47.156180 4958 scope.go:117] "RemoveContainer" containerID="6122bb994a6c1783eaa93647b46dfdc385f2981fd21e806298931c7026390383" Dec 06 06:55:47 crc kubenswrapper[4958]: E1206 06:55:47.156654 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6122bb994a6c1783eaa93647b46dfdc385f2981fd21e806298931c7026390383\": container with ID starting with 6122bb994a6c1783eaa93647b46dfdc385f2981fd21e806298931c7026390383 not found: ID does not exist" containerID="6122bb994a6c1783eaa93647b46dfdc385f2981fd21e806298931c7026390383" Dec 06 06:55:47 crc kubenswrapper[4958]: I1206 06:55:47.156704 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6122bb994a6c1783eaa93647b46dfdc385f2981fd21e806298931c7026390383"} err="failed to get container status \"6122bb994a6c1783eaa93647b46dfdc385f2981fd21e806298931c7026390383\": rpc error: code = NotFound desc = could not find container \"6122bb994a6c1783eaa93647b46dfdc385f2981fd21e806298931c7026390383\": container with ID starting with 6122bb994a6c1783eaa93647b46dfdc385f2981fd21e806298931c7026390383 not found: ID does not exist" Dec 06 06:55:47 crc kubenswrapper[4958]: I1206 06:55:47.156736 4958 scope.go:117] "RemoveContainer" containerID="37f36d0324bb8750c50283279e84c51067db165cca2522865e777dbb2fe5a049" Dec 06 06:55:47 crc kubenswrapper[4958]: E1206 06:55:47.157163 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37f36d0324bb8750c50283279e84c51067db165cca2522865e777dbb2fe5a049\": container with ID starting with 37f36d0324bb8750c50283279e84c51067db165cca2522865e777dbb2fe5a049 not found: ID does not exist" containerID="37f36d0324bb8750c50283279e84c51067db165cca2522865e777dbb2fe5a049" Dec 06 06:55:47 crc kubenswrapper[4958]: I1206 06:55:47.157313 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37f36d0324bb8750c50283279e84c51067db165cca2522865e777dbb2fe5a049"} err="failed to get container status \"37f36d0324bb8750c50283279e84c51067db165cca2522865e777dbb2fe5a049\": rpc error: code = NotFound desc = could not find container \"37f36d0324bb8750c50283279e84c51067db165cca2522865e777dbb2fe5a049\": container with ID starting with 37f36d0324bb8750c50283279e84c51067db165cca2522865e777dbb2fe5a049 not found: ID does not exist" Dec 06 06:55:47 crc kubenswrapper[4958]: I1206 06:55:47.157431 4958 scope.go:117] "RemoveContainer" containerID="5fc7c32488d83fcb9f8a558c5f3df1e40519c206e1039fd5bab05c1abaa2fd1e" Dec 06 06:55:47 crc kubenswrapper[4958]: E1206 06:55:47.157868 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5fc7c32488d83fcb9f8a558c5f3df1e40519c206e1039fd5bab05c1abaa2fd1e\": container with ID starting with 5fc7c32488d83fcb9f8a558c5f3df1e40519c206e1039fd5bab05c1abaa2fd1e not found: ID does not exist" containerID="5fc7c32488d83fcb9f8a558c5f3df1e40519c206e1039fd5bab05c1abaa2fd1e" Dec 06 06:55:47 crc kubenswrapper[4958]: I1206 06:55:47.157895 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5fc7c32488d83fcb9f8a558c5f3df1e40519c206e1039fd5bab05c1abaa2fd1e"} err="failed to get container status \"5fc7c32488d83fcb9f8a558c5f3df1e40519c206e1039fd5bab05c1abaa2fd1e\": rpc error: code = NotFound desc = could not 
find container \"5fc7c32488d83fcb9f8a558c5f3df1e40519c206e1039fd5bab05c1abaa2fd1e\": container with ID starting with 5fc7c32488d83fcb9f8a558c5f3df1e40519c206e1039fd5bab05c1abaa2fd1e not found: ID does not exist" Dec 06 06:55:47 crc kubenswrapper[4958]: I1206 06:55:47.180605 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t65ng\" (UniqueName: \"kubernetes.io/projected/12552959-db18-4f51-be52-d619359ebf3d-kube-api-access-t65ng\") on node \"crc\" DevicePath \"\"" Dec 06 06:55:47 crc kubenswrapper[4958]: I1206 06:55:47.180904 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12552959-db18-4f51-be52-d619359ebf3d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 06:55:47 crc kubenswrapper[4958]: I1206 06:55:47.354493 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jk8sf"] Dec 06 06:55:47 crc kubenswrapper[4958]: I1206 06:55:47.362778 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jk8sf"] Dec 06 06:55:47 crc kubenswrapper[4958]: I1206 06:55:47.774545 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12552959-db18-4f51-be52-d619359ebf3d" path="/var/lib/kubelet/pods/12552959-db18-4f51-be52-d619359ebf3d/volumes" Dec 06 06:55:50 crc kubenswrapper[4958]: I1206 06:55:50.650311 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6xzwn" Dec 06 06:55:50 crc kubenswrapper[4958]: I1206 06:55:50.651107 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6xzwn" Dec 06 06:55:50 crc kubenswrapper[4958]: I1206 06:55:50.718307 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6xzwn" Dec 06 06:55:51 crc kubenswrapper[4958]: I1206 06:55:51.092699 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6xzwn" Dec 06 06:55:51 crc kubenswrapper[4958]: I1206 06:55:51.138325 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6xzwn"] Dec 06 06:55:53 crc kubenswrapper[4958]: I1206 06:55:53.074147 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6xzwn" podUID="c6927575-77c1-4d6e-a68d-276b0750ae28" containerName="registry-server" containerID="cri-o://7559f2566b3004389001c77257a2a02130486ad75f25cb896cabb938e4bcfd06" gracePeriod=2 Dec 06 06:55:53 crc kubenswrapper[4958]: I1206 06:55:53.528701 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6xzwn" Dec 06 06:55:53 crc kubenswrapper[4958]: I1206 06:55:53.611019 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6927575-77c1-4d6e-a68d-276b0750ae28-catalog-content\") pod \"c6927575-77c1-4d6e-a68d-276b0750ae28\" (UID: \"c6927575-77c1-4d6e-a68d-276b0750ae28\") " Dec 06 06:55:53 crc kubenswrapper[4958]: I1206 06:55:53.611066 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f9szn\" (UniqueName: \"kubernetes.io/projected/c6927575-77c1-4d6e-a68d-276b0750ae28-kube-api-access-f9szn\") pod \"c6927575-77c1-4d6e-a68d-276b0750ae28\" (UID: \"c6927575-77c1-4d6e-a68d-276b0750ae28\") " Dec 06 06:55:53 crc kubenswrapper[4958]: I1206 06:55:53.611246 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6927575-77c1-4d6e-a68d-276b0750ae28-utilities\") pod \"c6927575-77c1-4d6e-a68d-276b0750ae28\" (UID: \"c6927575-77c1-4d6e-a68d-276b0750ae28\") " Dec 06 06:55:53 crc kubenswrapper[4958]: I1206 06:55:53.612487 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6927575-77c1-4d6e-a68d-276b0750ae28-utilities" (OuterVolumeSpecName: "utilities") pod "c6927575-77c1-4d6e-a68d-276b0750ae28" (UID: "c6927575-77c1-4d6e-a68d-276b0750ae28"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:55:53 crc kubenswrapper[4958]: I1206 06:55:53.618017 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6927575-77c1-4d6e-a68d-276b0750ae28-kube-api-access-f9szn" (OuterVolumeSpecName: "kube-api-access-f9szn") pod "c6927575-77c1-4d6e-a68d-276b0750ae28" (UID: "c6927575-77c1-4d6e-a68d-276b0750ae28"). InnerVolumeSpecName "kube-api-access-f9szn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 06:55:53 crc kubenswrapper[4958]: I1206 06:55:53.632825 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6927575-77c1-4d6e-a68d-276b0750ae28-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c6927575-77c1-4d6e-a68d-276b0750ae28" (UID: "c6927575-77c1-4d6e-a68d-276b0750ae28"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 06:55:53 crc kubenswrapper[4958]: I1206 06:55:53.713894 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6927575-77c1-4d6e-a68d-276b0750ae28-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 06:55:53 crc kubenswrapper[4958]: I1206 06:55:53.713949 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6927575-77c1-4d6e-a68d-276b0750ae28-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 06:55:53 crc kubenswrapper[4958]: I1206 06:55:53.713960 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f9szn\" (UniqueName: \"kubernetes.io/projected/c6927575-77c1-4d6e-a68d-276b0750ae28-kube-api-access-f9szn\") on node \"crc\" DevicePath \"\"" Dec 06 06:55:54 crc kubenswrapper[4958]: I1206 06:55:54.087821 4958 generic.go:334] "Generic (PLEG): container finished" podID="c6927575-77c1-4d6e-a68d-276b0750ae28" containerID="7559f2566b3004389001c77257a2a02130486ad75f25cb896cabb938e4bcfd06" exitCode=0 Dec 06 06:55:54 crc kubenswrapper[4958]: I1206 06:55:54.087861 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6xzwn" event={"ID":"c6927575-77c1-4d6e-a68d-276b0750ae28","Type":"ContainerDied","Data":"7559f2566b3004389001c77257a2a02130486ad75f25cb896cabb938e4bcfd06"} Dec 06 06:55:54 crc kubenswrapper[4958]: I1206 06:55:54.087891 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6xzwn" event={"ID":"c6927575-77c1-4d6e-a68d-276b0750ae28","Type":"ContainerDied","Data":"22e28e4daac05b9cb376d68929858c7e969c4aa5da8cdfc08d2b01d1fe5db766"} Dec 06 06:55:54 crc kubenswrapper[4958]: I1206 06:55:54.087894 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6xzwn" Dec 06 06:55:54 crc kubenswrapper[4958]: I1206 06:55:54.087920 4958 scope.go:117] "RemoveContainer" containerID="7559f2566b3004389001c77257a2a02130486ad75f25cb896cabb938e4bcfd06" Dec 06 06:55:54 crc kubenswrapper[4958]: I1206 06:55:54.111859 4958 scope.go:117] "RemoveContainer" containerID="ac1ba7335732a5a19a8348a36ed88ea537d766edfdf09e9d89439a252b94fead" Dec 06 06:55:54 crc kubenswrapper[4958]: I1206 06:55:54.118157 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6xzwn"] Dec 06 06:55:54 crc kubenswrapper[4958]: I1206 06:55:54.127105 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6xzwn"] Dec 06 06:55:54 crc kubenswrapper[4958]: I1206 06:55:54.136224 4958 scope.go:117] "RemoveContainer" containerID="4e5f49a365f2597f559b322b67361faed3cf929d0a79f43b1f9a6dcf233585d9" Dec 06 06:55:54 crc kubenswrapper[4958]: I1206 06:55:54.191903 4958 scope.go:117] "RemoveContainer" containerID="7559f2566b3004389001c77257a2a02130486ad75f25cb896cabb938e4bcfd06" Dec 06 06:55:54 crc kubenswrapper[4958]: E1206 06:55:54.192374 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7559f2566b3004389001c77257a2a02130486ad75f25cb896cabb938e4bcfd06\": container with ID starting with 7559f2566b3004389001c77257a2a02130486ad75f25cb896cabb938e4bcfd06 not found: ID does not exist" containerID="7559f2566b3004389001c77257a2a02130486ad75f25cb896cabb938e4bcfd06" Dec 06 06:55:54 crc kubenswrapper[4958]: I1206 06:55:54.192413 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7559f2566b3004389001c77257a2a02130486ad75f25cb896cabb938e4bcfd06"} err="failed to get container status \"7559f2566b3004389001c77257a2a02130486ad75f25cb896cabb938e4bcfd06\": rpc error: code = NotFound desc = could not find container \"7559f2566b3004389001c77257a2a02130486ad75f25cb896cabb938e4bcfd06\": container with ID starting with 7559f2566b3004389001c77257a2a02130486ad75f25cb896cabb938e4bcfd06 not found: ID does not exist" Dec 06 06:55:54 crc kubenswrapper[4958]: I1206 06:55:54.192443 4958 scope.go:117] "RemoveContainer" containerID="ac1ba7335732a5a19a8348a36ed88ea537d766edfdf09e9d89439a252b94fead" Dec 06 06:55:54 crc kubenswrapper[4958]: E1206 06:55:54.192881 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac1ba7335732a5a19a8348a36ed88ea537d766edfdf09e9d89439a252b94fead\": container with ID starting with ac1ba7335732a5a19a8348a36ed88ea537d766edfdf09e9d89439a252b94fead not found: ID does not exist" containerID="ac1ba7335732a5a19a8348a36ed88ea537d766edfdf09e9d89439a252b94fead" Dec 06 06:55:54 crc kubenswrapper[4958]: I1206 06:55:54.192931 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac1ba7335732a5a19a8348a36ed88ea537d766edfdf09e9d89439a252b94fead"} err="failed to get container status \"ac1ba7335732a5a19a8348a36ed88ea537d766edfdf09e9d89439a252b94fead\": rpc error: code = NotFound desc = could not find container \"ac1ba7335732a5a19a8348a36ed88ea537d766edfdf09e9d89439a252b94fead\": container with ID starting with ac1ba7335732a5a19a8348a36ed88ea537d766edfdf09e9d89439a252b94fead not found: ID does not exist" Dec 06 06:55:54 crc kubenswrapper[4958]: I1206 06:55:54.192964 4958 scope.go:117] "RemoveContainer" 
containerID="4e5f49a365f2597f559b322b67361faed3cf929d0a79f43b1f9a6dcf233585d9" Dec 06 06:55:54 crc kubenswrapper[4958]: E1206 06:55:54.193278 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e5f49a365f2597f559b322b67361faed3cf929d0a79f43b1f9a6dcf233585d9\": container with ID starting with 4e5f49a365f2597f559b322b67361faed3cf929d0a79f43b1f9a6dcf233585d9 not found: ID does not exist" containerID="4e5f49a365f2597f559b322b67361faed3cf929d0a79f43b1f9a6dcf233585d9" Dec 06 06:55:54 crc kubenswrapper[4958]: I1206 06:55:54.193312 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e5f49a365f2597f559b322b67361faed3cf929d0a79f43b1f9a6dcf233585d9"} err="failed to get container status \"4e5f49a365f2597f559b322b67361faed3cf929d0a79f43b1f9a6dcf233585d9\": rpc error: code = NotFound desc = could not find container \"4e5f49a365f2597f559b322b67361faed3cf929d0a79f43b1f9a6dcf233585d9\": container with ID starting with 4e5f49a365f2597f559b322b67361faed3cf929d0a79f43b1f9a6dcf233585d9 not found: ID does not exist" Dec 06 06:55:55 crc kubenswrapper[4958]: I1206 06:55:55.779291 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6927575-77c1-4d6e-a68d-276b0750ae28" path="/var/lib/kubelet/pods/c6927575-77c1-4d6e-a68d-276b0750ae28/volumes" Dec 06 06:56:09 crc kubenswrapper[4958]: I1206 06:56:09.866350 4958 patch_prober.go:28] interesting pod/machine-config-daemon-5ktnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 06:56:09 crc kubenswrapper[4958]: I1206 06:56:09.867035 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 06:56:39 crc kubenswrapper[4958]: I1206 06:56:39.866145 4958 patch_prober.go:28] interesting pod/machine-config-daemon-5ktnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 06:56:39 crc kubenswrapper[4958]: I1206 06:56:39.866733 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 06:56:39 crc kubenswrapper[4958]: I1206 06:56:39.866781 4958 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" Dec 06 06:56:39 crc kubenswrapper[4958]: I1206 06:56:39.867585 4958 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"92e2d6a3a90b9ec515381fa1b95d3a975374f18f172fa5132eb011229fbb96c3"} pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 06 06:56:39 crc 
kubenswrapper[4958]: I1206 06:56:39.867651 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" containerName="machine-config-daemon" containerID="cri-o://92e2d6a3a90b9ec515381fa1b95d3a975374f18f172fa5132eb011229fbb96c3" gracePeriod=600 Dec 06 06:56:39 crc kubenswrapper[4958]: E1206 06:56:39.996330 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 06:56:40 crc kubenswrapper[4958]: I1206 06:56:40.593642 4958 generic.go:334] "Generic (PLEG): container finished" podID="c13528c0-da5d-4d55-9155-2c29c33edfc4" containerID="92e2d6a3a90b9ec515381fa1b95d3a975374f18f172fa5132eb011229fbb96c3" exitCode=0 Dec 06 06:56:40 crc kubenswrapper[4958]: I1206 06:56:40.593687 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" event={"ID":"c13528c0-da5d-4d55-9155-2c29c33edfc4","Type":"ContainerDied","Data":"92e2d6a3a90b9ec515381fa1b95d3a975374f18f172fa5132eb011229fbb96c3"} Dec 06 06:56:40 crc kubenswrapper[4958]: I1206 06:56:40.593721 4958 scope.go:117] "RemoveContainer" containerID="5f5e2752cba5a5e4cf7daeb51d89293c28ce60c80c5b7b92efd3e0a9f1943fa6" Dec 06 06:56:40 crc kubenswrapper[4958]: I1206 06:56:40.594329 4958 scope.go:117] "RemoveContainer" containerID="92e2d6a3a90b9ec515381fa1b95d3a975374f18f172fa5132eb011229fbb96c3" Dec 06 06:56:40 crc kubenswrapper[4958]: E1206 06:56:40.594615 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 06:56:51 crc kubenswrapper[4958]: I1206 06:56:51.761866 4958 scope.go:117] "RemoveContainer" containerID="92e2d6a3a90b9ec515381fa1b95d3a975374f18f172fa5132eb011229fbb96c3" Dec 06 06:56:51 crc kubenswrapper[4958]: E1206 06:56:51.765871 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 06:57:03 crc kubenswrapper[4958]: I1206 06:57:03.762076 4958 scope.go:117] "RemoveContainer" containerID="92e2d6a3a90b9ec515381fa1b95d3a975374f18f172fa5132eb011229fbb96c3" Dec 06 06:57:03 crc kubenswrapper[4958]: E1206 06:57:03.762788 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 06:57:17 crc kubenswrapper[4958]: I1206 06:57:17.762539 4958 scope.go:117] "RemoveContainer" containerID="92e2d6a3a90b9ec515381fa1b95d3a975374f18f172fa5132eb011229fbb96c3" Dec 06 06:57:17 crc kubenswrapper[4958]: E1206 06:57:17.763522 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 06:57:31 crc kubenswrapper[4958]: I1206 06:57:31.762944 4958 scope.go:117] "RemoveContainer" containerID="92e2d6a3a90b9ec515381fa1b95d3a975374f18f172fa5132eb011229fbb96c3" Dec 06 06:57:31 crc kubenswrapper[4958]: E1206 06:57:31.763970 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 06:57:42 crc kubenswrapper[4958]: I1206 06:57:42.761656 4958 scope.go:117] "RemoveContainer" containerID="92e2d6a3a90b9ec515381fa1b95d3a975374f18f172fa5132eb011229fbb96c3" Dec 06 06:57:42 crc kubenswrapper[4958]: E1206 06:57:42.762460 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 06:57:56 crc kubenswrapper[4958]: I1206 06:57:56.761797 4958 scope.go:117] "RemoveContainer" containerID="92e2d6a3a90b9ec515381fa1b95d3a975374f18f172fa5132eb011229fbb96c3" Dec 06 06:57:56 crc kubenswrapper[4958]: E1206 06:57:56.762644 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 06:58:11 crc kubenswrapper[4958]: I1206 06:58:11.763056 4958 scope.go:117] "RemoveContainer" containerID="92e2d6a3a90b9ec515381fa1b95d3a975374f18f172fa5132eb011229fbb96c3" Dec 06 06:58:11 crc kubenswrapper[4958]: E1206 06:58:11.763969 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 06:58:24 crc kubenswrapper[4958]: I1206 06:58:24.761991 4958 
scope.go:117] "RemoveContainer" containerID="92e2d6a3a90b9ec515381fa1b95d3a975374f18f172fa5132eb011229fbb96c3" Dec 06 06:58:24 crc kubenswrapper[4958]: E1206 06:58:24.763089 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 06:58:38 crc kubenswrapper[4958]: I1206 06:58:38.762673 4958 scope.go:117] "RemoveContainer" containerID="92e2d6a3a90b9ec515381fa1b95d3a975374f18f172fa5132eb011229fbb96c3" Dec 06 06:58:38 crc kubenswrapper[4958]: E1206 06:58:38.764708 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 06:58:51 crc kubenswrapper[4958]: I1206 06:58:51.762262 4958 scope.go:117] "RemoveContainer" containerID="92e2d6a3a90b9ec515381fa1b95d3a975374f18f172fa5132eb011229fbb96c3" Dec 06 06:58:51 crc kubenswrapper[4958]: E1206 06:58:51.763060 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 06:59:04 crc kubenswrapper[4958]: I1206 06:59:04.762432 4958 scope.go:117] "RemoveContainer" containerID="92e2d6a3a90b9ec515381fa1b95d3a975374f18f172fa5132eb011229fbb96c3" Dec 06 06:59:04 crc kubenswrapper[4958]: E1206 06:59:04.763337 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 06:59:18 crc kubenswrapper[4958]: I1206 06:59:18.762052 4958 scope.go:117] "RemoveContainer" containerID="92e2d6a3a90b9ec515381fa1b95d3a975374f18f172fa5132eb011229fbb96c3" Dec 06 06:59:18 crc kubenswrapper[4958]: E1206 06:59:18.762843 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 06:59:33 crc kubenswrapper[4958]: I1206 06:59:33.762319 4958 scope.go:117] "RemoveContainer" containerID="92e2d6a3a90b9ec515381fa1b95d3a975374f18f172fa5132eb011229fbb96c3" Dec 06 06:59:33 crc kubenswrapper[4958]: E1206 06:59:33.763150 4958 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 06:59:45 crc kubenswrapper[4958]: I1206 06:59:45.763335 4958 scope.go:117] "RemoveContainer" containerID="92e2d6a3a90b9ec515381fa1b95d3a975374f18f172fa5132eb011229fbb96c3" Dec 06 06:59:45 crc kubenswrapper[4958]: E1206 06:59:45.764736 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 06:59:56 crc kubenswrapper[4958]: I1206 06:59:56.763000 4958 scope.go:117] "RemoveContainer" containerID="92e2d6a3a90b9ec515381fa1b95d3a975374f18f172fa5132eb011229fbb96c3" Dec 06 06:59:56 crc kubenswrapper[4958]: E1206 06:59:56.763766 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 07:00:00 crc kubenswrapper[4958]: I1206 07:00:00.151512 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416740-9msvl"] Dec 06 07:00:00 crc kubenswrapper[4958]: E1206 07:00:00.153351 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6927575-77c1-4d6e-a68d-276b0750ae28" containerName="registry-server" Dec 06 07:00:00 crc kubenswrapper[4958]: I1206 07:00:00.153371 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6927575-77c1-4d6e-a68d-276b0750ae28" containerName="registry-server" Dec 06 07:00:00 crc kubenswrapper[4958]: E1206 07:00:00.153428 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12552959-db18-4f51-be52-d619359ebf3d" containerName="extract-utilities" Dec 06 07:00:00 crc kubenswrapper[4958]: I1206 07:00:00.153439 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="12552959-db18-4f51-be52-d619359ebf3d" containerName="extract-utilities" Dec 06 07:00:00 crc kubenswrapper[4958]: E1206 07:00:00.153458 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12552959-db18-4f51-be52-d619359ebf3d" containerName="extract-content" Dec 06 07:00:00 crc kubenswrapper[4958]: I1206 07:00:00.153508 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="12552959-db18-4f51-be52-d619359ebf3d" containerName="extract-content" Dec 06 07:00:00 crc kubenswrapper[4958]: E1206 07:00:00.153531 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12552959-db18-4f51-be52-d619359ebf3d" containerName="registry-server" Dec 06 07:00:00 crc kubenswrapper[4958]: I1206 07:00:00.153540 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="12552959-db18-4f51-be52-d619359ebf3d" containerName="registry-server" 
Dec 06 07:00:00 crc kubenswrapper[4958]: E1206 07:00:00.153631 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6927575-77c1-4d6e-a68d-276b0750ae28" containerName="extract-content" Dec 06 07:00:00 crc kubenswrapper[4958]: I1206 07:00:00.153642 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6927575-77c1-4d6e-a68d-276b0750ae28" containerName="extract-content" Dec 06 07:00:00 crc kubenswrapper[4958]: E1206 07:00:00.153700 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6927575-77c1-4d6e-a68d-276b0750ae28" containerName="extract-utilities" Dec 06 07:00:00 crc kubenswrapper[4958]: I1206 07:00:00.153710 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6927575-77c1-4d6e-a68d-276b0750ae28" containerName="extract-utilities" Dec 06 07:00:00 crc kubenswrapper[4958]: I1206 07:00:00.154037 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="12552959-db18-4f51-be52-d619359ebf3d" containerName="registry-server" Dec 06 07:00:00 crc kubenswrapper[4958]: I1206 07:00:00.154066 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6927575-77c1-4d6e-a68d-276b0750ae28" containerName="registry-server" Dec 06 07:00:00 crc kubenswrapper[4958]: I1206 07:00:00.154983 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416740-9msvl" Dec 06 07:00:00 crc kubenswrapper[4958]: I1206 07:00:00.160869 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 06 07:00:00 crc kubenswrapper[4958]: I1206 07:00:00.161296 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 06 07:00:00 crc kubenswrapper[4958]: I1206 07:00:00.164328 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416740-9msvl"] Dec 06 07:00:00 crc kubenswrapper[4958]: I1206 07:00:00.326803 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3104e005-ca07-466c-982f-0a7a7e1b8eca-config-volume\") pod \"collect-profiles-29416740-9msvl\" (UID: \"3104e005-ca07-466c-982f-0a7a7e1b8eca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416740-9msvl" Dec 06 07:00:00 crc kubenswrapper[4958]: I1206 07:00:00.327085 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3104e005-ca07-466c-982f-0a7a7e1b8eca-secret-volume\") pod \"collect-profiles-29416740-9msvl\" (UID: \"3104e005-ca07-466c-982f-0a7a7e1b8eca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416740-9msvl" Dec 06 07:00:00 crc kubenswrapper[4958]: I1206 07:00:00.327315 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgrmw\" (UniqueName: \"kubernetes.io/projected/3104e005-ca07-466c-982f-0a7a7e1b8eca-kube-api-access-xgrmw\") pod \"collect-profiles-29416740-9msvl\" (UID: \"3104e005-ca07-466c-982f-0a7a7e1b8eca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416740-9msvl" Dec 06 07:00:00 crc kubenswrapper[4958]: I1206 07:00:00.429549 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/3104e005-ca07-466c-982f-0a7a7e1b8eca-secret-volume\") pod \"collect-profiles-29416740-9msvl\" (UID: \"3104e005-ca07-466c-982f-0a7a7e1b8eca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416740-9msvl" Dec 06 07:00:00 crc kubenswrapper[4958]: I1206 07:00:00.429641 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgrmw\" (UniqueName: \"kubernetes.io/projected/3104e005-ca07-466c-982f-0a7a7e1b8eca-kube-api-access-xgrmw\") pod \"collect-profiles-29416740-9msvl\" (UID: \"3104e005-ca07-466c-982f-0a7a7e1b8eca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416740-9msvl" Dec 06 07:00:00 crc kubenswrapper[4958]: I1206 07:00:00.429818 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3104e005-ca07-466c-982f-0a7a7e1b8eca-config-volume\") pod \"collect-profiles-29416740-9msvl\" (UID: \"3104e005-ca07-466c-982f-0a7a7e1b8eca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416740-9msvl" Dec 06 07:00:00 crc kubenswrapper[4958]: I1206 07:00:00.430909 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3104e005-ca07-466c-982f-0a7a7e1b8eca-config-volume\") pod \"collect-profiles-29416740-9msvl\" (UID: \"3104e005-ca07-466c-982f-0a7a7e1b8eca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416740-9msvl" Dec 06 07:00:00 crc kubenswrapper[4958]: I1206 07:00:00.435229 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3104e005-ca07-466c-982f-0a7a7e1b8eca-secret-volume\") pod \"collect-profiles-29416740-9msvl\" (UID: \"3104e005-ca07-466c-982f-0a7a7e1b8eca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416740-9msvl" Dec 06 07:00:00 crc kubenswrapper[4958]: I1206 07:00:00.449939 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgrmw\" (UniqueName: \"kubernetes.io/projected/3104e005-ca07-466c-982f-0a7a7e1b8eca-kube-api-access-xgrmw\") pod \"collect-profiles-29416740-9msvl\" (UID: \"3104e005-ca07-466c-982f-0a7a7e1b8eca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416740-9msvl" Dec 06 07:00:00 crc kubenswrapper[4958]: I1206 07:00:00.485336 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416740-9msvl" Dec 06 07:00:00 crc kubenswrapper[4958]: I1206 07:00:00.976113 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416740-9msvl"] Dec 06 07:00:01 crc kubenswrapper[4958]: I1206 07:00:01.589746 4958 generic.go:334] "Generic (PLEG): container finished" podID="3104e005-ca07-466c-982f-0a7a7e1b8eca" containerID="39d35730986908c64da29b6f6edbb4f34e6511eb36c499919f870fbcdf22cf8f" exitCode=0 Dec 06 07:00:01 crc kubenswrapper[4958]: I1206 07:00:01.589854 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416740-9msvl" event={"ID":"3104e005-ca07-466c-982f-0a7a7e1b8eca","Type":"ContainerDied","Data":"39d35730986908c64da29b6f6edbb4f34e6511eb36c499919f870fbcdf22cf8f"} Dec 06 07:00:01 crc kubenswrapper[4958]: I1206 07:00:01.590384 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416740-9msvl" event={"ID":"3104e005-ca07-466c-982f-0a7a7e1b8eca","Type":"ContainerStarted","Data":"c3a8d04b9d90a40c4a5ebd7f39fdc9c3b496cbbdf9a79c13bf81a9af2ee12e97"} Dec 06 07:00:02 crc kubenswrapper[4958]: I1206 07:00:02.976658 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416740-9msvl" Dec 06 07:00:03 crc kubenswrapper[4958]: I1206 07:00:03.083615 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3104e005-ca07-466c-982f-0a7a7e1b8eca-secret-volume\") pod \"3104e005-ca07-466c-982f-0a7a7e1b8eca\" (UID: \"3104e005-ca07-466c-982f-0a7a7e1b8eca\") " Dec 06 07:00:03 crc kubenswrapper[4958]: I1206 07:00:03.084026 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3104e005-ca07-466c-982f-0a7a7e1b8eca-config-volume\") pod \"3104e005-ca07-466c-982f-0a7a7e1b8eca\" (UID: \"3104e005-ca07-466c-982f-0a7a7e1b8eca\") " Dec 06 07:00:03 crc kubenswrapper[4958]: I1206 07:00:03.084210 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xgrmw\" (UniqueName: \"kubernetes.io/projected/3104e005-ca07-466c-982f-0a7a7e1b8eca-kube-api-access-xgrmw\") pod \"3104e005-ca07-466c-982f-0a7a7e1b8eca\" (UID: \"3104e005-ca07-466c-982f-0a7a7e1b8eca\") " Dec 06 07:00:03 crc kubenswrapper[4958]: I1206 07:00:03.085714 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3104e005-ca07-466c-982f-0a7a7e1b8eca-config-volume" (OuterVolumeSpecName: "config-volume") pod "3104e005-ca07-466c-982f-0a7a7e1b8eca" (UID: "3104e005-ca07-466c-982f-0a7a7e1b8eca"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 06 07:00:03 crc kubenswrapper[4958]: I1206 07:00:03.091523 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3104e005-ca07-466c-982f-0a7a7e1b8eca-kube-api-access-xgrmw" (OuterVolumeSpecName: "kube-api-access-xgrmw") pod "3104e005-ca07-466c-982f-0a7a7e1b8eca" (UID: "3104e005-ca07-466c-982f-0a7a7e1b8eca"). InnerVolumeSpecName "kube-api-access-xgrmw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:00:03 crc kubenswrapper[4958]: I1206 07:00:03.091715 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3104e005-ca07-466c-982f-0a7a7e1b8eca-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3104e005-ca07-466c-982f-0a7a7e1b8eca" (UID: "3104e005-ca07-466c-982f-0a7a7e1b8eca"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:00:03 crc kubenswrapper[4958]: I1206 07:00:03.187046 4958 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3104e005-ca07-466c-982f-0a7a7e1b8eca-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 06 07:00:03 crc kubenswrapper[4958]: I1206 07:00:03.187089 4958 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3104e005-ca07-466c-982f-0a7a7e1b8eca-config-volume\") on node \"crc\" DevicePath \"\"" Dec 06 07:00:03 crc kubenswrapper[4958]: I1206 07:00:03.187101 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xgrmw\" (UniqueName: \"kubernetes.io/projected/3104e005-ca07-466c-982f-0a7a7e1b8eca-kube-api-access-xgrmw\") on node \"crc\" DevicePath \"\"" Dec 06 07:00:03 crc kubenswrapper[4958]: I1206 07:00:03.617581 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416740-9msvl" Dec 06 07:00:03 crc kubenswrapper[4958]: I1206 07:00:03.617584 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416740-9msvl" event={"ID":"3104e005-ca07-466c-982f-0a7a7e1b8eca","Type":"ContainerDied","Data":"c3a8d04b9d90a40c4a5ebd7f39fdc9c3b496cbbdf9a79c13bf81a9af2ee12e97"} Dec 06 07:00:03 crc kubenswrapper[4958]: I1206 07:00:03.618077 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c3a8d04b9d90a40c4a5ebd7f39fdc9c3b496cbbdf9a79c13bf81a9af2ee12e97" Dec 06 07:00:04 crc kubenswrapper[4958]: I1206 07:00:04.055635 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416695-8zcjs"] Dec 06 07:00:04 crc kubenswrapper[4958]: I1206 07:00:04.065411 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416695-8zcjs"] Dec 06 07:00:05 crc kubenswrapper[4958]: I1206 07:00:05.774798 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bee7a71b-67a6-47c5-80f5-e1ab66d05404" path="/var/lib/kubelet/pods/bee7a71b-67a6-47c5-80f5-e1ab66d05404/volumes" Dec 06 07:00:09 crc kubenswrapper[4958]: I1206 07:00:09.771655 4958 scope.go:117] "RemoveContainer" containerID="92e2d6a3a90b9ec515381fa1b95d3a975374f18f172fa5132eb011229fbb96c3" Dec 06 07:00:09 crc kubenswrapper[4958]: E1206 07:00:09.774083 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 07:00:21 crc kubenswrapper[4958]: I1206 07:00:21.766313 4958 scope.go:117] "RemoveContainer" 
containerID="92e2d6a3a90b9ec515381fa1b95d3a975374f18f172fa5132eb011229fbb96c3" Dec 06 07:00:21 crc kubenswrapper[4958]: E1206 07:00:21.767130 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 07:00:33 crc kubenswrapper[4958]: I1206 07:00:33.222660 4958 scope.go:117] "RemoveContainer" containerID="2562ddcacdc60910ec59d730b122cde8d9134584e86eb1f198cad34d3a8cedbe" Dec 06 07:00:34 crc kubenswrapper[4958]: I1206 07:00:34.761633 4958 scope.go:117] "RemoveContainer" containerID="92e2d6a3a90b9ec515381fa1b95d3a975374f18f172fa5132eb011229fbb96c3" Dec 06 07:00:34 crc kubenswrapper[4958]: E1206 07:00:34.762228 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 07:00:49 crc kubenswrapper[4958]: I1206 07:00:49.772400 4958 scope.go:117] "RemoveContainer" containerID="92e2d6a3a90b9ec515381fa1b95d3a975374f18f172fa5132eb011229fbb96c3" Dec 06 07:00:49 crc kubenswrapper[4958]: E1206 07:00:49.773254 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 07:01:00 crc kubenswrapper[4958]: I1206 07:01:00.168756 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29416741-ndxxt"] Dec 06 07:01:00 crc kubenswrapper[4958]: E1206 07:01:00.170013 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3104e005-ca07-466c-982f-0a7a7e1b8eca" containerName="collect-profiles" Dec 06 07:01:00 crc kubenswrapper[4958]: I1206 07:01:00.170031 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="3104e005-ca07-466c-982f-0a7a7e1b8eca" containerName="collect-profiles" Dec 06 07:01:00 crc kubenswrapper[4958]: I1206 07:01:00.170219 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="3104e005-ca07-466c-982f-0a7a7e1b8eca" containerName="collect-profiles" Dec 06 07:01:00 crc kubenswrapper[4958]: I1206 07:01:00.170957 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29416741-ndxxt" Dec 06 07:01:00 crc kubenswrapper[4958]: I1206 07:01:00.183430 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29416741-ndxxt"] Dec 06 07:01:00 crc kubenswrapper[4958]: I1206 07:01:00.243229 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/205be56c-69f7-4c92-844c-e12d74da811b-fernet-keys\") pod \"keystone-cron-29416741-ndxxt\" (UID: \"205be56c-69f7-4c92-844c-e12d74da811b\") " pod="openstack/keystone-cron-29416741-ndxxt" Dec 06 07:01:00 crc kubenswrapper[4958]: I1206 07:01:00.243293 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/205be56c-69f7-4c92-844c-e12d74da811b-config-data\") pod \"keystone-cron-29416741-ndxxt\" (UID: \"205be56c-69f7-4c92-844c-e12d74da811b\") " pod="openstack/keystone-cron-29416741-ndxxt" Dec 06 07:01:00 crc kubenswrapper[4958]: I1206 07:01:00.243366 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/205be56c-69f7-4c92-844c-e12d74da811b-combined-ca-bundle\") pod \"keystone-cron-29416741-ndxxt\" (UID: \"205be56c-69f7-4c92-844c-e12d74da811b\") " pod="openstack/keystone-cron-29416741-ndxxt" Dec 06 07:01:00 crc kubenswrapper[4958]: I1206 07:01:00.243431 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5w98t\" (UniqueName: \"kubernetes.io/projected/205be56c-69f7-4c92-844c-e12d74da811b-kube-api-access-5w98t\") pod \"keystone-cron-29416741-ndxxt\" (UID: \"205be56c-69f7-4c92-844c-e12d74da811b\") " pod="openstack/keystone-cron-29416741-ndxxt" Dec 06 07:01:00 crc kubenswrapper[4958]: I1206 07:01:00.345676 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/205be56c-69f7-4c92-844c-e12d74da811b-fernet-keys\") pod \"keystone-cron-29416741-ndxxt\" (UID: \"205be56c-69f7-4c92-844c-e12d74da811b\") " pod="openstack/keystone-cron-29416741-ndxxt" Dec 06 07:01:00 crc kubenswrapper[4958]: I1206 07:01:00.345858 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/205be56c-69f7-4c92-844c-e12d74da811b-config-data\") pod \"keystone-cron-29416741-ndxxt\" (UID: \"205be56c-69f7-4c92-844c-e12d74da811b\") " pod="openstack/keystone-cron-29416741-ndxxt" Dec 06 07:01:00 crc kubenswrapper[4958]: I1206 07:01:00.345941 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/205be56c-69f7-4c92-844c-e12d74da811b-combined-ca-bundle\") pod \"keystone-cron-29416741-ndxxt\" (UID: \"205be56c-69f7-4c92-844c-e12d74da811b\") " pod="openstack/keystone-cron-29416741-ndxxt" Dec 06 07:01:00 crc kubenswrapper[4958]: I1206 07:01:00.346109 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5w98t\" (UniqueName: \"kubernetes.io/projected/205be56c-69f7-4c92-844c-e12d74da811b-kube-api-access-5w98t\") pod \"keystone-cron-29416741-ndxxt\" (UID: \"205be56c-69f7-4c92-844c-e12d74da811b\") " pod="openstack/keystone-cron-29416741-ndxxt" Dec 06 07:01:00 crc kubenswrapper[4958]: I1206 07:01:00.353402 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/205be56c-69f7-4c92-844c-e12d74da811b-combined-ca-bundle\") pod \"keystone-cron-29416741-ndxxt\" (UID: \"205be56c-69f7-4c92-844c-e12d74da811b\") " pod="openstack/keystone-cron-29416741-ndxxt" Dec 06 07:01:00 crc kubenswrapper[4958]: I1206 07:01:00.353680 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/205be56c-69f7-4c92-844c-e12d74da811b-config-data\") pod \"keystone-cron-29416741-ndxxt\" (UID: \"205be56c-69f7-4c92-844c-e12d74da811b\") " pod="openstack/keystone-cron-29416741-ndxxt" Dec 06 07:01:00 crc kubenswrapper[4958]: I1206 07:01:00.363141 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/205be56c-69f7-4c92-844c-e12d74da811b-fernet-keys\") pod \"keystone-cron-29416741-ndxxt\" (UID: \"205be56c-69f7-4c92-844c-e12d74da811b\") " pod="openstack/keystone-cron-29416741-ndxxt" Dec 06 07:01:00 crc kubenswrapper[4958]: I1206 07:01:00.367058 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5w98t\" (UniqueName: \"kubernetes.io/projected/205be56c-69f7-4c92-844c-e12d74da811b-kube-api-access-5w98t\") pod \"keystone-cron-29416741-ndxxt\" (UID: \"205be56c-69f7-4c92-844c-e12d74da811b\") " pod="openstack/keystone-cron-29416741-ndxxt" Dec 06 07:01:00 crc kubenswrapper[4958]: I1206 07:01:00.500359 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29416741-ndxxt" Dec 06 07:01:00 crc kubenswrapper[4958]: I1206 07:01:00.908778 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29416741-ndxxt"] Dec 06 07:01:01 crc kubenswrapper[4958]: I1206 07:01:01.211056 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29416741-ndxxt" event={"ID":"205be56c-69f7-4c92-844c-e12d74da811b","Type":"ContainerStarted","Data":"f25a10d6961e234a5498093a234a0334faa659426efc517fbcf614d83bfdcfa3"} Dec 06 07:01:01 crc kubenswrapper[4958]: I1206 07:01:01.762573 4958 scope.go:117] "RemoveContainer" containerID="92e2d6a3a90b9ec515381fa1b95d3a975374f18f172fa5132eb011229fbb96c3" Dec 06 07:01:01 crc kubenswrapper[4958]: E1206 07:01:01.763350 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 07:01:02 crc kubenswrapper[4958]: I1206 07:01:02.221373 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29416741-ndxxt" event={"ID":"205be56c-69f7-4c92-844c-e12d74da811b","Type":"ContainerStarted","Data":"d11fafe6e06ebc6690b5978e4bbd42471ded9e7ee0cf7a57dab74babbdb22cfd"} Dec 06 07:01:02 crc kubenswrapper[4958]: I1206 07:01:02.236952 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29416741-ndxxt" podStartSLOduration=2.236931854 podStartE2EDuration="2.236931854s" podCreationTimestamp="2025-12-06 07:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 07:01:02.234927529 +0000 UTC m=+5572.768698292" 
watchObservedRunningTime="2025-12-06 07:01:02.236931854 +0000 UTC m=+5572.770702617" Dec 06 07:01:06 crc kubenswrapper[4958]: I1206 07:01:06.258528 4958 generic.go:334] "Generic (PLEG): container finished" podID="205be56c-69f7-4c92-844c-e12d74da811b" containerID="d11fafe6e06ebc6690b5978e4bbd42471ded9e7ee0cf7a57dab74babbdb22cfd" exitCode=0 Dec 06 07:01:06 crc kubenswrapper[4958]: I1206 07:01:06.258637 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29416741-ndxxt" event={"ID":"205be56c-69f7-4c92-844c-e12d74da811b","Type":"ContainerDied","Data":"d11fafe6e06ebc6690b5978e4bbd42471ded9e7ee0cf7a57dab74babbdb22cfd"} Dec 06 07:01:07 crc kubenswrapper[4958]: I1206 07:01:07.613911 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29416741-ndxxt" Dec 06 07:01:07 crc kubenswrapper[4958]: I1206 07:01:07.724148 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/205be56c-69f7-4c92-844c-e12d74da811b-fernet-keys\") pod \"205be56c-69f7-4c92-844c-e12d74da811b\" (UID: \"205be56c-69f7-4c92-844c-e12d74da811b\") " Dec 06 07:01:07 crc kubenswrapper[4958]: I1206 07:01:07.724222 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/205be56c-69f7-4c92-844c-e12d74da811b-combined-ca-bundle\") pod \"205be56c-69f7-4c92-844c-e12d74da811b\" (UID: \"205be56c-69f7-4c92-844c-e12d74da811b\") " Dec 06 07:01:07 crc kubenswrapper[4958]: I1206 07:01:07.724334 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/205be56c-69f7-4c92-844c-e12d74da811b-config-data\") pod \"205be56c-69f7-4c92-844c-e12d74da811b\" (UID: \"205be56c-69f7-4c92-844c-e12d74da811b\") " Dec 06 07:01:07 crc kubenswrapper[4958]: I1206 07:01:07.724410 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5w98t\" (UniqueName: \"kubernetes.io/projected/205be56c-69f7-4c92-844c-e12d74da811b-kube-api-access-5w98t\") pod \"205be56c-69f7-4c92-844c-e12d74da811b\" (UID: \"205be56c-69f7-4c92-844c-e12d74da811b\") " Dec 06 07:01:07 crc kubenswrapper[4958]: I1206 07:01:07.729520 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/205be56c-69f7-4c92-844c-e12d74da811b-kube-api-access-5w98t" (OuterVolumeSpecName: "kube-api-access-5w98t") pod "205be56c-69f7-4c92-844c-e12d74da811b" (UID: "205be56c-69f7-4c92-844c-e12d74da811b"). InnerVolumeSpecName "kube-api-access-5w98t". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:01:07 crc kubenswrapper[4958]: I1206 07:01:07.730102 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/205be56c-69f7-4c92-844c-e12d74da811b-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "205be56c-69f7-4c92-844c-e12d74da811b" (UID: "205be56c-69f7-4c92-844c-e12d74da811b"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:01:07 crc kubenswrapper[4958]: I1206 07:01:07.755853 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/205be56c-69f7-4c92-844c-e12d74da811b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "205be56c-69f7-4c92-844c-e12d74da811b" (UID: "205be56c-69f7-4c92-844c-e12d74da811b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:01:07 crc kubenswrapper[4958]: I1206 07:01:07.790655 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/205be56c-69f7-4c92-844c-e12d74da811b-config-data" (OuterVolumeSpecName: "config-data") pod "205be56c-69f7-4c92-844c-e12d74da811b" (UID: "205be56c-69f7-4c92-844c-e12d74da811b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 06 07:01:07 crc kubenswrapper[4958]: I1206 07:01:07.826856 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/205be56c-69f7-4c92-844c-e12d74da811b-config-data\") on node \"crc\" DevicePath \"\"" Dec 06 07:01:07 crc kubenswrapper[4958]: I1206 07:01:07.826901 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5w98t\" (UniqueName: \"kubernetes.io/projected/205be56c-69f7-4c92-844c-e12d74da811b-kube-api-access-5w98t\") on node \"crc\" DevicePath \"\"" Dec 06 07:01:07 crc kubenswrapper[4958]: I1206 07:01:07.826915 4958 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/205be56c-69f7-4c92-844c-e12d74da811b-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 06 07:01:07 crc kubenswrapper[4958]: I1206 07:01:07.826925 4958 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/205be56c-69f7-4c92-844c-e12d74da811b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 06 07:01:08 crc kubenswrapper[4958]: I1206 07:01:08.282234 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29416741-ndxxt" event={"ID":"205be56c-69f7-4c92-844c-e12d74da811b","Type":"ContainerDied","Data":"f25a10d6961e234a5498093a234a0334faa659426efc517fbcf614d83bfdcfa3"} Dec 06 07:01:08 crc kubenswrapper[4958]: I1206 07:01:08.282278 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f25a10d6961e234a5498093a234a0334faa659426efc517fbcf614d83bfdcfa3" Dec 06 07:01:08 crc kubenswrapper[4958]: I1206 07:01:08.282348 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29416741-ndxxt" Dec 06 07:01:15 crc kubenswrapper[4958]: I1206 07:01:15.762932 4958 scope.go:117] "RemoveContainer" containerID="92e2d6a3a90b9ec515381fa1b95d3a975374f18f172fa5132eb011229fbb96c3" Dec 06 07:01:15 crc kubenswrapper[4958]: E1206 07:01:15.764122 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 07:01:26 crc kubenswrapper[4958]: I1206 07:01:26.762870 4958 scope.go:117] "RemoveContainer" containerID="92e2d6a3a90b9ec515381fa1b95d3a975374f18f172fa5132eb011229fbb96c3" Dec 06 07:01:26 crc kubenswrapper[4958]: E1206 07:01:26.764104 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 07:01:41 crc kubenswrapper[4958]: I1206 07:01:41.761992 4958 scope.go:117] "RemoveContainer" containerID="92e2d6a3a90b9ec515381fa1b95d3a975374f18f172fa5132eb011229fbb96c3" Dec 06 07:01:42 crc kubenswrapper[4958]: I1206 07:01:42.611487 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" event={"ID":"c13528c0-da5d-4d55-9155-2c29c33edfc4","Type":"ContainerStarted","Data":"acd5b1fb7f44573113189c9bbdead11e5e6a1e5b56ccd5e76cbb98ba0d7802df"} Dec 06 07:02:35 crc kubenswrapper[4958]: I1206 07:02:35.794758 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-g76q4"] Dec 06 07:02:35 crc kubenswrapper[4958]: E1206 07:02:35.795978 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="205be56c-69f7-4c92-844c-e12d74da811b" containerName="keystone-cron" Dec 06 07:02:35 crc kubenswrapper[4958]: I1206 07:02:35.795995 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="205be56c-69f7-4c92-844c-e12d74da811b" containerName="keystone-cron" Dec 06 07:02:35 crc kubenswrapper[4958]: I1206 07:02:35.796272 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="205be56c-69f7-4c92-844c-e12d74da811b" containerName="keystone-cron" Dec 06 07:02:35 crc kubenswrapper[4958]: I1206 07:02:35.798488 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-g76q4" Dec 06 07:02:35 crc kubenswrapper[4958]: I1206 07:02:35.810167 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g76q4"] Dec 06 07:02:35 crc kubenswrapper[4958]: I1206 07:02:35.842398 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc0978d2-c409-4e6e-898d-affe6ea620ba-utilities\") pod \"redhat-operators-g76q4\" (UID: \"cc0978d2-c409-4e6e-898d-affe6ea620ba\") " pod="openshift-marketplace/redhat-operators-g76q4" Dec 06 07:02:35 crc kubenswrapper[4958]: I1206 07:02:35.842602 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc0978d2-c409-4e6e-898d-affe6ea620ba-catalog-content\") pod \"redhat-operators-g76q4\" (UID: \"cc0978d2-c409-4e6e-898d-affe6ea620ba\") " pod="openshift-marketplace/redhat-operators-g76q4" Dec 06 07:02:35 crc kubenswrapper[4958]: I1206 07:02:35.842709 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8b28j\" (UniqueName: \"kubernetes.io/projected/cc0978d2-c409-4e6e-898d-affe6ea620ba-kube-api-access-8b28j\") pod \"redhat-operators-g76q4\" (UID: \"cc0978d2-c409-4e6e-898d-affe6ea620ba\") " pod="openshift-marketplace/redhat-operators-g76q4" Dec 06 07:02:35 crc kubenswrapper[4958]: I1206 07:02:35.944427 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8b28j\" (UniqueName: \"kubernetes.io/projected/cc0978d2-c409-4e6e-898d-affe6ea620ba-kube-api-access-8b28j\") pod \"redhat-operators-g76q4\" (UID: \"cc0978d2-c409-4e6e-898d-affe6ea620ba\") " pod="openshift-marketplace/redhat-operators-g76q4" Dec 06 07:02:35 crc kubenswrapper[4958]: I1206 07:02:35.944545 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc0978d2-c409-4e6e-898d-affe6ea620ba-utilities\") pod \"redhat-operators-g76q4\" (UID: \"cc0978d2-c409-4e6e-898d-affe6ea620ba\") " pod="openshift-marketplace/redhat-operators-g76q4" Dec 06 07:02:35 crc kubenswrapper[4958]: I1206 07:02:35.944654 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc0978d2-c409-4e6e-898d-affe6ea620ba-catalog-content\") pod \"redhat-operators-g76q4\" (UID: \"cc0978d2-c409-4e6e-898d-affe6ea620ba\") " pod="openshift-marketplace/redhat-operators-g76q4" Dec 06 07:02:35 crc kubenswrapper[4958]: I1206 07:02:35.945403 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc0978d2-c409-4e6e-898d-affe6ea620ba-catalog-content\") pod \"redhat-operators-g76q4\" (UID: \"cc0978d2-c409-4e6e-898d-affe6ea620ba\") " pod="openshift-marketplace/redhat-operators-g76q4" Dec 06 07:02:35 crc kubenswrapper[4958]: I1206 07:02:35.945571 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc0978d2-c409-4e6e-898d-affe6ea620ba-utilities\") pod \"redhat-operators-g76q4\" (UID: \"cc0978d2-c409-4e6e-898d-affe6ea620ba\") " pod="openshift-marketplace/redhat-operators-g76q4" Dec 06 07:02:35 crc kubenswrapper[4958]: I1206 07:02:35.968582 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-8b28j\" (UniqueName: \"kubernetes.io/projected/cc0978d2-c409-4e6e-898d-affe6ea620ba-kube-api-access-8b28j\") pod \"redhat-operators-g76q4\" (UID: \"cc0978d2-c409-4e6e-898d-affe6ea620ba\") " pod="openshift-marketplace/redhat-operators-g76q4" Dec 06 07:02:36 crc kubenswrapper[4958]: I1206 07:02:36.125016 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g76q4" Dec 06 07:02:36 crc kubenswrapper[4958]: I1206 07:02:36.597706 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-g76q4"] Dec 06 07:02:37 crc kubenswrapper[4958]: I1206 07:02:37.130067 4958 generic.go:334] "Generic (PLEG): container finished" podID="cc0978d2-c409-4e6e-898d-affe6ea620ba" containerID="cca04cfda9bf739d2b3b945114b48839af50d70f53a42972b463fa86dd431df8" exitCode=0 Dec 06 07:02:37 crc kubenswrapper[4958]: I1206 07:02:37.130282 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g76q4" event={"ID":"cc0978d2-c409-4e6e-898d-affe6ea620ba","Type":"ContainerDied","Data":"cca04cfda9bf739d2b3b945114b48839af50d70f53a42972b463fa86dd431df8"} Dec 06 07:02:37 crc kubenswrapper[4958]: I1206 07:02:37.130371 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g76q4" event={"ID":"cc0978d2-c409-4e6e-898d-affe6ea620ba","Type":"ContainerStarted","Data":"52984598b53703d4f3a819b9bd8792dde8a851756e6c5d494525279b15a5b355"} Dec 06 07:02:37 crc kubenswrapper[4958]: I1206 07:02:37.133320 4958 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 06 07:02:38 crc kubenswrapper[4958]: I1206 07:02:38.146055 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g76q4" event={"ID":"cc0978d2-c409-4e6e-898d-affe6ea620ba","Type":"ContainerStarted","Data":"4d2a0e4add1363de1fe88528b1b9f989c0c5091aaba3f0aaef596e597032eabe"} Dec 06 07:02:41 crc kubenswrapper[4958]: I1206 07:02:41.175237 4958 generic.go:334] "Generic (PLEG): container finished" podID="cc0978d2-c409-4e6e-898d-affe6ea620ba" containerID="4d2a0e4add1363de1fe88528b1b9f989c0c5091aaba3f0aaef596e597032eabe" exitCode=0 Dec 06 07:02:41 crc kubenswrapper[4958]: I1206 07:02:41.175294 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g76q4" event={"ID":"cc0978d2-c409-4e6e-898d-affe6ea620ba","Type":"ContainerDied","Data":"4d2a0e4add1363de1fe88528b1b9f989c0c5091aaba3f0aaef596e597032eabe"} Dec 06 07:02:42 crc kubenswrapper[4958]: I1206 07:02:42.184923 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g76q4" event={"ID":"cc0978d2-c409-4e6e-898d-affe6ea620ba","Type":"ContainerStarted","Data":"864cfa02678807291811588871f44beb60dca077e6815d374b6a5a2a4df4d79b"} Dec 06 07:02:42 crc kubenswrapper[4958]: I1206 07:02:42.209667 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-g76q4" podStartSLOduration=2.7713446900000003 podStartE2EDuration="7.209641332s" podCreationTimestamp="2025-12-06 07:02:35 +0000 UTC" firstStartedPulling="2025-12-06 07:02:37.133110717 +0000 UTC m=+5667.666881470" lastFinishedPulling="2025-12-06 07:02:41.571407359 +0000 UTC m=+5672.105178112" observedRunningTime="2025-12-06 07:02:42.200838234 +0000 UTC m=+5672.734609017" watchObservedRunningTime="2025-12-06 07:02:42.209641332 +0000 UTC m=+5672.743412095" Dec 06 07:02:46 crc 
kubenswrapper[4958]: I1206 07:02:46.125181 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-g76q4" Dec 06 07:02:46 crc kubenswrapper[4958]: I1206 07:02:46.125807 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-g76q4" Dec 06 07:02:47 crc kubenswrapper[4958]: I1206 07:02:47.173193 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-g76q4" podUID="cc0978d2-c409-4e6e-898d-affe6ea620ba" containerName="registry-server" probeResult="failure" output=< Dec 06 07:02:47 crc kubenswrapper[4958]: timeout: failed to connect service ":50051" within 1s Dec 06 07:02:47 crc kubenswrapper[4958]: > Dec 06 07:02:48 crc kubenswrapper[4958]: I1206 07:02:48.999301 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nh8hd"] Dec 06 07:02:49 crc kubenswrapper[4958]: I1206 07:02:49.002085 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nh8hd" Dec 06 07:02:49 crc kubenswrapper[4958]: I1206 07:02:49.010553 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nh8hd"] Dec 06 07:02:49 crc kubenswrapper[4958]: I1206 07:02:49.113349 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12f6f6ae-2875-4d7b-8d59-0223e77830e2-catalog-content\") pod \"community-operators-nh8hd\" (UID: \"12f6f6ae-2875-4d7b-8d59-0223e77830e2\") " pod="openshift-marketplace/community-operators-nh8hd" Dec 06 07:02:49 crc kubenswrapper[4958]: I1206 07:02:49.113486 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12f6f6ae-2875-4d7b-8d59-0223e77830e2-utilities\") pod \"community-operators-nh8hd\" (UID: \"12f6f6ae-2875-4d7b-8d59-0223e77830e2\") " pod="openshift-marketplace/community-operators-nh8hd" Dec 06 07:02:49 crc kubenswrapper[4958]: I1206 07:02:49.113778 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tdfg\" (UniqueName: \"kubernetes.io/projected/12f6f6ae-2875-4d7b-8d59-0223e77830e2-kube-api-access-4tdfg\") pod \"community-operators-nh8hd\" (UID: \"12f6f6ae-2875-4d7b-8d59-0223e77830e2\") " pod="openshift-marketplace/community-operators-nh8hd" Dec 06 07:02:49 crc kubenswrapper[4958]: I1206 07:02:49.215777 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12f6f6ae-2875-4d7b-8d59-0223e77830e2-utilities\") pod \"community-operators-nh8hd\" (UID: \"12f6f6ae-2875-4d7b-8d59-0223e77830e2\") " pod="openshift-marketplace/community-operators-nh8hd" Dec 06 07:02:49 crc kubenswrapper[4958]: I1206 07:02:49.215946 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tdfg\" (UniqueName: \"kubernetes.io/projected/12f6f6ae-2875-4d7b-8d59-0223e77830e2-kube-api-access-4tdfg\") pod \"community-operators-nh8hd\" (UID: \"12f6f6ae-2875-4d7b-8d59-0223e77830e2\") " pod="openshift-marketplace/community-operators-nh8hd" Dec 06 07:02:49 crc kubenswrapper[4958]: I1206 07:02:49.215994 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/12f6f6ae-2875-4d7b-8d59-0223e77830e2-catalog-content\") pod \"community-operators-nh8hd\" (UID: \"12f6f6ae-2875-4d7b-8d59-0223e77830e2\") " pod="openshift-marketplace/community-operators-nh8hd" Dec 06 07:02:49 crc kubenswrapper[4958]: I1206 07:02:49.216385 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12f6f6ae-2875-4d7b-8d59-0223e77830e2-utilities\") pod \"community-operators-nh8hd\" (UID: \"12f6f6ae-2875-4d7b-8d59-0223e77830e2\") " pod="openshift-marketplace/community-operators-nh8hd" Dec 06 07:02:49 crc kubenswrapper[4958]: I1206 07:02:49.216440 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12f6f6ae-2875-4d7b-8d59-0223e77830e2-catalog-content\") pod \"community-operators-nh8hd\" (UID: \"12f6f6ae-2875-4d7b-8d59-0223e77830e2\") " pod="openshift-marketplace/community-operators-nh8hd" Dec 06 07:02:49 crc kubenswrapper[4958]: I1206 07:02:49.241223 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tdfg\" (UniqueName: \"kubernetes.io/projected/12f6f6ae-2875-4d7b-8d59-0223e77830e2-kube-api-access-4tdfg\") pod \"community-operators-nh8hd\" (UID: \"12f6f6ae-2875-4d7b-8d59-0223e77830e2\") " pod="openshift-marketplace/community-operators-nh8hd" Dec 06 07:02:49 crc kubenswrapper[4958]: I1206 07:02:49.364510 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nh8hd" Dec 06 07:02:49 crc kubenswrapper[4958]: I1206 07:02:49.926625 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nh8hd"] Dec 06 07:02:50 crc kubenswrapper[4958]: I1206 07:02:50.257901 4958 generic.go:334] "Generic (PLEG): container finished" podID="12f6f6ae-2875-4d7b-8d59-0223e77830e2" containerID="cae713a142456b275072d9b6a58ef6fbd47e6e928261819d858ed9253004e21e" exitCode=0 Dec 06 07:02:50 crc kubenswrapper[4958]: I1206 07:02:50.258070 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nh8hd" event={"ID":"12f6f6ae-2875-4d7b-8d59-0223e77830e2","Type":"ContainerDied","Data":"cae713a142456b275072d9b6a58ef6fbd47e6e928261819d858ed9253004e21e"} Dec 06 07:02:50 crc kubenswrapper[4958]: I1206 07:02:50.258193 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nh8hd" event={"ID":"12f6f6ae-2875-4d7b-8d59-0223e77830e2","Type":"ContainerStarted","Data":"5f586835fc7a06f48ffcb1c6590e264df5378a2a435277d3d43b24e175ad45aa"} Dec 06 07:02:51 crc kubenswrapper[4958]: I1206 07:02:51.268455 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nh8hd" event={"ID":"12f6f6ae-2875-4d7b-8d59-0223e77830e2","Type":"ContainerStarted","Data":"b40c4a2f247f97c2b8464cf131f3010053f46a4099352a4152e8de5179260353"} Dec 06 07:02:52 crc kubenswrapper[4958]: I1206 07:02:52.281076 4958 generic.go:334] "Generic (PLEG): container finished" podID="12f6f6ae-2875-4d7b-8d59-0223e77830e2" containerID="b40c4a2f247f97c2b8464cf131f3010053f46a4099352a4152e8de5179260353" exitCode=0 Dec 06 07:02:52 crc kubenswrapper[4958]: I1206 07:02:52.281142 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nh8hd" event={"ID":"12f6f6ae-2875-4d7b-8d59-0223e77830e2","Type":"ContainerDied","Data":"b40c4a2f247f97c2b8464cf131f3010053f46a4099352a4152e8de5179260353"} 
Dec 06 07:02:53 crc kubenswrapper[4958]: I1206 07:02:53.292126 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nh8hd" event={"ID":"12f6f6ae-2875-4d7b-8d59-0223e77830e2","Type":"ContainerStarted","Data":"197c90920ea883fe991fb95804618c2c43109c6400a169378551552b0c310f32"} Dec 06 07:02:53 crc kubenswrapper[4958]: I1206 07:02:53.337929 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nh8hd" podStartSLOduration=2.865912885 podStartE2EDuration="5.337913935s" podCreationTimestamp="2025-12-06 07:02:48 +0000 UTC" firstStartedPulling="2025-12-06 07:02:50.260619614 +0000 UTC m=+5680.794390377" lastFinishedPulling="2025-12-06 07:02:52.732620664 +0000 UTC m=+5683.266391427" observedRunningTime="2025-12-06 07:02:53.336162248 +0000 UTC m=+5683.869933011" watchObservedRunningTime="2025-12-06 07:02:53.337913935 +0000 UTC m=+5683.871684698" Dec 06 07:02:56 crc kubenswrapper[4958]: I1206 07:02:56.187185 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-g76q4" Dec 06 07:02:56 crc kubenswrapper[4958]: I1206 07:02:56.262710 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-g76q4" Dec 06 07:02:57 crc kubenswrapper[4958]: I1206 07:02:57.377361 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-g76q4"] Dec 06 07:02:57 crc kubenswrapper[4958]: I1206 07:02:57.377596 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-g76q4" podUID="cc0978d2-c409-4e6e-898d-affe6ea620ba" containerName="registry-server" containerID="cri-o://864cfa02678807291811588871f44beb60dca077e6815d374b6a5a2a4df4d79b" gracePeriod=2 Dec 06 07:02:58 crc kubenswrapper[4958]: I1206 07:02:58.361282 4958 generic.go:334] "Generic (PLEG): container finished" podID="cc0978d2-c409-4e6e-898d-affe6ea620ba" containerID="864cfa02678807291811588871f44beb60dca077e6815d374b6a5a2a4df4d79b" exitCode=0 Dec 06 07:02:58 crc kubenswrapper[4958]: I1206 07:02:58.361682 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g76q4" event={"ID":"cc0978d2-c409-4e6e-898d-affe6ea620ba","Type":"ContainerDied","Data":"864cfa02678807291811588871f44beb60dca077e6815d374b6a5a2a4df4d79b"} Dec 06 07:02:58 crc kubenswrapper[4958]: I1206 07:02:58.361929 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-g76q4" event={"ID":"cc0978d2-c409-4e6e-898d-affe6ea620ba","Type":"ContainerDied","Data":"52984598b53703d4f3a819b9bd8792dde8a851756e6c5d494525279b15a5b355"} Dec 06 07:02:58 crc kubenswrapper[4958]: I1206 07:02:58.361951 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="52984598b53703d4f3a819b9bd8792dde8a851756e6c5d494525279b15a5b355" Dec 06 07:02:58 crc kubenswrapper[4958]: I1206 07:02:58.361833 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-g76q4" Dec 06 07:02:58 crc kubenswrapper[4958]: I1206 07:02:58.455518 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc0978d2-c409-4e6e-898d-affe6ea620ba-utilities\") pod \"cc0978d2-c409-4e6e-898d-affe6ea620ba\" (UID: \"cc0978d2-c409-4e6e-898d-affe6ea620ba\") " Dec 06 07:02:58 crc kubenswrapper[4958]: I1206 07:02:58.455587 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8b28j\" (UniqueName: \"kubernetes.io/projected/cc0978d2-c409-4e6e-898d-affe6ea620ba-kube-api-access-8b28j\") pod \"cc0978d2-c409-4e6e-898d-affe6ea620ba\" (UID: \"cc0978d2-c409-4e6e-898d-affe6ea620ba\") " Dec 06 07:02:58 crc kubenswrapper[4958]: I1206 07:02:58.455840 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc0978d2-c409-4e6e-898d-affe6ea620ba-catalog-content\") pod \"cc0978d2-c409-4e6e-898d-affe6ea620ba\" (UID: \"cc0978d2-c409-4e6e-898d-affe6ea620ba\") " Dec 06 07:02:58 crc kubenswrapper[4958]: I1206 07:02:58.456544 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc0978d2-c409-4e6e-898d-affe6ea620ba-utilities" (OuterVolumeSpecName: "utilities") pod "cc0978d2-c409-4e6e-898d-affe6ea620ba" (UID: "cc0978d2-c409-4e6e-898d-affe6ea620ba"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:02:58 crc kubenswrapper[4958]: I1206 07:02:58.460876 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc0978d2-c409-4e6e-898d-affe6ea620ba-kube-api-access-8b28j" (OuterVolumeSpecName: "kube-api-access-8b28j") pod "cc0978d2-c409-4e6e-898d-affe6ea620ba" (UID: "cc0978d2-c409-4e6e-898d-affe6ea620ba"). InnerVolumeSpecName "kube-api-access-8b28j". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:02:58 crc kubenswrapper[4958]: I1206 07:02:58.558364 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8b28j\" (UniqueName: \"kubernetes.io/projected/cc0978d2-c409-4e6e-898d-affe6ea620ba-kube-api-access-8b28j\") on node \"crc\" DevicePath \"\"" Dec 06 07:02:58 crc kubenswrapper[4958]: I1206 07:02:58.558666 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc0978d2-c409-4e6e-898d-affe6ea620ba-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 07:02:58 crc kubenswrapper[4958]: I1206 07:02:58.560981 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc0978d2-c409-4e6e-898d-affe6ea620ba-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cc0978d2-c409-4e6e-898d-affe6ea620ba" (UID: "cc0978d2-c409-4e6e-898d-affe6ea620ba"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:02:58 crc kubenswrapper[4958]: I1206 07:02:58.661193 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc0978d2-c409-4e6e-898d-affe6ea620ba-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 07:02:59 crc kubenswrapper[4958]: I1206 07:02:59.365503 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nh8hd" Dec 06 07:02:59 crc kubenswrapper[4958]: I1206 07:02:59.365742 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-nh8hd" Dec 06 07:02:59 crc kubenswrapper[4958]: I1206 07:02:59.376121 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-g76q4" Dec 06 07:02:59 crc kubenswrapper[4958]: I1206 07:02:59.414149 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-g76q4"] Dec 06 07:02:59 crc kubenswrapper[4958]: I1206 07:02:59.415106 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nh8hd" Dec 06 07:02:59 crc kubenswrapper[4958]: I1206 07:02:59.430811 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-g76q4"] Dec 06 07:02:59 crc kubenswrapper[4958]: I1206 07:02:59.774754 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc0978d2-c409-4e6e-898d-affe6ea620ba" path="/var/lib/kubelet/pods/cc0978d2-c409-4e6e-898d-affe6ea620ba/volumes" Dec 06 07:03:00 crc kubenswrapper[4958]: I1206 07:03:00.437227 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nh8hd" Dec 06 07:03:01 crc kubenswrapper[4958]: I1206 07:03:01.785405 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nh8hd"] Dec 06 07:03:02 crc kubenswrapper[4958]: I1206 07:03:02.403901 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-nh8hd" podUID="12f6f6ae-2875-4d7b-8d59-0223e77830e2" containerName="registry-server" containerID="cri-o://197c90920ea883fe991fb95804618c2c43109c6400a169378551552b0c310f32" gracePeriod=2 Dec 06 07:03:03 crc kubenswrapper[4958]: I1206 07:03:03.416046 4958 generic.go:334] "Generic (PLEG): container finished" podID="12f6f6ae-2875-4d7b-8d59-0223e77830e2" containerID="197c90920ea883fe991fb95804618c2c43109c6400a169378551552b0c310f32" exitCode=0 Dec 06 07:03:03 crc kubenswrapper[4958]: I1206 07:03:03.416235 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nh8hd" event={"ID":"12f6f6ae-2875-4d7b-8d59-0223e77830e2","Type":"ContainerDied","Data":"197c90920ea883fe991fb95804618c2c43109c6400a169378551552b0c310f32"} Dec 06 07:03:03 crc kubenswrapper[4958]: I1206 07:03:03.416599 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nh8hd" event={"ID":"12f6f6ae-2875-4d7b-8d59-0223e77830e2","Type":"ContainerDied","Data":"5f586835fc7a06f48ffcb1c6590e264df5378a2a435277d3d43b24e175ad45aa"} Dec 06 07:03:03 crc kubenswrapper[4958]: I1206 07:03:03.416617 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5f586835fc7a06f48ffcb1c6590e264df5378a2a435277d3d43b24e175ad45aa" Dec 06 07:03:03 crc 
kubenswrapper[4958]: I1206 07:03:03.421898 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nh8hd" Dec 06 07:03:03 crc kubenswrapper[4958]: I1206 07:03:03.563220 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12f6f6ae-2875-4d7b-8d59-0223e77830e2-catalog-content\") pod \"12f6f6ae-2875-4d7b-8d59-0223e77830e2\" (UID: \"12f6f6ae-2875-4d7b-8d59-0223e77830e2\") " Dec 06 07:03:03 crc kubenswrapper[4958]: I1206 07:03:03.563349 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4tdfg\" (UniqueName: \"kubernetes.io/projected/12f6f6ae-2875-4d7b-8d59-0223e77830e2-kube-api-access-4tdfg\") pod \"12f6f6ae-2875-4d7b-8d59-0223e77830e2\" (UID: \"12f6f6ae-2875-4d7b-8d59-0223e77830e2\") " Dec 06 07:03:03 crc kubenswrapper[4958]: I1206 07:03:03.563442 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12f6f6ae-2875-4d7b-8d59-0223e77830e2-utilities\") pod \"12f6f6ae-2875-4d7b-8d59-0223e77830e2\" (UID: \"12f6f6ae-2875-4d7b-8d59-0223e77830e2\") " Dec 06 07:03:03 crc kubenswrapper[4958]: I1206 07:03:03.564218 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12f6f6ae-2875-4d7b-8d59-0223e77830e2-utilities" (OuterVolumeSpecName: "utilities") pod "12f6f6ae-2875-4d7b-8d59-0223e77830e2" (UID: "12f6f6ae-2875-4d7b-8d59-0223e77830e2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:03:03 crc kubenswrapper[4958]: I1206 07:03:03.569483 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12f6f6ae-2875-4d7b-8d59-0223e77830e2-kube-api-access-4tdfg" (OuterVolumeSpecName: "kube-api-access-4tdfg") pod "12f6f6ae-2875-4d7b-8d59-0223e77830e2" (UID: "12f6f6ae-2875-4d7b-8d59-0223e77830e2"). InnerVolumeSpecName "kube-api-access-4tdfg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:03:03 crc kubenswrapper[4958]: I1206 07:03:03.620526 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12f6f6ae-2875-4d7b-8d59-0223e77830e2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "12f6f6ae-2875-4d7b-8d59-0223e77830e2" (UID: "12f6f6ae-2875-4d7b-8d59-0223e77830e2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:03:03 crc kubenswrapper[4958]: I1206 07:03:03.666727 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12f6f6ae-2875-4d7b-8d59-0223e77830e2-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 07:03:03 crc kubenswrapper[4958]: I1206 07:03:03.666765 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4tdfg\" (UniqueName: \"kubernetes.io/projected/12f6f6ae-2875-4d7b-8d59-0223e77830e2-kube-api-access-4tdfg\") on node \"crc\" DevicePath \"\"" Dec 06 07:03:03 crc kubenswrapper[4958]: I1206 07:03:03.666811 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12f6f6ae-2875-4d7b-8d59-0223e77830e2-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 07:03:04 crc kubenswrapper[4958]: I1206 07:03:04.425002 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nh8hd" Dec 06 07:03:04 crc kubenswrapper[4958]: I1206 07:03:04.449998 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nh8hd"] Dec 06 07:03:04 crc kubenswrapper[4958]: I1206 07:03:04.458769 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-nh8hd"] Dec 06 07:03:05 crc kubenswrapper[4958]: I1206 07:03:05.773938 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12f6f6ae-2875-4d7b-8d59-0223e77830e2" path="/var/lib/kubelet/pods/12f6f6ae-2875-4d7b-8d59-0223e77830e2/volumes" Dec 06 07:04:09 crc kubenswrapper[4958]: I1206 07:04:09.866264 4958 patch_prober.go:28] interesting pod/machine-config-daemon-5ktnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 07:04:09 crc kubenswrapper[4958]: I1206 07:04:09.866780 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 07:04:39 crc kubenswrapper[4958]: I1206 07:04:39.866118 4958 patch_prober.go:28] interesting pod/machine-config-daemon-5ktnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 07:04:39 crc kubenswrapper[4958]: I1206 07:04:39.866705 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 07:05:09 crc kubenswrapper[4958]: I1206 07:05:09.866685 4958 patch_prober.go:28] interesting pod/machine-config-daemon-5ktnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 07:05:09 crc kubenswrapper[4958]: I1206 07:05:09.867450 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 07:05:09 crc kubenswrapper[4958]: I1206 07:05:09.867531 4958 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" Dec 06 07:05:09 crc kubenswrapper[4958]: I1206 07:05:09.869695 4958 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"acd5b1fb7f44573113189c9bbdead11e5e6a1e5b56ccd5e76cbb98ba0d7802df"} pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" containerMessage="Container machine-config-daemon failed liveness probe, will be 
restarted" Dec 06 07:05:09 crc kubenswrapper[4958]: I1206 07:05:09.869776 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" containerName="machine-config-daemon" containerID="cri-o://acd5b1fb7f44573113189c9bbdead11e5e6a1e5b56ccd5e76cbb98ba0d7802df" gracePeriod=600 Dec 06 07:05:11 crc kubenswrapper[4958]: I1206 07:05:11.661137 4958 generic.go:334] "Generic (PLEG): container finished" podID="c13528c0-da5d-4d55-9155-2c29c33edfc4" containerID="acd5b1fb7f44573113189c9bbdead11e5e6a1e5b56ccd5e76cbb98ba0d7802df" exitCode=0 Dec 06 07:05:11 crc kubenswrapper[4958]: I1206 07:05:11.661219 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" event={"ID":"c13528c0-da5d-4d55-9155-2c29c33edfc4","Type":"ContainerDied","Data":"acd5b1fb7f44573113189c9bbdead11e5e6a1e5b56ccd5e76cbb98ba0d7802df"} Dec 06 07:05:11 crc kubenswrapper[4958]: I1206 07:05:11.661681 4958 scope.go:117] "RemoveContainer" containerID="92e2d6a3a90b9ec515381fa1b95d3a975374f18f172fa5132eb011229fbb96c3" Dec 06 07:05:12 crc kubenswrapper[4958]: I1206 07:05:12.673859 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" event={"ID":"c13528c0-da5d-4d55-9155-2c29c33edfc4","Type":"ContainerStarted","Data":"f965fcfc735532b7de2edc8624e21b2280b5d3a26ef2aa2e9a7b9c25d353ca61"} Dec 06 07:06:44 crc kubenswrapper[4958]: I1206 07:06:44.475627 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4w997"] Dec 06 07:06:44 crc kubenswrapper[4958]: E1206 07:06:44.476712 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12f6f6ae-2875-4d7b-8d59-0223e77830e2" containerName="registry-server" Dec 06 07:06:44 crc kubenswrapper[4958]: I1206 07:06:44.476729 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="12f6f6ae-2875-4d7b-8d59-0223e77830e2" containerName="registry-server" Dec 06 07:06:44 crc kubenswrapper[4958]: E1206 07:06:44.476754 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc0978d2-c409-4e6e-898d-affe6ea620ba" containerName="extract-content" Dec 06 07:06:44 crc kubenswrapper[4958]: I1206 07:06:44.476762 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc0978d2-c409-4e6e-898d-affe6ea620ba" containerName="extract-content" Dec 06 07:06:44 crc kubenswrapper[4958]: E1206 07:06:44.476775 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12f6f6ae-2875-4d7b-8d59-0223e77830e2" containerName="extract-utilities" Dec 06 07:06:44 crc kubenswrapper[4958]: I1206 07:06:44.476783 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="12f6f6ae-2875-4d7b-8d59-0223e77830e2" containerName="extract-utilities" Dec 06 07:06:44 crc kubenswrapper[4958]: E1206 07:06:44.476803 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc0978d2-c409-4e6e-898d-affe6ea620ba" containerName="registry-server" Dec 06 07:06:44 crc kubenswrapper[4958]: I1206 07:06:44.476811 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc0978d2-c409-4e6e-898d-affe6ea620ba" containerName="registry-server" Dec 06 07:06:44 crc kubenswrapper[4958]: E1206 07:06:44.476834 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12f6f6ae-2875-4d7b-8d59-0223e77830e2" containerName="extract-content" Dec 06 07:06:44 crc kubenswrapper[4958]: I1206 07:06:44.476841 4958 
state_mem.go:107] "Deleted CPUSet assignment" podUID="12f6f6ae-2875-4d7b-8d59-0223e77830e2" containerName="extract-content" Dec 06 07:06:44 crc kubenswrapper[4958]: E1206 07:06:44.476875 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc0978d2-c409-4e6e-898d-affe6ea620ba" containerName="extract-utilities" Dec 06 07:06:44 crc kubenswrapper[4958]: I1206 07:06:44.476882 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc0978d2-c409-4e6e-898d-affe6ea620ba" containerName="extract-utilities" Dec 06 07:06:44 crc kubenswrapper[4958]: I1206 07:06:44.477100 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc0978d2-c409-4e6e-898d-affe6ea620ba" containerName="registry-server" Dec 06 07:06:44 crc kubenswrapper[4958]: I1206 07:06:44.477136 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="12f6f6ae-2875-4d7b-8d59-0223e77830e2" containerName="registry-server" Dec 06 07:06:44 crc kubenswrapper[4958]: I1206 07:06:44.478936 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4w997" Dec 06 07:06:44 crc kubenswrapper[4958]: I1206 07:06:44.489911 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4w997"] Dec 06 07:06:44 crc kubenswrapper[4958]: I1206 07:06:44.551510 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v66jf\" (UniqueName: \"kubernetes.io/projected/6977473b-06f3-42d6-ad87-d5f544e1f2e7-kube-api-access-v66jf\") pod \"redhat-marketplace-4w997\" (UID: \"6977473b-06f3-42d6-ad87-d5f544e1f2e7\") " pod="openshift-marketplace/redhat-marketplace-4w997" Dec 06 07:06:44 crc kubenswrapper[4958]: I1206 07:06:44.551782 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6977473b-06f3-42d6-ad87-d5f544e1f2e7-catalog-content\") pod \"redhat-marketplace-4w997\" (UID: \"6977473b-06f3-42d6-ad87-d5f544e1f2e7\") " pod="openshift-marketplace/redhat-marketplace-4w997" Dec 06 07:06:44 crc kubenswrapper[4958]: I1206 07:06:44.552033 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6977473b-06f3-42d6-ad87-d5f544e1f2e7-utilities\") pod \"redhat-marketplace-4w997\" (UID: \"6977473b-06f3-42d6-ad87-d5f544e1f2e7\") " pod="openshift-marketplace/redhat-marketplace-4w997" Dec 06 07:06:44 crc kubenswrapper[4958]: I1206 07:06:44.653928 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6977473b-06f3-42d6-ad87-d5f544e1f2e7-utilities\") pod \"redhat-marketplace-4w997\" (UID: \"6977473b-06f3-42d6-ad87-d5f544e1f2e7\") " pod="openshift-marketplace/redhat-marketplace-4w997" Dec 06 07:06:44 crc kubenswrapper[4958]: I1206 07:06:44.654029 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v66jf\" (UniqueName: \"kubernetes.io/projected/6977473b-06f3-42d6-ad87-d5f544e1f2e7-kube-api-access-v66jf\") pod \"redhat-marketplace-4w997\" (UID: \"6977473b-06f3-42d6-ad87-d5f544e1f2e7\") " pod="openshift-marketplace/redhat-marketplace-4w997" Dec 06 07:06:44 crc kubenswrapper[4958]: I1206 07:06:44.654058 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/6977473b-06f3-42d6-ad87-d5f544e1f2e7-catalog-content\") pod \"redhat-marketplace-4w997\" (UID: \"6977473b-06f3-42d6-ad87-d5f544e1f2e7\") " pod="openshift-marketplace/redhat-marketplace-4w997" Dec 06 07:06:44 crc kubenswrapper[4958]: I1206 07:06:44.654577 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6977473b-06f3-42d6-ad87-d5f544e1f2e7-catalog-content\") pod \"redhat-marketplace-4w997\" (UID: \"6977473b-06f3-42d6-ad87-d5f544e1f2e7\") " pod="openshift-marketplace/redhat-marketplace-4w997" Dec 06 07:06:44 crc kubenswrapper[4958]: I1206 07:06:44.654833 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6977473b-06f3-42d6-ad87-d5f544e1f2e7-utilities\") pod \"redhat-marketplace-4w997\" (UID: \"6977473b-06f3-42d6-ad87-d5f544e1f2e7\") " pod="openshift-marketplace/redhat-marketplace-4w997" Dec 06 07:06:44 crc kubenswrapper[4958]: I1206 07:06:44.674586 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v66jf\" (UniqueName: \"kubernetes.io/projected/6977473b-06f3-42d6-ad87-d5f544e1f2e7-kube-api-access-v66jf\") pod \"redhat-marketplace-4w997\" (UID: \"6977473b-06f3-42d6-ad87-d5f544e1f2e7\") " pod="openshift-marketplace/redhat-marketplace-4w997" Dec 06 07:06:44 crc kubenswrapper[4958]: I1206 07:06:44.836360 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4w997" Dec 06 07:06:45 crc kubenswrapper[4958]: I1206 07:06:45.386155 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4w997"] Dec 06 07:06:45 crc kubenswrapper[4958]: I1206 07:06:45.684983 4958 generic.go:334] "Generic (PLEG): container finished" podID="6977473b-06f3-42d6-ad87-d5f544e1f2e7" containerID="2af9f0a5600cab300f887ed07e4789814d6c0c07042a1bd7ae349b5725d1a3a4" exitCode=0 Dec 06 07:06:45 crc kubenswrapper[4958]: I1206 07:06:45.685296 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4w997" event={"ID":"6977473b-06f3-42d6-ad87-d5f544e1f2e7","Type":"ContainerDied","Data":"2af9f0a5600cab300f887ed07e4789814d6c0c07042a1bd7ae349b5725d1a3a4"} Dec 06 07:06:45 crc kubenswrapper[4958]: I1206 07:06:45.685327 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4w997" event={"ID":"6977473b-06f3-42d6-ad87-d5f544e1f2e7","Type":"ContainerStarted","Data":"dd3e54d2f93de826ae978c20413c70bcbdf8a1e81a8e03db4c0c6b1c1dbab3c1"} Dec 06 07:06:47 crc kubenswrapper[4958]: I1206 07:06:47.706965 4958 generic.go:334] "Generic (PLEG): container finished" podID="6977473b-06f3-42d6-ad87-d5f544e1f2e7" containerID="f1ef0fa3176787fb266abe33b0d5a8ac5fb5ef47eb5b5f3cfe17522f41146e02" exitCode=0 Dec 06 07:06:47 crc kubenswrapper[4958]: I1206 07:06:47.707064 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4w997" event={"ID":"6977473b-06f3-42d6-ad87-d5f544e1f2e7","Type":"ContainerDied","Data":"f1ef0fa3176787fb266abe33b0d5a8ac5fb5ef47eb5b5f3cfe17522f41146e02"} Dec 06 07:06:48 crc kubenswrapper[4958]: I1206 07:06:48.720947 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4w997" event={"ID":"6977473b-06f3-42d6-ad87-d5f544e1f2e7","Type":"ContainerStarted","Data":"7ac2321563a8c17fbf7a7f0a161ee2f8057ea1273e2c1ddf37cb547ce4991e77"} Dec 06 
07:06:48 crc kubenswrapper[4958]: I1206 07:06:48.752169 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4w997" podStartSLOduration=2.342049905 podStartE2EDuration="4.752147865s" podCreationTimestamp="2025-12-06 07:06:44 +0000 UTC" firstStartedPulling="2025-12-06 07:06:45.686742922 +0000 UTC m=+5916.220513685" lastFinishedPulling="2025-12-06 07:06:48.096840872 +0000 UTC m=+5918.630611645" observedRunningTime="2025-12-06 07:06:48.73973331 +0000 UTC m=+5919.273504093" watchObservedRunningTime="2025-12-06 07:06:48.752147865 +0000 UTC m=+5919.285918638" Dec 06 07:06:54 crc kubenswrapper[4958]: I1206 07:06:54.837209 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4w997" Dec 06 07:06:54 crc kubenswrapper[4958]: I1206 07:06:54.837898 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4w997" Dec 06 07:06:54 crc kubenswrapper[4958]: I1206 07:06:54.884874 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4w997" Dec 06 07:06:55 crc kubenswrapper[4958]: I1206 07:06:55.858695 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4w997" Dec 06 07:06:55 crc kubenswrapper[4958]: I1206 07:06:55.922877 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4w997"] Dec 06 07:06:57 crc kubenswrapper[4958]: I1206 07:06:57.806456 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4w997" podUID="6977473b-06f3-42d6-ad87-d5f544e1f2e7" containerName="registry-server" containerID="cri-o://7ac2321563a8c17fbf7a7f0a161ee2f8057ea1273e2c1ddf37cb547ce4991e77" gracePeriod=2 Dec 06 07:06:58 crc kubenswrapper[4958]: I1206 07:06:58.293105 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4w997" Dec 06 07:06:58 crc kubenswrapper[4958]: I1206 07:06:58.434974 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6977473b-06f3-42d6-ad87-d5f544e1f2e7-catalog-content\") pod \"6977473b-06f3-42d6-ad87-d5f544e1f2e7\" (UID: \"6977473b-06f3-42d6-ad87-d5f544e1f2e7\") " Dec 06 07:06:58 crc kubenswrapper[4958]: I1206 07:06:58.435221 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6977473b-06f3-42d6-ad87-d5f544e1f2e7-utilities\") pod \"6977473b-06f3-42d6-ad87-d5f544e1f2e7\" (UID: \"6977473b-06f3-42d6-ad87-d5f544e1f2e7\") " Dec 06 07:06:58 crc kubenswrapper[4958]: I1206 07:06:58.435290 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v66jf\" (UniqueName: \"kubernetes.io/projected/6977473b-06f3-42d6-ad87-d5f544e1f2e7-kube-api-access-v66jf\") pod \"6977473b-06f3-42d6-ad87-d5f544e1f2e7\" (UID: \"6977473b-06f3-42d6-ad87-d5f544e1f2e7\") " Dec 06 07:06:58 crc kubenswrapper[4958]: I1206 07:06:58.435810 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6977473b-06f3-42d6-ad87-d5f544e1f2e7-utilities" (OuterVolumeSpecName: "utilities") pod "6977473b-06f3-42d6-ad87-d5f544e1f2e7" (UID: "6977473b-06f3-42d6-ad87-d5f544e1f2e7"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:06:58 crc kubenswrapper[4958]: I1206 07:06:58.441905 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6977473b-06f3-42d6-ad87-d5f544e1f2e7-kube-api-access-v66jf" (OuterVolumeSpecName: "kube-api-access-v66jf") pod "6977473b-06f3-42d6-ad87-d5f544e1f2e7" (UID: "6977473b-06f3-42d6-ad87-d5f544e1f2e7"). InnerVolumeSpecName "kube-api-access-v66jf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:06:58 crc kubenswrapper[4958]: I1206 07:06:58.454806 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6977473b-06f3-42d6-ad87-d5f544e1f2e7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6977473b-06f3-42d6-ad87-d5f544e1f2e7" (UID: "6977473b-06f3-42d6-ad87-d5f544e1f2e7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:06:58 crc kubenswrapper[4958]: I1206 07:06:58.538331 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6977473b-06f3-42d6-ad87-d5f544e1f2e7-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 07:06:58 crc kubenswrapper[4958]: I1206 07:06:58.538383 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v66jf\" (UniqueName: \"kubernetes.io/projected/6977473b-06f3-42d6-ad87-d5f544e1f2e7-kube-api-access-v66jf\") on node \"crc\" DevicePath \"\"" Dec 06 07:06:58 crc kubenswrapper[4958]: I1206 07:06:58.538398 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6977473b-06f3-42d6-ad87-d5f544e1f2e7-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 07:06:58 crc kubenswrapper[4958]: I1206 07:06:58.818711 4958 generic.go:334] "Generic (PLEG): container finished" podID="6977473b-06f3-42d6-ad87-d5f544e1f2e7" containerID="7ac2321563a8c17fbf7a7f0a161ee2f8057ea1273e2c1ddf37cb547ce4991e77" exitCode=0 Dec 06 07:06:58 crc kubenswrapper[4958]: I1206 07:06:58.818766 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4w997" event={"ID":"6977473b-06f3-42d6-ad87-d5f544e1f2e7","Type":"ContainerDied","Data":"7ac2321563a8c17fbf7a7f0a161ee2f8057ea1273e2c1ddf37cb547ce4991e77"} Dec 06 07:06:58 crc kubenswrapper[4958]: I1206 07:06:58.818824 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4w997" event={"ID":"6977473b-06f3-42d6-ad87-d5f544e1f2e7","Type":"ContainerDied","Data":"dd3e54d2f93de826ae978c20413c70bcbdf8a1e81a8e03db4c0c6b1c1dbab3c1"} Dec 06 07:06:58 crc kubenswrapper[4958]: I1206 07:06:58.818844 4958 scope.go:117] "RemoveContainer" containerID="7ac2321563a8c17fbf7a7f0a161ee2f8057ea1273e2c1ddf37cb547ce4991e77" Dec 06 07:06:58 crc kubenswrapper[4958]: I1206 07:06:58.818793 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4w997" Dec 06 07:06:58 crc kubenswrapper[4958]: I1206 07:06:58.842019 4958 scope.go:117] "RemoveContainer" containerID="f1ef0fa3176787fb266abe33b0d5a8ac5fb5ef47eb5b5f3cfe17522f41146e02" Dec 06 07:06:58 crc kubenswrapper[4958]: I1206 07:06:58.857257 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4w997"] Dec 06 07:06:58 crc kubenswrapper[4958]: I1206 07:06:58.878459 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4w997"] Dec 06 07:06:58 crc kubenswrapper[4958]: I1206 07:06:58.878778 4958 scope.go:117] "RemoveContainer" containerID="2af9f0a5600cab300f887ed07e4789814d6c0c07042a1bd7ae349b5725d1a3a4" Dec 06 07:06:58 crc kubenswrapper[4958]: I1206 07:06:58.922772 4958 scope.go:117] "RemoveContainer" containerID="7ac2321563a8c17fbf7a7f0a161ee2f8057ea1273e2c1ddf37cb547ce4991e77" Dec 06 07:06:58 crc kubenswrapper[4958]: E1206 07:06:58.923278 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ac2321563a8c17fbf7a7f0a161ee2f8057ea1273e2c1ddf37cb547ce4991e77\": container with ID starting with 7ac2321563a8c17fbf7a7f0a161ee2f8057ea1273e2c1ddf37cb547ce4991e77 not found: ID does not exist" containerID="7ac2321563a8c17fbf7a7f0a161ee2f8057ea1273e2c1ddf37cb547ce4991e77" Dec 06 07:06:58 crc kubenswrapper[4958]: I1206 07:06:58.923314 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ac2321563a8c17fbf7a7f0a161ee2f8057ea1273e2c1ddf37cb547ce4991e77"} err="failed to get container status \"7ac2321563a8c17fbf7a7f0a161ee2f8057ea1273e2c1ddf37cb547ce4991e77\": rpc error: code = NotFound desc = could not find container \"7ac2321563a8c17fbf7a7f0a161ee2f8057ea1273e2c1ddf37cb547ce4991e77\": container with ID starting with 7ac2321563a8c17fbf7a7f0a161ee2f8057ea1273e2c1ddf37cb547ce4991e77 not found: ID does not exist" Dec 06 07:06:58 crc kubenswrapper[4958]: I1206 07:06:58.923335 4958 scope.go:117] "RemoveContainer" containerID="f1ef0fa3176787fb266abe33b0d5a8ac5fb5ef47eb5b5f3cfe17522f41146e02" Dec 06 07:06:58 crc kubenswrapper[4958]: E1206 07:06:58.923643 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1ef0fa3176787fb266abe33b0d5a8ac5fb5ef47eb5b5f3cfe17522f41146e02\": container with ID starting with f1ef0fa3176787fb266abe33b0d5a8ac5fb5ef47eb5b5f3cfe17522f41146e02 not found: ID does not exist" containerID="f1ef0fa3176787fb266abe33b0d5a8ac5fb5ef47eb5b5f3cfe17522f41146e02" Dec 06 07:06:58 crc kubenswrapper[4958]: I1206 07:06:58.923685 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1ef0fa3176787fb266abe33b0d5a8ac5fb5ef47eb5b5f3cfe17522f41146e02"} err="failed to get container status \"f1ef0fa3176787fb266abe33b0d5a8ac5fb5ef47eb5b5f3cfe17522f41146e02\": rpc error: code = NotFound desc = could not find container \"f1ef0fa3176787fb266abe33b0d5a8ac5fb5ef47eb5b5f3cfe17522f41146e02\": container with ID starting with f1ef0fa3176787fb266abe33b0d5a8ac5fb5ef47eb5b5f3cfe17522f41146e02 not found: ID does not exist" Dec 06 07:06:58 crc kubenswrapper[4958]: I1206 07:06:58.923715 4958 scope.go:117] "RemoveContainer" containerID="2af9f0a5600cab300f887ed07e4789814d6c0c07042a1bd7ae349b5725d1a3a4" Dec 06 07:06:58 crc kubenswrapper[4958]: E1206 07:06:58.924168 4958 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"2af9f0a5600cab300f887ed07e4789814d6c0c07042a1bd7ae349b5725d1a3a4\": container with ID starting with 2af9f0a5600cab300f887ed07e4789814d6c0c07042a1bd7ae349b5725d1a3a4 not found: ID does not exist" containerID="2af9f0a5600cab300f887ed07e4789814d6c0c07042a1bd7ae349b5725d1a3a4" Dec 06 07:06:58 crc kubenswrapper[4958]: I1206 07:06:58.924265 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2af9f0a5600cab300f887ed07e4789814d6c0c07042a1bd7ae349b5725d1a3a4"} err="failed to get container status \"2af9f0a5600cab300f887ed07e4789814d6c0c07042a1bd7ae349b5725d1a3a4\": rpc error: code = NotFound desc = could not find container \"2af9f0a5600cab300f887ed07e4789814d6c0c07042a1bd7ae349b5725d1a3a4\": container with ID starting with 2af9f0a5600cab300f887ed07e4789814d6c0c07042a1bd7ae349b5725d1a3a4 not found: ID does not exist" Dec 06 07:06:59 crc kubenswrapper[4958]: I1206 07:06:59.771714 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6977473b-06f3-42d6-ad87-d5f544e1f2e7" path="/var/lib/kubelet/pods/6977473b-06f3-42d6-ad87-d5f544e1f2e7/volumes" Dec 06 07:07:39 crc kubenswrapper[4958]: I1206 07:07:39.866348 4958 patch_prober.go:28] interesting pod/machine-config-daemon-5ktnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 07:07:39 crc kubenswrapper[4958]: I1206 07:07:39.866912 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 07:07:42 crc kubenswrapper[4958]: I1206 07:07:42.499336 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-h4htw"] Dec 06 07:07:42 crc kubenswrapper[4958]: E1206 07:07:42.500428 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6977473b-06f3-42d6-ad87-d5f544e1f2e7" containerName="extract-content" Dec 06 07:07:42 crc kubenswrapper[4958]: I1206 07:07:42.500448 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="6977473b-06f3-42d6-ad87-d5f544e1f2e7" containerName="extract-content" Dec 06 07:07:42 crc kubenswrapper[4958]: E1206 07:07:42.501274 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6977473b-06f3-42d6-ad87-d5f544e1f2e7" containerName="extract-utilities" Dec 06 07:07:42 crc kubenswrapper[4958]: I1206 07:07:42.501294 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="6977473b-06f3-42d6-ad87-d5f544e1f2e7" containerName="extract-utilities" Dec 06 07:07:42 crc kubenswrapper[4958]: E1206 07:07:42.501343 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6977473b-06f3-42d6-ad87-d5f544e1f2e7" containerName="registry-server" Dec 06 07:07:42 crc kubenswrapper[4958]: I1206 07:07:42.501353 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="6977473b-06f3-42d6-ad87-d5f544e1f2e7" containerName="registry-server" Dec 06 07:07:42 crc kubenswrapper[4958]: I1206 07:07:42.501649 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="6977473b-06f3-42d6-ad87-d5f544e1f2e7" containerName="registry-server" Dec 06 07:07:42 crc kubenswrapper[4958]: I1206 
Dec 06 07:07:42 crc kubenswrapper[4958]: I1206 07:07:42.521985 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-h4htw"]
Dec 06 07:07:42 crc kubenswrapper[4958]: I1206 07:07:42.569429 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtktq\" (UniqueName: \"kubernetes.io/projected/1f148666-ba64-444a-bcbc-4d791e32bc84-kube-api-access-gtktq\") pod \"certified-operators-h4htw\" (UID: \"1f148666-ba64-444a-bcbc-4d791e32bc84\") " pod="openshift-marketplace/certified-operators-h4htw"
Dec 06 07:07:42 crc kubenswrapper[4958]: I1206 07:07:42.569533 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f148666-ba64-444a-bcbc-4d791e32bc84-utilities\") pod \"certified-operators-h4htw\" (UID: \"1f148666-ba64-444a-bcbc-4d791e32bc84\") " pod="openshift-marketplace/certified-operators-h4htw"
Dec 06 07:07:42 crc kubenswrapper[4958]: I1206 07:07:42.569685 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f148666-ba64-444a-bcbc-4d791e32bc84-catalog-content\") pod \"certified-operators-h4htw\" (UID: \"1f148666-ba64-444a-bcbc-4d791e32bc84\") " pod="openshift-marketplace/certified-operators-h4htw"
Dec 06 07:07:42 crc kubenswrapper[4958]: I1206 07:07:42.671656 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtktq\" (UniqueName: \"kubernetes.io/projected/1f148666-ba64-444a-bcbc-4d791e32bc84-kube-api-access-gtktq\") pod \"certified-operators-h4htw\" (UID: \"1f148666-ba64-444a-bcbc-4d791e32bc84\") " pod="openshift-marketplace/certified-operators-h4htw"
Dec 06 07:07:42 crc kubenswrapper[4958]: I1206 07:07:42.672116 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f148666-ba64-444a-bcbc-4d791e32bc84-utilities\") pod \"certified-operators-h4htw\" (UID: \"1f148666-ba64-444a-bcbc-4d791e32bc84\") " pod="openshift-marketplace/certified-operators-h4htw"
Dec 06 07:07:42 crc kubenswrapper[4958]: I1206 07:07:42.672241 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f148666-ba64-444a-bcbc-4d791e32bc84-catalog-content\") pod \"certified-operators-h4htw\" (UID: \"1f148666-ba64-444a-bcbc-4d791e32bc84\") " pod="openshift-marketplace/certified-operators-h4htw"
Dec 06 07:07:42 crc kubenswrapper[4958]: I1206 07:07:42.672664 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f148666-ba64-444a-bcbc-4d791e32bc84-catalog-content\") pod \"certified-operators-h4htw\" (UID: \"1f148666-ba64-444a-bcbc-4d791e32bc84\") " pod="openshift-marketplace/certified-operators-h4htw"
Dec 06 07:07:42 crc kubenswrapper[4958]: I1206 07:07:42.672759 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f148666-ba64-444a-bcbc-4d791e32bc84-utilities\") pod \"certified-operators-h4htw\" (UID: \"1f148666-ba64-444a-bcbc-4d791e32bc84\") " pod="openshift-marketplace/certified-operators-h4htw"
Dec 06 07:07:42 crc kubenswrapper[4958]: I1206 07:07:42.705940 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtktq\" (UniqueName: \"kubernetes.io/projected/1f148666-ba64-444a-bcbc-4d791e32bc84-kube-api-access-gtktq\") pod \"certified-operators-h4htw\" (UID: \"1f148666-ba64-444a-bcbc-4d791e32bc84\") " pod="openshift-marketplace/certified-operators-h4htw"
Dec 06 07:07:42 crc kubenswrapper[4958]: I1206 07:07:42.848022 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-h4htw"
Dec 06 07:07:43 crc kubenswrapper[4958]: I1206 07:07:43.402116 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-h4htw"]
Dec 06 07:07:44 crc kubenswrapper[4958]: I1206 07:07:44.268196 4958 generic.go:334] "Generic (PLEG): container finished" podID="1f148666-ba64-444a-bcbc-4d791e32bc84" containerID="7166389471db228a2ad183e7675af8339cc6e1c84a7901f2dfdb48349d6a8977" exitCode=0
Dec 06 07:07:44 crc kubenswrapper[4958]: I1206 07:07:44.268260 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h4htw" event={"ID":"1f148666-ba64-444a-bcbc-4d791e32bc84","Type":"ContainerDied","Data":"7166389471db228a2ad183e7675af8339cc6e1c84a7901f2dfdb48349d6a8977"}
Dec 06 07:07:44 crc kubenswrapper[4958]: I1206 07:07:44.268751 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h4htw" event={"ID":"1f148666-ba64-444a-bcbc-4d791e32bc84","Type":"ContainerStarted","Data":"38a0d70f18abf242f4833dbd74ab8fda6b130ee7a8a9c5f7fba055ef8a450ea1"}
Dec 06 07:07:44 crc kubenswrapper[4958]: I1206 07:07:44.271571 4958 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Dec 06 07:07:45 crc kubenswrapper[4958]: I1206 07:07:45.278240 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h4htw" event={"ID":"1f148666-ba64-444a-bcbc-4d791e32bc84","Type":"ContainerStarted","Data":"8ab4a98f7151ad734fd62a47d987899449e75aa828eda7c0860f3acdbab38559"}
Dec 06 07:07:46 crc kubenswrapper[4958]: I1206 07:07:46.290244 4958 generic.go:334] "Generic (PLEG): container finished" podID="1f148666-ba64-444a-bcbc-4d791e32bc84" containerID="8ab4a98f7151ad734fd62a47d987899449e75aa828eda7c0860f3acdbab38559" exitCode=0
Dec 06 07:07:46 crc kubenswrapper[4958]: I1206 07:07:46.290341 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h4htw" event={"ID":"1f148666-ba64-444a-bcbc-4d791e32bc84","Type":"ContainerDied","Data":"8ab4a98f7151ad734fd62a47d987899449e75aa828eda7c0860f3acdbab38559"}
Dec 06 07:07:47 crc kubenswrapper[4958]: I1206 07:07:47.301414 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h4htw" event={"ID":"1f148666-ba64-444a-bcbc-4d791e32bc84","Type":"ContainerStarted","Data":"3b4351a6061961de96fa3f5b2b47e06fdfa47bc391e65f6adcdeb73b0ed7e404"}
Dec 06 07:07:47 crc kubenswrapper[4958]: I1206 07:07:47.336325 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-h4htw" podStartSLOduration=2.8961567009999998 podStartE2EDuration="5.336305694s" podCreationTimestamp="2025-12-06 07:07:42 +0000 UTC" firstStartedPulling="2025-12-06 07:07:44.27127958 +0000 UTC m=+5974.805050343" lastFinishedPulling="2025-12-06 07:07:46.711428563 +0000 UTC m=+5977.245199336" observedRunningTime="2025-12-06 07:07:47.328464263 +0000 UTC m=+5977.862235046" watchObservedRunningTime="2025-12-06 07:07:47.336305694 +0000 UTC m=+5977.870076457"
Dec 06 07:07:52 crc kubenswrapper[4958]: I1206 07:07:52.848140 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-h4htw"
Dec 06 07:07:52 crc kubenswrapper[4958]: I1206 07:07:52.848760 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-h4htw"
Dec 06 07:07:52 crc kubenswrapper[4958]: I1206 07:07:52.901969 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-h4htw"
Dec 06 07:07:53 crc kubenswrapper[4958]: I1206 07:07:53.423364 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-h4htw"
Dec 06 07:07:53 crc kubenswrapper[4958]: I1206 07:07:53.472345 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-h4htw"]
Dec 06 07:07:55 crc kubenswrapper[4958]: I1206 07:07:55.379578 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-h4htw" podUID="1f148666-ba64-444a-bcbc-4d791e32bc84" containerName="registry-server" containerID="cri-o://3b4351a6061961de96fa3f5b2b47e06fdfa47bc391e65f6adcdeb73b0ed7e404" gracePeriod=2
Dec 06 07:07:56 crc kubenswrapper[4958]: I1206 07:07:56.391595 4958 generic.go:334] "Generic (PLEG): container finished" podID="1f148666-ba64-444a-bcbc-4d791e32bc84" containerID="3b4351a6061961de96fa3f5b2b47e06fdfa47bc391e65f6adcdeb73b0ed7e404" exitCode=0
Dec 06 07:07:56 crc kubenswrapper[4958]: I1206 07:07:56.391685 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h4htw" event={"ID":"1f148666-ba64-444a-bcbc-4d791e32bc84","Type":"ContainerDied","Data":"3b4351a6061961de96fa3f5b2b47e06fdfa47bc391e65f6adcdeb73b0ed7e404"}
Dec 06 07:07:56 crc kubenswrapper[4958]: I1206 07:07:56.800040 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-h4htw"
Dec 06 07:07:56 crc kubenswrapper[4958]: I1206 07:07:56.859941 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gtktq\" (UniqueName: \"kubernetes.io/projected/1f148666-ba64-444a-bcbc-4d791e32bc84-kube-api-access-gtktq\") pod \"1f148666-ba64-444a-bcbc-4d791e32bc84\" (UID: \"1f148666-ba64-444a-bcbc-4d791e32bc84\") "
Dec 06 07:07:56 crc kubenswrapper[4958]: I1206 07:07:56.859999 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f148666-ba64-444a-bcbc-4d791e32bc84-utilities\") pod \"1f148666-ba64-444a-bcbc-4d791e32bc84\" (UID: \"1f148666-ba64-444a-bcbc-4d791e32bc84\") "
Dec 06 07:07:56 crc kubenswrapper[4958]: I1206 07:07:56.860083 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f148666-ba64-444a-bcbc-4d791e32bc84-catalog-content\") pod \"1f148666-ba64-444a-bcbc-4d791e32bc84\" (UID: \"1f148666-ba64-444a-bcbc-4d791e32bc84\") "
Dec 06 07:07:56 crc kubenswrapper[4958]: I1206 07:07:56.861285 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f148666-ba64-444a-bcbc-4d791e32bc84-utilities" (OuterVolumeSpecName: "utilities") pod "1f148666-ba64-444a-bcbc-4d791e32bc84" (UID: "1f148666-ba64-444a-bcbc-4d791e32bc84"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 06 07:07:56 crc kubenswrapper[4958]: I1206 07:07:56.869706 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f148666-ba64-444a-bcbc-4d791e32bc84-kube-api-access-gtktq" (OuterVolumeSpecName: "kube-api-access-gtktq") pod "1f148666-ba64-444a-bcbc-4d791e32bc84" (UID: "1f148666-ba64-444a-bcbc-4d791e32bc84"). InnerVolumeSpecName "kube-api-access-gtktq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 07:07:56 crc kubenswrapper[4958]: I1206 07:07:56.917586 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f148666-ba64-444a-bcbc-4d791e32bc84-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1f148666-ba64-444a-bcbc-4d791e32bc84" (UID: "1f148666-ba64-444a-bcbc-4d791e32bc84"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 06 07:07:56 crc kubenswrapper[4958]: I1206 07:07:56.963715 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gtktq\" (UniqueName: \"kubernetes.io/projected/1f148666-ba64-444a-bcbc-4d791e32bc84-kube-api-access-gtktq\") on node \"crc\" DevicePath \"\""
Dec 06 07:07:56 crc kubenswrapper[4958]: I1206 07:07:56.963786 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f148666-ba64-444a-bcbc-4d791e32bc84-utilities\") on node \"crc\" DevicePath \"\""
Dec 06 07:07:56 crc kubenswrapper[4958]: I1206 07:07:56.963798 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f148666-ba64-444a-bcbc-4d791e32bc84-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 06 07:07:57 crc kubenswrapper[4958]: I1206 07:07:57.408128 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h4htw" event={"ID":"1f148666-ba64-444a-bcbc-4d791e32bc84","Type":"ContainerDied","Data":"38a0d70f18abf242f4833dbd74ab8fda6b130ee7a8a9c5f7fba055ef8a450ea1"}
Dec 06 07:07:57 crc kubenswrapper[4958]: I1206 07:07:57.408222 4958 scope.go:117] "RemoveContainer" containerID="3b4351a6061961de96fa3f5b2b47e06fdfa47bc391e65f6adcdeb73b0ed7e404"
Dec 06 07:07:57 crc kubenswrapper[4958]: I1206 07:07:57.408809 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-h4htw"
Dec 06 07:07:57 crc kubenswrapper[4958]: I1206 07:07:57.428217 4958 scope.go:117] "RemoveContainer" containerID="8ab4a98f7151ad734fd62a47d987899449e75aa828eda7c0860f3acdbab38559"
Dec 06 07:07:57 crc kubenswrapper[4958]: I1206 07:07:57.452074 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-h4htw"]
Dec 06 07:07:57 crc kubenswrapper[4958]: I1206 07:07:57.466707 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-h4htw"]
Dec 06 07:07:57 crc kubenswrapper[4958]: I1206 07:07:57.483841 4958 scope.go:117] "RemoveContainer" containerID="7166389471db228a2ad183e7675af8339cc6e1c84a7901f2dfdb48349d6a8977"
Dec 06 07:07:57 crc kubenswrapper[4958]: I1206 07:07:57.778207 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f148666-ba64-444a-bcbc-4d791e32bc84" path="/var/lib/kubelet/pods/1f148666-ba64-444a-bcbc-4d791e32bc84/volumes"
Dec 06 07:08:09 crc kubenswrapper[4958]: I1206 07:08:09.866009 4958 patch_prober.go:28] interesting pod/machine-config-daemon-5ktnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 06 07:08:09 crc kubenswrapper[4958]: I1206 07:08:09.866719 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 06 07:08:39 crc kubenswrapper[4958]: I1206 07:08:39.866076 4958 patch_prober.go:28] interesting pod/machine-config-daemon-5ktnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 06 07:08:39 crc kubenswrapper[4958]: I1206 07:08:39.866745 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 06 07:08:39 crc kubenswrapper[4958]: I1206 07:08:39.866839 4958 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh"
Dec 06 07:08:39 crc kubenswrapper[4958]: I1206 07:08:39.868307 4958 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f965fcfc735532b7de2edc8624e21b2280b5d3a26ef2aa2e9a7b9c25d353ca61"} pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 06 07:08:39 crc kubenswrapper[4958]: I1206 07:08:39.868435 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" containerName="machine-config-daemon" containerID="cri-o://f965fcfc735532b7de2edc8624e21b2280b5d3a26ef2aa2e9a7b9c25d353ca61" gracePeriod=600
Dec 06 07:08:39 crc kubenswrapper[4958]: E1206 07:08:39.997690 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4"
Dec 06 07:08:40 crc kubenswrapper[4958]: I1206 07:08:40.808580 4958 generic.go:334] "Generic (PLEG): container finished" podID="c13528c0-da5d-4d55-9155-2c29c33edfc4" containerID="f965fcfc735532b7de2edc8624e21b2280b5d3a26ef2aa2e9a7b9c25d353ca61" exitCode=0
Dec 06 07:08:40 crc kubenswrapper[4958]: I1206 07:08:40.808624 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" event={"ID":"c13528c0-da5d-4d55-9155-2c29c33edfc4","Type":"ContainerDied","Data":"f965fcfc735532b7de2edc8624e21b2280b5d3a26ef2aa2e9a7b9c25d353ca61"}
Dec 06 07:08:40 crc kubenswrapper[4958]: I1206 07:08:40.809187 4958 scope.go:117] "RemoveContainer" containerID="acd5b1fb7f44573113189c9bbdead11e5e6a1e5b56ccd5e76cbb98ba0d7802df"
Dec 06 07:08:40 crc kubenswrapper[4958]: I1206 07:08:40.810202 4958 scope.go:117] "RemoveContainer" containerID="f965fcfc735532b7de2edc8624e21b2280b5d3a26ef2aa2e9a7b9c25d353ca61"
Dec 06 07:08:40 crc kubenswrapper[4958]: E1206 07:08:40.810698 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4"
Dec 06 07:08:51 crc kubenswrapper[4958]: I1206 07:08:51.762888 4958 scope.go:117] "RemoveContainer" containerID="f965fcfc735532b7de2edc8624e21b2280b5d3a26ef2aa2e9a7b9c25d353ca61"
Dec 06 07:08:51 crc kubenswrapper[4958]: E1206 07:08:51.764136 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4"
Dec 06 07:09:03 crc kubenswrapper[4958]: I1206 07:09:03.762872 4958 scope.go:117] "RemoveContainer" containerID="f965fcfc735532b7de2edc8624e21b2280b5d3a26ef2aa2e9a7b9c25d353ca61"
Dec 06 07:09:03 crc kubenswrapper[4958]: E1206 07:09:03.763673 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4"
Dec 06 07:09:17 crc kubenswrapper[4958]: I1206 07:09:17.762977 4958 scope.go:117] "RemoveContainer" containerID="f965fcfc735532b7de2edc8624e21b2280b5d3a26ef2aa2e9a7b9c25d353ca61"
Dec 06 07:09:17 crc kubenswrapper[4958]: E1206 07:09:17.764178 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4"
Dec 06 07:09:31 crc kubenswrapper[4958]: I1206 07:09:31.762895 4958 scope.go:117] "RemoveContainer" containerID="f965fcfc735532b7de2edc8624e21b2280b5d3a26ef2aa2e9a7b9c25d353ca61"
Dec 06 07:09:31 crc kubenswrapper[4958]: E1206 07:09:31.763569 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4"
Dec 06 07:09:33 crc kubenswrapper[4958]: I1206 07:09:33.466003 4958 scope.go:117] "RemoveContainer" containerID="197c90920ea883fe991fb95804618c2c43109c6400a169378551552b0c310f32"
Dec 06 07:09:33 crc kubenswrapper[4958]: I1206 07:09:33.493453 4958 scope.go:117] "RemoveContainer" containerID="864cfa02678807291811588871f44beb60dca077e6815d374b6a5a2a4df4d79b"
Dec 06 07:09:33 crc kubenswrapper[4958]: I1206 07:09:33.509886 4958 scope.go:117] "RemoveContainer" containerID="cae713a142456b275072d9b6a58ef6fbd47e6e928261819d858ed9253004e21e"
Dec 06 07:09:33 crc kubenswrapper[4958]: I1206 07:09:33.536302 4958 scope.go:117] "RemoveContainer" containerID="b40c4a2f247f97c2b8464cf131f3010053f46a4099352a4152e8de5179260353"
Dec 06 07:09:33 crc kubenswrapper[4958]: I1206 07:09:33.580245 4958 scope.go:117] "RemoveContainer" containerID="4d2a0e4add1363de1fe88528b1b9f989c0c5091aaba3f0aaef596e597032eabe"
Dec 06 07:09:33 crc kubenswrapper[4958]: I1206 07:09:33.630996 4958 scope.go:117] "RemoveContainer" containerID="cca04cfda9bf739d2b3b945114b48839af50d70f53a42972b463fa86dd431df8"
Dec 06 07:09:45 crc kubenswrapper[4958]: I1206 07:09:45.762761 4958 scope.go:117] "RemoveContainer" containerID="f965fcfc735532b7de2edc8624e21b2280b5d3a26ef2aa2e9a7b9c25d353ca61"
Dec 06 07:09:45 crc kubenswrapper[4958]: E1206 07:09:45.763598 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4"
Dec 06 07:09:58 crc kubenswrapper[4958]: I1206 07:09:58.762818 4958 scope.go:117] "RemoveContainer" containerID="f965fcfc735532b7de2edc8624e21b2280b5d3a26ef2aa2e9a7b9c25d353ca61"
Dec 06 07:09:58 crc kubenswrapper[4958]: E1206 07:09:58.764854 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4"
Dec 06 07:10:12 crc kubenswrapper[4958]: I1206 07:10:12.762503 4958 scope.go:117] "RemoveContainer" containerID="f965fcfc735532b7de2edc8624e21b2280b5d3a26ef2aa2e9a7b9c25d353ca61"
Dec 06 07:10:12 crc kubenswrapper[4958]: E1206 07:10:12.765043 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4"
Dec 06 07:10:25 crc kubenswrapper[4958]: I1206 07:10:25.762164 4958 scope.go:117] "RemoveContainer" containerID="f965fcfc735532b7de2edc8624e21b2280b5d3a26ef2aa2e9a7b9c25d353ca61"
Dec 06 07:10:25 crc kubenswrapper[4958]: E1206 07:10:25.763995 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4"
Dec 06 07:10:39 crc kubenswrapper[4958]: I1206 07:10:39.768531 4958 scope.go:117] "RemoveContainer" containerID="f965fcfc735532b7de2edc8624e21b2280b5d3a26ef2aa2e9a7b9c25d353ca61"
Dec 06 07:10:39 crc kubenswrapper[4958]: E1206 07:10:39.769446 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4"
Dec 06 07:10:53 crc kubenswrapper[4958]: I1206 07:10:53.762095 4958 scope.go:117] "RemoveContainer" containerID="f965fcfc735532b7de2edc8624e21b2280b5d3a26ef2aa2e9a7b9c25d353ca61"
Dec 06 07:10:53 crc kubenswrapper[4958]: E1206 07:10:53.762913 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4"
Dec 06 07:11:05 crc kubenswrapper[4958]: I1206 07:11:05.762427 4958 scope.go:117] "RemoveContainer" containerID="f965fcfc735532b7de2edc8624e21b2280b5d3a26ef2aa2e9a7b9c25d353ca61"
Dec 06 07:11:05 crc kubenswrapper[4958]: E1206 07:11:05.763620 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4"
Dec 06 07:11:19 crc kubenswrapper[4958]: I1206 07:11:19.768195 4958 scope.go:117] "RemoveContainer" containerID="f965fcfc735532b7de2edc8624e21b2280b5d3a26ef2aa2e9a7b9c25d353ca61"
Dec 06 07:11:19 crc kubenswrapper[4958]: E1206 07:11:19.769106 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4"
Dec 06 07:11:30 crc kubenswrapper[4958]: I1206 07:11:30.763101 4958 scope.go:117] "RemoveContainer" containerID="f965fcfc735532b7de2edc8624e21b2280b5d3a26ef2aa2e9a7b9c25d353ca61"
Dec 06 07:11:30 crc kubenswrapper[4958]: E1206 07:11:30.764609 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4"
Dec 06 07:11:44 crc kubenswrapper[4958]: I1206 07:11:44.761921 4958 scope.go:117] "RemoveContainer" containerID="f965fcfc735532b7de2edc8624e21b2280b5d3a26ef2aa2e9a7b9c25d353ca61"
Dec 06 07:11:44 crc kubenswrapper[4958]: E1206 07:11:44.762864 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4"
Dec 06 07:11:58 crc kubenswrapper[4958]: I1206 07:11:58.762247 4958 scope.go:117] "RemoveContainer" containerID="f965fcfc735532b7de2edc8624e21b2280b5d3a26ef2aa2e9a7b9c25d353ca61"
Dec 06 07:11:58 crc kubenswrapper[4958]: E1206 07:11:58.763233 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4"
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 07:12:11 crc kubenswrapper[4958]: I1206 07:12:11.762705 4958 scope.go:117] "RemoveContainer" containerID="f965fcfc735532b7de2edc8624e21b2280b5d3a26ef2aa2e9a7b9c25d353ca61" Dec 06 07:12:11 crc kubenswrapper[4958]: E1206 07:12:11.763795 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 07:12:22 crc kubenswrapper[4958]: I1206 07:12:22.732586 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="428c09d2-3c2a-4562-9295-3cf3da179f40" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Dec 06 07:12:24 crc kubenswrapper[4958]: I1206 07:12:24.762755 4958 scope.go:117] "RemoveContainer" containerID="f965fcfc735532b7de2edc8624e21b2280b5d3a26ef2aa2e9a7b9c25d353ca61" Dec 06 07:12:24 crc kubenswrapper[4958]: E1206 07:12:24.763543 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 07:12:35 crc kubenswrapper[4958]: I1206 07:12:35.764317 4958 scope.go:117] "RemoveContainer" containerID="f965fcfc735532b7de2edc8624e21b2280b5d3a26ef2aa2e9a7b9c25d353ca61" Dec 06 07:12:35 crc kubenswrapper[4958]: E1206 07:12:35.765355 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 07:12:43 crc kubenswrapper[4958]: I1206 07:12:43.421295 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-v8xf6"] Dec 06 07:12:43 crc kubenswrapper[4958]: E1206 07:12:43.422467 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f148666-ba64-444a-bcbc-4d791e32bc84" containerName="extract-content" Dec 06 07:12:43 crc kubenswrapper[4958]: I1206 07:12:43.422506 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f148666-ba64-444a-bcbc-4d791e32bc84" containerName="extract-content" Dec 06 07:12:43 crc kubenswrapper[4958]: E1206 07:12:43.422533 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f148666-ba64-444a-bcbc-4d791e32bc84" containerName="extract-utilities" Dec 06 07:12:43 crc kubenswrapper[4958]: I1206 07:12:43.422541 4958 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="1f148666-ba64-444a-bcbc-4d791e32bc84" containerName="extract-utilities" Dec 06 07:12:43 crc kubenswrapper[4958]: E1206 07:12:43.422555 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f148666-ba64-444a-bcbc-4d791e32bc84" containerName="registry-server" Dec 06 07:12:43 crc kubenswrapper[4958]: I1206 07:12:43.422562 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f148666-ba64-444a-bcbc-4d791e32bc84" containerName="registry-server" Dec 06 07:12:43 crc kubenswrapper[4958]: I1206 07:12:43.422920 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f148666-ba64-444a-bcbc-4d791e32bc84" containerName="registry-server" Dec 06 07:12:43 crc kubenswrapper[4958]: I1206 07:12:43.424702 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-v8xf6" Dec 06 07:12:43 crc kubenswrapper[4958]: I1206 07:12:43.446438 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-v8xf6"] Dec 06 07:12:43 crc kubenswrapper[4958]: I1206 07:12:43.568605 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ec9049e-b1d4-48f1-9573-4a57d3a983a9-utilities\") pod \"redhat-operators-v8xf6\" (UID: \"6ec9049e-b1d4-48f1-9573-4a57d3a983a9\") " pod="openshift-marketplace/redhat-operators-v8xf6" Dec 06 07:12:43 crc kubenswrapper[4958]: I1206 07:12:43.569091 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ec9049e-b1d4-48f1-9573-4a57d3a983a9-catalog-content\") pod \"redhat-operators-v8xf6\" (UID: \"6ec9049e-b1d4-48f1-9573-4a57d3a983a9\") " pod="openshift-marketplace/redhat-operators-v8xf6" Dec 06 07:12:43 crc kubenswrapper[4958]: I1206 07:12:43.569224 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nnj4r\" (UniqueName: \"kubernetes.io/projected/6ec9049e-b1d4-48f1-9573-4a57d3a983a9-kube-api-access-nnj4r\") pod \"redhat-operators-v8xf6\" (UID: \"6ec9049e-b1d4-48f1-9573-4a57d3a983a9\") " pod="openshift-marketplace/redhat-operators-v8xf6" Dec 06 07:12:43 crc kubenswrapper[4958]: I1206 07:12:43.671757 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ec9049e-b1d4-48f1-9573-4a57d3a983a9-catalog-content\") pod \"redhat-operators-v8xf6\" (UID: \"6ec9049e-b1d4-48f1-9573-4a57d3a983a9\") " pod="openshift-marketplace/redhat-operators-v8xf6" Dec 06 07:12:43 crc kubenswrapper[4958]: I1206 07:12:43.671816 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nnj4r\" (UniqueName: \"kubernetes.io/projected/6ec9049e-b1d4-48f1-9573-4a57d3a983a9-kube-api-access-nnj4r\") pod \"redhat-operators-v8xf6\" (UID: \"6ec9049e-b1d4-48f1-9573-4a57d3a983a9\") " pod="openshift-marketplace/redhat-operators-v8xf6" Dec 06 07:12:43 crc kubenswrapper[4958]: I1206 07:12:43.671875 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ec9049e-b1d4-48f1-9573-4a57d3a983a9-utilities\") pod \"redhat-operators-v8xf6\" (UID: \"6ec9049e-b1d4-48f1-9573-4a57d3a983a9\") " pod="openshift-marketplace/redhat-operators-v8xf6" Dec 06 07:12:43 crc kubenswrapper[4958]: I1206 07:12:43.672289 4958 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ec9049e-b1d4-48f1-9573-4a57d3a983a9-catalog-content\") pod \"redhat-operators-v8xf6\" (UID: \"6ec9049e-b1d4-48f1-9573-4a57d3a983a9\") " pod="openshift-marketplace/redhat-operators-v8xf6" Dec 06 07:12:43 crc kubenswrapper[4958]: I1206 07:12:43.672350 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ec9049e-b1d4-48f1-9573-4a57d3a983a9-utilities\") pod \"redhat-operators-v8xf6\" (UID: \"6ec9049e-b1d4-48f1-9573-4a57d3a983a9\") " pod="openshift-marketplace/redhat-operators-v8xf6" Dec 06 07:12:43 crc kubenswrapper[4958]: I1206 07:12:43.700011 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nnj4r\" (UniqueName: \"kubernetes.io/projected/6ec9049e-b1d4-48f1-9573-4a57d3a983a9-kube-api-access-nnj4r\") pod \"redhat-operators-v8xf6\" (UID: \"6ec9049e-b1d4-48f1-9573-4a57d3a983a9\") " pod="openshift-marketplace/redhat-operators-v8xf6" Dec 06 07:12:43 crc kubenswrapper[4958]: I1206 07:12:43.752599 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-v8xf6" Dec 06 07:12:44 crc kubenswrapper[4958]: I1206 07:12:44.245188 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-v8xf6"] Dec 06 07:12:44 crc kubenswrapper[4958]: I1206 07:12:44.335832 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v8xf6" event={"ID":"6ec9049e-b1d4-48f1-9573-4a57d3a983a9","Type":"ContainerStarted","Data":"43722bdd02d02db873af0aabfcf6e8267445e5dfdc7e5eddbfd6faa19948d8d1"} Dec 06 07:12:45 crc kubenswrapper[4958]: I1206 07:12:45.346150 4958 generic.go:334] "Generic (PLEG): container finished" podID="6ec9049e-b1d4-48f1-9573-4a57d3a983a9" containerID="f6fc09564aa3ce1c52b3fc46c081b36faf599922236677c968291b84edac2ded" exitCode=0 Dec 06 07:12:45 crc kubenswrapper[4958]: I1206 07:12:45.346253 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v8xf6" event={"ID":"6ec9049e-b1d4-48f1-9573-4a57d3a983a9","Type":"ContainerDied","Data":"f6fc09564aa3ce1c52b3fc46c081b36faf599922236677c968291b84edac2ded"} Dec 06 07:12:45 crc kubenswrapper[4958]: I1206 07:12:45.349308 4958 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 06 07:12:46 crc kubenswrapper[4958]: I1206 07:12:46.358333 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v8xf6" event={"ID":"6ec9049e-b1d4-48f1-9573-4a57d3a983a9","Type":"ContainerStarted","Data":"3e23e6963677bb8cb65dcad3830ce7417a2bc8ed8cdda27903b9a3223ed27f9d"} Dec 06 07:12:47 crc kubenswrapper[4958]: I1206 07:12:47.373437 4958 generic.go:334] "Generic (PLEG): container finished" podID="6ec9049e-b1d4-48f1-9573-4a57d3a983a9" containerID="3e23e6963677bb8cb65dcad3830ce7417a2bc8ed8cdda27903b9a3223ed27f9d" exitCode=0 Dec 06 07:12:47 crc kubenswrapper[4958]: I1206 07:12:47.374317 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v8xf6" event={"ID":"6ec9049e-b1d4-48f1-9573-4a57d3a983a9","Type":"ContainerDied","Data":"3e23e6963677bb8cb65dcad3830ce7417a2bc8ed8cdda27903b9a3223ed27f9d"} Dec 06 07:12:48 crc kubenswrapper[4958]: I1206 07:12:48.385604 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-v8xf6" event={"ID":"6ec9049e-b1d4-48f1-9573-4a57d3a983a9","Type":"ContainerStarted","Data":"94f59ddc2af99f386f60757419ff3f0c9a54433d1ab826e3166fe3b98a26ddde"} Dec 06 07:12:48 crc kubenswrapper[4958]: I1206 07:12:48.404936 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-v8xf6" podStartSLOduration=2.976591315 podStartE2EDuration="5.404919788s" podCreationTimestamp="2025-12-06 07:12:43 +0000 UTC" firstStartedPulling="2025-12-06 07:12:45.349054942 +0000 UTC m=+6275.882825705" lastFinishedPulling="2025-12-06 07:12:47.777383415 +0000 UTC m=+6278.311154178" observedRunningTime="2025-12-06 07:12:48.402352588 +0000 UTC m=+6278.936123361" watchObservedRunningTime="2025-12-06 07:12:48.404919788 +0000 UTC m=+6278.938690551" Dec 06 07:12:50 crc kubenswrapper[4958]: I1206 07:12:50.762767 4958 scope.go:117] "RemoveContainer" containerID="f965fcfc735532b7de2edc8624e21b2280b5d3a26ef2aa2e9a7b9c25d353ca61" Dec 06 07:12:50 crc kubenswrapper[4958]: E1206 07:12:50.763081 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 07:12:53 crc kubenswrapper[4958]: I1206 07:12:53.752916 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-v8xf6" Dec 06 07:12:53 crc kubenswrapper[4958]: I1206 07:12:53.753573 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-v8xf6" Dec 06 07:12:53 crc kubenswrapper[4958]: I1206 07:12:53.816633 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-v8xf6" Dec 06 07:12:54 crc kubenswrapper[4958]: I1206 07:12:54.512991 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-v8xf6" Dec 06 07:12:54 crc kubenswrapper[4958]: I1206 07:12:54.579322 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-v8xf6"] Dec 06 07:12:56 crc kubenswrapper[4958]: I1206 07:12:56.456509 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-v8xf6" podUID="6ec9049e-b1d4-48f1-9573-4a57d3a983a9" containerName="registry-server" containerID="cri-o://94f59ddc2af99f386f60757419ff3f0c9a54433d1ab826e3166fe3b98a26ddde" gracePeriod=2 Dec 06 07:12:56 crc kubenswrapper[4958]: I1206 07:12:56.982963 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-v8xf6" Dec 06 07:12:57 crc kubenswrapper[4958]: I1206 07:12:57.055324 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ec9049e-b1d4-48f1-9573-4a57d3a983a9-catalog-content\") pod \"6ec9049e-b1d4-48f1-9573-4a57d3a983a9\" (UID: \"6ec9049e-b1d4-48f1-9573-4a57d3a983a9\") " Dec 06 07:12:57 crc kubenswrapper[4958]: I1206 07:12:57.055509 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nnj4r\" (UniqueName: \"kubernetes.io/projected/6ec9049e-b1d4-48f1-9573-4a57d3a983a9-kube-api-access-nnj4r\") pod \"6ec9049e-b1d4-48f1-9573-4a57d3a983a9\" (UID: \"6ec9049e-b1d4-48f1-9573-4a57d3a983a9\") " Dec 06 07:12:57 crc kubenswrapper[4958]: I1206 07:12:57.056658 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ec9049e-b1d4-48f1-9573-4a57d3a983a9-utilities\") pod \"6ec9049e-b1d4-48f1-9573-4a57d3a983a9\" (UID: \"6ec9049e-b1d4-48f1-9573-4a57d3a983a9\") " Dec 06 07:12:57 crc kubenswrapper[4958]: I1206 07:12:57.057358 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ec9049e-b1d4-48f1-9573-4a57d3a983a9-utilities" (OuterVolumeSpecName: "utilities") pod "6ec9049e-b1d4-48f1-9573-4a57d3a983a9" (UID: "6ec9049e-b1d4-48f1-9573-4a57d3a983a9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:12:57 crc kubenswrapper[4958]: I1206 07:12:57.061284 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ec9049e-b1d4-48f1-9573-4a57d3a983a9-kube-api-access-nnj4r" (OuterVolumeSpecName: "kube-api-access-nnj4r") pod "6ec9049e-b1d4-48f1-9573-4a57d3a983a9" (UID: "6ec9049e-b1d4-48f1-9573-4a57d3a983a9"). InnerVolumeSpecName "kube-api-access-nnj4r". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:12:57 crc kubenswrapper[4958]: I1206 07:12:57.159511 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nnj4r\" (UniqueName: \"kubernetes.io/projected/6ec9049e-b1d4-48f1-9573-4a57d3a983a9-kube-api-access-nnj4r\") on node \"crc\" DevicePath \"\"" Dec 06 07:12:57 crc kubenswrapper[4958]: I1206 07:12:57.159547 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6ec9049e-b1d4-48f1-9573-4a57d3a983a9-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 07:12:57 crc kubenswrapper[4958]: I1206 07:12:57.468504 4958 generic.go:334] "Generic (PLEG): container finished" podID="6ec9049e-b1d4-48f1-9573-4a57d3a983a9" containerID="94f59ddc2af99f386f60757419ff3f0c9a54433d1ab826e3166fe3b98a26ddde" exitCode=0 Dec 06 07:12:57 crc kubenswrapper[4958]: I1206 07:12:57.468558 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v8xf6" event={"ID":"6ec9049e-b1d4-48f1-9573-4a57d3a983a9","Type":"ContainerDied","Data":"94f59ddc2af99f386f60757419ff3f0c9a54433d1ab826e3166fe3b98a26ddde"} Dec 06 07:12:57 crc kubenswrapper[4958]: I1206 07:12:57.468569 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-v8xf6" Dec 06 07:12:57 crc kubenswrapper[4958]: I1206 07:12:57.468591 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v8xf6" event={"ID":"6ec9049e-b1d4-48f1-9573-4a57d3a983a9","Type":"ContainerDied","Data":"43722bdd02d02db873af0aabfcf6e8267445e5dfdc7e5eddbfd6faa19948d8d1"} Dec 06 07:12:57 crc kubenswrapper[4958]: I1206 07:12:57.468615 4958 scope.go:117] "RemoveContainer" containerID="94f59ddc2af99f386f60757419ff3f0c9a54433d1ab826e3166fe3b98a26ddde" Dec 06 07:12:57 crc kubenswrapper[4958]: I1206 07:12:57.489827 4958 scope.go:117] "RemoveContainer" containerID="3e23e6963677bb8cb65dcad3830ce7417a2bc8ed8cdda27903b9a3223ed27f9d" Dec 06 07:12:57 crc kubenswrapper[4958]: I1206 07:12:57.513606 4958 scope.go:117] "RemoveContainer" containerID="f6fc09564aa3ce1c52b3fc46c081b36faf599922236677c968291b84edac2ded" Dec 06 07:12:57 crc kubenswrapper[4958]: I1206 07:12:57.564770 4958 scope.go:117] "RemoveContainer" containerID="94f59ddc2af99f386f60757419ff3f0c9a54433d1ab826e3166fe3b98a26ddde" Dec 06 07:12:57 crc kubenswrapper[4958]: E1206 07:12:57.565394 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94f59ddc2af99f386f60757419ff3f0c9a54433d1ab826e3166fe3b98a26ddde\": container with ID starting with 94f59ddc2af99f386f60757419ff3f0c9a54433d1ab826e3166fe3b98a26ddde not found: ID does not exist" containerID="94f59ddc2af99f386f60757419ff3f0c9a54433d1ab826e3166fe3b98a26ddde" Dec 06 07:12:57 crc kubenswrapper[4958]: I1206 07:12:57.565425 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94f59ddc2af99f386f60757419ff3f0c9a54433d1ab826e3166fe3b98a26ddde"} err="failed to get container status \"94f59ddc2af99f386f60757419ff3f0c9a54433d1ab826e3166fe3b98a26ddde\": rpc error: code = NotFound desc = could not find container \"94f59ddc2af99f386f60757419ff3f0c9a54433d1ab826e3166fe3b98a26ddde\": container with ID starting with 94f59ddc2af99f386f60757419ff3f0c9a54433d1ab826e3166fe3b98a26ddde not found: ID does not exist" Dec 06 07:12:57 crc kubenswrapper[4958]: I1206 07:12:57.565445 4958 scope.go:117] "RemoveContainer" containerID="3e23e6963677bb8cb65dcad3830ce7417a2bc8ed8cdda27903b9a3223ed27f9d" Dec 06 07:12:57 crc kubenswrapper[4958]: E1206 07:12:57.565830 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e23e6963677bb8cb65dcad3830ce7417a2bc8ed8cdda27903b9a3223ed27f9d\": container with ID starting with 3e23e6963677bb8cb65dcad3830ce7417a2bc8ed8cdda27903b9a3223ed27f9d not found: ID does not exist" containerID="3e23e6963677bb8cb65dcad3830ce7417a2bc8ed8cdda27903b9a3223ed27f9d" Dec 06 07:12:57 crc kubenswrapper[4958]: I1206 07:12:57.565858 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e23e6963677bb8cb65dcad3830ce7417a2bc8ed8cdda27903b9a3223ed27f9d"} err="failed to get container status \"3e23e6963677bb8cb65dcad3830ce7417a2bc8ed8cdda27903b9a3223ed27f9d\": rpc error: code = NotFound desc = could not find container \"3e23e6963677bb8cb65dcad3830ce7417a2bc8ed8cdda27903b9a3223ed27f9d\": container with ID starting with 3e23e6963677bb8cb65dcad3830ce7417a2bc8ed8cdda27903b9a3223ed27f9d not found: ID does not exist" Dec 06 07:12:57 crc kubenswrapper[4958]: I1206 07:12:57.565877 4958 scope.go:117] "RemoveContainer" 
containerID="f6fc09564aa3ce1c52b3fc46c081b36faf599922236677c968291b84edac2ded" Dec 06 07:12:57 crc kubenswrapper[4958]: E1206 07:12:57.566200 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6fc09564aa3ce1c52b3fc46c081b36faf599922236677c968291b84edac2ded\": container with ID starting with f6fc09564aa3ce1c52b3fc46c081b36faf599922236677c968291b84edac2ded not found: ID does not exist" containerID="f6fc09564aa3ce1c52b3fc46c081b36faf599922236677c968291b84edac2ded" Dec 06 07:12:57 crc kubenswrapper[4958]: I1206 07:12:57.566225 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6fc09564aa3ce1c52b3fc46c081b36faf599922236677c968291b84edac2ded"} err="failed to get container status \"f6fc09564aa3ce1c52b3fc46c081b36faf599922236677c968291b84edac2ded\": rpc error: code = NotFound desc = could not find container \"f6fc09564aa3ce1c52b3fc46c081b36faf599922236677c968291b84edac2ded\": container with ID starting with f6fc09564aa3ce1c52b3fc46c081b36faf599922236677c968291b84edac2ded not found: ID does not exist" Dec 06 07:12:58 crc kubenswrapper[4958]: I1206 07:12:58.997852 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6ec9049e-b1d4-48f1-9573-4a57d3a983a9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6ec9049e-b1d4-48f1-9573-4a57d3a983a9" (UID: "6ec9049e-b1d4-48f1-9573-4a57d3a983a9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:12:59 crc kubenswrapper[4958]: I1206 07:12:59.094119 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6ec9049e-b1d4-48f1-9573-4a57d3a983a9-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 07:12:59 crc kubenswrapper[4958]: I1206 07:12:59.304707 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-v8xf6"] Dec 06 07:12:59 crc kubenswrapper[4958]: I1206 07:12:59.315437 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-v8xf6"] Dec 06 07:12:59 crc kubenswrapper[4958]: I1206 07:12:59.775274 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ec9049e-b1d4-48f1-9573-4a57d3a983a9" path="/var/lib/kubelet/pods/6ec9049e-b1d4-48f1-9573-4a57d3a983a9/volumes" Dec 06 07:13:03 crc kubenswrapper[4958]: I1206 07:13:03.762106 4958 scope.go:117] "RemoveContainer" containerID="f965fcfc735532b7de2edc8624e21b2280b5d3a26ef2aa2e9a7b9c25d353ca61" Dec 06 07:13:03 crc kubenswrapper[4958]: E1206 07:13:03.762919 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 07:13:05 crc kubenswrapper[4958]: I1206 07:13:05.669272 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-fs6r9"] Dec 06 07:13:05 crc kubenswrapper[4958]: E1206 07:13:05.669968 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ec9049e-b1d4-48f1-9573-4a57d3a983a9" containerName="registry-server" Dec 06 07:13:05 crc kubenswrapper[4958]: I1206 07:13:05.669980 
4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ec9049e-b1d4-48f1-9573-4a57d3a983a9" containerName="registry-server"
Dec 06 07:13:05 crc kubenswrapper[4958]: E1206 07:13:05.669991 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ec9049e-b1d4-48f1-9573-4a57d3a983a9" containerName="extract-content"
Dec 06 07:13:05 crc kubenswrapper[4958]: I1206 07:13:05.669997 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ec9049e-b1d4-48f1-9573-4a57d3a983a9" containerName="extract-content"
Dec 06 07:13:05 crc kubenswrapper[4958]: E1206 07:13:05.670023 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ec9049e-b1d4-48f1-9573-4a57d3a983a9" containerName="extract-utilities"
Dec 06 07:13:05 crc kubenswrapper[4958]: I1206 07:13:05.670031 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ec9049e-b1d4-48f1-9573-4a57d3a983a9" containerName="extract-utilities"
Dec 06 07:13:05 crc kubenswrapper[4958]: I1206 07:13:05.670215 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ec9049e-b1d4-48f1-9573-4a57d3a983a9" containerName="registry-server"
Dec 06 07:13:05 crc kubenswrapper[4958]: I1206 07:13:05.671731 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fs6r9"
Dec 06 07:13:05 crc kubenswrapper[4958]: I1206 07:13:05.694362 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fs6r9"]
Dec 06 07:13:05 crc kubenswrapper[4958]: I1206 07:13:05.734870 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmqqf\" (UniqueName: \"kubernetes.io/projected/fc798cfd-4cbf-4f0e-affe-df78e02b04c6-kube-api-access-zmqqf\") pod \"community-operators-fs6r9\" (UID: \"fc798cfd-4cbf-4f0e-affe-df78e02b04c6\") " pod="openshift-marketplace/community-operators-fs6r9"
Dec 06 07:13:05 crc kubenswrapper[4958]: I1206 07:13:05.734922 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc798cfd-4cbf-4f0e-affe-df78e02b04c6-utilities\") pod \"community-operators-fs6r9\" (UID: \"fc798cfd-4cbf-4f0e-affe-df78e02b04c6\") " pod="openshift-marketplace/community-operators-fs6r9"
Dec 06 07:13:05 crc kubenswrapper[4958]: I1206 07:13:05.734981 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc798cfd-4cbf-4f0e-affe-df78e02b04c6-catalog-content\") pod \"community-operators-fs6r9\" (UID: \"fc798cfd-4cbf-4f0e-affe-df78e02b04c6\") " pod="openshift-marketplace/community-operators-fs6r9"
Dec 06 07:13:05 crc kubenswrapper[4958]: I1206 07:13:05.838203 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc798cfd-4cbf-4f0e-affe-df78e02b04c6-catalog-content\") pod \"community-operators-fs6r9\" (UID: \"fc798cfd-4cbf-4f0e-affe-df78e02b04c6\") " pod="openshift-marketplace/community-operators-fs6r9"
Dec 06 07:13:05 crc kubenswrapper[4958]: I1206 07:13:05.838668 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zmqqf\" (UniqueName: \"kubernetes.io/projected/fc798cfd-4cbf-4f0e-affe-df78e02b04c6-kube-api-access-zmqqf\") pod \"community-operators-fs6r9\" (UID: \"fc798cfd-4cbf-4f0e-affe-df78e02b04c6\") " pod="openshift-marketplace/community-operators-fs6r9"
Dec 06 07:13:05 crc kubenswrapper[4958]: I1206 07:13:05.838739 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc798cfd-4cbf-4f0e-affe-df78e02b04c6-utilities\") pod \"community-operators-fs6r9\" (UID: \"fc798cfd-4cbf-4f0e-affe-df78e02b04c6\") " pod="openshift-marketplace/community-operators-fs6r9"
Dec 06 07:13:05 crc kubenswrapper[4958]: I1206 07:13:05.838740 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc798cfd-4cbf-4f0e-affe-df78e02b04c6-catalog-content\") pod \"community-operators-fs6r9\" (UID: \"fc798cfd-4cbf-4f0e-affe-df78e02b04c6\") " pod="openshift-marketplace/community-operators-fs6r9"
Dec 06 07:13:05 crc kubenswrapper[4958]: I1206 07:13:05.839269 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc798cfd-4cbf-4f0e-affe-df78e02b04c6-utilities\") pod \"community-operators-fs6r9\" (UID: \"fc798cfd-4cbf-4f0e-affe-df78e02b04c6\") " pod="openshift-marketplace/community-operators-fs6r9"
Dec 06 07:13:05 crc kubenswrapper[4958]: I1206 07:13:05.862339 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zmqqf\" (UniqueName: \"kubernetes.io/projected/fc798cfd-4cbf-4f0e-affe-df78e02b04c6-kube-api-access-zmqqf\") pod \"community-operators-fs6r9\" (UID: \"fc798cfd-4cbf-4f0e-affe-df78e02b04c6\") " pod="openshift-marketplace/community-operators-fs6r9"
Dec 06 07:13:05 crc kubenswrapper[4958]: I1206 07:13:05.994590 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fs6r9"
Dec 06 07:13:06 crc kubenswrapper[4958]: I1206 07:13:06.551759 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fs6r9"]
Dec 06 07:13:06 crc kubenswrapper[4958]: I1206 07:13:06.587539 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fs6r9" event={"ID":"fc798cfd-4cbf-4f0e-affe-df78e02b04c6","Type":"ContainerStarted","Data":"df1cad69862629b4768fb637906cfcf5a26f5d19d7453f30d85023ae629a1345"}
Dec 06 07:13:07 crc kubenswrapper[4958]: I1206 07:13:07.600491 4958 generic.go:334] "Generic (PLEG): container finished" podID="fc798cfd-4cbf-4f0e-affe-df78e02b04c6" containerID="e9b473071d449d06b06717e004a533d3b254845dbaa08e9e619ab9966e8c9810" exitCode=0
Dec 06 07:13:07 crc kubenswrapper[4958]: I1206 07:13:07.600550 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fs6r9" event={"ID":"fc798cfd-4cbf-4f0e-affe-df78e02b04c6","Type":"ContainerDied","Data":"e9b473071d449d06b06717e004a533d3b254845dbaa08e9e619ab9966e8c9810"}
Dec 06 07:13:08 crc kubenswrapper[4958]: I1206 07:13:08.610859 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fs6r9" event={"ID":"fc798cfd-4cbf-4f0e-affe-df78e02b04c6","Type":"ContainerStarted","Data":"a0678d380b94d224704c12e9b27dcf1faab5079fb33c64046b3afdf40795370c"}
Dec 06 07:13:09 crc kubenswrapper[4958]: I1206 07:13:09.634289 4958 generic.go:334] "Generic (PLEG): container finished" podID="fc798cfd-4cbf-4f0e-affe-df78e02b04c6" containerID="a0678d380b94d224704c12e9b27dcf1faab5079fb33c64046b3afdf40795370c" exitCode=0
Dec 06 07:13:09 crc kubenswrapper[4958]: I1206 07:13:09.634402 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fs6r9" event={"ID":"fc798cfd-4cbf-4f0e-affe-df78e02b04c6","Type":"ContainerDied","Data":"a0678d380b94d224704c12e9b27dcf1faab5079fb33c64046b3afdf40795370c"}
Dec 06 07:13:12 crc kubenswrapper[4958]: I1206 07:13:12.659796 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fs6r9" event={"ID":"fc798cfd-4cbf-4f0e-affe-df78e02b04c6","Type":"ContainerStarted","Data":"0b7b77f54b509a617220c019d8d07896401b747c497364a47606bb282c807ca3"}
Dec 06 07:13:12 crc kubenswrapper[4958]: I1206 07:13:12.687741 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-fs6r9" podStartSLOduration=4.14363925 podStartE2EDuration="7.68772379s" podCreationTimestamp="2025-12-06 07:13:05 +0000 UTC" firstStartedPulling="2025-12-06 07:13:07.60244647 +0000 UTC m=+6298.136217233" lastFinishedPulling="2025-12-06 07:13:11.14653101 +0000 UTC m=+6301.680301773" observedRunningTime="2025-12-06 07:13:12.680103655 +0000 UTC m=+6303.213874418" watchObservedRunningTime="2025-12-06 07:13:12.68772379 +0000 UTC m=+6303.221494553"
Dec 06 07:13:15 crc kubenswrapper[4958]: I1206 07:13:15.995637 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-fs6r9"
Dec 06 07:13:15 crc kubenswrapper[4958]: I1206 07:13:15.996303 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-fs6r9"
Dec 06 07:13:16 crc kubenswrapper[4958]: I1206 07:13:16.057545 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fs6r9"
Dec 06 07:13:16 crc kubenswrapper[4958]: I1206 07:13:16.768736 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-fs6r9"
Dec 06 07:13:16 crc kubenswrapper[4958]: I1206 07:13:16.815937 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fs6r9"]
Dec 06 07:13:17 crc kubenswrapper[4958]: I1206 07:13:17.762905 4958 scope.go:117] "RemoveContainer" containerID="f965fcfc735532b7de2edc8624e21b2280b5d3a26ef2aa2e9a7b9c25d353ca61"
Dec 06 07:13:17 crc kubenswrapper[4958]: E1206 07:13:17.763442 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4"
Dec 06 07:13:18 crc kubenswrapper[4958]: I1206 07:13:18.712433 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-fs6r9" podUID="fc798cfd-4cbf-4f0e-affe-df78e02b04c6" containerName="registry-server" containerID="cri-o://0b7b77f54b509a617220c019d8d07896401b747c497364a47606bb282c807ca3" gracePeriod=2
Dec 06 07:13:19 crc kubenswrapper[4958]: I1206 07:13:19.556832 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fs6r9"
Dec 06 07:13:19 crc kubenswrapper[4958]: I1206 07:13:19.725343 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc798cfd-4cbf-4f0e-affe-df78e02b04c6-catalog-content\") pod \"fc798cfd-4cbf-4f0e-affe-df78e02b04c6\" (UID: \"fc798cfd-4cbf-4f0e-affe-df78e02b04c6\") "
Dec 06 07:13:19 crc kubenswrapper[4958]: I1206 07:13:19.725459 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc798cfd-4cbf-4f0e-affe-df78e02b04c6-utilities\") pod \"fc798cfd-4cbf-4f0e-affe-df78e02b04c6\" (UID: \"fc798cfd-4cbf-4f0e-affe-df78e02b04c6\") "
Dec 06 07:13:19 crc kubenswrapper[4958]: I1206 07:13:19.725511 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zmqqf\" (UniqueName: \"kubernetes.io/projected/fc798cfd-4cbf-4f0e-affe-df78e02b04c6-kube-api-access-zmqqf\") pod \"fc798cfd-4cbf-4f0e-affe-df78e02b04c6\" (UID: \"fc798cfd-4cbf-4f0e-affe-df78e02b04c6\") "
Dec 06 07:13:19 crc kubenswrapper[4958]: I1206 07:13:19.727555 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc798cfd-4cbf-4f0e-affe-df78e02b04c6-utilities" (OuterVolumeSpecName: "utilities") pod "fc798cfd-4cbf-4f0e-affe-df78e02b04c6" (UID: "fc798cfd-4cbf-4f0e-affe-df78e02b04c6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 06 07:13:19 crc kubenswrapper[4958]: I1206 07:13:19.745680 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc798cfd-4cbf-4f0e-affe-df78e02b04c6-kube-api-access-zmqqf" (OuterVolumeSpecName: "kube-api-access-zmqqf") pod "fc798cfd-4cbf-4f0e-affe-df78e02b04c6" (UID: "fc798cfd-4cbf-4f0e-affe-df78e02b04c6"). InnerVolumeSpecName "kube-api-access-zmqqf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 07:13:19 crc kubenswrapper[4958]: I1206 07:13:19.746321 4958 generic.go:334] "Generic (PLEG): container finished" podID="fc798cfd-4cbf-4f0e-affe-df78e02b04c6" containerID="0b7b77f54b509a617220c019d8d07896401b747c497364a47606bb282c807ca3" exitCode=0
Dec 06 07:13:19 crc kubenswrapper[4958]: I1206 07:13:19.746363 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fs6r9" event={"ID":"fc798cfd-4cbf-4f0e-affe-df78e02b04c6","Type":"ContainerDied","Data":"0b7b77f54b509a617220c019d8d07896401b747c497364a47606bb282c807ca3"}
Dec 06 07:13:19 crc kubenswrapper[4958]: I1206 07:13:19.746417 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fs6r9" event={"ID":"fc798cfd-4cbf-4f0e-affe-df78e02b04c6","Type":"ContainerDied","Data":"df1cad69862629b4768fb637906cfcf5a26f5d19d7453f30d85023ae629a1345"}
Dec 06 07:13:19 crc kubenswrapper[4958]: I1206 07:13:19.746438 4958 scope.go:117] "RemoveContainer" containerID="0b7b77f54b509a617220c019d8d07896401b747c497364a47606bb282c807ca3"
Dec 06 07:13:19 crc kubenswrapper[4958]: I1206 07:13:19.746930 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fs6r9"
Dec 06 07:13:19 crc kubenswrapper[4958]: I1206 07:13:19.794028 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc798cfd-4cbf-4f0e-affe-df78e02b04c6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fc798cfd-4cbf-4f0e-affe-df78e02b04c6" (UID: "fc798cfd-4cbf-4f0e-affe-df78e02b04c6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 06 07:13:19 crc kubenswrapper[4958]: I1206 07:13:19.809743 4958 scope.go:117] "RemoveContainer" containerID="a0678d380b94d224704c12e9b27dcf1faab5079fb33c64046b3afdf40795370c"
Dec 06 07:13:19 crc kubenswrapper[4958]: I1206 07:13:19.829229 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc798cfd-4cbf-4f0e-affe-df78e02b04c6-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 06 07:13:19 crc kubenswrapper[4958]: I1206 07:13:19.829271 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc798cfd-4cbf-4f0e-affe-df78e02b04c6-utilities\") on node \"crc\" DevicePath \"\""
Dec 06 07:13:19 crc kubenswrapper[4958]: I1206 07:13:19.829282 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zmqqf\" (UniqueName: \"kubernetes.io/projected/fc798cfd-4cbf-4f0e-affe-df78e02b04c6-kube-api-access-zmqqf\") on node \"crc\" DevicePath \"\""
Dec 06 07:13:19 crc kubenswrapper[4958]: I1206 07:13:19.841424 4958 scope.go:117] "RemoveContainer" containerID="e9b473071d449d06b06717e004a533d3b254845dbaa08e9e619ab9966e8c9810"
Dec 06 07:13:19 crc kubenswrapper[4958]: I1206 07:13:19.883196 4958 scope.go:117] "RemoveContainer" containerID="0b7b77f54b509a617220c019d8d07896401b747c497364a47606bb282c807ca3"
Dec 06 07:13:19 crc kubenswrapper[4958]: E1206 07:13:19.884128 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b7b77f54b509a617220c019d8d07896401b747c497364a47606bb282c807ca3\": container with ID starting with 0b7b77f54b509a617220c019d8d07896401b747c497364a47606bb282c807ca3 not found: ID does not exist" containerID="0b7b77f54b509a617220c019d8d07896401b747c497364a47606bb282c807ca3"
Dec 06 07:13:19 crc kubenswrapper[4958]: I1206 07:13:19.884157 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b7b77f54b509a617220c019d8d07896401b747c497364a47606bb282c807ca3"} err="failed to get container status \"0b7b77f54b509a617220c019d8d07896401b747c497364a47606bb282c807ca3\": rpc error: code = NotFound desc = could not find container \"0b7b77f54b509a617220c019d8d07896401b747c497364a47606bb282c807ca3\": container with ID starting with 0b7b77f54b509a617220c019d8d07896401b747c497364a47606bb282c807ca3 not found: ID does not exist"
Dec 06 07:13:19 crc kubenswrapper[4958]: I1206 07:13:19.884182 4958 scope.go:117] "RemoveContainer" containerID="a0678d380b94d224704c12e9b27dcf1faab5079fb33c64046b3afdf40795370c"
Dec 06 07:13:19 crc kubenswrapper[4958]: E1206 07:13:19.887986 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0678d380b94d224704c12e9b27dcf1faab5079fb33c64046b3afdf40795370c\": container with ID starting with a0678d380b94d224704c12e9b27dcf1faab5079fb33c64046b3afdf40795370c not found: ID does not exist" containerID="a0678d380b94d224704c12e9b27dcf1faab5079fb33c64046b3afdf40795370c"
Dec 06 07:13:19 crc kubenswrapper[4958]: I1206 07:13:19.888025 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0678d380b94d224704c12e9b27dcf1faab5079fb33c64046b3afdf40795370c"} err="failed to get container status \"a0678d380b94d224704c12e9b27dcf1faab5079fb33c64046b3afdf40795370c\": rpc error: code = NotFound desc = could not find container \"a0678d380b94d224704c12e9b27dcf1faab5079fb33c64046b3afdf40795370c\": container with ID starting with a0678d380b94d224704c12e9b27dcf1faab5079fb33c64046b3afdf40795370c not found: ID does not exist"
Dec 06 07:13:19 crc kubenswrapper[4958]: I1206 07:13:19.888047 4958 scope.go:117] "RemoveContainer" containerID="e9b473071d449d06b06717e004a533d3b254845dbaa08e9e619ab9966e8c9810"
Dec 06 07:13:19 crc kubenswrapper[4958]: E1206 07:13:19.889563 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9b473071d449d06b06717e004a533d3b254845dbaa08e9e619ab9966e8c9810\": container with ID starting with e9b473071d449d06b06717e004a533d3b254845dbaa08e9e619ab9966e8c9810 not found: ID does not exist" containerID="e9b473071d449d06b06717e004a533d3b254845dbaa08e9e619ab9966e8c9810"
Dec 06 07:13:19 crc kubenswrapper[4958]: I1206 07:13:19.889599 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9b473071d449d06b06717e004a533d3b254845dbaa08e9e619ab9966e8c9810"} err="failed to get container status \"e9b473071d449d06b06717e004a533d3b254845dbaa08e9e619ab9966e8c9810\": rpc error: code = NotFound desc = could not find container \"e9b473071d449d06b06717e004a533d3b254845dbaa08e9e619ab9966e8c9810\": container with ID starting with e9b473071d449d06b06717e004a533d3b254845dbaa08e9e619ab9966e8c9810 not found: ID does not exist"
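[annotation] The RemoveContainer / "ContainerStatus from runtime service failed" / "DeleteContainer returned error" triples above (for 0b7b77f5..., a0678d38..., e9b47307...) are benign: the containers were already removed together with the pod sandbox, so the follow-up status lookups return NotFound and the kubelet just logs and moves on. A minimal sketch of that idempotent-cleanup pattern, using a hypothetical simplified client rather than the kubelet's actual CRI code:

    package main

    import (
    	"errors"
    	"fmt"
    )

    // errNotFound stands in for the gRPC NotFound status a CRI runtime
    // returns for a container ID that no longer exists (illustrative only).
    var errNotFound = errors.New("not found")

    type runtimeClient struct{ containers map[string]bool }

    func (c *runtimeClient) RemoveContainer(id string) error {
    	if !c.containers[id] {
    		return errNotFound
    	}
    	delete(c.containers, id)
    	return nil
    }

    // removeIfPresent treats "already gone" as success, mirroring how the
    // kubelet logs the NotFound error above but continues the cleanup.
    func removeIfPresent(c *runtimeClient, id string) error {
    	if err := c.RemoveContainer(id); err != nil && !errors.Is(err, errNotFound) {
    		return err
    	}
    	return nil
    }

    func main() {
    	c := &runtimeClient{containers: map[string]bool{"0b7b77f5": true}}
    	fmt.Println(removeIfPresent(c, "0b7b77f5")) // <nil>: removed
    	fmt.Println(removeIfPresent(c, "0b7b77f5")) // <nil>: already gone, not an error
    }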
containerID="a0678d380b94d224704c12e9b27dcf1faab5079fb33c64046b3afdf40795370c" Dec 06 07:13:19 crc kubenswrapper[4958]: I1206 07:13:19.888025 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0678d380b94d224704c12e9b27dcf1faab5079fb33c64046b3afdf40795370c"} err="failed to get container status \"a0678d380b94d224704c12e9b27dcf1faab5079fb33c64046b3afdf40795370c\": rpc error: code = NotFound desc = could not find container \"a0678d380b94d224704c12e9b27dcf1faab5079fb33c64046b3afdf40795370c\": container with ID starting with a0678d380b94d224704c12e9b27dcf1faab5079fb33c64046b3afdf40795370c not found: ID does not exist" Dec 06 07:13:19 crc kubenswrapper[4958]: I1206 07:13:19.888047 4958 scope.go:117] "RemoveContainer" containerID="e9b473071d449d06b06717e004a533d3b254845dbaa08e9e619ab9966e8c9810" Dec 06 07:13:19 crc kubenswrapper[4958]: E1206 07:13:19.889563 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e9b473071d449d06b06717e004a533d3b254845dbaa08e9e619ab9966e8c9810\": container with ID starting with e9b473071d449d06b06717e004a533d3b254845dbaa08e9e619ab9966e8c9810 not found: ID does not exist" containerID="e9b473071d449d06b06717e004a533d3b254845dbaa08e9e619ab9966e8c9810" Dec 06 07:13:19 crc kubenswrapper[4958]: I1206 07:13:19.889599 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e9b473071d449d06b06717e004a533d3b254845dbaa08e9e619ab9966e8c9810"} err="failed to get container status \"e9b473071d449d06b06717e004a533d3b254845dbaa08e9e619ab9966e8c9810\": rpc error: code = NotFound desc = could not find container \"e9b473071d449d06b06717e004a533d3b254845dbaa08e9e619ab9966e8c9810\": container with ID starting with e9b473071d449d06b06717e004a533d3b254845dbaa08e9e619ab9966e8c9810 not found: ID does not exist" Dec 06 07:13:20 crc kubenswrapper[4958]: I1206 07:13:20.081582 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fs6r9"] Dec 06 07:13:20 crc kubenswrapper[4958]: I1206 07:13:20.089278 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-fs6r9"] Dec 06 07:13:21 crc kubenswrapper[4958]: I1206 07:13:21.774846 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc798cfd-4cbf-4f0e-affe-df78e02b04c6" path="/var/lib/kubelet/pods/fc798cfd-4cbf-4f0e-affe-df78e02b04c6/volumes" Dec 06 07:13:30 crc kubenswrapper[4958]: I1206 07:13:30.762377 4958 scope.go:117] "RemoveContainer" containerID="f965fcfc735532b7de2edc8624e21b2280b5d3a26ef2aa2e9a7b9c25d353ca61" Dec 06 07:13:30 crc kubenswrapper[4958]: E1206 07:13:30.763169 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 07:13:45 crc kubenswrapper[4958]: I1206 07:13:45.761718 4958 scope.go:117] "RemoveContainer" containerID="f965fcfc735532b7de2edc8624e21b2280b5d3a26ef2aa2e9a7b9c25d353ca61" Dec 06 07:13:46 crc kubenswrapper[4958]: I1206 07:13:46.029104 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" 
event={"ID":"c13528c0-da5d-4d55-9155-2c29c33edfc4","Type":"ContainerStarted","Data":"d591a665b5a9d5fddaa24bd18d00c05f3be886a2f587ef12738fc9c016807464"} Dec 06 07:15:00 crc kubenswrapper[4958]: I1206 07:15:00.150160 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416755-cmjxl"] Dec 06 07:15:00 crc kubenswrapper[4958]: E1206 07:15:00.151162 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc798cfd-4cbf-4f0e-affe-df78e02b04c6" containerName="registry-server" Dec 06 07:15:00 crc kubenswrapper[4958]: I1206 07:15:00.151179 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc798cfd-4cbf-4f0e-affe-df78e02b04c6" containerName="registry-server" Dec 06 07:15:00 crc kubenswrapper[4958]: E1206 07:15:00.151195 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc798cfd-4cbf-4f0e-affe-df78e02b04c6" containerName="extract-utilities" Dec 06 07:15:00 crc kubenswrapper[4958]: I1206 07:15:00.151204 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc798cfd-4cbf-4f0e-affe-df78e02b04c6" containerName="extract-utilities" Dec 06 07:15:00 crc kubenswrapper[4958]: E1206 07:15:00.151229 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc798cfd-4cbf-4f0e-affe-df78e02b04c6" containerName="extract-content" Dec 06 07:15:00 crc kubenswrapper[4958]: I1206 07:15:00.151237 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc798cfd-4cbf-4f0e-affe-df78e02b04c6" containerName="extract-content" Dec 06 07:15:00 crc kubenswrapper[4958]: I1206 07:15:00.151525 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc798cfd-4cbf-4f0e-affe-df78e02b04c6" containerName="registry-server" Dec 06 07:15:00 crc kubenswrapper[4958]: I1206 07:15:00.152394 4958 util.go:30] "No sandbox for pod can be found. 
Dec 06 07:15:00 crc kubenswrapper[4958]: I1206 07:15:00.154795 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Dec 06 07:15:00 crc kubenswrapper[4958]: I1206 07:15:00.155012 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Dec 06 07:15:00 crc kubenswrapper[4958]: I1206 07:15:00.183255 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416755-cmjxl"]
Dec 06 07:15:00 crc kubenswrapper[4958]: I1206 07:15:00.238428 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c99afc0e-ecb2-4940-aabf-9b4bb5993ab3-config-volume\") pod \"collect-profiles-29416755-cmjxl\" (UID: \"c99afc0e-ecb2-4940-aabf-9b4bb5993ab3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416755-cmjxl"
Dec 06 07:15:00 crc kubenswrapper[4958]: I1206 07:15:00.238508 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c99afc0e-ecb2-4940-aabf-9b4bb5993ab3-secret-volume\") pod \"collect-profiles-29416755-cmjxl\" (UID: \"c99afc0e-ecb2-4940-aabf-9b4bb5993ab3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416755-cmjxl"
Dec 06 07:15:00 crc kubenswrapper[4958]: I1206 07:15:00.238726 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8bx9\" (UniqueName: \"kubernetes.io/projected/c99afc0e-ecb2-4940-aabf-9b4bb5993ab3-kube-api-access-z8bx9\") pod \"collect-profiles-29416755-cmjxl\" (UID: \"c99afc0e-ecb2-4940-aabf-9b4bb5993ab3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416755-cmjxl"
Dec 06 07:15:00 crc kubenswrapper[4958]: I1206 07:15:00.341072 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c99afc0e-ecb2-4940-aabf-9b4bb5993ab3-config-volume\") pod \"collect-profiles-29416755-cmjxl\" (UID: \"c99afc0e-ecb2-4940-aabf-9b4bb5993ab3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416755-cmjxl"
Dec 06 07:15:00 crc kubenswrapper[4958]: I1206 07:15:00.341123 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c99afc0e-ecb2-4940-aabf-9b4bb5993ab3-secret-volume\") pod \"collect-profiles-29416755-cmjxl\" (UID: \"c99afc0e-ecb2-4940-aabf-9b4bb5993ab3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416755-cmjxl"
Dec 06 07:15:00 crc kubenswrapper[4958]: I1206 07:15:00.341147 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8bx9\" (UniqueName: \"kubernetes.io/projected/c99afc0e-ecb2-4940-aabf-9b4bb5993ab3-kube-api-access-z8bx9\") pod \"collect-profiles-29416755-cmjxl\" (UID: \"c99afc0e-ecb2-4940-aabf-9b4bb5993ab3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416755-cmjxl"
Dec 06 07:15:00 crc kubenswrapper[4958]: I1206 07:15:00.342236 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c99afc0e-ecb2-4940-aabf-9b4bb5993ab3-config-volume\") pod \"collect-profiles-29416755-cmjxl\" (UID: \"c99afc0e-ecb2-4940-aabf-9b4bb5993ab3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416755-cmjxl"
Dec 06 07:15:00 crc kubenswrapper[4958]: I1206 07:15:00.353192 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c99afc0e-ecb2-4940-aabf-9b4bb5993ab3-secret-volume\") pod \"collect-profiles-29416755-cmjxl\" (UID: \"c99afc0e-ecb2-4940-aabf-9b4bb5993ab3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416755-cmjxl"
Dec 06 07:15:00 crc kubenswrapper[4958]: I1206 07:15:00.360577 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8bx9\" (UniqueName: \"kubernetes.io/projected/c99afc0e-ecb2-4940-aabf-9b4bb5993ab3-kube-api-access-z8bx9\") pod \"collect-profiles-29416755-cmjxl\" (UID: \"c99afc0e-ecb2-4940-aabf-9b4bb5993ab3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416755-cmjxl"
Dec 06 07:15:00 crc kubenswrapper[4958]: I1206 07:15:00.491953 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416755-cmjxl"
Dec 06 07:15:00 crc kubenswrapper[4958]: I1206 07:15:00.980012 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416755-cmjxl"]
Dec 06 07:15:01 crc kubenswrapper[4958]: I1206 07:15:01.759212 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416755-cmjxl" event={"ID":"c99afc0e-ecb2-4940-aabf-9b4bb5993ab3","Type":"ContainerStarted","Data":"6624891b4f7ed707c2d038484013eb7803aab628c20988eea5bf90ba2910e67f"}
Dec 06 07:15:02 crc kubenswrapper[4958]: I1206 07:15:02.771156 4958 generic.go:334] "Generic (PLEG): container finished" podID="c99afc0e-ecb2-4940-aabf-9b4bb5993ab3" containerID="6643d9d65b16a75af98a47e91b82239cb7bb991e64d1944e21462a25cbcd640c" exitCode=0
Dec 06 07:15:02 crc kubenswrapper[4958]: I1206 07:15:02.771255 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416755-cmjxl" event={"ID":"c99afc0e-ecb2-4940-aabf-9b4bb5993ab3","Type":"ContainerDied","Data":"6643d9d65b16a75af98a47e91b82239cb7bb991e64d1944e21462a25cbcd640c"}
Dec 06 07:15:04 crc kubenswrapper[4958]: I1206 07:15:04.140893 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416755-cmjxl"
Dec 06 07:15:04 crc kubenswrapper[4958]: I1206 07:15:04.234335 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z8bx9\" (UniqueName: \"kubernetes.io/projected/c99afc0e-ecb2-4940-aabf-9b4bb5993ab3-kube-api-access-z8bx9\") pod \"c99afc0e-ecb2-4940-aabf-9b4bb5993ab3\" (UID: \"c99afc0e-ecb2-4940-aabf-9b4bb5993ab3\") "
Dec 06 07:15:04 crc kubenswrapper[4958]: I1206 07:15:04.234465 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c99afc0e-ecb2-4940-aabf-9b4bb5993ab3-config-volume\") pod \"c99afc0e-ecb2-4940-aabf-9b4bb5993ab3\" (UID: \"c99afc0e-ecb2-4940-aabf-9b4bb5993ab3\") "
Dec 06 07:15:04 crc kubenswrapper[4958]: I1206 07:15:04.234812 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c99afc0e-ecb2-4940-aabf-9b4bb5993ab3-secret-volume\") pod \"c99afc0e-ecb2-4940-aabf-9b4bb5993ab3\" (UID: \"c99afc0e-ecb2-4940-aabf-9b4bb5993ab3\") "
Dec 06 07:15:04 crc kubenswrapper[4958]: I1206 07:15:04.236066 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c99afc0e-ecb2-4940-aabf-9b4bb5993ab3-config-volume" (OuterVolumeSpecName: "config-volume") pod "c99afc0e-ecb2-4940-aabf-9b4bb5993ab3" (UID: "c99afc0e-ecb2-4940-aabf-9b4bb5993ab3"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 07:15:04 crc kubenswrapper[4958]: I1206 07:15:04.240558 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c99afc0e-ecb2-4940-aabf-9b4bb5993ab3-kube-api-access-z8bx9" (OuterVolumeSpecName: "kube-api-access-z8bx9") pod "c99afc0e-ecb2-4940-aabf-9b4bb5993ab3" (UID: "c99afc0e-ecb2-4940-aabf-9b4bb5993ab3"). InnerVolumeSpecName "kube-api-access-z8bx9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 07:15:04 crc kubenswrapper[4958]: I1206 07:15:04.241955 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c99afc0e-ecb2-4940-aabf-9b4bb5993ab3-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c99afc0e-ecb2-4940-aabf-9b4bb5993ab3" (UID: "c99afc0e-ecb2-4940-aabf-9b4bb5993ab3"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 07:15:04 crc kubenswrapper[4958]: I1206 07:15:04.337628 4958 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c99afc0e-ecb2-4940-aabf-9b4bb5993ab3-secret-volume\") on node \"crc\" DevicePath \"\""
Dec 06 07:15:04 crc kubenswrapper[4958]: I1206 07:15:04.337960 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z8bx9\" (UniqueName: \"kubernetes.io/projected/c99afc0e-ecb2-4940-aabf-9b4bb5993ab3-kube-api-access-z8bx9\") on node \"crc\" DevicePath \"\""
Dec 06 07:15:04 crc kubenswrapper[4958]: I1206 07:15:04.338067 4958 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c99afc0e-ecb2-4940-aabf-9b4bb5993ab3-config-volume\") on node \"crc\" DevicePath \"\""
Dec 06 07:15:04 crc kubenswrapper[4958]: I1206 07:15:04.788845 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416755-cmjxl" event={"ID":"c99afc0e-ecb2-4940-aabf-9b4bb5993ab3","Type":"ContainerDied","Data":"6624891b4f7ed707c2d038484013eb7803aab628c20988eea5bf90ba2910e67f"}
Dec 06 07:15:04 crc kubenswrapper[4958]: I1206 07:15:04.788888 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6624891b4f7ed707c2d038484013eb7803aab628c20988eea5bf90ba2910e67f"
Dec 06 07:15:04 crc kubenswrapper[4958]: I1206 07:15:04.789206 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416755-cmjxl"
Dec 06 07:15:05 crc kubenswrapper[4958]: I1206 07:15:05.230991 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416710-4t8hz"]
Dec 06 07:15:05 crc kubenswrapper[4958]: I1206 07:15:05.239858 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416710-4t8hz"]
Dec 06 07:15:05 crc kubenswrapper[4958]: I1206 07:15:05.772701 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26460c44-cd78-4d4c-b9fe-8dace0fba04b" path="/var/lib/kubelet/pods/26460c44-cd78-4d4c-b9fe-8dace0fba04b/volumes"
Dec 06 07:15:33 crc kubenswrapper[4958]: I1206 07:15:33.841275 4958 scope.go:117] "RemoveContainer" containerID="72871d25b4f6e686f46373e08de5eb9e382a8d0dcb8e59df4be0c7eb6cb533ab"
Dec 06 07:16:04 crc kubenswrapper[4958]: I1206 07:16:04.403092 4958 generic.go:334] "Generic (PLEG): container finished" podID="333ab9e6-feb4-4ebe-8bb6-75987c261085" containerID="467e130fc7ac62261477e86dcc16fce19a8c9c04fbacb57f6afa58fc4906cd97" exitCode=1
Dec 06 07:16:04 crc kubenswrapper[4958]: I1206 07:16:04.403210 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"333ab9e6-feb4-4ebe-8bb6-75987c261085","Type":"ContainerDied","Data":"467e130fc7ac62261477e86dcc16fce19a8c9c04fbacb57f6afa58fc4906cd97"}
Dec 06 07:16:05 crc kubenswrapper[4958]: I1206 07:16:05.787006 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest"
Dec 06 07:16:05 crc kubenswrapper[4958]: I1206 07:16:05.863312 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sk96n\" (UniqueName: \"kubernetes.io/projected/333ab9e6-feb4-4ebe-8bb6-75987c261085-kube-api-access-sk96n\") pod \"333ab9e6-feb4-4ebe-8bb6-75987c261085\" (UID: \"333ab9e6-feb4-4ebe-8bb6-75987c261085\") "
Dec 06 07:16:05 crc kubenswrapper[4958]: I1206 07:16:05.863454 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/333ab9e6-feb4-4ebe-8bb6-75987c261085-test-operator-ephemeral-temporary\") pod \"333ab9e6-feb4-4ebe-8bb6-75987c261085\" (UID: \"333ab9e6-feb4-4ebe-8bb6-75987c261085\") "
Dec 06 07:16:05 crc kubenswrapper[4958]: I1206 07:16:05.863538 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/333ab9e6-feb4-4ebe-8bb6-75987c261085-openstack-config-secret\") pod \"333ab9e6-feb4-4ebe-8bb6-75987c261085\" (UID: \"333ab9e6-feb4-4ebe-8bb6-75987c261085\") "
Dec 06 07:16:05 crc kubenswrapper[4958]: I1206 07:16:05.863629 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/333ab9e6-feb4-4ebe-8bb6-75987c261085-ssh-key\") pod \"333ab9e6-feb4-4ebe-8bb6-75987c261085\" (UID: \"333ab9e6-feb4-4ebe-8bb6-75987c261085\") "
Dec 06 07:16:05 crc kubenswrapper[4958]: I1206 07:16:05.863664 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"333ab9e6-feb4-4ebe-8bb6-75987c261085\" (UID: \"333ab9e6-feb4-4ebe-8bb6-75987c261085\") "
Dec 06 07:16:05 crc kubenswrapper[4958]: I1206 07:16:05.863911 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/333ab9e6-feb4-4ebe-8bb6-75987c261085-openstack-config\") pod \"333ab9e6-feb4-4ebe-8bb6-75987c261085\" (UID: \"333ab9e6-feb4-4ebe-8bb6-75987c261085\") "
Dec 06 07:16:05 crc kubenswrapper[4958]: I1206 07:16:05.863969 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/333ab9e6-feb4-4ebe-8bb6-75987c261085-config-data\") pod \"333ab9e6-feb4-4ebe-8bb6-75987c261085\" (UID: \"333ab9e6-feb4-4ebe-8bb6-75987c261085\") "
Dec 06 07:16:05 crc kubenswrapper[4958]: I1206 07:16:05.864008 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/333ab9e6-feb4-4ebe-8bb6-75987c261085-test-operator-ephemeral-workdir\") pod \"333ab9e6-feb4-4ebe-8bb6-75987c261085\" (UID: \"333ab9e6-feb4-4ebe-8bb6-75987c261085\") "
Dec 06 07:16:05 crc kubenswrapper[4958]: I1206 07:16:05.864066 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/333ab9e6-feb4-4ebe-8bb6-75987c261085-ca-certs\") pod \"333ab9e6-feb4-4ebe-8bb6-75987c261085\" (UID: \"333ab9e6-feb4-4ebe-8bb6-75987c261085\") "
Dec 06 07:16:05 crc kubenswrapper[4958]: I1206 07:16:05.864305 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/333ab9e6-feb4-4ebe-8bb6-75987c261085-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "333ab9e6-feb4-4ebe-8bb6-75987c261085" (UID: "333ab9e6-feb4-4ebe-8bb6-75987c261085"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 06 07:16:05 crc kubenswrapper[4958]: I1206 07:16:05.864770 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/333ab9e6-feb4-4ebe-8bb6-75987c261085-config-data" (OuterVolumeSpecName: "config-data") pod "333ab9e6-feb4-4ebe-8bb6-75987c261085" (UID: "333ab9e6-feb4-4ebe-8bb6-75987c261085"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 07:16:05 crc kubenswrapper[4958]: I1206 07:16:05.864799 4958 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/333ab9e6-feb4-4ebe-8bb6-75987c261085-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\""
Dec 06 07:16:05 crc kubenswrapper[4958]: I1206 07:16:05.869648 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/333ab9e6-feb4-4ebe-8bb6-75987c261085-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "333ab9e6-feb4-4ebe-8bb6-75987c261085" (UID: "333ab9e6-feb4-4ebe-8bb6-75987c261085"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 06 07:16:05 crc kubenswrapper[4958]: I1206 07:16:05.883248 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/333ab9e6-feb4-4ebe-8bb6-75987c261085-kube-api-access-sk96n" (OuterVolumeSpecName: "kube-api-access-sk96n") pod "333ab9e6-feb4-4ebe-8bb6-75987c261085" (UID: "333ab9e6-feb4-4ebe-8bb6-75987c261085"). InnerVolumeSpecName "kube-api-access-sk96n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 07:16:05 crc kubenswrapper[4958]: I1206 07:16:05.894714 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "test-operator-logs") pod "333ab9e6-feb4-4ebe-8bb6-75987c261085" (UID: "333ab9e6-feb4-4ebe-8bb6-75987c261085"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Dec 06 07:16:05 crc kubenswrapper[4958]: I1206 07:16:05.899040 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/333ab9e6-feb4-4ebe-8bb6-75987c261085-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "333ab9e6-feb4-4ebe-8bb6-75987c261085" (UID: "333ab9e6-feb4-4ebe-8bb6-75987c261085"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 07:16:05 crc kubenswrapper[4958]: I1206 07:16:05.899659 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/333ab9e6-feb4-4ebe-8bb6-75987c261085-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "333ab9e6-feb4-4ebe-8bb6-75987c261085" (UID: "333ab9e6-feb4-4ebe-8bb6-75987c261085"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 07:16:05 crc kubenswrapper[4958]: I1206 07:16:05.901015 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/333ab9e6-feb4-4ebe-8bb6-75987c261085-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "333ab9e6-feb4-4ebe-8bb6-75987c261085" (UID: "333ab9e6-feb4-4ebe-8bb6-75987c261085"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 07:16:05 crc kubenswrapper[4958]: I1206 07:16:05.919808 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/333ab9e6-feb4-4ebe-8bb6-75987c261085-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "333ab9e6-feb4-4ebe-8bb6-75987c261085" (UID: "333ab9e6-feb4-4ebe-8bb6-75987c261085"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 07:16:05 crc kubenswrapper[4958]: I1206 07:16:05.966809 4958 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/333ab9e6-feb4-4ebe-8bb6-75987c261085-ca-certs\") on node \"crc\" DevicePath \"\""
Dec 06 07:16:05 crc kubenswrapper[4958]: I1206 07:16:05.966837 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sk96n\" (UniqueName: \"kubernetes.io/projected/333ab9e6-feb4-4ebe-8bb6-75987c261085-kube-api-access-sk96n\") on node \"crc\" DevicePath \"\""
Dec 06 07:16:05 crc kubenswrapper[4958]: I1206 07:16:05.966850 4958 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/333ab9e6-feb4-4ebe-8bb6-75987c261085-openstack-config-secret\") on node \"crc\" DevicePath \"\""
Dec 06 07:16:05 crc kubenswrapper[4958]: I1206 07:16:05.966861 4958 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/333ab9e6-feb4-4ebe-8bb6-75987c261085-ssh-key\") on node \"crc\" DevicePath \"\""
Dec 06 07:16:05 crc kubenswrapper[4958]: I1206 07:16:05.966906 4958 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" "
Dec 06 07:16:05 crc kubenswrapper[4958]: I1206 07:16:05.966920 4958 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/333ab9e6-feb4-4ebe-8bb6-75987c261085-openstack-config\") on node \"crc\" DevicePath \"\""
Dec 06 07:16:05 crc kubenswrapper[4958]: I1206 07:16:05.966932 4958 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/333ab9e6-feb4-4ebe-8bb6-75987c261085-config-data\") on node \"crc\" DevicePath \"\""
Dec 06 07:16:05 crc kubenswrapper[4958]: I1206 07:16:05.966945 4958 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/333ab9e6-feb4-4ebe-8bb6-75987c261085-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\""
Dec 06 07:16:05 crc kubenswrapper[4958]: I1206 07:16:05.988808 4958 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc"
Dec 06 07:16:06 crc kubenswrapper[4958]: I1206 07:16:06.068410 4958 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\""
\"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Dec 06 07:16:06 crc kubenswrapper[4958]: I1206 07:16:06.427158 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"333ab9e6-feb4-4ebe-8bb6-75987c261085","Type":"ContainerDied","Data":"527472b02191a7093df6187167cdd2d830464ed514c03432da2814689ccafceb"} Dec 06 07:16:06 crc kubenswrapper[4958]: I1206 07:16:06.427206 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="527472b02191a7093df6187167cdd2d830464ed514c03432da2814689ccafceb" Dec 06 07:16:06 crc kubenswrapper[4958]: I1206 07:16:06.427225 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Dec 06 07:16:09 crc kubenswrapper[4958]: I1206 07:16:09.866354 4958 patch_prober.go:28] interesting pod/machine-config-daemon-5ktnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 07:16:09 crc kubenswrapper[4958]: I1206 07:16:09.866897 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 07:16:17 crc kubenswrapper[4958]: I1206 07:16:17.642588 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Dec 06 07:16:17 crc kubenswrapper[4958]: E1206 07:16:17.643782 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="333ab9e6-feb4-4ebe-8bb6-75987c261085" containerName="tempest-tests-tempest-tests-runner" Dec 06 07:16:17 crc kubenswrapper[4958]: I1206 07:16:17.643804 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="333ab9e6-feb4-4ebe-8bb6-75987c261085" containerName="tempest-tests-tempest-tests-runner" Dec 06 07:16:17 crc kubenswrapper[4958]: E1206 07:16:17.643862 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c99afc0e-ecb2-4940-aabf-9b4bb5993ab3" containerName="collect-profiles" Dec 06 07:16:17 crc kubenswrapper[4958]: I1206 07:16:17.643876 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="c99afc0e-ecb2-4940-aabf-9b4bb5993ab3" containerName="collect-profiles" Dec 06 07:16:17 crc kubenswrapper[4958]: I1206 07:16:17.644206 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="c99afc0e-ecb2-4940-aabf-9b4bb5993ab3" containerName="collect-profiles" Dec 06 07:16:17 crc kubenswrapper[4958]: I1206 07:16:17.644298 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="333ab9e6-feb4-4ebe-8bb6-75987c261085" containerName="tempest-tests-tempest-tests-runner" Dec 06 07:16:17 crc kubenswrapper[4958]: I1206 07:16:17.645524 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 06 07:16:17 crc kubenswrapper[4958]: I1206 07:16:17.648912 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-mmddf" Dec 06 07:16:17 crc kubenswrapper[4958]: I1206 07:16:17.658061 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Dec 06 07:16:17 crc kubenswrapper[4958]: I1206 07:16:17.702488 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lr7j7\" (UniqueName: \"kubernetes.io/projected/85bd5593-92db-4d12-b5eb-faef7436c97d-kube-api-access-lr7j7\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"85bd5593-92db-4d12-b5eb-faef7436c97d\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 06 07:16:17 crc kubenswrapper[4958]: I1206 07:16:17.702537 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"85bd5593-92db-4d12-b5eb-faef7436c97d\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 06 07:16:17 crc kubenswrapper[4958]: I1206 07:16:17.804952 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lr7j7\" (UniqueName: \"kubernetes.io/projected/85bd5593-92db-4d12-b5eb-faef7436c97d-kube-api-access-lr7j7\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"85bd5593-92db-4d12-b5eb-faef7436c97d\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 06 07:16:17 crc kubenswrapper[4958]: I1206 07:16:17.805373 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"85bd5593-92db-4d12-b5eb-faef7436c97d\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 06 07:16:17 crc kubenswrapper[4958]: I1206 07:16:17.805974 4958 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"85bd5593-92db-4d12-b5eb-faef7436c97d\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 06 07:16:17 crc kubenswrapper[4958]: I1206 07:16:17.833050 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lr7j7\" (UniqueName: \"kubernetes.io/projected/85bd5593-92db-4d12-b5eb-faef7436c97d-kube-api-access-lr7j7\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"85bd5593-92db-4d12-b5eb-faef7436c97d\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 06 07:16:17 crc kubenswrapper[4958]: I1206 07:16:17.843123 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"85bd5593-92db-4d12-b5eb-faef7436c97d\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Dec 06 07:16:17 crc 
Dec 06 07:16:18 crc kubenswrapper[4958]: I1206 07:16:18.428042 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"]
Dec 06 07:16:18 crc kubenswrapper[4958]: I1206 07:16:18.557884 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"85bd5593-92db-4d12-b5eb-faef7436c97d","Type":"ContainerStarted","Data":"dd14e0fa6948bc01752f9d627da676865ab7f8bbcff502e1caeb36d22625993f"}
Dec 06 07:16:20 crc kubenswrapper[4958]: I1206 07:16:20.577718 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"85bd5593-92db-4d12-b5eb-faef7436c97d","Type":"ContainerStarted","Data":"d97b0a513541d9931ec20b0d8d362fc075936bf8ca86daac2965808aa56087e7"}
Dec 06 07:16:20 crc kubenswrapper[4958]: I1206 07:16:20.601124 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.733133827 podStartE2EDuration="3.601097092s" podCreationTimestamp="2025-12-06 07:16:17 +0000 UTC" firstStartedPulling="2025-12-06 07:16:18.437203314 +0000 UTC m=+6488.970974077" lastFinishedPulling="2025-12-06 07:16:19.305166579 +0000 UTC m=+6489.838937342" observedRunningTime="2025-12-06 07:16:20.594765521 +0000 UTC m=+6491.128536384" watchObservedRunningTime="2025-12-06 07:16:20.601097092 +0000 UTC m=+6491.134867895"
Dec 06 07:16:39 crc kubenswrapper[4958]: I1206 07:16:39.866016 4958 patch_prober.go:28] interesting pod/machine-config-daemon-5ktnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 06 07:16:39 crc kubenswrapper[4958]: I1206 07:16:39.866736 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 06 07:16:57 crc kubenswrapper[4958]: I1206 07:16:57.725811 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-kr6bn/must-gather-z5vxm"]
Dec 06 07:16:57 crc kubenswrapper[4958]: I1206 07:16:57.728167 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kr6bn/must-gather-z5vxm"
Dec 06 07:16:57 crc kubenswrapper[4958]: I1206 07:16:57.730741 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-kr6bn"/"openshift-service-ca.crt"
Dec 06 07:16:57 crc kubenswrapper[4958]: I1206 07:16:57.730790 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-kr6bn"/"kube-root-ca.crt"
Dec 06 07:16:57 crc kubenswrapper[4958]: I1206 07:16:57.730965 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-kr6bn"/"default-dockercfg-58zbq"
Dec 06 07:16:57 crc kubenswrapper[4958]: I1206 07:16:57.748435 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-kr6bn/must-gather-z5vxm"]
Dec 06 07:16:57 crc kubenswrapper[4958]: I1206 07:16:57.842722 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/cf286e78-7d9f-46c2-89ce-fec5c93c2eb5-must-gather-output\") pod \"must-gather-z5vxm\" (UID: \"cf286e78-7d9f-46c2-89ce-fec5c93c2eb5\") " pod="openshift-must-gather-kr6bn/must-gather-z5vxm"
Dec 06 07:16:57 crc kubenswrapper[4958]: I1206 07:16:57.842855 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkthk\" (UniqueName: \"kubernetes.io/projected/cf286e78-7d9f-46c2-89ce-fec5c93c2eb5-kube-api-access-pkthk\") pod \"must-gather-z5vxm\" (UID: \"cf286e78-7d9f-46c2-89ce-fec5c93c2eb5\") " pod="openshift-must-gather-kr6bn/must-gather-z5vxm"
Dec 06 07:16:57 crc kubenswrapper[4958]: I1206 07:16:57.944651 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkthk\" (UniqueName: \"kubernetes.io/projected/cf286e78-7d9f-46c2-89ce-fec5c93c2eb5-kube-api-access-pkthk\") pod \"must-gather-z5vxm\" (UID: \"cf286e78-7d9f-46c2-89ce-fec5c93c2eb5\") " pod="openshift-must-gather-kr6bn/must-gather-z5vxm"
Dec 06 07:16:57 crc kubenswrapper[4958]: I1206 07:16:57.944848 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/cf286e78-7d9f-46c2-89ce-fec5c93c2eb5-must-gather-output\") pod \"must-gather-z5vxm\" (UID: \"cf286e78-7d9f-46c2-89ce-fec5c93c2eb5\") " pod="openshift-must-gather-kr6bn/must-gather-z5vxm"
Dec 06 07:16:57 crc kubenswrapper[4958]: I1206 07:16:57.945382 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/cf286e78-7d9f-46c2-89ce-fec5c93c2eb5-must-gather-output\") pod \"must-gather-z5vxm\" (UID: \"cf286e78-7d9f-46c2-89ce-fec5c93c2eb5\") " pod="openshift-must-gather-kr6bn/must-gather-z5vxm"
Dec 06 07:16:57 crc kubenswrapper[4958]: I1206 07:16:57.965182 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkthk\" (UniqueName: \"kubernetes.io/projected/cf286e78-7d9f-46c2-89ce-fec5c93c2eb5-kube-api-access-pkthk\") pod \"must-gather-z5vxm\" (UID: \"cf286e78-7d9f-46c2-89ce-fec5c93c2eb5\") " pod="openshift-must-gather-kr6bn/must-gather-z5vxm"
Dec 06 07:16:58 crc kubenswrapper[4958]: I1206 07:16:58.060271 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kr6bn/must-gather-z5vxm"
Dec 06 07:16:58 crc kubenswrapper[4958]: I1206 07:16:58.604204 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-kr6bn/must-gather-z5vxm"]
Dec 06 07:16:58 crc kubenswrapper[4958]: I1206 07:16:58.933986 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kr6bn/must-gather-z5vxm" event={"ID":"cf286e78-7d9f-46c2-89ce-fec5c93c2eb5","Type":"ContainerStarted","Data":"710587fb00e65eb89778aa23004b87239e26dc7973dab16a94f1d80cbd05f431"}
Dec 06 07:16:59 crc kubenswrapper[4958]: I1206 07:16:59.540482 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-hlf86"]
Dec 06 07:16:59 crc kubenswrapper[4958]: I1206 07:16:59.546318 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hlf86"
Dec 06 07:16:59 crc kubenswrapper[4958]: I1206 07:16:59.568787 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hlf86"]
Dec 06 07:16:59 crc kubenswrapper[4958]: I1206 07:16:59.677001 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24bng\" (UniqueName: \"kubernetes.io/projected/143cb57a-c3cd-4eed-b883-3e00e0f78061-kube-api-access-24bng\") pod \"redhat-marketplace-hlf86\" (UID: \"143cb57a-c3cd-4eed-b883-3e00e0f78061\") " pod="openshift-marketplace/redhat-marketplace-hlf86"
Dec 06 07:16:59 crc kubenswrapper[4958]: I1206 07:16:59.677078 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/143cb57a-c3cd-4eed-b883-3e00e0f78061-utilities\") pod \"redhat-marketplace-hlf86\" (UID: \"143cb57a-c3cd-4eed-b883-3e00e0f78061\") " pod="openshift-marketplace/redhat-marketplace-hlf86"
Dec 06 07:16:59 crc kubenswrapper[4958]: I1206 07:16:59.677182 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/143cb57a-c3cd-4eed-b883-3e00e0f78061-catalog-content\") pod \"redhat-marketplace-hlf86\" (UID: \"143cb57a-c3cd-4eed-b883-3e00e0f78061\") " pod="openshift-marketplace/redhat-marketplace-hlf86"
Dec 06 07:16:59 crc kubenswrapper[4958]: I1206 07:16:59.779123 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24bng\" (UniqueName: \"kubernetes.io/projected/143cb57a-c3cd-4eed-b883-3e00e0f78061-kube-api-access-24bng\") pod \"redhat-marketplace-hlf86\" (UID: \"143cb57a-c3cd-4eed-b883-3e00e0f78061\") " pod="openshift-marketplace/redhat-marketplace-hlf86"
Dec 06 07:16:59 crc kubenswrapper[4958]: I1206 07:16:59.779166 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/143cb57a-c3cd-4eed-b883-3e00e0f78061-utilities\") pod \"redhat-marketplace-hlf86\" (UID: \"143cb57a-c3cd-4eed-b883-3e00e0f78061\") " pod="openshift-marketplace/redhat-marketplace-hlf86"
Dec 06 07:16:59 crc kubenswrapper[4958]: I1206 07:16:59.779222 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/143cb57a-c3cd-4eed-b883-3e00e0f78061-catalog-content\") pod \"redhat-marketplace-hlf86\" (UID: \"143cb57a-c3cd-4eed-b883-3e00e0f78061\") " pod="openshift-marketplace/redhat-marketplace-hlf86"
Dec 06 07:16:59 crc kubenswrapper[4958]: I1206 07:16:59.779724 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/143cb57a-c3cd-4eed-b883-3e00e0f78061-catalog-content\") pod \"redhat-marketplace-hlf86\" (UID: \"143cb57a-c3cd-4eed-b883-3e00e0f78061\") " pod="openshift-marketplace/redhat-marketplace-hlf86"
Dec 06 07:16:59 crc kubenswrapper[4958]: I1206 07:16:59.779754 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/143cb57a-c3cd-4eed-b883-3e00e0f78061-utilities\") pod \"redhat-marketplace-hlf86\" (UID: \"143cb57a-c3cd-4eed-b883-3e00e0f78061\") " pod="openshift-marketplace/redhat-marketplace-hlf86"
Dec 06 07:16:59 crc kubenswrapper[4958]: I1206 07:16:59.802794 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24bng\" (UniqueName: \"kubernetes.io/projected/143cb57a-c3cd-4eed-b883-3e00e0f78061-kube-api-access-24bng\") pod \"redhat-marketplace-hlf86\" (UID: \"143cb57a-c3cd-4eed-b883-3e00e0f78061\") " pod="openshift-marketplace/redhat-marketplace-hlf86"
Dec 06 07:16:59 crc kubenswrapper[4958]: I1206 07:16:59.890717 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hlf86"
Dec 06 07:17:00 crc kubenswrapper[4958]: I1206 07:17:00.254851 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hlf86"]
Dec 06 07:17:00 crc kubenswrapper[4958]: I1206 07:17:00.959855 4958 generic.go:334] "Generic (PLEG): container finished" podID="143cb57a-c3cd-4eed-b883-3e00e0f78061" containerID="8e70a269d7a2b42efd31ab2c3e1e50fadabebbec920fda256f52349b642c1f1b" exitCode=0
Dec 06 07:17:00 crc kubenswrapper[4958]: I1206 07:17:00.960160 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hlf86" event={"ID":"143cb57a-c3cd-4eed-b883-3e00e0f78061","Type":"ContainerDied","Data":"8e70a269d7a2b42efd31ab2c3e1e50fadabebbec920fda256f52349b642c1f1b"}
Dec 06 07:17:00 crc kubenswrapper[4958]: I1206 07:17:00.960186 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hlf86" event={"ID":"143cb57a-c3cd-4eed-b883-3e00e0f78061","Type":"ContainerStarted","Data":"99ad2ba16cce1e891567e21e1f3f689e8499fb859ae358a67297a02714bc42d6"}
Dec 06 07:17:07 crc kubenswrapper[4958]: I1206 07:17:07.033761 4958 generic.go:334] "Generic (PLEG): container finished" podID="143cb57a-c3cd-4eed-b883-3e00e0f78061" containerID="8ba8354e03f63fdc2af7371144e6b80643d9242de7a211b5e55fdd8f0bbb4d85" exitCode=0
Dec 06 07:17:07 crc kubenswrapper[4958]: I1206 07:17:07.033926 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hlf86" event={"ID":"143cb57a-c3cd-4eed-b883-3e00e0f78061","Type":"ContainerDied","Data":"8ba8354e03f63fdc2af7371144e6b80643d9242de7a211b5e55fdd8f0bbb4d85"}
Dec 06 07:17:07 crc kubenswrapper[4958]: I1206 07:17:07.037067 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kr6bn/must-gather-z5vxm" event={"ID":"cf286e78-7d9f-46c2-89ce-fec5c93c2eb5","Type":"ContainerStarted","Data":"12350da832a89c70fe4c8672766fe02e4bbace30cbd70a8776c5ef20cd64e1ac"}
Dec 06 07:17:07 crc kubenswrapper[4958]: I1206 07:17:07.037110 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kr6bn/must-gather-z5vxm" event={"ID":"cf286e78-7d9f-46c2-89ce-fec5c93c2eb5","Type":"ContainerStarted","Data":"291e5c41909e2a44311b94a4d4accee098ec04f5b9434857aadc5d44f325a439"}
event={"ID":"cf286e78-7d9f-46c2-89ce-fec5c93c2eb5","Type":"ContainerStarted","Data":"291e5c41909e2a44311b94a4d4accee098ec04f5b9434857aadc5d44f325a439"} Dec 06 07:17:07 crc kubenswrapper[4958]: I1206 07:17:07.071639 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-kr6bn/must-gather-z5vxm" podStartSLOduration=2.221162003 podStartE2EDuration="10.071587278s" podCreationTimestamp="2025-12-06 07:16:57 +0000 UTC" firstStartedPulling="2025-12-06 07:16:58.569008358 +0000 UTC m=+6529.102779121" lastFinishedPulling="2025-12-06 07:17:06.419433633 +0000 UTC m=+6536.953204396" observedRunningTime="2025-12-06 07:17:07.063035347 +0000 UTC m=+6537.596806110" watchObservedRunningTime="2025-12-06 07:17:07.071587278 +0000 UTC m=+6537.605358041" Dec 06 07:17:08 crc kubenswrapper[4958]: I1206 07:17:08.050941 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hlf86" event={"ID":"143cb57a-c3cd-4eed-b883-3e00e0f78061","Type":"ContainerStarted","Data":"a4534ff9e18753d3b2cc3ff316562464970ccdecee433600d9c724251879b04f"} Dec 06 07:17:08 crc kubenswrapper[4958]: I1206 07:17:08.080319 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hlf86" podStartSLOduration=2.5774903399999998 podStartE2EDuration="9.080298714s" podCreationTimestamp="2025-12-06 07:16:59 +0000 UTC" firstStartedPulling="2025-12-06 07:17:00.962813547 +0000 UTC m=+6531.496584310" lastFinishedPulling="2025-12-06 07:17:07.465621921 +0000 UTC m=+6537.999392684" observedRunningTime="2025-12-06 07:17:08.068698201 +0000 UTC m=+6538.602468974" watchObservedRunningTime="2025-12-06 07:17:08.080298714 +0000 UTC m=+6538.614069477" Dec 06 07:17:09 crc kubenswrapper[4958]: I1206 07:17:09.866289 4958 patch_prober.go:28] interesting pod/machine-config-daemon-5ktnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 07:17:09 crc kubenswrapper[4958]: I1206 07:17:09.866367 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 07:17:09 crc kubenswrapper[4958]: I1206 07:17:09.866414 4958 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" Dec 06 07:17:09 crc kubenswrapper[4958]: I1206 07:17:09.867208 4958 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d591a665b5a9d5fddaa24bd18d00c05f3be886a2f587ef12738fc9c016807464"} pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 06 07:17:09 crc kubenswrapper[4958]: I1206 07:17:09.867263 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" containerName="machine-config-daemon" containerID="cri-o://d591a665b5a9d5fddaa24bd18d00c05f3be886a2f587ef12738fc9c016807464" gracePeriod=600 Dec 06 07:17:09 crc 
kubenswrapper[4958]: I1206 07:17:09.893317 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hlf86" Dec 06 07:17:09 crc kubenswrapper[4958]: I1206 07:17:09.893373 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hlf86" Dec 06 07:17:09 crc kubenswrapper[4958]: I1206 07:17:09.949737 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hlf86" Dec 06 07:17:10 crc kubenswrapper[4958]: I1206 07:17:10.075037 4958 generic.go:334] "Generic (PLEG): container finished" podID="c13528c0-da5d-4d55-9155-2c29c33edfc4" containerID="d591a665b5a9d5fddaa24bd18d00c05f3be886a2f587ef12738fc9c016807464" exitCode=0 Dec 06 07:17:10 crc kubenswrapper[4958]: I1206 07:17:10.075143 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" event={"ID":"c13528c0-da5d-4d55-9155-2c29c33edfc4","Type":"ContainerDied","Data":"d591a665b5a9d5fddaa24bd18d00c05f3be886a2f587ef12738fc9c016807464"} Dec 06 07:17:10 crc kubenswrapper[4958]: I1206 07:17:10.075232 4958 scope.go:117] "RemoveContainer" containerID="f965fcfc735532b7de2edc8624e21b2280b5d3a26ef2aa2e9a7b9c25d353ca61" Dec 06 07:17:12 crc kubenswrapper[4958]: I1206 07:17:12.102483 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" event={"ID":"c13528c0-da5d-4d55-9155-2c29c33edfc4","Type":"ContainerStarted","Data":"ee2d7350ad52a52898da3b0522d607a28c91abbc378871215df544c4e674181f"} Dec 06 07:17:12 crc kubenswrapper[4958]: I1206 07:17:12.193981 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-kr6bn/crc-debug-ss8wg"] Dec 06 07:17:12 crc kubenswrapper[4958]: I1206 07:17:12.195620 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-kr6bn/crc-debug-ss8wg" Dec 06 07:17:12 crc kubenswrapper[4958]: I1206 07:17:12.257595 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7a019f37-9530-45f4-9b9f-d6cbed107bee-host\") pod \"crc-debug-ss8wg\" (UID: \"7a019f37-9530-45f4-9b9f-d6cbed107bee\") " pod="openshift-must-gather-kr6bn/crc-debug-ss8wg" Dec 06 07:17:12 crc kubenswrapper[4958]: I1206 07:17:12.257658 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4bk7\" (UniqueName: \"kubernetes.io/projected/7a019f37-9530-45f4-9b9f-d6cbed107bee-kube-api-access-t4bk7\") pod \"crc-debug-ss8wg\" (UID: \"7a019f37-9530-45f4-9b9f-d6cbed107bee\") " pod="openshift-must-gather-kr6bn/crc-debug-ss8wg" Dec 06 07:17:12 crc kubenswrapper[4958]: I1206 07:17:12.359431 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7a019f37-9530-45f4-9b9f-d6cbed107bee-host\") pod \"crc-debug-ss8wg\" (UID: \"7a019f37-9530-45f4-9b9f-d6cbed107bee\") " pod="openshift-must-gather-kr6bn/crc-debug-ss8wg" Dec 06 07:17:12 crc kubenswrapper[4958]: I1206 07:17:12.359526 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4bk7\" (UniqueName: \"kubernetes.io/projected/7a019f37-9530-45f4-9b9f-d6cbed107bee-kube-api-access-t4bk7\") pod \"crc-debug-ss8wg\" (UID: \"7a019f37-9530-45f4-9b9f-d6cbed107bee\") " pod="openshift-must-gather-kr6bn/crc-debug-ss8wg" Dec 06 07:17:12 crc kubenswrapper[4958]: I1206 07:17:12.359580 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7a019f37-9530-45f4-9b9f-d6cbed107bee-host\") pod \"crc-debug-ss8wg\" (UID: \"7a019f37-9530-45f4-9b9f-d6cbed107bee\") " pod="openshift-must-gather-kr6bn/crc-debug-ss8wg" Dec 06 07:17:12 crc kubenswrapper[4958]: I1206 07:17:12.380611 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4bk7\" (UniqueName: \"kubernetes.io/projected/7a019f37-9530-45f4-9b9f-d6cbed107bee-kube-api-access-t4bk7\") pod \"crc-debug-ss8wg\" (UID: \"7a019f37-9530-45f4-9b9f-d6cbed107bee\") " pod="openshift-must-gather-kr6bn/crc-debug-ss8wg" Dec 06 07:17:12 crc kubenswrapper[4958]: I1206 07:17:12.511351 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-kr6bn/crc-debug-ss8wg" Dec 06 07:17:12 crc kubenswrapper[4958]: W1206 07:17:12.552080 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7a019f37_9530_45f4_9b9f_d6cbed107bee.slice/crio-f6f144569208753332601424581b296caa548303c07b4acd4db812be3c163087 WatchSource:0}: Error finding container f6f144569208753332601424581b296caa548303c07b4acd4db812be3c163087: Status 404 returned error can't find the container with id f6f144569208753332601424581b296caa548303c07b4acd4db812be3c163087 Dec 06 07:17:13 crc kubenswrapper[4958]: I1206 07:17:13.113609 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kr6bn/crc-debug-ss8wg" event={"ID":"7a019f37-9530-45f4-9b9f-d6cbed107bee","Type":"ContainerStarted","Data":"f6f144569208753332601424581b296caa548303c07b4acd4db812be3c163087"} Dec 06 07:17:19 crc kubenswrapper[4958]: I1206 07:17:19.948221 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hlf86" Dec 06 07:17:20 crc kubenswrapper[4958]: I1206 07:17:20.011111 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hlf86"] Dec 06 07:17:20 crc kubenswrapper[4958]: I1206 07:17:20.174941 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hlf86" podUID="143cb57a-c3cd-4eed-b883-3e00e0f78061" containerName="registry-server" containerID="cri-o://a4534ff9e18753d3b2cc3ff316562464970ccdecee433600d9c724251879b04f" gracePeriod=2 Dec 06 07:17:21 crc kubenswrapper[4958]: I1206 07:17:21.192024 4958 generic.go:334] "Generic (PLEG): container finished" podID="143cb57a-c3cd-4eed-b883-3e00e0f78061" containerID="a4534ff9e18753d3b2cc3ff316562464970ccdecee433600d9c724251879b04f" exitCode=0 Dec 06 07:17:21 crc kubenswrapper[4958]: I1206 07:17:21.192074 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hlf86" event={"ID":"143cb57a-c3cd-4eed-b883-3e00e0f78061","Type":"ContainerDied","Data":"a4534ff9e18753d3b2cc3ff316562464970ccdecee433600d9c724251879b04f"} Dec 06 07:17:29 crc kubenswrapper[4958]: E1206 07:17:29.892503 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a4534ff9e18753d3b2cc3ff316562464970ccdecee433600d9c724251879b04f is running failed: container process not found" containerID="a4534ff9e18753d3b2cc3ff316562464970ccdecee433600d9c724251879b04f" cmd=["grpc_health_probe","-addr=:50051"] Dec 06 07:17:29 crc kubenswrapper[4958]: E1206 07:17:29.893423 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a4534ff9e18753d3b2cc3ff316562464970ccdecee433600d9c724251879b04f is running failed: container process not found" containerID="a4534ff9e18753d3b2cc3ff316562464970ccdecee433600d9c724251879b04f" cmd=["grpc_health_probe","-addr=:50051"] Dec 06 07:17:29 crc kubenswrapper[4958]: E1206 07:17:29.893732 4958 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a4534ff9e18753d3b2cc3ff316562464970ccdecee433600d9c724251879b04f is running failed: container process not found" containerID="a4534ff9e18753d3b2cc3ff316562464970ccdecee433600d9c724251879b04f" 
cmd=["grpc_health_probe","-addr=:50051"] Dec 06 07:17:29 crc kubenswrapper[4958]: E1206 07:17:29.893761 4958 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a4534ff9e18753d3b2cc3ff316562464970ccdecee433600d9c724251879b04f is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-hlf86" podUID="143cb57a-c3cd-4eed-b883-3e00e0f78061" containerName="registry-server" Dec 06 07:17:30 crc kubenswrapper[4958]: E1206 07:17:30.522916 4958 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ab858aed98e4fe57e6b144da8e90ad5d6698bb4cc5521206f5c05809f0f9296" Dec 06 07:17:30 crc kubenswrapper[4958]: E1206 07:17:30.523345 4958 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:container-00,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ab858aed98e4fe57e6b144da8e90ad5d6698bb4cc5521206f5c05809f0f9296,Command:[chroot /host bash -c echo 'TOOLBOX_NAME=toolbox-osp' > /root/.toolboxrc ; rm -rf \"/var/tmp/sos-osp\" && mkdir -p \"/var/tmp/sos-osp\" && sudo podman rm --force toolbox-osp; sudo --preserve-env podman pull --authfile /var/lib/kubelet/config.json registry.redhat.io/rhel9/support-tools && toolbox sos report --batch --all-logs --only-plugins block,cifs,crio,devicemapper,devices,firewall_tables,firewalld,iscsi,lvm2,memory,multipath,nfs,nis,nvme,podman,process,processor,selinux,scsi,udev,logs,crypto --tmp-dir=\"/var/tmp/sos-osp\" && if [[ \"$(ls /var/log/pods/*/{*.log.*,*/*.log.*} 2>/dev/null)\" != '' ]]; then tar --ignore-failed-read --warning=no-file-changed -cJf \"/var/tmp/sos-osp/podlogs.tar.xz\" --transform 's,^,podlogs/,' /var/log/pods/*/{*.log.*,*/*.log.*} || true; fi],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:TMOUT,Value:900,ValueFrom:nil,},EnvVar{Name:HOST,Value:/host,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host,ReadOnly:false,MountPath:/host,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t4bk7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod crc-debug-ss8wg_openshift-must-gather-kr6bn(7a019f37-9530-45f4-9b9f-d6cbed107bee): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 06 07:17:30 crc kubenswrapper[4958]: E1206 07:17:30.524561 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"container-00\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openshift-must-gather-kr6bn/crc-debug-ss8wg" podUID="7a019f37-9530-45f4-9b9f-d6cbed107bee" Dec 06 07:17:30 crc kubenswrapper[4958]: I1206 07:17:30.819066 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hlf86" Dec 06 07:17:30 crc kubenswrapper[4958]: I1206 07:17:30.962281 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24bng\" (UniqueName: \"kubernetes.io/projected/143cb57a-c3cd-4eed-b883-3e00e0f78061-kube-api-access-24bng\") pod \"143cb57a-c3cd-4eed-b883-3e00e0f78061\" (UID: \"143cb57a-c3cd-4eed-b883-3e00e0f78061\") " Dec 06 07:17:30 crc kubenswrapper[4958]: I1206 07:17:30.962415 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/143cb57a-c3cd-4eed-b883-3e00e0f78061-catalog-content\") pod \"143cb57a-c3cd-4eed-b883-3e00e0f78061\" (UID: \"143cb57a-c3cd-4eed-b883-3e00e0f78061\") " Dec 06 07:17:30 crc kubenswrapper[4958]: I1206 07:17:30.962556 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/143cb57a-c3cd-4eed-b883-3e00e0f78061-utilities\") pod \"143cb57a-c3cd-4eed-b883-3e00e0f78061\" (UID: \"143cb57a-c3cd-4eed-b883-3e00e0f78061\") " Dec 06 07:17:30 crc kubenswrapper[4958]: I1206 07:17:30.963120 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/143cb57a-c3cd-4eed-b883-3e00e0f78061-utilities" (OuterVolumeSpecName: "utilities") pod "143cb57a-c3cd-4eed-b883-3e00e0f78061" (UID: "143cb57a-c3cd-4eed-b883-3e00e0f78061"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:17:30 crc kubenswrapper[4958]: I1206 07:17:30.967989 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/143cb57a-c3cd-4eed-b883-3e00e0f78061-kube-api-access-24bng" (OuterVolumeSpecName: "kube-api-access-24bng") pod "143cb57a-c3cd-4eed-b883-3e00e0f78061" (UID: "143cb57a-c3cd-4eed-b883-3e00e0f78061"). InnerVolumeSpecName "kube-api-access-24bng". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:17:30 crc kubenswrapper[4958]: I1206 07:17:30.984756 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/143cb57a-c3cd-4eed-b883-3e00e0f78061-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "143cb57a-c3cd-4eed-b883-3e00e0f78061" (UID: "143cb57a-c3cd-4eed-b883-3e00e0f78061"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:17:31 crc kubenswrapper[4958]: I1206 07:17:31.064900 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-24bng\" (UniqueName: \"kubernetes.io/projected/143cb57a-c3cd-4eed-b883-3e00e0f78061-kube-api-access-24bng\") on node \"crc\" DevicePath \"\"" Dec 06 07:17:31 crc kubenswrapper[4958]: I1206 07:17:31.064936 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/143cb57a-c3cd-4eed-b883-3e00e0f78061-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 07:17:31 crc kubenswrapper[4958]: I1206 07:17:31.064945 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/143cb57a-c3cd-4eed-b883-3e00e0f78061-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 07:17:31 crc kubenswrapper[4958]: I1206 07:17:31.283415 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hlf86" event={"ID":"143cb57a-c3cd-4eed-b883-3e00e0f78061","Type":"ContainerDied","Data":"99ad2ba16cce1e891567e21e1f3f689e8499fb859ae358a67297a02714bc42d6"} Dec 06 07:17:31 crc kubenswrapper[4958]: I1206 07:17:31.283436 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hlf86" Dec 06 07:17:31 crc kubenswrapper[4958]: I1206 07:17:31.283507 4958 scope.go:117] "RemoveContainer" containerID="a4534ff9e18753d3b2cc3ff316562464970ccdecee433600d9c724251879b04f" Dec 06 07:17:31 crc kubenswrapper[4958]: E1206 07:17:31.287402 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"container-00\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ab858aed98e4fe57e6b144da8e90ad5d6698bb4cc5521206f5c05809f0f9296\\\"\"" pod="openshift-must-gather-kr6bn/crc-debug-ss8wg" podUID="7a019f37-9530-45f4-9b9f-d6cbed107bee" Dec 06 07:17:31 crc kubenswrapper[4958]: I1206 07:17:31.325279 4958 scope.go:117] "RemoveContainer" containerID="8ba8354e03f63fdc2af7371144e6b80643d9242de7a211b5e55fdd8f0bbb4d85" Dec 06 07:17:31 crc kubenswrapper[4958]: I1206 07:17:31.345072 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hlf86"] Dec 06 07:17:31 crc kubenswrapper[4958]: I1206 07:17:31.355084 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hlf86"] Dec 06 07:17:31 crc kubenswrapper[4958]: I1206 07:17:31.355261 4958 scope.go:117] "RemoveContainer" containerID="8e70a269d7a2b42efd31ab2c3e1e50fadabebbec920fda256f52349b642c1f1b" Dec 06 07:17:31 crc kubenswrapper[4958]: I1206 07:17:31.775055 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="143cb57a-c3cd-4eed-b883-3e00e0f78061" path="/var/lib/kubelet/pods/143cb57a-c3cd-4eed-b883-3e00e0f78061/volumes" Dec 06 07:17:45 crc kubenswrapper[4958]: I1206 07:17:45.764773 4958 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 06 07:17:46 crc kubenswrapper[4958]: I1206 07:17:46.419283 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kr6bn/crc-debug-ss8wg" event={"ID":"7a019f37-9530-45f4-9b9f-d6cbed107bee","Type":"ContainerStarted","Data":"a46f2180a7867ba6a7c3e3584e09fc53cc47f8379d21a3a907e6424604eb9113"} Dec 06 07:17:46 crc kubenswrapper[4958]: I1206 07:17:46.441393 4958 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-must-gather-kr6bn/crc-debug-ss8wg" podStartSLOduration=0.816958663 podStartE2EDuration="34.441373175s" podCreationTimestamp="2025-12-06 07:17:12 +0000 UTC" firstStartedPulling="2025-12-06 07:17:12.554521695 +0000 UTC m=+6543.088292478" lastFinishedPulling="2025-12-06 07:17:46.178936227 +0000 UTC m=+6576.712706990" observedRunningTime="2025-12-06 07:17:46.434774428 +0000 UTC m=+6576.968545191" watchObservedRunningTime="2025-12-06 07:17:46.441373175 +0000 UTC m=+6576.975143928" Dec 06 07:18:19 crc kubenswrapper[4958]: I1206 07:18:19.729888 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-gq255"] Dec 06 07:18:19 crc kubenswrapper[4958]: E1206 07:18:19.731038 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="143cb57a-c3cd-4eed-b883-3e00e0f78061" containerName="extract-content" Dec 06 07:18:19 crc kubenswrapper[4958]: I1206 07:18:19.731056 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="143cb57a-c3cd-4eed-b883-3e00e0f78061" containerName="extract-content" Dec 06 07:18:19 crc kubenswrapper[4958]: E1206 07:18:19.731075 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="143cb57a-c3cd-4eed-b883-3e00e0f78061" containerName="registry-server" Dec 06 07:18:19 crc kubenswrapper[4958]: I1206 07:18:19.731082 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="143cb57a-c3cd-4eed-b883-3e00e0f78061" containerName="registry-server" Dec 06 07:18:19 crc kubenswrapper[4958]: E1206 07:18:19.731119 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="143cb57a-c3cd-4eed-b883-3e00e0f78061" containerName="extract-utilities" Dec 06 07:18:19 crc kubenswrapper[4958]: I1206 07:18:19.731128 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="143cb57a-c3cd-4eed-b883-3e00e0f78061" containerName="extract-utilities" Dec 06 07:18:19 crc kubenswrapper[4958]: I1206 07:18:19.731389 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="143cb57a-c3cd-4eed-b883-3e00e0f78061" containerName="registry-server" Dec 06 07:18:19 crc kubenswrapper[4958]: I1206 07:18:19.733279 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gq255" Dec 06 07:18:19 crc kubenswrapper[4958]: I1206 07:18:19.751788 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gq255"] Dec 06 07:18:19 crc kubenswrapper[4958]: I1206 07:18:19.847285 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26e1b583-ead7-4975-8c58-50af0cc711ca-utilities\") pod \"certified-operators-gq255\" (UID: \"26e1b583-ead7-4975-8c58-50af0cc711ca\") " pod="openshift-marketplace/certified-operators-gq255" Dec 06 07:18:19 crc kubenswrapper[4958]: I1206 07:18:19.847375 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26e1b583-ead7-4975-8c58-50af0cc711ca-catalog-content\") pod \"certified-operators-gq255\" (UID: \"26e1b583-ead7-4975-8c58-50af0cc711ca\") " pod="openshift-marketplace/certified-operators-gq255" Dec 06 07:18:19 crc kubenswrapper[4958]: I1206 07:18:19.847592 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jgg7\" (UniqueName: \"kubernetes.io/projected/26e1b583-ead7-4975-8c58-50af0cc711ca-kube-api-access-5jgg7\") pod \"certified-operators-gq255\" (UID: \"26e1b583-ead7-4975-8c58-50af0cc711ca\") " pod="openshift-marketplace/certified-operators-gq255" Dec 06 07:18:19 crc kubenswrapper[4958]: I1206 07:18:19.949875 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jgg7\" (UniqueName: \"kubernetes.io/projected/26e1b583-ead7-4975-8c58-50af0cc711ca-kube-api-access-5jgg7\") pod \"certified-operators-gq255\" (UID: \"26e1b583-ead7-4975-8c58-50af0cc711ca\") " pod="openshift-marketplace/certified-operators-gq255" Dec 06 07:18:19 crc kubenswrapper[4958]: I1206 07:18:19.950392 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26e1b583-ead7-4975-8c58-50af0cc711ca-utilities\") pod \"certified-operators-gq255\" (UID: \"26e1b583-ead7-4975-8c58-50af0cc711ca\") " pod="openshift-marketplace/certified-operators-gq255" Dec 06 07:18:19 crc kubenswrapper[4958]: I1206 07:18:19.950456 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26e1b583-ead7-4975-8c58-50af0cc711ca-catalog-content\") pod \"certified-operators-gq255\" (UID: \"26e1b583-ead7-4975-8c58-50af0cc711ca\") " pod="openshift-marketplace/certified-operators-gq255" Dec 06 07:18:19 crc kubenswrapper[4958]: I1206 07:18:19.951015 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26e1b583-ead7-4975-8c58-50af0cc711ca-catalog-content\") pod \"certified-operators-gq255\" (UID: \"26e1b583-ead7-4975-8c58-50af0cc711ca\") " pod="openshift-marketplace/certified-operators-gq255" Dec 06 07:18:19 crc kubenswrapper[4958]: I1206 07:18:19.951029 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26e1b583-ead7-4975-8c58-50af0cc711ca-utilities\") pod \"certified-operators-gq255\" (UID: \"26e1b583-ead7-4975-8c58-50af0cc711ca\") " pod="openshift-marketplace/certified-operators-gq255" Dec 06 07:18:19 crc kubenswrapper[4958]: I1206 07:18:19.979876 4958 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-5jgg7\" (UniqueName: \"kubernetes.io/projected/26e1b583-ead7-4975-8c58-50af0cc711ca-kube-api-access-5jgg7\") pod \"certified-operators-gq255\" (UID: \"26e1b583-ead7-4975-8c58-50af0cc711ca\") " pod="openshift-marketplace/certified-operators-gq255" Dec 06 07:18:20 crc kubenswrapper[4958]: I1206 07:18:20.056879 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gq255" Dec 06 07:18:20 crc kubenswrapper[4958]: I1206 07:18:20.629251 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gq255"] Dec 06 07:18:20 crc kubenswrapper[4958]: I1206 07:18:20.787733 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gq255" event={"ID":"26e1b583-ead7-4975-8c58-50af0cc711ca","Type":"ContainerStarted","Data":"b6383b569d730b5171f14a32dad1127ca83e4f282d4dc5c4a5f80682076ec1d0"} Dec 06 07:18:21 crc kubenswrapper[4958]: I1206 07:18:21.799037 4958 generic.go:334] "Generic (PLEG): container finished" podID="26e1b583-ead7-4975-8c58-50af0cc711ca" containerID="e8add7f20f2b0613f5f6f387296ecf89b405178edb620c20ddcabc23f101c6a7" exitCode=0 Dec 06 07:18:21 crc kubenswrapper[4958]: I1206 07:18:21.799151 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gq255" event={"ID":"26e1b583-ead7-4975-8c58-50af0cc711ca","Type":"ContainerDied","Data":"e8add7f20f2b0613f5f6f387296ecf89b405178edb620c20ddcabc23f101c6a7"} Dec 06 07:18:22 crc kubenswrapper[4958]: I1206 07:18:22.845719 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gq255" event={"ID":"26e1b583-ead7-4975-8c58-50af0cc711ca","Type":"ContainerStarted","Data":"0a3a99f467b84525b21be40c8614254a06fb30a5af55f98b6f7f019589dc91de"} Dec 06 07:18:23 crc kubenswrapper[4958]: I1206 07:18:23.860547 4958 generic.go:334] "Generic (PLEG): container finished" podID="26e1b583-ead7-4975-8c58-50af0cc711ca" containerID="0a3a99f467b84525b21be40c8614254a06fb30a5af55f98b6f7f019589dc91de" exitCode=0 Dec 06 07:18:23 crc kubenswrapper[4958]: I1206 07:18:23.860855 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gq255" event={"ID":"26e1b583-ead7-4975-8c58-50af0cc711ca","Type":"ContainerDied","Data":"0a3a99f467b84525b21be40c8614254a06fb30a5af55f98b6f7f019589dc91de"} Dec 06 07:18:24 crc kubenswrapper[4958]: I1206 07:18:24.878610 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gq255" event={"ID":"26e1b583-ead7-4975-8c58-50af0cc711ca","Type":"ContainerStarted","Data":"a17e4fe87bdd58c31536d91d690c4748dde85e857c8906c1b25654aa2deaf6b6"} Dec 06 07:18:24 crc kubenswrapper[4958]: I1206 07:18:24.906456 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-gq255" podStartSLOduration=3.369496476 podStartE2EDuration="5.906439851s" podCreationTimestamp="2025-12-06 07:18:19 +0000 UTC" firstStartedPulling="2025-12-06 07:18:21.801130274 +0000 UTC m=+6612.334901037" lastFinishedPulling="2025-12-06 07:18:24.338073649 +0000 UTC m=+6614.871844412" observedRunningTime="2025-12-06 07:18:24.903969164 +0000 UTC m=+6615.437739937" watchObservedRunningTime="2025-12-06 07:18:24.906439851 +0000 UTC m=+6615.440210614" Dec 06 07:18:30 crc kubenswrapper[4958]: I1206 07:18:30.058062 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/certified-operators-gq255" Dec 06 07:18:30 crc kubenswrapper[4958]: I1206 07:18:30.058694 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-gq255" Dec 06 07:18:30 crc kubenswrapper[4958]: I1206 07:18:30.108097 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-gq255" Dec 06 07:18:31 crc kubenswrapper[4958]: I1206 07:18:31.044345 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-gq255" Dec 06 07:18:31 crc kubenswrapper[4958]: I1206 07:18:31.152534 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gq255"] Dec 06 07:18:32 crc kubenswrapper[4958]: I1206 07:18:32.977429 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-gq255" podUID="26e1b583-ead7-4975-8c58-50af0cc711ca" containerName="registry-server" containerID="cri-o://a17e4fe87bdd58c31536d91d690c4748dde85e857c8906c1b25654aa2deaf6b6" gracePeriod=2 Dec 06 07:18:33 crc kubenswrapper[4958]: I1206 07:18:33.991088 4958 generic.go:334] "Generic (PLEG): container finished" podID="26e1b583-ead7-4975-8c58-50af0cc711ca" containerID="a17e4fe87bdd58c31536d91d690c4748dde85e857c8906c1b25654aa2deaf6b6" exitCode=0 Dec 06 07:18:33 crc kubenswrapper[4958]: I1206 07:18:33.991171 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gq255" event={"ID":"26e1b583-ead7-4975-8c58-50af0cc711ca","Type":"ContainerDied","Data":"a17e4fe87bdd58c31536d91d690c4748dde85e857c8906c1b25654aa2deaf6b6"} Dec 06 07:18:34 crc kubenswrapper[4958]: I1206 07:18:34.582508 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gq255" Dec 06 07:18:34 crc kubenswrapper[4958]: I1206 07:18:34.657431 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5jgg7\" (UniqueName: \"kubernetes.io/projected/26e1b583-ead7-4975-8c58-50af0cc711ca-kube-api-access-5jgg7\") pod \"26e1b583-ead7-4975-8c58-50af0cc711ca\" (UID: \"26e1b583-ead7-4975-8c58-50af0cc711ca\") " Dec 06 07:18:34 crc kubenswrapper[4958]: I1206 07:18:34.657590 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26e1b583-ead7-4975-8c58-50af0cc711ca-catalog-content\") pod \"26e1b583-ead7-4975-8c58-50af0cc711ca\" (UID: \"26e1b583-ead7-4975-8c58-50af0cc711ca\") " Dec 06 07:18:34 crc kubenswrapper[4958]: I1206 07:18:34.657794 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26e1b583-ead7-4975-8c58-50af0cc711ca-utilities\") pod \"26e1b583-ead7-4975-8c58-50af0cc711ca\" (UID: \"26e1b583-ead7-4975-8c58-50af0cc711ca\") " Dec 06 07:18:34 crc kubenswrapper[4958]: I1206 07:18:34.658726 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26e1b583-ead7-4975-8c58-50af0cc711ca-utilities" (OuterVolumeSpecName: "utilities") pod "26e1b583-ead7-4975-8c58-50af0cc711ca" (UID: "26e1b583-ead7-4975-8c58-50af0cc711ca"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:18:34 crc kubenswrapper[4958]: I1206 07:18:34.671554 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26e1b583-ead7-4975-8c58-50af0cc711ca-kube-api-access-5jgg7" (OuterVolumeSpecName: "kube-api-access-5jgg7") pod "26e1b583-ead7-4975-8c58-50af0cc711ca" (UID: "26e1b583-ead7-4975-8c58-50af0cc711ca"). InnerVolumeSpecName "kube-api-access-5jgg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:18:34 crc kubenswrapper[4958]: I1206 07:18:34.721199 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26e1b583-ead7-4975-8c58-50af0cc711ca-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "26e1b583-ead7-4975-8c58-50af0cc711ca" (UID: "26e1b583-ead7-4975-8c58-50af0cc711ca"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:18:34 crc kubenswrapper[4958]: I1206 07:18:34.760267 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5jgg7\" (UniqueName: \"kubernetes.io/projected/26e1b583-ead7-4975-8c58-50af0cc711ca-kube-api-access-5jgg7\") on node \"crc\" DevicePath \"\"" Dec 06 07:18:34 crc kubenswrapper[4958]: I1206 07:18:34.760568 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/26e1b583-ead7-4975-8c58-50af0cc711ca-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 07:18:34 crc kubenswrapper[4958]: I1206 07:18:34.760695 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/26e1b583-ead7-4975-8c58-50af0cc711ca-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 07:18:35 crc kubenswrapper[4958]: I1206 07:18:35.002417 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gq255" event={"ID":"26e1b583-ead7-4975-8c58-50af0cc711ca","Type":"ContainerDied","Data":"b6383b569d730b5171f14a32dad1127ca83e4f282d4dc5c4a5f80682076ec1d0"} Dec 06 07:18:35 crc kubenswrapper[4958]: I1206 07:18:35.002485 4958 scope.go:117] "RemoveContainer" containerID="a17e4fe87bdd58c31536d91d690c4748dde85e857c8906c1b25654aa2deaf6b6" Dec 06 07:18:35 crc kubenswrapper[4958]: I1206 07:18:35.002610 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gq255" Dec 06 07:18:35 crc kubenswrapper[4958]: I1206 07:18:35.035013 4958 scope.go:117] "RemoveContainer" containerID="0a3a99f467b84525b21be40c8614254a06fb30a5af55f98b6f7f019589dc91de" Dec 06 07:18:35 crc kubenswrapper[4958]: I1206 07:18:35.058552 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gq255"] Dec 06 07:18:35 crc kubenswrapper[4958]: I1206 07:18:35.066600 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-gq255"] Dec 06 07:18:35 crc kubenswrapper[4958]: I1206 07:18:35.068762 4958 scope.go:117] "RemoveContainer" containerID="e8add7f20f2b0613f5f6f387296ecf89b405178edb620c20ddcabc23f101c6a7" Dec 06 07:18:35 crc kubenswrapper[4958]: I1206 07:18:35.775807 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26e1b583-ead7-4975-8c58-50af0cc711ca" path="/var/lib/kubelet/pods/26e1b583-ead7-4975-8c58-50af0cc711ca/volumes" Dec 06 07:18:43 crc kubenswrapper[4958]: I1206 07:18:43.095695 4958 generic.go:334] "Generic (PLEG): container finished" podID="7a019f37-9530-45f4-9b9f-d6cbed107bee" containerID="a46f2180a7867ba6a7c3e3584e09fc53cc47f8379d21a3a907e6424604eb9113" exitCode=0 Dec 06 07:18:43 crc kubenswrapper[4958]: I1206 07:18:43.096261 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kr6bn/crc-debug-ss8wg" event={"ID":"7a019f37-9530-45f4-9b9f-d6cbed107bee","Type":"ContainerDied","Data":"a46f2180a7867ba6a7c3e3584e09fc53cc47f8379d21a3a907e6424604eb9113"} Dec 06 07:18:44 crc kubenswrapper[4958]: I1206 07:18:44.217964 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kr6bn/crc-debug-ss8wg" Dec 06 07:18:44 crc kubenswrapper[4958]: I1206 07:18:44.255316 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-kr6bn/crc-debug-ss8wg"] Dec 06 07:18:44 crc kubenswrapper[4958]: I1206 07:18:44.265606 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-kr6bn/crc-debug-ss8wg"] Dec 06 07:18:44 crc kubenswrapper[4958]: I1206 07:18:44.366906 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t4bk7\" (UniqueName: \"kubernetes.io/projected/7a019f37-9530-45f4-9b9f-d6cbed107bee-kube-api-access-t4bk7\") pod \"7a019f37-9530-45f4-9b9f-d6cbed107bee\" (UID: \"7a019f37-9530-45f4-9b9f-d6cbed107bee\") " Dec 06 07:18:44 crc kubenswrapper[4958]: I1206 07:18:44.367004 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7a019f37-9530-45f4-9b9f-d6cbed107bee-host\") pod \"7a019f37-9530-45f4-9b9f-d6cbed107bee\" (UID: \"7a019f37-9530-45f4-9b9f-d6cbed107bee\") " Dec 06 07:18:44 crc kubenswrapper[4958]: I1206 07:18:44.367082 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a019f37-9530-45f4-9b9f-d6cbed107bee-host" (OuterVolumeSpecName: "host") pod "7a019f37-9530-45f4-9b9f-d6cbed107bee" (UID: "7a019f37-9530-45f4-9b9f-d6cbed107bee"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 07:18:44 crc kubenswrapper[4958]: I1206 07:18:44.367620 4958 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/7a019f37-9530-45f4-9b9f-d6cbed107bee-host\") on node \"crc\" DevicePath \"\"" Dec 06 07:18:44 crc kubenswrapper[4958]: I1206 07:18:44.372735 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a019f37-9530-45f4-9b9f-d6cbed107bee-kube-api-access-t4bk7" (OuterVolumeSpecName: "kube-api-access-t4bk7") pod "7a019f37-9530-45f4-9b9f-d6cbed107bee" (UID: "7a019f37-9530-45f4-9b9f-d6cbed107bee"). InnerVolumeSpecName "kube-api-access-t4bk7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:18:44 crc kubenswrapper[4958]: I1206 07:18:44.469997 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t4bk7\" (UniqueName: \"kubernetes.io/projected/7a019f37-9530-45f4-9b9f-d6cbed107bee-kube-api-access-t4bk7\") on node \"crc\" DevicePath \"\"" Dec 06 07:18:45 crc kubenswrapper[4958]: I1206 07:18:45.115587 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f6f144569208753332601424581b296caa548303c07b4acd4db812be3c163087" Dec 06 07:18:45 crc kubenswrapper[4958]: I1206 07:18:45.115689 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kr6bn/crc-debug-ss8wg" Dec 06 07:18:45 crc kubenswrapper[4958]: I1206 07:18:45.424580 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-kr6bn/crc-debug-pbl4s"] Dec 06 07:18:45 crc kubenswrapper[4958]: E1206 07:18:45.424954 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a019f37-9530-45f4-9b9f-d6cbed107bee" containerName="container-00" Dec 06 07:18:45 crc kubenswrapper[4958]: I1206 07:18:45.424969 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a019f37-9530-45f4-9b9f-d6cbed107bee" containerName="container-00" Dec 06 07:18:45 crc kubenswrapper[4958]: E1206 07:18:45.425007 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26e1b583-ead7-4975-8c58-50af0cc711ca" containerName="extract-content" Dec 06 07:18:45 crc kubenswrapper[4958]: I1206 07:18:45.425013 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="26e1b583-ead7-4975-8c58-50af0cc711ca" containerName="extract-content" Dec 06 07:18:45 crc kubenswrapper[4958]: E1206 07:18:45.425028 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26e1b583-ead7-4975-8c58-50af0cc711ca" containerName="extract-utilities" Dec 06 07:18:45 crc kubenswrapper[4958]: I1206 07:18:45.425036 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="26e1b583-ead7-4975-8c58-50af0cc711ca" containerName="extract-utilities" Dec 06 07:18:45 crc kubenswrapper[4958]: E1206 07:18:45.425050 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26e1b583-ead7-4975-8c58-50af0cc711ca" containerName="registry-server" Dec 06 07:18:45 crc kubenswrapper[4958]: I1206 07:18:45.425055 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="26e1b583-ead7-4975-8c58-50af0cc711ca" containerName="registry-server" Dec 06 07:18:45 crc kubenswrapper[4958]: I1206 07:18:45.425242 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="26e1b583-ead7-4975-8c58-50af0cc711ca" containerName="registry-server" Dec 06 07:18:45 crc kubenswrapper[4958]: I1206 07:18:45.425253 4958 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="7a019f37-9530-45f4-9b9f-d6cbed107bee" containerName="container-00" Dec 06 07:18:45 crc kubenswrapper[4958]: I1206 07:18:45.425893 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kr6bn/crc-debug-pbl4s" Dec 06 07:18:45 crc kubenswrapper[4958]: I1206 07:18:45.592195 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqlbw\" (UniqueName: \"kubernetes.io/projected/c4023269-e3fc-4a19-ba72-8b688583d282-kube-api-access-zqlbw\") pod \"crc-debug-pbl4s\" (UID: \"c4023269-e3fc-4a19-ba72-8b688583d282\") " pod="openshift-must-gather-kr6bn/crc-debug-pbl4s" Dec 06 07:18:45 crc kubenswrapper[4958]: I1206 07:18:45.592600 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c4023269-e3fc-4a19-ba72-8b688583d282-host\") pod \"crc-debug-pbl4s\" (UID: \"c4023269-e3fc-4a19-ba72-8b688583d282\") " pod="openshift-must-gather-kr6bn/crc-debug-pbl4s" Dec 06 07:18:45 crc kubenswrapper[4958]: I1206 07:18:45.695113 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqlbw\" (UniqueName: \"kubernetes.io/projected/c4023269-e3fc-4a19-ba72-8b688583d282-kube-api-access-zqlbw\") pod \"crc-debug-pbl4s\" (UID: \"c4023269-e3fc-4a19-ba72-8b688583d282\") " pod="openshift-must-gather-kr6bn/crc-debug-pbl4s" Dec 06 07:18:45 crc kubenswrapper[4958]: I1206 07:18:45.695897 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c4023269-e3fc-4a19-ba72-8b688583d282-host\") pod \"crc-debug-pbl4s\" (UID: \"c4023269-e3fc-4a19-ba72-8b688583d282\") " pod="openshift-must-gather-kr6bn/crc-debug-pbl4s" Dec 06 07:18:45 crc kubenswrapper[4958]: I1206 07:18:45.696033 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c4023269-e3fc-4a19-ba72-8b688583d282-host\") pod \"crc-debug-pbl4s\" (UID: \"c4023269-e3fc-4a19-ba72-8b688583d282\") " pod="openshift-must-gather-kr6bn/crc-debug-pbl4s" Dec 06 07:18:45 crc kubenswrapper[4958]: I1206 07:18:45.713443 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqlbw\" (UniqueName: \"kubernetes.io/projected/c4023269-e3fc-4a19-ba72-8b688583d282-kube-api-access-zqlbw\") pod \"crc-debug-pbl4s\" (UID: \"c4023269-e3fc-4a19-ba72-8b688583d282\") " pod="openshift-must-gather-kr6bn/crc-debug-pbl4s" Dec 06 07:18:45 crc kubenswrapper[4958]: I1206 07:18:45.756627 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-kr6bn/crc-debug-pbl4s" Dec 06 07:18:45 crc kubenswrapper[4958]: I1206 07:18:45.773255 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a019f37-9530-45f4-9b9f-d6cbed107bee" path="/var/lib/kubelet/pods/7a019f37-9530-45f4-9b9f-d6cbed107bee/volumes" Dec 06 07:18:46 crc kubenswrapper[4958]: I1206 07:18:46.126294 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kr6bn/crc-debug-pbl4s" event={"ID":"c4023269-e3fc-4a19-ba72-8b688583d282","Type":"ContainerStarted","Data":"8e9e34a259435d40df73e3f98fec2366a78edbfea06a72f823a3614e4a4cac93"} Dec 06 07:18:46 crc kubenswrapper[4958]: I1206 07:18:46.126638 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kr6bn/crc-debug-pbl4s" event={"ID":"c4023269-e3fc-4a19-ba72-8b688583d282","Type":"ContainerStarted","Data":"be1db10bb8cf4af688f4ce0a004d2c72767fd293f919aa516411cdc17078ce7c"} Dec 06 07:18:46 crc kubenswrapper[4958]: I1206 07:18:46.143738 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-kr6bn/crc-debug-pbl4s" podStartSLOduration=1.143716191 podStartE2EDuration="1.143716191s" podCreationTimestamp="2025-12-06 07:18:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-06 07:18:46.137832441 +0000 UTC m=+6636.671603224" watchObservedRunningTime="2025-12-06 07:18:46.143716191 +0000 UTC m=+6636.677486954" Dec 06 07:18:47 crc kubenswrapper[4958]: I1206 07:18:47.158248 4958 generic.go:334] "Generic (PLEG): container finished" podID="c4023269-e3fc-4a19-ba72-8b688583d282" containerID="8e9e34a259435d40df73e3f98fec2366a78edbfea06a72f823a3614e4a4cac93" exitCode=0 Dec 06 07:18:47 crc kubenswrapper[4958]: I1206 07:18:47.158295 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kr6bn/crc-debug-pbl4s" event={"ID":"c4023269-e3fc-4a19-ba72-8b688583d282","Type":"ContainerDied","Data":"8e9e34a259435d40df73e3f98fec2366a78edbfea06a72f823a3614e4a4cac93"} Dec 06 07:18:48 crc kubenswrapper[4958]: I1206 07:18:48.291242 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kr6bn/crc-debug-pbl4s" Dec 06 07:18:48 crc kubenswrapper[4958]: I1206 07:18:48.439514 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zqlbw\" (UniqueName: \"kubernetes.io/projected/c4023269-e3fc-4a19-ba72-8b688583d282-kube-api-access-zqlbw\") pod \"c4023269-e3fc-4a19-ba72-8b688583d282\" (UID: \"c4023269-e3fc-4a19-ba72-8b688583d282\") " Dec 06 07:18:48 crc kubenswrapper[4958]: I1206 07:18:48.439980 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c4023269-e3fc-4a19-ba72-8b688583d282-host\") pod \"c4023269-e3fc-4a19-ba72-8b688583d282\" (UID: \"c4023269-e3fc-4a19-ba72-8b688583d282\") " Dec 06 07:18:48 crc kubenswrapper[4958]: I1206 07:18:48.440600 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c4023269-e3fc-4a19-ba72-8b688583d282-host" (OuterVolumeSpecName: "host") pod "c4023269-e3fc-4a19-ba72-8b688583d282" (UID: "c4023269-e3fc-4a19-ba72-8b688583d282"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 07:18:48 crc kubenswrapper[4958]: I1206 07:18:48.446758 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4023269-e3fc-4a19-ba72-8b688583d282-kube-api-access-zqlbw" (OuterVolumeSpecName: "kube-api-access-zqlbw") pod "c4023269-e3fc-4a19-ba72-8b688583d282" (UID: "c4023269-e3fc-4a19-ba72-8b688583d282"). InnerVolumeSpecName "kube-api-access-zqlbw". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:18:48 crc kubenswrapper[4958]: I1206 07:18:48.542411 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zqlbw\" (UniqueName: \"kubernetes.io/projected/c4023269-e3fc-4a19-ba72-8b688583d282-kube-api-access-zqlbw\") on node \"crc\" DevicePath \"\"" Dec 06 07:18:48 crc kubenswrapper[4958]: I1206 07:18:48.542440 4958 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c4023269-e3fc-4a19-ba72-8b688583d282-host\") on node \"crc\" DevicePath \"\"" Dec 06 07:18:48 crc kubenswrapper[4958]: I1206 07:18:48.846763 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-kr6bn/crc-debug-pbl4s"] Dec 06 07:18:48 crc kubenswrapper[4958]: I1206 07:18:48.856312 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-kr6bn/crc-debug-pbl4s"] Dec 06 07:18:49 crc kubenswrapper[4958]: I1206 07:18:49.181043 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be1db10bb8cf4af688f4ce0a004d2c72767fd293f919aa516411cdc17078ce7c" Dec 06 07:18:49 crc kubenswrapper[4958]: I1206 07:18:49.181126 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kr6bn/crc-debug-pbl4s" Dec 06 07:18:49 crc kubenswrapper[4958]: I1206 07:18:49.773503 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4023269-e3fc-4a19-ba72-8b688583d282" path="/var/lib/kubelet/pods/c4023269-e3fc-4a19-ba72-8b688583d282/volumes" Dec 06 07:18:50 crc kubenswrapper[4958]: I1206 07:18:50.044083 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-kr6bn/crc-debug-qmsjx"] Dec 06 07:18:50 crc kubenswrapper[4958]: E1206 07:18:50.044965 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4023269-e3fc-4a19-ba72-8b688583d282" containerName="container-00" Dec 06 07:18:50 crc kubenswrapper[4958]: I1206 07:18:50.045078 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4023269-e3fc-4a19-ba72-8b688583d282" containerName="container-00" Dec 06 07:18:50 crc kubenswrapper[4958]: I1206 07:18:50.045464 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4023269-e3fc-4a19-ba72-8b688583d282" containerName="container-00" Dec 06 07:18:50 crc kubenswrapper[4958]: I1206 07:18:50.046392 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-kr6bn/crc-debug-qmsjx" Dec 06 07:18:50 crc kubenswrapper[4958]: I1206 07:18:50.175348 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fc62934b-9b31-44b1-b9fb-d6b9be412ec9-host\") pod \"crc-debug-qmsjx\" (UID: \"fc62934b-9b31-44b1-b9fb-d6b9be412ec9\") " pod="openshift-must-gather-kr6bn/crc-debug-qmsjx" Dec 06 07:18:50 crc kubenswrapper[4958]: I1206 07:18:50.175549 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tswds\" (UniqueName: \"kubernetes.io/projected/fc62934b-9b31-44b1-b9fb-d6b9be412ec9-kube-api-access-tswds\") pod \"crc-debug-qmsjx\" (UID: \"fc62934b-9b31-44b1-b9fb-d6b9be412ec9\") " pod="openshift-must-gather-kr6bn/crc-debug-qmsjx" Dec 06 07:18:50 crc kubenswrapper[4958]: I1206 07:18:50.276985 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tswds\" (UniqueName: \"kubernetes.io/projected/fc62934b-9b31-44b1-b9fb-d6b9be412ec9-kube-api-access-tswds\") pod \"crc-debug-qmsjx\" (UID: \"fc62934b-9b31-44b1-b9fb-d6b9be412ec9\") " pod="openshift-must-gather-kr6bn/crc-debug-qmsjx" Dec 06 07:18:50 crc kubenswrapper[4958]: I1206 07:18:50.277193 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fc62934b-9b31-44b1-b9fb-d6b9be412ec9-host\") pod \"crc-debug-qmsjx\" (UID: \"fc62934b-9b31-44b1-b9fb-d6b9be412ec9\") " pod="openshift-must-gather-kr6bn/crc-debug-qmsjx" Dec 06 07:18:50 crc kubenswrapper[4958]: I1206 07:18:50.277348 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fc62934b-9b31-44b1-b9fb-d6b9be412ec9-host\") pod \"crc-debug-qmsjx\" (UID: \"fc62934b-9b31-44b1-b9fb-d6b9be412ec9\") " pod="openshift-must-gather-kr6bn/crc-debug-qmsjx" Dec 06 07:18:50 crc kubenswrapper[4958]: I1206 07:18:50.302162 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tswds\" (UniqueName: \"kubernetes.io/projected/fc62934b-9b31-44b1-b9fb-d6b9be412ec9-kube-api-access-tswds\") pod \"crc-debug-qmsjx\" (UID: \"fc62934b-9b31-44b1-b9fb-d6b9be412ec9\") " pod="openshift-must-gather-kr6bn/crc-debug-qmsjx" Dec 06 07:18:50 crc kubenswrapper[4958]: I1206 07:18:50.371368 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-kr6bn/crc-debug-qmsjx" Dec 06 07:18:50 crc kubenswrapper[4958]: W1206 07:18:50.402721 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfc62934b_9b31_44b1_b9fb_d6b9be412ec9.slice/crio-806830657ffc33851449c473b7bc7077353355865be6d1cf8b378291ce9260aa WatchSource:0}: Error finding container 806830657ffc33851449c473b7bc7077353355865be6d1cf8b378291ce9260aa: Status 404 returned error can't find the container with id 806830657ffc33851449c473b7bc7077353355865be6d1cf8b378291ce9260aa Dec 06 07:18:51 crc kubenswrapper[4958]: I1206 07:18:51.204688 4958 generic.go:334] "Generic (PLEG): container finished" podID="fc62934b-9b31-44b1-b9fb-d6b9be412ec9" containerID="d61b6af24a082e2e595d73493bf12d45b09416e518b91e5b53bf84c6e4279412" exitCode=0 Dec 06 07:18:51 crc kubenswrapper[4958]: I1206 07:18:51.204745 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kr6bn/crc-debug-qmsjx" event={"ID":"fc62934b-9b31-44b1-b9fb-d6b9be412ec9","Type":"ContainerDied","Data":"d61b6af24a082e2e595d73493bf12d45b09416e518b91e5b53bf84c6e4279412"} Dec 06 07:18:51 crc kubenswrapper[4958]: I1206 07:18:51.204784 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kr6bn/crc-debug-qmsjx" event={"ID":"fc62934b-9b31-44b1-b9fb-d6b9be412ec9","Type":"ContainerStarted","Data":"806830657ffc33851449c473b7bc7077353355865be6d1cf8b378291ce9260aa"} Dec 06 07:18:51 crc kubenswrapper[4958]: I1206 07:18:51.254765 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-kr6bn/crc-debug-qmsjx"] Dec 06 07:18:51 crc kubenswrapper[4958]: I1206 07:18:51.265752 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-kr6bn/crc-debug-qmsjx"] Dec 06 07:18:52 crc kubenswrapper[4958]: I1206 07:18:52.348012 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kr6bn/crc-debug-qmsjx" Dec 06 07:18:52 crc kubenswrapper[4958]: I1206 07:18:52.454191 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tswds\" (UniqueName: \"kubernetes.io/projected/fc62934b-9b31-44b1-b9fb-d6b9be412ec9-kube-api-access-tswds\") pod \"fc62934b-9b31-44b1-b9fb-d6b9be412ec9\" (UID: \"fc62934b-9b31-44b1-b9fb-d6b9be412ec9\") " Dec 06 07:18:52 crc kubenswrapper[4958]: I1206 07:18:52.454329 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fc62934b-9b31-44b1-b9fb-d6b9be412ec9-host\") pod \"fc62934b-9b31-44b1-b9fb-d6b9be412ec9\" (UID: \"fc62934b-9b31-44b1-b9fb-d6b9be412ec9\") " Dec 06 07:18:52 crc kubenswrapper[4958]: I1206 07:18:52.454673 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc62934b-9b31-44b1-b9fb-d6b9be412ec9-host" (OuterVolumeSpecName: "host") pod "fc62934b-9b31-44b1-b9fb-d6b9be412ec9" (UID: "fc62934b-9b31-44b1-b9fb-d6b9be412ec9"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 06 07:18:52 crc kubenswrapper[4958]: I1206 07:18:52.455551 4958 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fc62934b-9b31-44b1-b9fb-d6b9be412ec9-host\") on node \"crc\" DevicePath \"\"" Dec 06 07:18:52 crc kubenswrapper[4958]: I1206 07:18:52.465431 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc62934b-9b31-44b1-b9fb-d6b9be412ec9-kube-api-access-tswds" (OuterVolumeSpecName: "kube-api-access-tswds") pod "fc62934b-9b31-44b1-b9fb-d6b9be412ec9" (UID: "fc62934b-9b31-44b1-b9fb-d6b9be412ec9"). InnerVolumeSpecName "kube-api-access-tswds". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:18:52 crc kubenswrapper[4958]: I1206 07:18:52.557215 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tswds\" (UniqueName: \"kubernetes.io/projected/fc62934b-9b31-44b1-b9fb-d6b9be412ec9-kube-api-access-tswds\") on node \"crc\" DevicePath \"\"" Dec 06 07:18:53 crc kubenswrapper[4958]: I1206 07:18:53.241192 4958 scope.go:117] "RemoveContainer" containerID="d61b6af24a082e2e595d73493bf12d45b09416e518b91e5b53bf84c6e4279412" Dec 06 07:18:53 crc kubenswrapper[4958]: I1206 07:18:53.241237 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kr6bn/crc-debug-qmsjx" Dec 06 07:18:53 crc kubenswrapper[4958]: I1206 07:18:53.773664 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc62934b-9b31-44b1-b9fb-d6b9be412ec9" path="/var/lib/kubelet/pods/fc62934b-9b31-44b1-b9fb-d6b9be412ec9/volumes" Dec 06 07:19:21 crc kubenswrapper[4958]: I1206 07:19:21.620688 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-85d57ddd5d-pth8g_b7770f7e-3112-4c47-8631-a19d269c3ffc/barbican-api/0.log" Dec 06 07:19:21 crc kubenswrapper[4958]: I1206 07:19:21.750566 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-85d57ddd5d-pth8g_b7770f7e-3112-4c47-8631-a19d269c3ffc/barbican-api-log/0.log" Dec 06 07:19:21 crc kubenswrapper[4958]: I1206 07:19:21.833839 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-575d9fc686-xrcwg_19b32164-d135-4d2b-9f69-bf4f1c986fa5/barbican-keystone-listener/0.log" Dec 06 07:19:22 crc kubenswrapper[4958]: I1206 07:19:22.007903 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-575d9fc686-xrcwg_19b32164-d135-4d2b-9f69-bf4f1c986fa5/barbican-keystone-listener-log/0.log" Dec 06 07:19:22 crc kubenswrapper[4958]: I1206 07:19:22.036435 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-68ff9f8f57-qzqld_b7df9cb8-058d-4f26-8444-808fd8fd554c/barbican-worker/0.log" Dec 06 07:19:22 crc kubenswrapper[4958]: I1206 07:19:22.211115 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-68ff9f8f57-qzqld_b7df9cb8-058d-4f26-8444-808fd8fd554c/barbican-worker-log/0.log" Dec 06 07:19:22 crc kubenswrapper[4958]: I1206 07:19:22.298652 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-q4rn2_2e1b78f3-f2b9-4304-b139-13f156e87cd1/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Dec 06 07:19:22 crc kubenswrapper[4958]: I1206 07:19:22.477575 4958 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ceilometer-0_428c09d2-3c2a-4562-9295-3cf3da179f40/ceilometer-central-agent/1.log" Dec 06 07:19:22 crc kubenswrapper[4958]: I1206 07:19:22.549334 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_428c09d2-3c2a-4562-9295-3cf3da179f40/ceilometer-notification-agent/0.log" Dec 06 07:19:22 crc kubenswrapper[4958]: I1206 07:19:22.618433 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_428c09d2-3c2a-4562-9295-3cf3da179f40/proxy-httpd/0.log" Dec 06 07:19:22 crc kubenswrapper[4958]: I1206 07:19:22.618779 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_428c09d2-3c2a-4562-9295-3cf3da179f40/ceilometer-central-agent/0.log" Dec 06 07:19:22 crc kubenswrapper[4958]: I1206 07:19:22.683866 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_428c09d2-3c2a-4562-9295-3cf3da179f40/sg-core/0.log" Dec 06 07:19:23 crc kubenswrapper[4958]: I1206 07:19:23.026356 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_73c40f99-3a46-43d5-bab4-475cd389ea2c/cinder-scheduler/0.log" Dec 06 07:19:23 crc kubenswrapper[4958]: I1206 07:19:23.037977 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_820bd80f-831b-4a53-bada-7fb73f7c08ab/cinder-api-log/0.log" Dec 06 07:19:23 crc kubenswrapper[4958]: I1206 07:19:23.267089 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_73c40f99-3a46-43d5-bab4-475cd389ea2c/probe/0.log" Dec 06 07:19:23 crc kubenswrapper[4958]: I1206 07:19:23.304156 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_820bd80f-831b-4a53-bada-7fb73f7c08ab/cinder-api/0.log" Dec 06 07:19:23 crc kubenswrapper[4958]: I1206 07:19:23.317454 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-gg4qv_4bea02da-8c2c-4d14-88cc-7228998df134/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Dec 06 07:19:23 crc kubenswrapper[4958]: I1206 07:19:23.535163 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-dv9sw_2be5e85c-0c8e-479d-bd13-97d8504f980f/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 06 07:19:23 crc kubenswrapper[4958]: I1206 07:19:23.592586 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-77b58f4b85-5jtpq_9cbacdd0-bf2e-4ead-bdaf-04cc0a08254e/init/0.log" Dec 06 07:19:23 crc kubenswrapper[4958]: I1206 07:19:23.729944 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-77b58f4b85-5jtpq_9cbacdd0-bf2e-4ead-bdaf-04cc0a08254e/init/0.log" Dec 06 07:19:23 crc kubenswrapper[4958]: I1206 07:19:23.839213 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-fn4ml_4f84a870-b9e0-49e4-847b-71322d38a901/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Dec 06 07:19:23 crc kubenswrapper[4958]: I1206 07:19:23.907848 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-77b58f4b85-5jtpq_9cbacdd0-bf2e-4ead-bdaf-04cc0a08254e/dnsmasq-dns/0.log" Dec 06 07:19:24 crc kubenswrapper[4958]: I1206 07:19:24.049039 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_cf2cb807-c3e4-475e-a8fe-4ad4134e383e/glance-httpd/0.log" Dec 06 
07:19:24 crc kubenswrapper[4958]: I1206 07:19:24.079898 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_cf2cb807-c3e4-475e-a8fe-4ad4134e383e/glance-log/0.log" Dec 06 07:19:24 crc kubenswrapper[4958]: I1206 07:19:24.331437 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_8c121c3e-8e75-4122-bb0a-077bb6f305e3/glance-httpd/0.log" Dec 06 07:19:24 crc kubenswrapper[4958]: I1206 07:19:24.381002 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_8c121c3e-8e75-4122-bb0a-077bb6f305e3/glance-log/0.log" Dec 06 07:19:24 crc kubenswrapper[4958]: I1206 07:19:24.577488 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-65548cc856-4tstl_44386224-0241-4c6e-b12f-a1bef3954fe3/horizon/0.log" Dec 06 07:19:24 crc kubenswrapper[4958]: I1206 07:19:24.772548 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-znbfc_727529de-528b-4f43-b581-5bfdfcdca081/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Dec 06 07:19:24 crc kubenswrapper[4958]: I1206 07:19:24.959831 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-w6fh5_0abbc0a3-1277-4973-819c-d474acd69ee3/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 06 07:19:25 crc kubenswrapper[4958]: I1206 07:19:25.271312 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-65548cc856-4tstl_44386224-0241-4c6e-b12f-a1bef3954fe3/horizon-log/0.log" Dec 06 07:19:25 crc kubenswrapper[4958]: I1206 07:19:25.573421 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_8322abc0-31a3-4770-856d-e23e4d428204/kube-state-metrics/0.log" Dec 06 07:19:25 crc kubenswrapper[4958]: I1206 07:19:25.854772 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29416681-29sbf_3e1be52c-6724-4ce4-af65-1e3554f51d20/keystone-cron/0.log" Dec 06 07:19:25 crc kubenswrapper[4958]: I1206 07:19:25.855609 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29416741-ndxxt_205be56c-69f7-4c92-844c-e12d74da811b/keystone-cron/0.log" Dec 06 07:19:25 crc kubenswrapper[4958]: I1206 07:19:25.947905 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-m5rrv_5378d94e-8c86-4393-9dc6-dda81d635c12/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Dec 06 07:19:26 crc kubenswrapper[4958]: I1206 07:19:26.411645 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-d7c696d85-2npl6_395f8723-6487-48ac-b83f-d073c550bb99/neutron-httpd/0.log" Dec 06 07:19:26 crc kubenswrapper[4958]: I1206 07:19:26.504656 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cdf79dfd7-5h4gn_170ad5c2-4dbe-4cec-bd99-b6c3655a5d6a/keystone-api/0.log" Dec 06 07:19:26 crc kubenswrapper[4958]: I1206 07:19:26.549505 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-d7c696d85-2npl6_395f8723-6487-48ac-b83f-d073c550bb99/neutron-api/0.log" Dec 06 07:19:26 crc kubenswrapper[4958]: I1206 07:19:26.636161 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-4j4g7_64ab61d3-8a40-4d22-bae0-25f7dd034eda/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" 
Dec 06 07:19:27 crc kubenswrapper[4958]: I1206 07:19:27.144523 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_019f5f33-5a9c-42c6-8379-cfc4745f5be3/nova-cell0-conductor-conductor/0.log" Dec 06 07:19:27 crc kubenswrapper[4958]: I1206 07:19:27.521613 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_47fce56d-0e48-44d4-a30e-14d412fb727f/nova-cell1-conductor-conductor/0.log" Dec 06 07:19:27 crc kubenswrapper[4958]: I1206 07:19:27.766777 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_fb368363-98b2-4e51-a1a0-077dcb35ccb7/nova-cell1-novncproxy-novncproxy/0.log" Dec 06 07:19:27 crc kubenswrapper[4958]: I1206 07:19:27.861685 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_f867f367-20bd-467e-b102-512f96506fa3/nova-api-log/0.log" Dec 06 07:19:28 crc kubenswrapper[4958]: I1206 07:19:28.135826 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-sbjj6_d85c7ce5-d270-4525-83a5-266d104bcc79/nova-edpm-deployment-openstack-edpm-ipam/0.log" Dec 06 07:19:28 crc kubenswrapper[4958]: I1206 07:19:28.296159 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_630ea894-bfed-4bda-b5b1-260f314e2f22/nova-metadata-log/0.log" Dec 06 07:19:28 crc kubenswrapper[4958]: I1206 07:19:28.583074 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_f867f367-20bd-467e-b102-512f96506fa3/nova-api-api/0.log" Dec 06 07:19:28 crc kubenswrapper[4958]: I1206 07:19:28.746492 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_61ffc4d3-131a-4ef1-a712-936e7c609cbc/nova-scheduler-scheduler/0.log" Dec 06 07:19:28 crc kubenswrapper[4958]: I1206 07:19:28.840325 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_e2701d0b-9691-44fb-a540-796260e0f2c1/mysql-bootstrap/0.log" Dec 06 07:19:28 crc kubenswrapper[4958]: I1206 07:19:28.948889 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_e2701d0b-9691-44fb-a540-796260e0f2c1/mysql-bootstrap/0.log" Dec 06 07:19:29 crc kubenswrapper[4958]: I1206 07:19:29.000231 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_e2701d0b-9691-44fb-a540-796260e0f2c1/galera/0.log" Dec 06 07:19:29 crc kubenswrapper[4958]: I1206 07:19:29.202031 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_0523eb0f-9fe1-49d4-a3b4-6a872317c136/mysql-bootstrap/0.log" Dec 06 07:19:29 crc kubenswrapper[4958]: I1206 07:19:29.362550 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_0523eb0f-9fe1-49d4-a3b4-6a872317c136/mysql-bootstrap/0.log" Dec 06 07:19:29 crc kubenswrapper[4958]: I1206 07:19:29.412170 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_0523eb0f-9fe1-49d4-a3b4-6a872317c136/galera/0.log" Dec 06 07:19:29 crc kubenswrapper[4958]: I1206 07:19:29.625662 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_5e310020-e259-46c5-8928-f587abdf0577/openstackclient/0.log" Dec 06 07:19:29 crc kubenswrapper[4958]: I1206 07:19:29.709884 4958 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-metrics-2njn2_e197012d-062b-4bac-90c1-63600d220add/openstack-network-exporter/0.log" Dec 06 07:19:29 crc kubenswrapper[4958]: I1206 07:19:29.899732 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-glklh_ad5ddd17-e280-4547-8d9a-afd3764a5f76/ovsdb-server-init/0.log" Dec 06 07:19:30 crc kubenswrapper[4958]: I1206 07:19:30.050339 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-glklh_ad5ddd17-e280-4547-8d9a-afd3764a5f76/ovsdb-server-init/0.log" Dec 06 07:19:30 crc kubenswrapper[4958]: I1206 07:19:30.116902 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-glklh_ad5ddd17-e280-4547-8d9a-afd3764a5f76/ovsdb-server/0.log" Dec 06 07:19:30 crc kubenswrapper[4958]: I1206 07:19:30.309097 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-rsngm_e72d0843-3802-4dbf-b292-8f37386cdeb5/ovn-controller/0.log" Dec 06 07:19:30 crc kubenswrapper[4958]: I1206 07:19:30.465856 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-glklh_ad5ddd17-e280-4547-8d9a-afd3764a5f76/ovs-vswitchd/0.log" Dec 06 07:19:30 crc kubenswrapper[4958]: I1206 07:19:30.583809 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-t4kct_7f33cda0-d358-47d3-8f73-fac395b8b627/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Dec 06 07:19:30 crc kubenswrapper[4958]: I1206 07:19:30.778394 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_16a5cde7-0ad2-4f04-9643-d6ceca21fe3c/openstack-network-exporter/0.log" Dec 06 07:19:30 crc kubenswrapper[4958]: I1206 07:19:30.819681 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_16a5cde7-0ad2-4f04-9643-d6ceca21fe3c/ovn-northd/0.log" Dec 06 07:19:30 crc kubenswrapper[4958]: I1206 07:19:30.963312 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_630ea894-bfed-4bda-b5b1-260f314e2f22/nova-metadata-metadata/0.log" Dec 06 07:19:30 crc kubenswrapper[4958]: I1206 07:19:30.997456 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_7ff9bb68-a6f7-4ed7-b27a-da119f13b8a7/openstack-network-exporter/0.log" Dec 06 07:19:31 crc kubenswrapper[4958]: I1206 07:19:31.155734 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_7719819d-5798-4ab7-bee0-cd8b736f92a2/openstack-network-exporter/0.log" Dec 06 07:19:31 crc kubenswrapper[4958]: I1206 07:19:31.204550 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_7719819d-5798-4ab7-bee0-cd8b736f92a2/ovsdbserver-sb/0.log" Dec 06 07:19:31 crc kubenswrapper[4958]: I1206 07:19:31.393487 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_7ff9bb68-a6f7-4ed7-b27a-da119f13b8a7/ovsdbserver-nb/0.log" Dec 06 07:19:31 crc kubenswrapper[4958]: I1206 07:19:31.668763 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-754c8966f6-f7t66_796f2f84-aca5-4fb0-8963-fcfd0a851130/placement-api/0.log" Dec 06 07:19:31 crc kubenswrapper[4958]: I1206 07:19:31.701339 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_e53cc117-7134-4aac-ba1f-3a685b98aa2e/init-config-reloader/0.log" Dec 06 07:19:31 crc kubenswrapper[4958]: I1206 07:19:31.712222 4958 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_placement-754c8966f6-f7t66_796f2f84-aca5-4fb0-8963-fcfd0a851130/placement-log/0.log" Dec 06 07:19:31 crc kubenswrapper[4958]: I1206 07:19:31.920138 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_e53cc117-7134-4aac-ba1f-3a685b98aa2e/init-config-reloader/0.log" Dec 06 07:19:31 crc kubenswrapper[4958]: I1206 07:19:31.921872 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_e53cc117-7134-4aac-ba1f-3a685b98aa2e/config-reloader/0.log" Dec 06 07:19:31 crc kubenswrapper[4958]: I1206 07:19:31.943273 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_e53cc117-7134-4aac-ba1f-3a685b98aa2e/thanos-sidecar/0.log" Dec 06 07:19:31 crc kubenswrapper[4958]: I1206 07:19:31.997996 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_e53cc117-7134-4aac-ba1f-3a685b98aa2e/prometheus/0.log" Dec 06 07:19:32 crc kubenswrapper[4958]: I1206 07:19:32.091423 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_84d93a05-0621-49f6-ba81-ffc7b948ba5c/setup-container/0.log" Dec 06 07:19:32 crc kubenswrapper[4958]: I1206 07:19:32.357450 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_84d93a05-0621-49f6-ba81-ffc7b948ba5c/setup-container/0.log" Dec 06 07:19:32 crc kubenswrapper[4958]: I1206 07:19:32.398324 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-notifications-server-0_74d63159-9580-4b70-ba89-74d4d9eeb7b8/setup-container/0.log" Dec 06 07:19:32 crc kubenswrapper[4958]: I1206 07:19:32.631005 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-notifications-server-0_74d63159-9580-4b70-ba89-74d4d9eeb7b8/setup-container/0.log" Dec 06 07:19:32 crc kubenswrapper[4958]: I1206 07:19:32.828253 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_9f0f5c93-d108-48ad-b3fd-c54d25ce982c/setup-container/0.log" Dec 06 07:19:33 crc kubenswrapper[4958]: I1206 07:19:33.043866 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_9f0f5c93-d108-48ad-b3fd-c54d25ce982c/setup-container/0.log" Dec 06 07:19:33 crc kubenswrapper[4958]: I1206 07:19:33.077738 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-notifications-server-0_74d63159-9580-4b70-ba89-74d4d9eeb7b8/rabbitmq/0.log" Dec 06 07:19:33 crc kubenswrapper[4958]: I1206 07:19:33.079024 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_84d93a05-0621-49f6-ba81-ffc7b948ba5c/rabbitmq/0.log" Dec 06 07:19:33 crc kubenswrapper[4958]: I1206 07:19:33.088286 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_9f0f5c93-d108-48ad-b3fd-c54d25ce982c/rabbitmq/0.log" Dec 06 07:19:33 crc kubenswrapper[4958]: I1206 07:19:33.360249 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-x2wg5_ca7fd004-bb6c-4a8f-b0ac-bf8ab8d95805/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Dec 06 07:19:33 crc kubenswrapper[4958]: I1206 07:19:33.369440 4958 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-s2txf_bd40c7a7-faba-4269-b1a5-13e691342c9a/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 06 07:19:33 crc kubenswrapper[4958]: I1206 07:19:33.542804 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-zdhkb_2b5992cb-ba4f-45da-bdfa-6ca4914a032f/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Dec 06 07:19:33 crc kubenswrapper[4958]: I1206 07:19:33.600901 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-q2z48_21024c56-19d8-4bad-a676-cefec2f196a2/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 06 07:19:33 crc kubenswrapper[4958]: I1206 07:19:33.791336 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-pmd7p_ebf03971-98ab-468e-ae6d-66f13a9ba5cc/ssh-known-hosts-edpm-deployment/0.log" Dec 06 07:19:33 crc kubenswrapper[4958]: I1206 07:19:33.983642 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-588cbd45c9-xblwx_aae69e62-83f7-47d4-aecd-e883ed84a6ac/proxy-server/0.log" Dec 06 07:19:34 crc kubenswrapper[4958]: I1206 07:19:34.133202 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-9g29t_31905f86-2a88-4c2e-bf22-a629710e3f6b/swift-ring-rebalance/0.log" Dec 06 07:19:34 crc kubenswrapper[4958]: I1206 07:19:34.147589 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-588cbd45c9-xblwx_aae69e62-83f7-47d4-aecd-e883ed84a6ac/proxy-httpd/0.log" Dec 06 07:19:34 crc kubenswrapper[4958]: I1206 07:19:34.278652 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d8c3d892-a529-436f-b8f1-3bb2a4ffbed2/account-auditor/0.log" Dec 06 07:19:34 crc kubenswrapper[4958]: I1206 07:19:34.339704 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d8c3d892-a529-436f-b8f1-3bb2a4ffbed2/account-reaper/0.log" Dec 06 07:19:34 crc kubenswrapper[4958]: I1206 07:19:34.463092 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d8c3d892-a529-436f-b8f1-3bb2a4ffbed2/account-replicator/0.log" Dec 06 07:19:34 crc kubenswrapper[4958]: I1206 07:19:34.511680 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d8c3d892-a529-436f-b8f1-3bb2a4ffbed2/account-server/0.log" Dec 06 07:19:34 crc kubenswrapper[4958]: I1206 07:19:34.528482 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d8c3d892-a529-436f-b8f1-3bb2a4ffbed2/container-auditor/0.log" Dec 06 07:19:34 crc kubenswrapper[4958]: I1206 07:19:34.660284 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d8c3d892-a529-436f-b8f1-3bb2a4ffbed2/container-replicator/0.log" Dec 06 07:19:34 crc kubenswrapper[4958]: I1206 07:19:34.713594 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d8c3d892-a529-436f-b8f1-3bb2a4ffbed2/container-server/0.log" Dec 06 07:19:34 crc kubenswrapper[4958]: I1206 07:19:34.861396 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d8c3d892-a529-436f-b8f1-3bb2a4ffbed2/object-expirer/0.log" Dec 06 07:19:34 crc kubenswrapper[4958]: I1206 07:19:34.892620 4958 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_d8c3d892-a529-436f-b8f1-3bb2a4ffbed2/container-updater/0.log" Dec 06 07:19:34 crc kubenswrapper[4958]: I1206 07:19:34.895716 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d8c3d892-a529-436f-b8f1-3bb2a4ffbed2/object-auditor/0.log" Dec 06 07:19:35 crc kubenswrapper[4958]: I1206 07:19:35.009645 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d8c3d892-a529-436f-b8f1-3bb2a4ffbed2/object-replicator/0.log" Dec 06 07:19:35 crc kubenswrapper[4958]: I1206 07:19:35.099051 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d8c3d892-a529-436f-b8f1-3bb2a4ffbed2/object-server/0.log" Dec 06 07:19:35 crc kubenswrapper[4958]: I1206 07:19:35.148543 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d8c3d892-a529-436f-b8f1-3bb2a4ffbed2/object-updater/0.log" Dec 06 07:19:35 crc kubenswrapper[4958]: I1206 07:19:35.160295 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d8c3d892-a529-436f-b8f1-3bb2a4ffbed2/rsync/0.log" Dec 06 07:19:35 crc kubenswrapper[4958]: I1206 07:19:35.338556 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_d8c3d892-a529-436f-b8f1-3bb2a4ffbed2/swift-recon-cron/0.log" Dec 06 07:19:35 crc kubenswrapper[4958]: I1206 07:19:35.385913 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-hzf8m_fad70a10-a21d-4f57-b3f6-5e2349243973/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Dec 06 07:19:35 crc kubenswrapper[4958]: I1206 07:19:35.595779 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_85bd5593-92db-4d12-b5eb-faef7436c97d/test-operator-logs-container/0.log" Dec 06 07:19:35 crc kubenswrapper[4958]: I1206 07:19:35.945546 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-mrdlb_99265c09-ff81-45cd-ae5e-501f1b7bfe69/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Dec 06 07:19:36 crc kubenswrapper[4958]: I1206 07:19:36.866252 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_333ab9e6-feb4-4ebe-8bb6-75987c261085/tempest-tests-tempest-tests-runner/0.log" Dec 06 07:19:37 crc kubenswrapper[4958]: I1206 07:19:37.150361 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-applier-0_55820dd9-6ca0-448b-8dc4-e92ddce617b7/watcher-applier/0.log" Dec 06 07:19:37 crc kubenswrapper[4958]: I1206 07:19:37.888637 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_859ad21b-442c-4e81-991c-fff351e6f635/watcher-api-log/0.log" Dec 06 07:19:39 crc kubenswrapper[4958]: I1206 07:19:39.866004 4958 patch_prober.go:28] interesting pod/machine-config-daemon-5ktnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 07:19:39 crc kubenswrapper[4958]: I1206 07:19:39.867203 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 07:19:40 crc kubenswrapper[4958]: I1206 07:19:40.478389 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-decision-engine-0_f65760d6-0cb4-4d02-8db2-9c989cb42dc2/watcher-decision-engine/0.log" Dec 06 07:19:42 crc kubenswrapper[4958]: I1206 07:19:42.045734 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_watcher-api-0_859ad21b-442c-4e81-991c-fff351e6f635/watcher-api/0.log" Dec 06 07:19:46 crc kubenswrapper[4958]: I1206 07:19:46.654209 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_d1076143-5994-4717-9d22-a56c404bc73b/memcached/0.log" Dec 06 07:20:04 crc kubenswrapper[4958]: I1206 07:20:04.994609 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafxrhgq_ed7e4cbb-11de-4e0d-9e18-ef37fd78d9e5/util/0.log" Dec 06 07:20:05 crc kubenswrapper[4958]: I1206 07:20:05.180944 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafxrhgq_ed7e4cbb-11de-4e0d-9e18-ef37fd78d9e5/pull/0.log" Dec 06 07:20:05 crc kubenswrapper[4958]: I1206 07:20:05.183303 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafxrhgq_ed7e4cbb-11de-4e0d-9e18-ef37fd78d9e5/util/0.log" Dec 06 07:20:05 crc kubenswrapper[4958]: I1206 07:20:05.214782 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafxrhgq_ed7e4cbb-11de-4e0d-9e18-ef37fd78d9e5/pull/0.log" Dec 06 07:20:05 crc kubenswrapper[4958]: I1206 07:20:05.401145 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafxrhgq_ed7e4cbb-11de-4e0d-9e18-ef37fd78d9e5/util/0.log" Dec 06 07:20:05 crc kubenswrapper[4958]: I1206 07:20:05.445140 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafxrhgq_ed7e4cbb-11de-4e0d-9e18-ef37fd78d9e5/pull/0.log" Dec 06 07:20:05 crc kubenswrapper[4958]: I1206 07:20:05.547343 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_917aae072417a6c2fc5ddd97ca05bfedb9fc1cad89a3b1c4d989b78eafxrhgq_ed7e4cbb-11de-4e0d-9e18-ef37fd78d9e5/extract/0.log" Dec 06 07:20:05 crc kubenswrapper[4958]: I1206 07:20:05.627345 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-n2s2h_1e47b3e1-d11a-4a15-8a92-24fe19661ee7/kube-rbac-proxy/0.log" Dec 06 07:20:05 crc kubenswrapper[4958]: I1206 07:20:05.720003 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-n2s2h_1e47b3e1-d11a-4a15-8a92-24fe19661ee7/manager/0.log" Dec 06 07:20:05 crc kubenswrapper[4958]: I1206 07:20:05.778515 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-859b6ccc6-kfk67_6a934eab-3341-4e53-8317-eca91e0e9710/kube-rbac-proxy/0.log" Dec 06 07:20:05 crc kubenswrapper[4958]: I1206 07:20:05.885876 4958 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-859b6ccc6-kfk67_6a934eab-3341-4e53-8317-eca91e0e9710/manager/0.log" Dec 06 07:20:06 crc kubenswrapper[4958]: I1206 07:20:06.008584 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-78b4bc895b-khhwt_a069026c-ab13-4593-9f99-71aa6fca2ecd/manager/0.log" Dec 06 07:20:06 crc kubenswrapper[4958]: I1206 07:20:06.031684 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-78b4bc895b-khhwt_a069026c-ab13-4593-9f99-71aa6fca2ecd/kube-rbac-proxy/0.log" Dec 06 07:20:06 crc kubenswrapper[4958]: I1206 07:20:06.091357 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987cd8cd-bg69j_ddaab3a6-6e32-480b-8ba8-3852feb6440f/kube-rbac-proxy/0.log" Dec 06 07:20:06 crc kubenswrapper[4958]: I1206 07:20:06.325312 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-v2ktl_17c0b87a-3ce4-434e-bbbc-cf06bd3c2833/kube-rbac-proxy/0.log" Dec 06 07:20:06 crc kubenswrapper[4958]: I1206 07:20:06.329328 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-77987cd8cd-bg69j_ddaab3a6-6e32-480b-8ba8-3852feb6440f/manager/0.log" Dec 06 07:20:06 crc kubenswrapper[4958]: I1206 07:20:06.391455 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-v2ktl_17c0b87a-3ce4-434e-bbbc-cf06bd3c2833/manager/0.log" Dec 06 07:20:06 crc kubenswrapper[4958]: I1206 07:20:06.532794 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-gnstm_ebcc81d0-3595-480b-a886-1ec0e5da638d/kube-rbac-proxy/0.log" Dec 06 07:20:06 crc kubenswrapper[4958]: I1206 07:20:06.664996 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-gnstm_ebcc81d0-3595-480b-a886-1ec0e5da638d/manager/0.log" Dec 06 07:20:06 crc kubenswrapper[4958]: I1206 07:20:06.723418 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-wv9rd_d0b85019-5501-4f67-a136-f7798be67039/kube-rbac-proxy/0.log" Dec 06 07:20:06 crc kubenswrapper[4958]: I1206 07:20:06.954824 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6c548fd776-hx8tj_15c5254b-bcb1-45a3-a94b-21995bd4a143/kube-rbac-proxy/0.log" Dec 06 07:20:07 crc kubenswrapper[4958]: I1206 07:20:07.030374 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-6c548fd776-hx8tj_15c5254b-bcb1-45a3-a94b-21995bd4a143/manager/0.log" Dec 06 07:20:07 crc kubenswrapper[4958]: I1206 07:20:07.061545 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-57548d458d-wv9rd_d0b85019-5501-4f67-a136-f7798be67039/manager/0.log" Dec 06 07:20:07 crc kubenswrapper[4958]: I1206 07:20:07.211958 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7765d96ddf-sq7fz_137f3e2e-f835-48ca-873c-41fe38a6d7f2/kube-rbac-proxy/0.log" Dec 06 07:20:07 crc kubenswrapper[4958]: I1206 07:20:07.604145 4958 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7765d96ddf-sq7fz_137f3e2e-f835-48ca-873c-41fe38a6d7f2/manager/0.log" Dec 06 07:20:07 crc kubenswrapper[4958]: I1206 07:20:07.631140 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7c79b5df47-6vcd7_6f35dbd3-cf3b-46f8-83cf-911ea6a88679/manager/0.log" Dec 06 07:20:07 crc kubenswrapper[4958]: I1206 07:20:07.673090 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7c79b5df47-6vcd7_6f35dbd3-cf3b-46f8-83cf-911ea6a88679/kube-rbac-proxy/0.log" Dec 06 07:20:07 crc kubenswrapper[4958]: I1206 07:20:07.874130 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-56bbcc9d85-7745k_1a7e7d3c-d935-469c-8296-658d9b8542dc/kube-rbac-proxy/0.log" Dec 06 07:20:07 crc kubenswrapper[4958]: I1206 07:20:07.894948 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-56bbcc9d85-7745k_1a7e7d3c-d935-469c-8296-658d9b8542dc/manager/0.log" Dec 06 07:20:07 crc kubenswrapper[4958]: I1206 07:20:07.964619 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-h2s2k_d42b2cbe-9f6a-4d29-bb05-0588e5e4cf8d/kube-rbac-proxy/0.log" Dec 06 07:20:08 crc kubenswrapper[4958]: I1206 07:20:08.202507 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-h2s2k_d42b2cbe-9f6a-4d29-bb05-0588e5e4cf8d/manager/0.log" Dec 06 07:20:08 crc kubenswrapper[4958]: I1206 07:20:08.230626 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-jk9n8_d050dc99-5d0b-4a4f-974a-1b2bd5fb5a8c/kube-rbac-proxy/0.log" Dec 06 07:20:08 crc kubenswrapper[4958]: I1206 07:20:08.462782 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-26667_f3cf2219-d4b6-43cd-8ace-2852c808fe6e/manager/0.log" Dec 06 07:20:08 crc kubenswrapper[4958]: I1206 07:20:08.494823 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-jk9n8_d050dc99-5d0b-4a4f-974a-1b2bd5fb5a8c/manager/0.log" Dec 06 07:20:08 crc kubenswrapper[4958]: I1206 07:20:08.771088 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-26667_f3cf2219-d4b6-43cd-8ace-2852c808fe6e/kube-rbac-proxy/0.log" Dec 06 07:20:08 crc kubenswrapper[4958]: I1206 07:20:08.774883 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-55c85496f5nnvrk_c6c9cc05-d00f-4f92-bd7e-13737952085b/manager/0.log" Dec 06 07:20:08 crc kubenswrapper[4958]: I1206 07:20:08.805581 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-55c85496f5nnvrk_c6c9cc05-d00f-4f92-bd7e-13737952085b/kube-rbac-proxy/0.log" Dec 06 07:20:09 crc kubenswrapper[4958]: I1206 07:20:09.226098 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-55b6fb9447-2pt4t_44ba97c3-c965-4c5a-b712-d9654788f04c/operator/0.log" Dec 06 07:20:09 crc kubenswrapper[4958]: I1206 
07:20:09.376984 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-qfcx8_0a035630-f39d-4094-ad86-117dc028950c/registry-server/0.log" Dec 06 07:20:09 crc kubenswrapper[4958]: I1206 07:20:09.491185 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-dn98s_26415490-b3ca-4822-93b9-f7fb5efcf375/kube-rbac-proxy/0.log" Dec 06 07:20:09 crc kubenswrapper[4958]: I1206 07:20:09.575229 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-dn98s_26415490-b3ca-4822-93b9-f7fb5efcf375/manager/0.log" Dec 06 07:20:09 crc kubenswrapper[4958]: I1206 07:20:09.628177 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-654tb_f3792985-93a5-4b81-8ea2-ca63d1f659d8/kube-rbac-proxy/0.log" Dec 06 07:20:09 crc kubenswrapper[4958]: I1206 07:20:09.855837 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-654tb_f3792985-93a5-4b81-8ea2-ca63d1f659d8/manager/0.log" Dec 06 07:20:09 crc kubenswrapper[4958]: I1206 07:20:09.868306 4958 patch_prober.go:28] interesting pod/machine-config-daemon-5ktnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 07:20:09 crc kubenswrapper[4958]: I1206 07:20:09.868429 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 07:20:09 crc kubenswrapper[4958]: I1206 07:20:09.925220 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-jsjtj_59dc6399-d2b4-437a-9521-4096ed7e924f/operator/0.log" Dec 06 07:20:10 crc kubenswrapper[4958]: I1206 07:20:10.102801 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5f8c65bbfc-nkrvh_930d98f5-bc89-466b-9876-ee5764f146f4/kube-rbac-proxy/0.log" Dec 06 07:20:10 crc kubenswrapper[4958]: I1206 07:20:10.171365 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-76cc84c6bb-ptqzr_e6abb542-4bf4-4edf-b150-de3f6200e4de/kube-rbac-proxy/0.log" Dec 06 07:20:10 crc kubenswrapper[4958]: I1206 07:20:10.180404 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-5f8c65bbfc-nkrvh_930d98f5-bc89-466b-9876-ee5764f146f4/manager/0.log" Dec 06 07:20:10 crc kubenswrapper[4958]: I1206 07:20:10.452298 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-vm7kq_5c330abd-909c-44eb-a7ff-7cb5398fd736/kube-rbac-proxy/0.log" Dec 06 07:20:10 crc kubenswrapper[4958]: I1206 07:20:10.498558 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-vm7kq_5c330abd-909c-44eb-a7ff-7cb5398fd736/manager/0.log" Dec 06 07:20:10 crc kubenswrapper[4958]: I1206 07:20:10.659068 4958 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-76cc84c6bb-ptqzr_e6abb542-4bf4-4edf-b150-de3f6200e4de/manager/0.log" Dec 06 07:20:10 crc kubenswrapper[4958]: I1206 07:20:10.705523 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-769dc69bc-vwbhb_621f367e-fe95-4fb5-9ce7-966981c7b13a/kube-rbac-proxy/0.log" Dec 06 07:20:10 crc kubenswrapper[4958]: I1206 07:20:10.800009 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-769dc69bc-vwbhb_621f367e-fe95-4fb5-9ce7-966981c7b13a/manager/0.log" Dec 06 07:20:10 crc kubenswrapper[4958]: I1206 07:20:10.947629 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-54bdf956c4-dlf7m_7de555d5-964f-4e85-a2b9-5dddf37e097e/manager/0.log" Dec 06 07:20:29 crc kubenswrapper[4958]: I1206 07:20:29.779799 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-c5v8n_81369d65-ae42-43fc-a2e6-dbf61d9a86d7/control-plane-machine-set-operator/0.log" Dec 06 07:20:29 crc kubenswrapper[4958]: I1206 07:20:29.932676 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-jjjhf_46dbd477-d07a-4732-a9e3-08e1d49385c3/kube-rbac-proxy/0.log" Dec 06 07:20:30 crc kubenswrapper[4958]: I1206 07:20:30.005840 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-jjjhf_46dbd477-d07a-4732-a9e3-08e1d49385c3/machine-api-operator/0.log" Dec 06 07:20:39 crc kubenswrapper[4958]: I1206 07:20:39.866624 4958 patch_prober.go:28] interesting pod/machine-config-daemon-5ktnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 07:20:39 crc kubenswrapper[4958]: I1206 07:20:39.867192 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 07:20:39 crc kubenswrapper[4958]: I1206 07:20:39.867240 4958 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" Dec 06 07:20:39 crc kubenswrapper[4958]: I1206 07:20:39.868126 4958 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ee2d7350ad52a52898da3b0522d607a28c91abbc378871215df544c4e674181f"} pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 06 07:20:39 crc kubenswrapper[4958]: I1206 07:20:39.868201 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" containerName="machine-config-daemon" containerID="cri-o://ee2d7350ad52a52898da3b0522d607a28c91abbc378871215df544c4e674181f" gracePeriod=600 Dec 06 07:20:40 crc kubenswrapper[4958]: 
E1206 07:20:40.103870 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 07:20:40 crc kubenswrapper[4958]: I1206 07:20:40.565434 4958 generic.go:334] "Generic (PLEG): container finished" podID="c13528c0-da5d-4d55-9155-2c29c33edfc4" containerID="ee2d7350ad52a52898da3b0522d607a28c91abbc378871215df544c4e674181f" exitCode=0 Dec 06 07:20:40 crc kubenswrapper[4958]: I1206 07:20:40.565525 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" event={"ID":"c13528c0-da5d-4d55-9155-2c29c33edfc4","Type":"ContainerDied","Data":"ee2d7350ad52a52898da3b0522d607a28c91abbc378871215df544c4e674181f"} Dec 06 07:20:40 crc kubenswrapper[4958]: I1206 07:20:40.565847 4958 scope.go:117] "RemoveContainer" containerID="d591a665b5a9d5fddaa24bd18d00c05f3be886a2f587ef12738fc9c016807464" Dec 06 07:20:40 crc kubenswrapper[4958]: I1206 07:20:40.566516 4958 scope.go:117] "RemoveContainer" containerID="ee2d7350ad52a52898da3b0522d607a28c91abbc378871215df544c4e674181f" Dec 06 07:20:40 crc kubenswrapper[4958]: E1206 07:20:40.566927 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 07:20:42 crc kubenswrapper[4958]: I1206 07:20:42.241348 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-d2clt_101b4063-65a7-47e2-8cda-8ed8bd230ae9/cert-manager-controller/0.log" Dec 06 07:20:42 crc kubenswrapper[4958]: I1206 07:20:42.390079 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-gh5kc_8f4fa77f-6eb8-4d40-bd13-e45e924b22b5/cert-manager-cainjector/0.log" Dec 06 07:20:42 crc kubenswrapper[4958]: I1206 07:20:42.431599 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-2wj2z_466b443c-4255-40b9-9e46-1e7e6b1a526b/cert-manager-webhook/0.log" Dec 06 07:20:52 crc kubenswrapper[4958]: I1206 07:20:52.763967 4958 scope.go:117] "RemoveContainer" containerID="ee2d7350ad52a52898da3b0522d607a28c91abbc378871215df544c4e674181f" Dec 06 07:20:52 crc kubenswrapper[4958]: E1206 07:20:52.765131 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 07:20:54 crc kubenswrapper[4958]: I1206 07:20:54.523595 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7fbb5f6569-zt7zw_132a681d-da4e-406e-897f-e8204a0b3061/nmstate-console-plugin/0.log" Dec 06 
07:20:54 crc kubenswrapper[4958]: I1206 07:20:54.680444 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-fhd8b_60563153-eb29-48b1-b275-4a426410c3f2/nmstate-handler/0.log" Dec 06 07:20:54 crc kubenswrapper[4958]: I1206 07:20:54.731367 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-w7z84_eb092a25-46b9-4519-9bfd-a1a75207a121/kube-rbac-proxy/0.log" Dec 06 07:20:54 crc kubenswrapper[4958]: I1206 07:20:54.776791 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-w7z84_eb092a25-46b9-4519-9bfd-a1a75207a121/nmstate-metrics/0.log" Dec 06 07:20:54 crc kubenswrapper[4958]: I1206 07:20:54.909350 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-5b5b58f5c8-pj8z7_ec41d8b3-c4d9-426f-8856-aeab408126a9/nmstate-operator/0.log" Dec 06 07:20:54 crc kubenswrapper[4958]: I1206 07:20:54.996492 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f6d4c5ccb-wm7wx_5a48e60d-cf21-4b2f-bc99-eebce42f8832/nmstate-webhook/0.log" Dec 06 07:21:07 crc kubenswrapper[4958]: I1206 07:21:07.762267 4958 scope.go:117] "RemoveContainer" containerID="ee2d7350ad52a52898da3b0522d607a28c91abbc378871215df544c4e674181f" Dec 06 07:21:07 crc kubenswrapper[4958]: E1206 07:21:07.763277 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 07:21:12 crc kubenswrapper[4958]: I1206 07:21:12.827103 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-55kcs_09419a79-968a-4da5-8ab8-f9abb9508ac5/kube-rbac-proxy/0.log" Dec 06 07:21:12 crc kubenswrapper[4958]: I1206 07:21:12.950230 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-55kcs_09419a79-968a-4da5-8ab8-f9abb9508ac5/controller/0.log" Dec 06 07:21:13 crc kubenswrapper[4958]: I1206 07:21:13.114038 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pvdb2_f44e552e-a8cb-4abf-bb5c-cfbde43b518b/cp-frr-files/0.log" Dec 06 07:21:13 crc kubenswrapper[4958]: I1206 07:21:13.338589 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pvdb2_f44e552e-a8cb-4abf-bb5c-cfbde43b518b/cp-reloader/0.log" Dec 06 07:21:13 crc kubenswrapper[4958]: I1206 07:21:13.404329 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pvdb2_f44e552e-a8cb-4abf-bb5c-cfbde43b518b/cp-frr-files/0.log" Dec 06 07:21:13 crc kubenswrapper[4958]: I1206 07:21:13.419298 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pvdb2_f44e552e-a8cb-4abf-bb5c-cfbde43b518b/cp-metrics/0.log" Dec 06 07:21:13 crc kubenswrapper[4958]: I1206 07:21:13.468361 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pvdb2_f44e552e-a8cb-4abf-bb5c-cfbde43b518b/cp-reloader/0.log" Dec 06 07:21:13 crc kubenswrapper[4958]: I1206 07:21:13.727727 4958 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-pvdb2_f44e552e-a8cb-4abf-bb5c-cfbde43b518b/cp-metrics/0.log" Dec 06 07:21:13 crc kubenswrapper[4958]: I1206 07:21:13.756715 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pvdb2_f44e552e-a8cb-4abf-bb5c-cfbde43b518b/cp-metrics/0.log" Dec 06 07:21:13 crc kubenswrapper[4958]: I1206 07:21:13.756853 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pvdb2_f44e552e-a8cb-4abf-bb5c-cfbde43b518b/cp-frr-files/0.log" Dec 06 07:21:13 crc kubenswrapper[4958]: I1206 07:21:13.784198 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pvdb2_f44e552e-a8cb-4abf-bb5c-cfbde43b518b/cp-reloader/0.log" Dec 06 07:21:13 crc kubenswrapper[4958]: I1206 07:21:13.982833 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pvdb2_f44e552e-a8cb-4abf-bb5c-cfbde43b518b/cp-metrics/0.log" Dec 06 07:21:13 crc kubenswrapper[4958]: I1206 07:21:13.993428 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pvdb2_f44e552e-a8cb-4abf-bb5c-cfbde43b518b/controller/0.log" Dec 06 07:21:14 crc kubenswrapper[4958]: I1206 07:21:14.016277 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pvdb2_f44e552e-a8cb-4abf-bb5c-cfbde43b518b/cp-frr-files/0.log" Dec 06 07:21:14 crc kubenswrapper[4958]: I1206 07:21:14.052612 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pvdb2_f44e552e-a8cb-4abf-bb5c-cfbde43b518b/cp-reloader/0.log" Dec 06 07:21:14 crc kubenswrapper[4958]: I1206 07:21:14.216993 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pvdb2_f44e552e-a8cb-4abf-bb5c-cfbde43b518b/kube-rbac-proxy/0.log" Dec 06 07:21:14 crc kubenswrapper[4958]: I1206 07:21:14.361781 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pvdb2_f44e552e-a8cb-4abf-bb5c-cfbde43b518b/frr-metrics/0.log" Dec 06 07:21:14 crc kubenswrapper[4958]: I1206 07:21:14.442492 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pvdb2_f44e552e-a8cb-4abf-bb5c-cfbde43b518b/kube-rbac-proxy-frr/0.log" Dec 06 07:21:14 crc kubenswrapper[4958]: I1206 07:21:14.728028 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-pvdb2_f44e552e-a8cb-4abf-bb5c-cfbde43b518b/reloader/0.log" Dec 06 07:21:14 crc kubenswrapper[4958]: I1206 07:21:14.773011 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7fcb986d4-9gmd4_450ae51d-8948-4c22-8336-efd2ed9f71d3/frr-k8s-webhook-server/0.log" Dec 06 07:21:15 crc kubenswrapper[4958]: I1206 07:21:15.039430 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-7f7f549dcb-v626m_c9785861-cb8c-4a0b-82b3-a7425c228197/manager/0.log" Dec 06 07:21:15 crc kubenswrapper[4958]: I1206 07:21:15.150996 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-7db7d4c645-s4nl5_5191ef31-713e-4f5d-9407-0f1d1f0ed462/webhook-server/0.log" Dec 06 07:21:15 crc kubenswrapper[4958]: I1206 07:21:15.396076 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-4ksm2_b3e2fd62-29f5-4627-91c0-581a8645568b/kube-rbac-proxy/0.log" Dec 06 07:21:15 crc kubenswrapper[4958]: I1206 07:21:15.979446 4958 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-pvdb2_f44e552e-a8cb-4abf-bb5c-cfbde43b518b/frr/0.log" Dec 06 07:21:16 crc kubenswrapper[4958]: I1206 07:21:16.022171 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-4ksm2_b3e2fd62-29f5-4627-91c0-581a8645568b/speaker/0.log" Dec 06 07:21:22 crc kubenswrapper[4958]: I1206 07:21:22.763280 4958 scope.go:117] "RemoveContainer" containerID="ee2d7350ad52a52898da3b0522d607a28c91abbc378871215df544c4e674181f" Dec 06 07:21:22 crc kubenswrapper[4958]: E1206 07:21:22.764435 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 07:21:29 crc kubenswrapper[4958]: I1206 07:21:29.313618 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fvxf99_415f7755-d8b2-4eea-a307-03e0b7ca4d95/util/0.log" Dec 06 07:21:29 crc kubenswrapper[4958]: I1206 07:21:29.499352 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fvxf99_415f7755-d8b2-4eea-a307-03e0b7ca4d95/util/0.log" Dec 06 07:21:30 crc kubenswrapper[4958]: I1206 07:21:30.587678 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fvxf99_415f7755-d8b2-4eea-a307-03e0b7ca4d95/pull/0.log" Dec 06 07:21:30 crc kubenswrapper[4958]: I1206 07:21:30.588482 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fvxf99_415f7755-d8b2-4eea-a307-03e0b7ca4d95/pull/0.log" Dec 06 07:21:30 crc kubenswrapper[4958]: I1206 07:21:30.588501 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fvxf99_415f7755-d8b2-4eea-a307-03e0b7ca4d95/pull/0.log" Dec 06 07:21:30 crc kubenswrapper[4958]: I1206 07:21:30.744054 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fvxf99_415f7755-d8b2-4eea-a307-03e0b7ca4d95/util/0.log" Dec 06 07:21:30 crc kubenswrapper[4958]: I1206 07:21:30.758329 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fvxf99_415f7755-d8b2-4eea-a307-03e0b7ca4d95/extract/0.log" Dec 06 07:21:30 crc kubenswrapper[4958]: I1206 07:21:30.849122 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2kjt_0fba650c-0f74-49d3-baa8-56f79c241413/util/0.log" Dec 06 07:21:31 crc kubenswrapper[4958]: I1206 07:21:31.000446 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2kjt_0fba650c-0f74-49d3-baa8-56f79c241413/util/0.log" Dec 06 07:21:31 crc kubenswrapper[4958]: I1206 07:21:31.088873 4958 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2kjt_0fba650c-0f74-49d3-baa8-56f79c241413/pull/0.log" Dec 06 07:21:31 crc kubenswrapper[4958]: I1206 07:21:31.101903 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2kjt_0fba650c-0f74-49d3-baa8-56f79c241413/pull/0.log" Dec 06 07:21:31 crc kubenswrapper[4958]: I1206 07:21:31.475464 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2kjt_0fba650c-0f74-49d3-baa8-56f79c241413/pull/0.log" Dec 06 07:21:31 crc kubenswrapper[4958]: I1206 07:21:31.484136 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2kjt_0fba650c-0f74-49d3-baa8-56f79c241413/util/0.log" Dec 06 07:21:31 crc kubenswrapper[4958]: I1206 07:21:31.548383 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83c2kjt_0fba650c-0f74-49d3-baa8-56f79c241413/extract/0.log" Dec 06 07:21:31 crc kubenswrapper[4958]: I1206 07:21:31.736330 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fcnxn_3d0b5996-766b-4ced-9981-d56da88885bc/extract-utilities/0.log" Dec 06 07:21:31 crc kubenswrapper[4958]: I1206 07:21:31.897635 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fcnxn_3d0b5996-766b-4ced-9981-d56da88885bc/extract-content/0.log" Dec 06 07:21:31 crc kubenswrapper[4958]: I1206 07:21:31.915564 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fcnxn_3d0b5996-766b-4ced-9981-d56da88885bc/extract-content/0.log" Dec 06 07:21:31 crc kubenswrapper[4958]: I1206 07:21:31.961966 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fcnxn_3d0b5996-766b-4ced-9981-d56da88885bc/extract-utilities/0.log" Dec 06 07:21:32 crc kubenswrapper[4958]: I1206 07:21:32.102268 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fcnxn_3d0b5996-766b-4ced-9981-d56da88885bc/extract-utilities/0.log" Dec 06 07:21:32 crc kubenswrapper[4958]: I1206 07:21:32.188988 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fcnxn_3d0b5996-766b-4ced-9981-d56da88885bc/extract-content/0.log" Dec 06 07:21:32 crc kubenswrapper[4958]: I1206 07:21:32.322014 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9s9jv_1865345c-50a3-47fe-90b6-ee8e165c2391/extract-utilities/0.log" Dec 06 07:21:32 crc kubenswrapper[4958]: I1206 07:21:32.593798 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9s9jv_1865345c-50a3-47fe-90b6-ee8e165c2391/extract-content/0.log" Dec 06 07:21:32 crc kubenswrapper[4958]: I1206 07:21:32.646515 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9s9jv_1865345c-50a3-47fe-90b6-ee8e165c2391/extract-content/0.log" Dec 06 07:21:32 crc kubenswrapper[4958]: I1206 07:21:32.673953 4958 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-9s9jv_1865345c-50a3-47fe-90b6-ee8e165c2391/extract-utilities/0.log" Dec 06 07:21:32 crc kubenswrapper[4958]: I1206 07:21:32.913799 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9s9jv_1865345c-50a3-47fe-90b6-ee8e165c2391/extract-utilities/0.log" Dec 06 07:21:32 crc kubenswrapper[4958]: I1206 07:21:32.918004 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9s9jv_1865345c-50a3-47fe-90b6-ee8e165c2391/extract-content/0.log" Dec 06 07:21:33 crc kubenswrapper[4958]: I1206 07:21:33.209063 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-hjnl5_fa140104-7b6d-4d2b-a5b8-fba1696d1a94/marketplace-operator/0.log" Dec 06 07:21:33 crc kubenswrapper[4958]: I1206 07:21:33.376712 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6d5bb_c879cca7-40bf-4150-bed9-df7dabb7e037/extract-utilities/0.log" Dec 06 07:21:33 crc kubenswrapper[4958]: I1206 07:21:33.625967 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-fcnxn_3d0b5996-766b-4ced-9981-d56da88885bc/registry-server/0.log" Dec 06 07:21:33 crc kubenswrapper[4958]: I1206 07:21:33.704449 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6d5bb_c879cca7-40bf-4150-bed9-df7dabb7e037/extract-content/0.log" Dec 06 07:21:33 crc kubenswrapper[4958]: I1206 07:21:33.751986 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6d5bb_c879cca7-40bf-4150-bed9-df7dabb7e037/extract-content/0.log" Dec 06 07:21:33 crc kubenswrapper[4958]: I1206 07:21:33.752037 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6d5bb_c879cca7-40bf-4150-bed9-df7dabb7e037/extract-utilities/0.log" Dec 06 07:21:34 crc kubenswrapper[4958]: I1206 07:21:34.089731 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6d5bb_c879cca7-40bf-4150-bed9-df7dabb7e037/extract-content/0.log" Dec 06 07:21:34 crc kubenswrapper[4958]: I1206 07:21:34.093575 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6d5bb_c879cca7-40bf-4150-bed9-df7dabb7e037/extract-utilities/0.log" Dec 06 07:21:34 crc kubenswrapper[4958]: I1206 07:21:34.118169 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-9s9jv_1865345c-50a3-47fe-90b6-ee8e165c2391/registry-server/0.log" Dec 06 07:21:34 crc kubenswrapper[4958]: I1206 07:21:34.307439 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-6d5bb_c879cca7-40bf-4150-bed9-df7dabb7e037/registry-server/0.log" Dec 06 07:21:34 crc kubenswrapper[4958]: I1206 07:21:34.366943 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9hcvw_e580e362-f544-46ac-9b5f-0bb097d87a41/extract-utilities/0.log" Dec 06 07:21:34 crc kubenswrapper[4958]: I1206 07:21:34.574577 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9hcvw_e580e362-f544-46ac-9b5f-0bb097d87a41/extract-utilities/0.log" Dec 06 07:21:34 crc kubenswrapper[4958]: I1206 07:21:34.580507 4958 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-9hcvw_e580e362-f544-46ac-9b5f-0bb097d87a41/extract-content/0.log" Dec 06 07:21:34 crc kubenswrapper[4958]: I1206 07:21:34.597833 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9hcvw_e580e362-f544-46ac-9b5f-0bb097d87a41/extract-content/0.log" Dec 06 07:21:34 crc kubenswrapper[4958]: I1206 07:21:34.721140 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9hcvw_e580e362-f544-46ac-9b5f-0bb097d87a41/extract-utilities/0.log" Dec 06 07:21:34 crc kubenswrapper[4958]: I1206 07:21:34.770131 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9hcvw_e580e362-f544-46ac-9b5f-0bb097d87a41/extract-content/0.log" Dec 06 07:21:35 crc kubenswrapper[4958]: I1206 07:21:35.619725 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-9hcvw_e580e362-f544-46ac-9b5f-0bb097d87a41/registry-server/0.log" Dec 06 07:21:37 crc kubenswrapper[4958]: I1206 07:21:37.761764 4958 scope.go:117] "RemoveContainer" containerID="ee2d7350ad52a52898da3b0522d607a28c91abbc378871215df544c4e674181f" Dec 06 07:21:37 crc kubenswrapper[4958]: E1206 07:21:37.762390 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 07:21:48 crc kubenswrapper[4958]: I1206 07:21:48.618662 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-668cf9dfbb-5kc79_102c7aef-7a9f-4838-817d-a410a9e1cea1/prometheus-operator/0.log" Dec 06 07:21:48 crc kubenswrapper[4958]: I1206 07:21:48.762295 4958 scope.go:117] "RemoveContainer" containerID="ee2d7350ad52a52898da3b0522d607a28c91abbc378871215df544c4e674181f" Dec 06 07:21:48 crc kubenswrapper[4958]: E1206 07:21:48.762630 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 07:21:48 crc kubenswrapper[4958]: I1206 07:21:48.818711 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6fbb8c6476-6b4st_9bdcc25c-5699-422e-9d03-b00a80ec8efa/prometheus-operator-admission-webhook/0.log" Dec 06 07:21:48 crc kubenswrapper[4958]: I1206 07:21:48.900729 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6fbb8c6476-hlzpg_12946b50-3866-4458-a26a-23987fdc0c1c/prometheus-operator-admission-webhook/0.log" Dec 06 07:21:49 crc kubenswrapper[4958]: I1206 07:21:49.055717 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-d8bb48f5d-g69b7_c280fe1d-3450-44ce-91c6-690601d34e98/operator/0.log" Dec 06 07:21:49 crc kubenswrapper[4958]: I1206 07:21:49.167793 4958 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5446b9c989-qgdwb_a5825ccb-1463-4fc4-87c9-504ef6195da6/perses-operator/0.log" Dec 06 07:22:01 crc kubenswrapper[4958]: I1206 07:22:01.762716 4958 scope.go:117] "RemoveContainer" containerID="ee2d7350ad52a52898da3b0522d607a28c91abbc378871215df544c4e674181f" Dec 06 07:22:01 crc kubenswrapper[4958]: E1206 07:22:01.763655 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 07:22:13 crc kubenswrapper[4958]: I1206 07:22:13.761920 4958 scope.go:117] "RemoveContainer" containerID="ee2d7350ad52a52898da3b0522d607a28c91abbc378871215df544c4e674181f" Dec 06 07:22:13 crc kubenswrapper[4958]: E1206 07:22:13.764148 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 07:22:27 crc kubenswrapper[4958]: I1206 07:22:27.765212 4958 scope.go:117] "RemoveContainer" containerID="ee2d7350ad52a52898da3b0522d607a28c91abbc378871215df544c4e674181f" Dec 06 07:22:27 crc kubenswrapper[4958]: E1206 07:22:27.766852 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 07:22:34 crc kubenswrapper[4958]: I1206 07:22:34.040676 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-55c85496f5nnvrk" podUID="c6c9cc05-d00f-4f92-bd7e-13737952085b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.91:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 06 07:22:42 crc kubenswrapper[4958]: I1206 07:22:42.762578 4958 scope.go:117] "RemoveContainer" containerID="ee2d7350ad52a52898da3b0522d607a28c91abbc378871215df544c4e674181f" Dec 06 07:22:42 crc kubenswrapper[4958]: E1206 07:22:42.763462 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 07:22:55 crc kubenswrapper[4958]: I1206 07:22:55.768383 4958 scope.go:117] "RemoveContainer" containerID="ee2d7350ad52a52898da3b0522d607a28c91abbc378871215df544c4e674181f" Dec 06 07:22:55 crc kubenswrapper[4958]: E1206 07:22:55.769300 4958 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 07:23:06 crc kubenswrapper[4958]: I1206 07:23:06.762302 4958 scope.go:117] "RemoveContainer" containerID="ee2d7350ad52a52898da3b0522d607a28c91abbc378871215df544c4e674181f" Dec 06 07:23:06 crc kubenswrapper[4958]: E1206 07:23:06.763222 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 07:23:21 crc kubenswrapper[4958]: I1206 07:23:21.766713 4958 scope.go:117] "RemoveContainer" containerID="ee2d7350ad52a52898da3b0522d607a28c91abbc378871215df544c4e674181f" Dec 06 07:23:21 crc kubenswrapper[4958]: E1206 07:23:21.767644 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 07:23:25 crc kubenswrapper[4958]: I1206 07:23:25.727122 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-d4dnr"] Dec 06 07:23:25 crc kubenswrapper[4958]: E1206 07:23:25.728230 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc62934b-9b31-44b1-b9fb-d6b9be412ec9" containerName="container-00" Dec 06 07:23:25 crc kubenswrapper[4958]: I1206 07:23:25.728248 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc62934b-9b31-44b1-b9fb-d6b9be412ec9" containerName="container-00" Dec 06 07:23:25 crc kubenswrapper[4958]: I1206 07:23:25.728488 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc62934b-9b31-44b1-b9fb-d6b9be412ec9" containerName="container-00" Dec 06 07:23:25 crc kubenswrapper[4958]: I1206 07:23:25.730229 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-d4dnr" Dec 06 07:23:25 crc kubenswrapper[4958]: I1206 07:23:25.758528 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-d4dnr"] Dec 06 07:23:25 crc kubenswrapper[4958]: I1206 07:23:25.811798 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56342cdd-6cb6-4916-aa5d-0c31d56877c1-utilities\") pod \"redhat-operators-d4dnr\" (UID: \"56342cdd-6cb6-4916-aa5d-0c31d56877c1\") " pod="openshift-marketplace/redhat-operators-d4dnr" Dec 06 07:23:25 crc kubenswrapper[4958]: I1206 07:23:25.811978 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56342cdd-6cb6-4916-aa5d-0c31d56877c1-catalog-content\") pod \"redhat-operators-d4dnr\" (UID: \"56342cdd-6cb6-4916-aa5d-0c31d56877c1\") " pod="openshift-marketplace/redhat-operators-d4dnr" Dec 06 07:23:25 crc kubenswrapper[4958]: I1206 07:23:25.812009 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktjzq\" (UniqueName: \"kubernetes.io/projected/56342cdd-6cb6-4916-aa5d-0c31d56877c1-kube-api-access-ktjzq\") pod \"redhat-operators-d4dnr\" (UID: \"56342cdd-6cb6-4916-aa5d-0c31d56877c1\") " pod="openshift-marketplace/redhat-operators-d4dnr" Dec 06 07:23:25 crc kubenswrapper[4958]: I1206 07:23:25.913918 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56342cdd-6cb6-4916-aa5d-0c31d56877c1-utilities\") pod \"redhat-operators-d4dnr\" (UID: \"56342cdd-6cb6-4916-aa5d-0c31d56877c1\") " pod="openshift-marketplace/redhat-operators-d4dnr" Dec 06 07:23:25 crc kubenswrapper[4958]: I1206 07:23:25.913996 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56342cdd-6cb6-4916-aa5d-0c31d56877c1-catalog-content\") pod \"redhat-operators-d4dnr\" (UID: \"56342cdd-6cb6-4916-aa5d-0c31d56877c1\") " pod="openshift-marketplace/redhat-operators-d4dnr" Dec 06 07:23:25 crc kubenswrapper[4958]: I1206 07:23:25.914018 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktjzq\" (UniqueName: \"kubernetes.io/projected/56342cdd-6cb6-4916-aa5d-0c31d56877c1-kube-api-access-ktjzq\") pod \"redhat-operators-d4dnr\" (UID: \"56342cdd-6cb6-4916-aa5d-0c31d56877c1\") " pod="openshift-marketplace/redhat-operators-d4dnr" Dec 06 07:23:25 crc kubenswrapper[4958]: I1206 07:23:25.916157 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56342cdd-6cb6-4916-aa5d-0c31d56877c1-utilities\") pod \"redhat-operators-d4dnr\" (UID: \"56342cdd-6cb6-4916-aa5d-0c31d56877c1\") " pod="openshift-marketplace/redhat-operators-d4dnr" Dec 06 07:23:25 crc kubenswrapper[4958]: I1206 07:23:25.918827 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56342cdd-6cb6-4916-aa5d-0c31d56877c1-catalog-content\") pod \"redhat-operators-d4dnr\" (UID: \"56342cdd-6cb6-4916-aa5d-0c31d56877c1\") " pod="openshift-marketplace/redhat-operators-d4dnr" Dec 06 07:23:25 crc kubenswrapper[4958]: I1206 07:23:25.932228 4958 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/community-operators-pfbzg"] Dec 06 07:23:25 crc kubenswrapper[4958]: I1206 07:23:25.934869 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pfbzg" Dec 06 07:23:25 crc kubenswrapper[4958]: I1206 07:23:25.940340 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pfbzg"] Dec 06 07:23:25 crc kubenswrapper[4958]: I1206 07:23:25.975405 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktjzq\" (UniqueName: \"kubernetes.io/projected/56342cdd-6cb6-4916-aa5d-0c31d56877c1-kube-api-access-ktjzq\") pod \"redhat-operators-d4dnr\" (UID: \"56342cdd-6cb6-4916-aa5d-0c31d56877c1\") " pod="openshift-marketplace/redhat-operators-d4dnr" Dec 06 07:23:26 crc kubenswrapper[4958]: I1206 07:23:26.016308 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2958cee-aa6d-4e23-950f-800e7b50f742-utilities\") pod \"community-operators-pfbzg\" (UID: \"c2958cee-aa6d-4e23-950f-800e7b50f742\") " pod="openshift-marketplace/community-operators-pfbzg" Dec 06 07:23:26 crc kubenswrapper[4958]: I1206 07:23:26.016373 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2958cee-aa6d-4e23-950f-800e7b50f742-catalog-content\") pod \"community-operators-pfbzg\" (UID: \"c2958cee-aa6d-4e23-950f-800e7b50f742\") " pod="openshift-marketplace/community-operators-pfbzg" Dec 06 07:23:26 crc kubenswrapper[4958]: I1206 07:23:26.016643 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsr6k\" (UniqueName: \"kubernetes.io/projected/c2958cee-aa6d-4e23-950f-800e7b50f742-kube-api-access-wsr6k\") pod \"community-operators-pfbzg\" (UID: \"c2958cee-aa6d-4e23-950f-800e7b50f742\") " pod="openshift-marketplace/community-operators-pfbzg" Dec 06 07:23:26 crc kubenswrapper[4958]: I1206 07:23:26.070446 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-d4dnr" Dec 06 07:23:26 crc kubenswrapper[4958]: I1206 07:23:26.118957 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2958cee-aa6d-4e23-950f-800e7b50f742-utilities\") pod \"community-operators-pfbzg\" (UID: \"c2958cee-aa6d-4e23-950f-800e7b50f742\") " pod="openshift-marketplace/community-operators-pfbzg" Dec 06 07:23:26 crc kubenswrapper[4958]: I1206 07:23:26.119011 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2958cee-aa6d-4e23-950f-800e7b50f742-catalog-content\") pod \"community-operators-pfbzg\" (UID: \"c2958cee-aa6d-4e23-950f-800e7b50f742\") " pod="openshift-marketplace/community-operators-pfbzg" Dec 06 07:23:26 crc kubenswrapper[4958]: I1206 07:23:26.119057 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wsr6k\" (UniqueName: \"kubernetes.io/projected/c2958cee-aa6d-4e23-950f-800e7b50f742-kube-api-access-wsr6k\") pod \"community-operators-pfbzg\" (UID: \"c2958cee-aa6d-4e23-950f-800e7b50f742\") " pod="openshift-marketplace/community-operators-pfbzg" Dec 06 07:23:26 crc kubenswrapper[4958]: I1206 07:23:26.119609 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2958cee-aa6d-4e23-950f-800e7b50f742-catalog-content\") pod \"community-operators-pfbzg\" (UID: \"c2958cee-aa6d-4e23-950f-800e7b50f742\") " pod="openshift-marketplace/community-operators-pfbzg" Dec 06 07:23:26 crc kubenswrapper[4958]: I1206 07:23:26.119837 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2958cee-aa6d-4e23-950f-800e7b50f742-utilities\") pod \"community-operators-pfbzg\" (UID: \"c2958cee-aa6d-4e23-950f-800e7b50f742\") " pod="openshift-marketplace/community-operators-pfbzg" Dec 06 07:23:26 crc kubenswrapper[4958]: I1206 07:23:26.137713 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wsr6k\" (UniqueName: \"kubernetes.io/projected/c2958cee-aa6d-4e23-950f-800e7b50f742-kube-api-access-wsr6k\") pod \"community-operators-pfbzg\" (UID: \"c2958cee-aa6d-4e23-950f-800e7b50f742\") " pod="openshift-marketplace/community-operators-pfbzg" Dec 06 07:23:26 crc kubenswrapper[4958]: I1206 07:23:26.258648 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pfbzg" Dec 06 07:23:26 crc kubenswrapper[4958]: I1206 07:23:26.748893 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-d4dnr"] Dec 06 07:23:26 crc kubenswrapper[4958]: W1206 07:23:26.753018 4958 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod56342cdd_6cb6_4916_aa5d_0c31d56877c1.slice/crio-c288b48794161f9bef0f3ae0b1f4000e424667a52f8aad80d5c5e2cc1a7d8c20 WatchSource:0}: Error finding container c288b48794161f9bef0f3ae0b1f4000e424667a52f8aad80d5c5e2cc1a7d8c20: Status 404 returned error can't find the container with id c288b48794161f9bef0f3ae0b1f4000e424667a52f8aad80d5c5e2cc1a7d8c20 Dec 06 07:23:26 crc kubenswrapper[4958]: I1206 07:23:26.940858 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pfbzg"] Dec 06 07:23:27 crc kubenswrapper[4958]: I1206 07:23:27.309648 4958 generic.go:334] "Generic (PLEG): container finished" podID="c2958cee-aa6d-4e23-950f-800e7b50f742" containerID="02e8afbcc53e2695bea6099de198737cf09c74c0843083bf4be517f54e84a279" exitCode=0 Dec 06 07:23:27 crc kubenswrapper[4958]: I1206 07:23:27.309754 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pfbzg" event={"ID":"c2958cee-aa6d-4e23-950f-800e7b50f742","Type":"ContainerDied","Data":"02e8afbcc53e2695bea6099de198737cf09c74c0843083bf4be517f54e84a279"} Dec 06 07:23:27 crc kubenswrapper[4958]: I1206 07:23:27.309799 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pfbzg" event={"ID":"c2958cee-aa6d-4e23-950f-800e7b50f742","Type":"ContainerStarted","Data":"b05eb6dd50c99baebf5a57c6ed52c6d53ed1acd72f58cd567288aea94413497b"} Dec 06 07:23:27 crc kubenswrapper[4958]: I1206 07:23:27.311676 4958 generic.go:334] "Generic (PLEG): container finished" podID="56342cdd-6cb6-4916-aa5d-0c31d56877c1" containerID="ab38ae76effd953a0444284aa2b22775b756f387236189fa7fd487aaf5627bb7" exitCode=0 Dec 06 07:23:27 crc kubenswrapper[4958]: I1206 07:23:27.311805 4958 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 06 07:23:27 crc kubenswrapper[4958]: I1206 07:23:27.311718 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d4dnr" event={"ID":"56342cdd-6cb6-4916-aa5d-0c31d56877c1","Type":"ContainerDied","Data":"ab38ae76effd953a0444284aa2b22775b756f387236189fa7fd487aaf5627bb7"} Dec 06 07:23:27 crc kubenswrapper[4958]: I1206 07:23:27.312078 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d4dnr" event={"ID":"56342cdd-6cb6-4916-aa5d-0c31d56877c1","Type":"ContainerStarted","Data":"c288b48794161f9bef0f3ae0b1f4000e424667a52f8aad80d5c5e2cc1a7d8c20"} Dec 06 07:23:30 crc kubenswrapper[4958]: I1206 07:23:30.345789 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d4dnr" event={"ID":"56342cdd-6cb6-4916-aa5d-0c31d56877c1","Type":"ContainerStarted","Data":"dfa3725c8908bd0438263b4e853eb36a997641a198b7fb96cca4483bd1e3f2a6"} Dec 06 07:23:30 crc kubenswrapper[4958]: I1206 07:23:30.349546 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pfbzg" 
event={"ID":"c2958cee-aa6d-4e23-950f-800e7b50f742","Type":"ContainerStarted","Data":"63fe3ebd0ab0ec1ee353e42e66bc8a8db396c9f540c098d7594facd650e55e4c"} Dec 06 07:23:31 crc kubenswrapper[4958]: I1206 07:23:31.359718 4958 generic.go:334] "Generic (PLEG): container finished" podID="56342cdd-6cb6-4916-aa5d-0c31d56877c1" containerID="dfa3725c8908bd0438263b4e853eb36a997641a198b7fb96cca4483bd1e3f2a6" exitCode=0 Dec 06 07:23:31 crc kubenswrapper[4958]: I1206 07:23:31.359746 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d4dnr" event={"ID":"56342cdd-6cb6-4916-aa5d-0c31d56877c1","Type":"ContainerDied","Data":"dfa3725c8908bd0438263b4e853eb36a997641a198b7fb96cca4483bd1e3f2a6"} Dec 06 07:23:31 crc kubenswrapper[4958]: I1206 07:23:31.363464 4958 generic.go:334] "Generic (PLEG): container finished" podID="c2958cee-aa6d-4e23-950f-800e7b50f742" containerID="63fe3ebd0ab0ec1ee353e42e66bc8a8db396c9f540c098d7594facd650e55e4c" exitCode=0 Dec 06 07:23:31 crc kubenswrapper[4958]: I1206 07:23:31.363506 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pfbzg" event={"ID":"c2958cee-aa6d-4e23-950f-800e7b50f742","Type":"ContainerDied","Data":"63fe3ebd0ab0ec1ee353e42e66bc8a8db396c9f540c098d7594facd650e55e4c"} Dec 06 07:23:36 crc kubenswrapper[4958]: I1206 07:23:36.762323 4958 scope.go:117] "RemoveContainer" containerID="ee2d7350ad52a52898da3b0522d607a28c91abbc378871215df544c4e674181f" Dec 06 07:23:36 crc kubenswrapper[4958]: E1206 07:23:36.763096 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 07:23:39 crc kubenswrapper[4958]: I1206 07:23:39.450112 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pfbzg" event={"ID":"c2958cee-aa6d-4e23-950f-800e7b50f742","Type":"ContainerStarted","Data":"cebbcc29a3ca0ab14cd04d7a3f1ac192b2982e06064b46c5e1a4b3f3bcf8bfb8"} Dec 06 07:23:41 crc kubenswrapper[4958]: I1206 07:23:41.478733 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d4dnr" event={"ID":"56342cdd-6cb6-4916-aa5d-0c31d56877c1","Type":"ContainerStarted","Data":"54d8bf1c97c96838a09a9de0d6482cbcd5b5f322bb13d6ddbe7fa49968a129cf"} Dec 06 07:23:41 crc kubenswrapper[4958]: I1206 07:23:41.531504 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-pfbzg" podStartSLOduration=9.381671395 podStartE2EDuration="16.531448535s" podCreationTimestamp="2025-12-06 07:23:25 +0000 UTC" firstStartedPulling="2025-12-06 07:23:27.311514475 +0000 UTC m=+6917.845285238" lastFinishedPulling="2025-12-06 07:23:34.461291615 +0000 UTC m=+6924.995062378" observedRunningTime="2025-12-06 07:23:41.501372044 +0000 UTC m=+6932.035142817" watchObservedRunningTime="2025-12-06 07:23:41.531448535 +0000 UTC m=+6932.065219298" Dec 06 07:23:42 crc kubenswrapper[4958]: I1206 07:23:42.541336 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-d4dnr" podStartSLOduration=10.249797305 podStartE2EDuration="17.541315731s" podCreationTimestamp="2025-12-06 07:23:25 
+0000 UTC" firstStartedPulling="2025-12-06 07:23:27.313224501 +0000 UTC m=+6917.846995254" lastFinishedPulling="2025-12-06 07:23:34.604742917 +0000 UTC m=+6925.138513680" observedRunningTime="2025-12-06 07:23:42.536105341 +0000 UTC m=+6933.069876104" watchObservedRunningTime="2025-12-06 07:23:42.541315731 +0000 UTC m=+6933.075086494" Dec 06 07:23:46 crc kubenswrapper[4958]: I1206 07:23:46.072628 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-d4dnr" Dec 06 07:23:46 crc kubenswrapper[4958]: I1206 07:23:46.072977 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-d4dnr" Dec 06 07:23:46 crc kubenswrapper[4958]: I1206 07:23:46.258787 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-pfbzg" Dec 06 07:23:46 crc kubenswrapper[4958]: I1206 07:23:46.258846 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-pfbzg" Dec 06 07:23:46 crc kubenswrapper[4958]: I1206 07:23:46.305677 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-pfbzg" Dec 06 07:23:46 crc kubenswrapper[4958]: I1206 07:23:46.588191 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-pfbzg" Dec 06 07:23:46 crc kubenswrapper[4958]: I1206 07:23:46.737253 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pfbzg"] Dec 06 07:23:47 crc kubenswrapper[4958]: I1206 07:23:47.126670 4958 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-d4dnr" podUID="56342cdd-6cb6-4916-aa5d-0c31d56877c1" containerName="registry-server" probeResult="failure" output=< Dec 06 07:23:47 crc kubenswrapper[4958]: timeout: failed to connect service ":50051" within 1s Dec 06 07:23:47 crc kubenswrapper[4958]: > Dec 06 07:23:48 crc kubenswrapper[4958]: I1206 07:23:48.542661 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-pfbzg" podUID="c2958cee-aa6d-4e23-950f-800e7b50f742" containerName="registry-server" containerID="cri-o://cebbcc29a3ca0ab14cd04d7a3f1ac192b2982e06064b46c5e1a4b3f3bcf8bfb8" gracePeriod=2 Dec 06 07:23:49 crc kubenswrapper[4958]: I1206 07:23:49.772218 4958 scope.go:117] "RemoveContainer" containerID="ee2d7350ad52a52898da3b0522d607a28c91abbc378871215df544c4e674181f" Dec 06 07:23:49 crc kubenswrapper[4958]: E1206 07:23:49.772893 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 07:23:50 crc kubenswrapper[4958]: I1206 07:23:50.492318 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pfbzg" Dec 06 07:23:50 crc kubenswrapper[4958]: I1206 07:23:50.562708 4958 generic.go:334] "Generic (PLEG): container finished" podID="c2958cee-aa6d-4e23-950f-800e7b50f742" containerID="cebbcc29a3ca0ab14cd04d7a3f1ac192b2982e06064b46c5e1a4b3f3bcf8bfb8" exitCode=0 Dec 06 07:23:50 crc kubenswrapper[4958]: I1206 07:23:50.562750 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pfbzg" event={"ID":"c2958cee-aa6d-4e23-950f-800e7b50f742","Type":"ContainerDied","Data":"cebbcc29a3ca0ab14cd04d7a3f1ac192b2982e06064b46c5e1a4b3f3bcf8bfb8"} Dec 06 07:23:50 crc kubenswrapper[4958]: I1206 07:23:50.562778 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pfbzg" event={"ID":"c2958cee-aa6d-4e23-950f-800e7b50f742","Type":"ContainerDied","Data":"b05eb6dd50c99baebf5a57c6ed52c6d53ed1acd72f58cd567288aea94413497b"} Dec 06 07:23:50 crc kubenswrapper[4958]: I1206 07:23:50.562794 4958 scope.go:117] "RemoveContainer" containerID="cebbcc29a3ca0ab14cd04d7a3f1ac192b2982e06064b46c5e1a4b3f3bcf8bfb8" Dec 06 07:23:50 crc kubenswrapper[4958]: I1206 07:23:50.562921 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pfbzg" Dec 06 07:23:50 crc kubenswrapper[4958]: I1206 07:23:50.586768 4958 scope.go:117] "RemoveContainer" containerID="63fe3ebd0ab0ec1ee353e42e66bc8a8db396c9f540c098d7594facd650e55e4c" Dec 06 07:23:50 crc kubenswrapper[4958]: I1206 07:23:50.614298 4958 scope.go:117] "RemoveContainer" containerID="02e8afbcc53e2695bea6099de198737cf09c74c0843083bf4be517f54e84a279" Dec 06 07:23:50 crc kubenswrapper[4958]: I1206 07:23:50.634982 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2958cee-aa6d-4e23-950f-800e7b50f742-catalog-content\") pod \"c2958cee-aa6d-4e23-950f-800e7b50f742\" (UID: \"c2958cee-aa6d-4e23-950f-800e7b50f742\") " Dec 06 07:23:50 crc kubenswrapper[4958]: I1206 07:23:50.635188 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2958cee-aa6d-4e23-950f-800e7b50f742-utilities\") pod \"c2958cee-aa6d-4e23-950f-800e7b50f742\" (UID: \"c2958cee-aa6d-4e23-950f-800e7b50f742\") " Dec 06 07:23:50 crc kubenswrapper[4958]: I1206 07:23:50.635315 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wsr6k\" (UniqueName: \"kubernetes.io/projected/c2958cee-aa6d-4e23-950f-800e7b50f742-kube-api-access-wsr6k\") pod \"c2958cee-aa6d-4e23-950f-800e7b50f742\" (UID: \"c2958cee-aa6d-4e23-950f-800e7b50f742\") " Dec 06 07:23:50 crc kubenswrapper[4958]: I1206 07:23:50.637923 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2958cee-aa6d-4e23-950f-800e7b50f742-utilities" (OuterVolumeSpecName: "utilities") pod "c2958cee-aa6d-4e23-950f-800e7b50f742" (UID: "c2958cee-aa6d-4e23-950f-800e7b50f742"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:23:50 crc kubenswrapper[4958]: I1206 07:23:50.642735 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2958cee-aa6d-4e23-950f-800e7b50f742-kube-api-access-wsr6k" (OuterVolumeSpecName: "kube-api-access-wsr6k") pod "c2958cee-aa6d-4e23-950f-800e7b50f742" (UID: "c2958cee-aa6d-4e23-950f-800e7b50f742"). InnerVolumeSpecName "kube-api-access-wsr6k". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:23:50 crc kubenswrapper[4958]: I1206 07:23:50.686207 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2958cee-aa6d-4e23-950f-800e7b50f742-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c2958cee-aa6d-4e23-950f-800e7b50f742" (UID: "c2958cee-aa6d-4e23-950f-800e7b50f742"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:23:50 crc kubenswrapper[4958]: I1206 07:23:50.719095 4958 scope.go:117] "RemoveContainer" containerID="cebbcc29a3ca0ab14cd04d7a3f1ac192b2982e06064b46c5e1a4b3f3bcf8bfb8" Dec 06 07:23:50 crc kubenswrapper[4958]: E1206 07:23:50.719981 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cebbcc29a3ca0ab14cd04d7a3f1ac192b2982e06064b46c5e1a4b3f3bcf8bfb8\": container with ID starting with cebbcc29a3ca0ab14cd04d7a3f1ac192b2982e06064b46c5e1a4b3f3bcf8bfb8 not found: ID does not exist" containerID="cebbcc29a3ca0ab14cd04d7a3f1ac192b2982e06064b46c5e1a4b3f3bcf8bfb8" Dec 06 07:23:50 crc kubenswrapper[4958]: I1206 07:23:50.720033 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cebbcc29a3ca0ab14cd04d7a3f1ac192b2982e06064b46c5e1a4b3f3bcf8bfb8"} err="failed to get container status \"cebbcc29a3ca0ab14cd04d7a3f1ac192b2982e06064b46c5e1a4b3f3bcf8bfb8\": rpc error: code = NotFound desc = could not find container \"cebbcc29a3ca0ab14cd04d7a3f1ac192b2982e06064b46c5e1a4b3f3bcf8bfb8\": container with ID starting with cebbcc29a3ca0ab14cd04d7a3f1ac192b2982e06064b46c5e1a4b3f3bcf8bfb8 not found: ID does not exist" Dec 06 07:23:50 crc kubenswrapper[4958]: I1206 07:23:50.720118 4958 scope.go:117] "RemoveContainer" containerID="63fe3ebd0ab0ec1ee353e42e66bc8a8db396c9f540c098d7594facd650e55e4c" Dec 06 07:23:50 crc kubenswrapper[4958]: E1206 07:23:50.720454 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"63fe3ebd0ab0ec1ee353e42e66bc8a8db396c9f540c098d7594facd650e55e4c\": container with ID starting with 63fe3ebd0ab0ec1ee353e42e66bc8a8db396c9f540c098d7594facd650e55e4c not found: ID does not exist" containerID="63fe3ebd0ab0ec1ee353e42e66bc8a8db396c9f540c098d7594facd650e55e4c" Dec 06 07:23:50 crc kubenswrapper[4958]: I1206 07:23:50.720506 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"63fe3ebd0ab0ec1ee353e42e66bc8a8db396c9f540c098d7594facd650e55e4c"} err="failed to get container status \"63fe3ebd0ab0ec1ee353e42e66bc8a8db396c9f540c098d7594facd650e55e4c\": rpc error: code = NotFound desc = could not find container \"63fe3ebd0ab0ec1ee353e42e66bc8a8db396c9f540c098d7594facd650e55e4c\": container with ID starting with 63fe3ebd0ab0ec1ee353e42e66bc8a8db396c9f540c098d7594facd650e55e4c not found: ID does not exist" Dec 06 07:23:50 crc kubenswrapper[4958]: I1206 07:23:50.720526 4958 scope.go:117] "RemoveContainer" 
containerID="02e8afbcc53e2695bea6099de198737cf09c74c0843083bf4be517f54e84a279" Dec 06 07:23:50 crc kubenswrapper[4958]: E1206 07:23:50.720786 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02e8afbcc53e2695bea6099de198737cf09c74c0843083bf4be517f54e84a279\": container with ID starting with 02e8afbcc53e2695bea6099de198737cf09c74c0843083bf4be517f54e84a279 not found: ID does not exist" containerID="02e8afbcc53e2695bea6099de198737cf09c74c0843083bf4be517f54e84a279" Dec 06 07:23:50 crc kubenswrapper[4958]: I1206 07:23:50.720811 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02e8afbcc53e2695bea6099de198737cf09c74c0843083bf4be517f54e84a279"} err="failed to get container status \"02e8afbcc53e2695bea6099de198737cf09c74c0843083bf4be517f54e84a279\": rpc error: code = NotFound desc = could not find container \"02e8afbcc53e2695bea6099de198737cf09c74c0843083bf4be517f54e84a279\": container with ID starting with 02e8afbcc53e2695bea6099de198737cf09c74c0843083bf4be517f54e84a279 not found: ID does not exist" Dec 06 07:23:50 crc kubenswrapper[4958]: I1206 07:23:50.737780 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2958cee-aa6d-4e23-950f-800e7b50f742-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 07:23:50 crc kubenswrapper[4958]: I1206 07:23:50.737810 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wsr6k\" (UniqueName: \"kubernetes.io/projected/c2958cee-aa6d-4e23-950f-800e7b50f742-kube-api-access-wsr6k\") on node \"crc\" DevicePath \"\"" Dec 06 07:23:50 crc kubenswrapper[4958]: I1206 07:23:50.737820 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2958cee-aa6d-4e23-950f-800e7b50f742-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 07:23:50 crc kubenswrapper[4958]: I1206 07:23:50.895701 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pfbzg"] Dec 06 07:23:50 crc kubenswrapper[4958]: I1206 07:23:50.905623 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-pfbzg"] Dec 06 07:23:51 crc kubenswrapper[4958]: I1206 07:23:51.775837 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2958cee-aa6d-4e23-950f-800e7b50f742" path="/var/lib/kubelet/pods/c2958cee-aa6d-4e23-950f-800e7b50f742/volumes" Dec 06 07:23:56 crc kubenswrapper[4958]: I1206 07:23:56.124434 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-d4dnr" Dec 06 07:23:56 crc kubenswrapper[4958]: I1206 07:23:56.178852 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-d4dnr" Dec 06 07:23:56 crc kubenswrapper[4958]: I1206 07:23:56.926182 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-d4dnr"] Dec 06 07:23:57 crc kubenswrapper[4958]: I1206 07:23:57.634264 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-d4dnr" podUID="56342cdd-6cb6-4916-aa5d-0c31d56877c1" containerName="registry-server" containerID="cri-o://54d8bf1c97c96838a09a9de0d6482cbcd5b5f322bb13d6ddbe7fa49968a129cf" gracePeriod=2 Dec 06 07:23:59 crc kubenswrapper[4958]: I1206 07:23:59.695605 4958 generic.go:334] "Generic 
(PLEG): container finished" podID="56342cdd-6cb6-4916-aa5d-0c31d56877c1" containerID="54d8bf1c97c96838a09a9de0d6482cbcd5b5f322bb13d6ddbe7fa49968a129cf" exitCode=0 Dec 06 07:23:59 crc kubenswrapper[4958]: I1206 07:23:59.695678 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d4dnr" event={"ID":"56342cdd-6cb6-4916-aa5d-0c31d56877c1","Type":"ContainerDied","Data":"54d8bf1c97c96838a09a9de0d6482cbcd5b5f322bb13d6ddbe7fa49968a129cf"} Dec 06 07:24:00 crc kubenswrapper[4958]: I1206 07:24:00.075091 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d4dnr" Dec 06 07:24:00 crc kubenswrapper[4958]: I1206 07:24:00.238049 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ktjzq\" (UniqueName: \"kubernetes.io/projected/56342cdd-6cb6-4916-aa5d-0c31d56877c1-kube-api-access-ktjzq\") pod \"56342cdd-6cb6-4916-aa5d-0c31d56877c1\" (UID: \"56342cdd-6cb6-4916-aa5d-0c31d56877c1\") " Dec 06 07:24:00 crc kubenswrapper[4958]: I1206 07:24:00.238237 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56342cdd-6cb6-4916-aa5d-0c31d56877c1-utilities\") pod \"56342cdd-6cb6-4916-aa5d-0c31d56877c1\" (UID: \"56342cdd-6cb6-4916-aa5d-0c31d56877c1\") " Dec 06 07:24:00 crc kubenswrapper[4958]: I1206 07:24:00.238343 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56342cdd-6cb6-4916-aa5d-0c31d56877c1-catalog-content\") pod \"56342cdd-6cb6-4916-aa5d-0c31d56877c1\" (UID: \"56342cdd-6cb6-4916-aa5d-0c31d56877c1\") " Dec 06 07:24:00 crc kubenswrapper[4958]: I1206 07:24:00.238992 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56342cdd-6cb6-4916-aa5d-0c31d56877c1-utilities" (OuterVolumeSpecName: "utilities") pod "56342cdd-6cb6-4916-aa5d-0c31d56877c1" (UID: "56342cdd-6cb6-4916-aa5d-0c31d56877c1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:24:00 crc kubenswrapper[4958]: I1206 07:24:00.245285 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56342cdd-6cb6-4916-aa5d-0c31d56877c1-kube-api-access-ktjzq" (OuterVolumeSpecName: "kube-api-access-ktjzq") pod "56342cdd-6cb6-4916-aa5d-0c31d56877c1" (UID: "56342cdd-6cb6-4916-aa5d-0c31d56877c1"). InnerVolumeSpecName "kube-api-access-ktjzq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:24:00 crc kubenswrapper[4958]: I1206 07:24:00.340783 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56342cdd-6cb6-4916-aa5d-0c31d56877c1-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 07:24:00 crc kubenswrapper[4958]: I1206 07:24:00.341041 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ktjzq\" (UniqueName: \"kubernetes.io/projected/56342cdd-6cb6-4916-aa5d-0c31d56877c1-kube-api-access-ktjzq\") on node \"crc\" DevicePath \"\"" Dec 06 07:24:00 crc kubenswrapper[4958]: I1206 07:24:00.371636 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56342cdd-6cb6-4916-aa5d-0c31d56877c1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "56342cdd-6cb6-4916-aa5d-0c31d56877c1" (UID: "56342cdd-6cb6-4916-aa5d-0c31d56877c1"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:24:00 crc kubenswrapper[4958]: I1206 07:24:00.442577 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56342cdd-6cb6-4916-aa5d-0c31d56877c1-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 07:24:00 crc kubenswrapper[4958]: I1206 07:24:00.708761 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d4dnr" event={"ID":"56342cdd-6cb6-4916-aa5d-0c31d56877c1","Type":"ContainerDied","Data":"c288b48794161f9bef0f3ae0b1f4000e424667a52f8aad80d5c5e2cc1a7d8c20"} Dec 06 07:24:00 crc kubenswrapper[4958]: I1206 07:24:00.708801 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d4dnr" Dec 06 07:24:00 crc kubenswrapper[4958]: I1206 07:24:00.708822 4958 scope.go:117] "RemoveContainer" containerID="54d8bf1c97c96838a09a9de0d6482cbcd5b5f322bb13d6ddbe7fa49968a129cf" Dec 06 07:24:00 crc kubenswrapper[4958]: I1206 07:24:00.743526 4958 scope.go:117] "RemoveContainer" containerID="dfa3725c8908bd0438263b4e853eb36a997641a198b7fb96cca4483bd1e3f2a6" Dec 06 07:24:00 crc kubenswrapper[4958]: I1206 07:24:00.763624 4958 scope.go:117] "RemoveContainer" containerID="ee2d7350ad52a52898da3b0522d607a28c91abbc378871215df544c4e674181f" Dec 06 07:24:00 crc kubenswrapper[4958]: E1206 07:24:00.763964 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 07:24:00 crc kubenswrapper[4958]: I1206 07:24:00.772053 4958 scope.go:117] "RemoveContainer" containerID="ab38ae76effd953a0444284aa2b22775b756f387236189fa7fd487aaf5627bb7" Dec 06 07:24:00 crc kubenswrapper[4958]: I1206 07:24:00.774294 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-d4dnr"] Dec 06 07:24:00 crc kubenswrapper[4958]: I1206 07:24:00.783987 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-d4dnr"] Dec 06 07:24:01 crc kubenswrapper[4958]: I1206 07:24:01.776045 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56342cdd-6cb6-4916-aa5d-0c31d56877c1" path="/var/lib/kubelet/pods/56342cdd-6cb6-4916-aa5d-0c31d56877c1/volumes" Dec 06 07:24:03 crc kubenswrapper[4958]: I1206 07:24:03.740851 4958 generic.go:334] "Generic (PLEG): container finished" podID="cf286e78-7d9f-46c2-89ce-fec5c93c2eb5" containerID="291e5c41909e2a44311b94a4d4accee098ec04f5b9434857aadc5d44f325a439" exitCode=0 Dec 06 07:24:03 crc kubenswrapper[4958]: I1206 07:24:03.741034 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kr6bn/must-gather-z5vxm" event={"ID":"cf286e78-7d9f-46c2-89ce-fec5c93c2eb5","Type":"ContainerDied","Data":"291e5c41909e2a44311b94a4d4accee098ec04f5b9434857aadc5d44f325a439"} Dec 06 07:24:03 crc kubenswrapper[4958]: I1206 07:24:03.741929 4958 scope.go:117] "RemoveContainer" containerID="291e5c41909e2a44311b94a4d4accee098ec04f5b9434857aadc5d44f325a439" Dec 06 07:24:04 crc kubenswrapper[4958]: I1206 07:24:04.005883 4958 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-must-gather-kr6bn_must-gather-z5vxm_cf286e78-7d9f-46c2-89ce-fec5c93c2eb5/gather/0.log" Dec 06 07:24:13 crc kubenswrapper[4958]: I1206 07:24:13.122276 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-kr6bn/must-gather-z5vxm"] Dec 06 07:24:13 crc kubenswrapper[4958]: I1206 07:24:13.122945 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-kr6bn/must-gather-z5vxm" podUID="cf286e78-7d9f-46c2-89ce-fec5c93c2eb5" containerName="copy" containerID="cri-o://12350da832a89c70fe4c8672766fe02e4bbace30cbd70a8776c5ef20cd64e1ac" gracePeriod=2 Dec 06 07:24:13 crc kubenswrapper[4958]: I1206 07:24:13.135957 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-kr6bn/must-gather-z5vxm"] Dec 06 07:24:13 crc kubenswrapper[4958]: I1206 07:24:13.682155 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-kr6bn_must-gather-z5vxm_cf286e78-7d9f-46c2-89ce-fec5c93c2eb5/copy/0.log" Dec 06 07:24:13 crc kubenswrapper[4958]: I1206 07:24:13.682955 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kr6bn/must-gather-z5vxm" Dec 06 07:24:13 crc kubenswrapper[4958]: I1206 07:24:13.803307 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/cf286e78-7d9f-46c2-89ce-fec5c93c2eb5-must-gather-output\") pod \"cf286e78-7d9f-46c2-89ce-fec5c93c2eb5\" (UID: \"cf286e78-7d9f-46c2-89ce-fec5c93c2eb5\") " Dec 06 07:24:13 crc kubenswrapper[4958]: I1206 07:24:13.803668 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pkthk\" (UniqueName: \"kubernetes.io/projected/cf286e78-7d9f-46c2-89ce-fec5c93c2eb5-kube-api-access-pkthk\") pod \"cf286e78-7d9f-46c2-89ce-fec5c93c2eb5\" (UID: \"cf286e78-7d9f-46c2-89ce-fec5c93c2eb5\") " Dec 06 07:24:13 crc kubenswrapper[4958]: I1206 07:24:13.829760 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf286e78-7d9f-46c2-89ce-fec5c93c2eb5-kube-api-access-pkthk" (OuterVolumeSpecName: "kube-api-access-pkthk") pod "cf286e78-7d9f-46c2-89ce-fec5c93c2eb5" (UID: "cf286e78-7d9f-46c2-89ce-fec5c93c2eb5"). InnerVolumeSpecName "kube-api-access-pkthk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:24:13 crc kubenswrapper[4958]: I1206 07:24:13.842934 4958 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-kr6bn_must-gather-z5vxm_cf286e78-7d9f-46c2-89ce-fec5c93c2eb5/copy/0.log" Dec 06 07:24:13 crc kubenswrapper[4958]: I1206 07:24:13.844390 4958 generic.go:334] "Generic (PLEG): container finished" podID="cf286e78-7d9f-46c2-89ce-fec5c93c2eb5" containerID="12350da832a89c70fe4c8672766fe02e4bbace30cbd70a8776c5ef20cd64e1ac" exitCode=143 Dec 06 07:24:13 crc kubenswrapper[4958]: I1206 07:24:13.844448 4958 scope.go:117] "RemoveContainer" containerID="12350da832a89c70fe4c8672766fe02e4bbace30cbd70a8776c5ef20cd64e1ac" Dec 06 07:24:13 crc kubenswrapper[4958]: I1206 07:24:13.844799 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-kr6bn/must-gather-z5vxm" Dec 06 07:24:13 crc kubenswrapper[4958]: I1206 07:24:13.888151 4958 scope.go:117] "RemoveContainer" containerID="291e5c41909e2a44311b94a4d4accee098ec04f5b9434857aadc5d44f325a439" Dec 06 07:24:13 crc kubenswrapper[4958]: I1206 07:24:13.911219 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pkthk\" (UniqueName: \"kubernetes.io/projected/cf286e78-7d9f-46c2-89ce-fec5c93c2eb5-kube-api-access-pkthk\") on node \"crc\" DevicePath \"\"" Dec 06 07:24:13 crc kubenswrapper[4958]: I1206 07:24:13.997069 4958 scope.go:117] "RemoveContainer" containerID="12350da832a89c70fe4c8672766fe02e4bbace30cbd70a8776c5ef20cd64e1ac" Dec 06 07:24:13 crc kubenswrapper[4958]: E1206 07:24:13.998398 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12350da832a89c70fe4c8672766fe02e4bbace30cbd70a8776c5ef20cd64e1ac\": container with ID starting with 12350da832a89c70fe4c8672766fe02e4bbace30cbd70a8776c5ef20cd64e1ac not found: ID does not exist" containerID="12350da832a89c70fe4c8672766fe02e4bbace30cbd70a8776c5ef20cd64e1ac" Dec 06 07:24:13 crc kubenswrapper[4958]: I1206 07:24:13.998450 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12350da832a89c70fe4c8672766fe02e4bbace30cbd70a8776c5ef20cd64e1ac"} err="failed to get container status \"12350da832a89c70fe4c8672766fe02e4bbace30cbd70a8776c5ef20cd64e1ac\": rpc error: code = NotFound desc = could not find container \"12350da832a89c70fe4c8672766fe02e4bbace30cbd70a8776c5ef20cd64e1ac\": container with ID starting with 12350da832a89c70fe4c8672766fe02e4bbace30cbd70a8776c5ef20cd64e1ac not found: ID does not exist" Dec 06 07:24:13 crc kubenswrapper[4958]: I1206 07:24:13.998496 4958 scope.go:117] "RemoveContainer" containerID="291e5c41909e2a44311b94a4d4accee098ec04f5b9434857aadc5d44f325a439" Dec 06 07:24:14 crc kubenswrapper[4958]: E1206 07:24:14.000303 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"291e5c41909e2a44311b94a4d4accee098ec04f5b9434857aadc5d44f325a439\": container with ID starting with 291e5c41909e2a44311b94a4d4accee098ec04f5b9434857aadc5d44f325a439 not found: ID does not exist" containerID="291e5c41909e2a44311b94a4d4accee098ec04f5b9434857aadc5d44f325a439" Dec 06 07:24:14 crc kubenswrapper[4958]: I1206 07:24:14.000380 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"291e5c41909e2a44311b94a4d4accee098ec04f5b9434857aadc5d44f325a439"} err="failed to get container status \"291e5c41909e2a44311b94a4d4accee098ec04f5b9434857aadc5d44f325a439\": rpc error: code = NotFound desc = could not find container \"291e5c41909e2a44311b94a4d4accee098ec04f5b9434857aadc5d44f325a439\": container with ID starting with 291e5c41909e2a44311b94a4d4accee098ec04f5b9434857aadc5d44f325a439 not found: ID does not exist" Dec 06 07:24:14 crc kubenswrapper[4958]: I1206 07:24:14.013627 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cf286e78-7d9f-46c2-89ce-fec5c93c2eb5-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "cf286e78-7d9f-46c2-89ce-fec5c93c2eb5" (UID: "cf286e78-7d9f-46c2-89ce-fec5c93c2eb5"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:24:14 crc kubenswrapper[4958]: I1206 07:24:14.114879 4958 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/cf286e78-7d9f-46c2-89ce-fec5c93c2eb5-must-gather-output\") on node \"crc\" DevicePath \"\"" Dec 06 07:24:14 crc kubenswrapper[4958]: I1206 07:24:14.762449 4958 scope.go:117] "RemoveContainer" containerID="ee2d7350ad52a52898da3b0522d607a28c91abbc378871215df544c4e674181f" Dec 06 07:24:14 crc kubenswrapper[4958]: E1206 07:24:14.762729 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 07:24:15 crc kubenswrapper[4958]: I1206 07:24:15.779060 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf286e78-7d9f-46c2-89ce-fec5c93c2eb5" path="/var/lib/kubelet/pods/cf286e78-7d9f-46c2-89ce-fec5c93c2eb5/volumes" Dec 06 07:24:26 crc kubenswrapper[4958]: I1206 07:24:26.762144 4958 scope.go:117] "RemoveContainer" containerID="ee2d7350ad52a52898da3b0522d607a28c91abbc378871215df544c4e674181f" Dec 06 07:24:26 crc kubenswrapper[4958]: E1206 07:24:26.763039 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 07:24:34 crc kubenswrapper[4958]: I1206 07:24:34.105805 4958 scope.go:117] "RemoveContainer" containerID="a46f2180a7867ba6a7c3e3584e09fc53cc47f8379d21a3a907e6424604eb9113" Dec 06 07:24:41 crc kubenswrapper[4958]: I1206 07:24:41.762326 4958 scope.go:117] "RemoveContainer" containerID="ee2d7350ad52a52898da3b0522d607a28c91abbc378871215df544c4e674181f" Dec 06 07:24:41 crc kubenswrapper[4958]: E1206 07:24:41.763269 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 07:24:53 crc kubenswrapper[4958]: I1206 07:24:53.761701 4958 scope.go:117] "RemoveContainer" containerID="ee2d7350ad52a52898da3b0522d607a28c91abbc378871215df544c4e674181f" Dec 06 07:24:53 crc kubenswrapper[4958]: E1206 07:24:53.762815 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 07:25:08 crc kubenswrapper[4958]: I1206 07:25:08.762576 4958 scope.go:117] "RemoveContainer" 
containerID="ee2d7350ad52a52898da3b0522d607a28c91abbc378871215df544c4e674181f" Dec 06 07:25:08 crc kubenswrapper[4958]: E1206 07:25:08.763502 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 07:25:19 crc kubenswrapper[4958]: I1206 07:25:19.769294 4958 scope.go:117] "RemoveContainer" containerID="ee2d7350ad52a52898da3b0522d607a28c91abbc378871215df544c4e674181f" Dec 06 07:25:19 crc kubenswrapper[4958]: E1206 07:25:19.770270 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 07:25:33 crc kubenswrapper[4958]: I1206 07:25:33.762586 4958 scope.go:117] "RemoveContainer" containerID="ee2d7350ad52a52898da3b0522d607a28c91abbc378871215df544c4e674181f" Dec 06 07:25:33 crc kubenswrapper[4958]: E1206 07:25:33.763568 4958 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5ktnh_openshift-machine-config-operator(c13528c0-da5d-4d55-9155-2c29c33edfc4)\"" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" Dec 06 07:25:34 crc kubenswrapper[4958]: I1206 07:25:34.211592 4958 scope.go:117] "RemoveContainer" containerID="8e9e34a259435d40df73e3f98fec2366a78edbfea06a72f823a3614e4a4cac93" Dec 06 07:25:47 crc kubenswrapper[4958]: I1206 07:25:47.762740 4958 scope.go:117] "RemoveContainer" containerID="ee2d7350ad52a52898da3b0522d607a28c91abbc378871215df544c4e674181f" Dec 06 07:25:48 crc kubenswrapper[4958]: I1206 07:25:48.081280 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" event={"ID":"c13528c0-da5d-4d55-9155-2c29c33edfc4","Type":"ContainerStarted","Data":"9cad79e2fcfd7dbf53fe489096eac99632c390a557753e6791ad296f07e57569"} Dec 06 07:27:08 crc kubenswrapper[4958]: I1206 07:27:08.495307 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9t9lp"] Dec 06 07:27:08 crc kubenswrapper[4958]: E1206 07:27:08.496248 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2958cee-aa6d-4e23-950f-800e7b50f742" containerName="extract-content" Dec 06 07:27:08 crc kubenswrapper[4958]: I1206 07:27:08.496264 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2958cee-aa6d-4e23-950f-800e7b50f742" containerName="extract-content" Dec 06 07:27:08 crc kubenswrapper[4958]: E1206 07:27:08.496283 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56342cdd-6cb6-4916-aa5d-0c31d56877c1" containerName="extract-utilities" Dec 06 07:27:08 crc kubenswrapper[4958]: I1206 07:27:08.496289 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="56342cdd-6cb6-4916-aa5d-0c31d56877c1" 
containerName="extract-utilities" Dec 06 07:27:08 crc kubenswrapper[4958]: E1206 07:27:08.496298 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf286e78-7d9f-46c2-89ce-fec5c93c2eb5" containerName="copy" Dec 06 07:27:08 crc kubenswrapper[4958]: I1206 07:27:08.496304 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf286e78-7d9f-46c2-89ce-fec5c93c2eb5" containerName="copy" Dec 06 07:27:08 crc kubenswrapper[4958]: E1206 07:27:08.496326 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56342cdd-6cb6-4916-aa5d-0c31d56877c1" containerName="registry-server" Dec 06 07:27:08 crc kubenswrapper[4958]: I1206 07:27:08.496337 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="56342cdd-6cb6-4916-aa5d-0c31d56877c1" containerName="registry-server" Dec 06 07:27:08 crc kubenswrapper[4958]: E1206 07:27:08.496355 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56342cdd-6cb6-4916-aa5d-0c31d56877c1" containerName="extract-content" Dec 06 07:27:08 crc kubenswrapper[4958]: I1206 07:27:08.496363 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="56342cdd-6cb6-4916-aa5d-0c31d56877c1" containerName="extract-content" Dec 06 07:27:08 crc kubenswrapper[4958]: E1206 07:27:08.496382 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2958cee-aa6d-4e23-950f-800e7b50f742" containerName="registry-server" Dec 06 07:27:08 crc kubenswrapper[4958]: I1206 07:27:08.496390 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2958cee-aa6d-4e23-950f-800e7b50f742" containerName="registry-server" Dec 06 07:27:08 crc kubenswrapper[4958]: E1206 07:27:08.496408 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2958cee-aa6d-4e23-950f-800e7b50f742" containerName="extract-utilities" Dec 06 07:27:08 crc kubenswrapper[4958]: I1206 07:27:08.496415 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2958cee-aa6d-4e23-950f-800e7b50f742" containerName="extract-utilities" Dec 06 07:27:08 crc kubenswrapper[4958]: E1206 07:27:08.496429 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf286e78-7d9f-46c2-89ce-fec5c93c2eb5" containerName="gather" Dec 06 07:27:08 crc kubenswrapper[4958]: I1206 07:27:08.496435 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf286e78-7d9f-46c2-89ce-fec5c93c2eb5" containerName="gather" Dec 06 07:27:08 crc kubenswrapper[4958]: I1206 07:27:08.502865 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf286e78-7d9f-46c2-89ce-fec5c93c2eb5" containerName="copy" Dec 06 07:27:08 crc kubenswrapper[4958]: I1206 07:27:08.502912 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2958cee-aa6d-4e23-950f-800e7b50f742" containerName="registry-server" Dec 06 07:27:08 crc kubenswrapper[4958]: I1206 07:27:08.502928 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="56342cdd-6cb6-4916-aa5d-0c31d56877c1" containerName="registry-server" Dec 06 07:27:08 crc kubenswrapper[4958]: I1206 07:27:08.502970 4958 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf286e78-7d9f-46c2-89ce-fec5c93c2eb5" containerName="gather" Dec 06 07:27:08 crc kubenswrapper[4958]: I1206 07:27:08.504682 4958 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9t9lp" Dec 06 07:27:08 crc kubenswrapper[4958]: I1206 07:27:08.511985 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9t9lp"] Dec 06 07:27:08 crc kubenswrapper[4958]: I1206 07:27:08.634880 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a29060d-5a10-4e69-afcd-4b3f279ad2d9-utilities\") pod \"redhat-marketplace-9t9lp\" (UID: \"6a29060d-5a10-4e69-afcd-4b3f279ad2d9\") " pod="openshift-marketplace/redhat-marketplace-9t9lp" Dec 06 07:27:08 crc kubenswrapper[4958]: I1206 07:27:08.635397 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a29060d-5a10-4e69-afcd-4b3f279ad2d9-catalog-content\") pod \"redhat-marketplace-9t9lp\" (UID: \"6a29060d-5a10-4e69-afcd-4b3f279ad2d9\") " pod="openshift-marketplace/redhat-marketplace-9t9lp" Dec 06 07:27:08 crc kubenswrapper[4958]: I1206 07:27:08.635441 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgxwj\" (UniqueName: \"kubernetes.io/projected/6a29060d-5a10-4e69-afcd-4b3f279ad2d9-kube-api-access-pgxwj\") pod \"redhat-marketplace-9t9lp\" (UID: \"6a29060d-5a10-4e69-afcd-4b3f279ad2d9\") " pod="openshift-marketplace/redhat-marketplace-9t9lp" Dec 06 07:27:08 crc kubenswrapper[4958]: I1206 07:27:08.737405 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a29060d-5a10-4e69-afcd-4b3f279ad2d9-catalog-content\") pod \"redhat-marketplace-9t9lp\" (UID: \"6a29060d-5a10-4e69-afcd-4b3f279ad2d9\") " pod="openshift-marketplace/redhat-marketplace-9t9lp" Dec 06 07:27:08 crc kubenswrapper[4958]: I1206 07:27:08.737490 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgxwj\" (UniqueName: \"kubernetes.io/projected/6a29060d-5a10-4e69-afcd-4b3f279ad2d9-kube-api-access-pgxwj\") pod \"redhat-marketplace-9t9lp\" (UID: \"6a29060d-5a10-4e69-afcd-4b3f279ad2d9\") " pod="openshift-marketplace/redhat-marketplace-9t9lp" Dec 06 07:27:08 crc kubenswrapper[4958]: I1206 07:27:08.737604 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a29060d-5a10-4e69-afcd-4b3f279ad2d9-utilities\") pod \"redhat-marketplace-9t9lp\" (UID: \"6a29060d-5a10-4e69-afcd-4b3f279ad2d9\") " pod="openshift-marketplace/redhat-marketplace-9t9lp" Dec 06 07:27:08 crc kubenswrapper[4958]: I1206 07:27:08.737986 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a29060d-5a10-4e69-afcd-4b3f279ad2d9-catalog-content\") pod \"redhat-marketplace-9t9lp\" (UID: \"6a29060d-5a10-4e69-afcd-4b3f279ad2d9\") " pod="openshift-marketplace/redhat-marketplace-9t9lp" Dec 06 07:27:08 crc kubenswrapper[4958]: I1206 07:27:08.738132 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a29060d-5a10-4e69-afcd-4b3f279ad2d9-utilities\") pod \"redhat-marketplace-9t9lp\" (UID: \"6a29060d-5a10-4e69-afcd-4b3f279ad2d9\") " pod="openshift-marketplace/redhat-marketplace-9t9lp" Dec 06 07:27:08 crc kubenswrapper[4958]: I1206 07:27:08.758775 4958 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-pgxwj\" (UniqueName: \"kubernetes.io/projected/6a29060d-5a10-4e69-afcd-4b3f279ad2d9-kube-api-access-pgxwj\") pod \"redhat-marketplace-9t9lp\" (UID: \"6a29060d-5a10-4e69-afcd-4b3f279ad2d9\") " pod="openshift-marketplace/redhat-marketplace-9t9lp" Dec 06 07:27:08 crc kubenswrapper[4958]: I1206 07:27:08.833940 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9t9lp" Dec 06 07:27:09 crc kubenswrapper[4958]: I1206 07:27:09.357740 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9t9lp"] Dec 06 07:27:09 crc kubenswrapper[4958]: I1206 07:27:09.874121 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9t9lp" event={"ID":"6a29060d-5a10-4e69-afcd-4b3f279ad2d9","Type":"ContainerStarted","Data":"ffab5e0e55e5a532845e00db752ab8884cc7acf9004b1f8d46518d689b1f1625"} Dec 06 07:27:10 crc kubenswrapper[4958]: I1206 07:27:10.887929 4958 generic.go:334] "Generic (PLEG): container finished" podID="6a29060d-5a10-4e69-afcd-4b3f279ad2d9" containerID="39168494e4ee96c7d05eff0698092c37f86c71d41be8e4edfee7cbe1ab8ecc5c" exitCode=0 Dec 06 07:27:10 crc kubenswrapper[4958]: I1206 07:27:10.888028 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9t9lp" event={"ID":"6a29060d-5a10-4e69-afcd-4b3f279ad2d9","Type":"ContainerDied","Data":"39168494e4ee96c7d05eff0698092c37f86c71d41be8e4edfee7cbe1ab8ecc5c"} Dec 06 07:27:11 crc kubenswrapper[4958]: I1206 07:27:11.901732 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9t9lp" event={"ID":"6a29060d-5a10-4e69-afcd-4b3f279ad2d9","Type":"ContainerStarted","Data":"e45292eeeb0e8f60faaabc98ad14031305129174aaff107a8313f279eda63fda"} Dec 06 07:27:12 crc kubenswrapper[4958]: I1206 07:27:12.963607 4958 generic.go:334] "Generic (PLEG): container finished" podID="6a29060d-5a10-4e69-afcd-4b3f279ad2d9" containerID="e45292eeeb0e8f60faaabc98ad14031305129174aaff107a8313f279eda63fda" exitCode=0 Dec 06 07:27:12 crc kubenswrapper[4958]: I1206 07:27:12.963742 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9t9lp" event={"ID":"6a29060d-5a10-4e69-afcd-4b3f279ad2d9","Type":"ContainerDied","Data":"e45292eeeb0e8f60faaabc98ad14031305129174aaff107a8313f279eda63fda"} Dec 06 07:27:13 crc kubenswrapper[4958]: I1206 07:27:13.995168 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9t9lp" event={"ID":"6a29060d-5a10-4e69-afcd-4b3f279ad2d9","Type":"ContainerStarted","Data":"3c622bdcb9691dd85bb2ea0f25c0443e4296a4ccb7a3911ff34023b72184a767"} Dec 06 07:27:14 crc kubenswrapper[4958]: I1206 07:27:14.024847 4958 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9t9lp" podStartSLOduration=3.562172411 podStartE2EDuration="6.024823176s" podCreationTimestamp="2025-12-06 07:27:08 +0000 UTC" firstStartedPulling="2025-12-06 07:27:10.891674405 +0000 UTC m=+7141.425445168" lastFinishedPulling="2025-12-06 07:27:13.35432517 +0000 UTC m=+7143.888095933" observedRunningTime="2025-12-06 07:27:14.017703694 +0000 UTC m=+7144.551474477" watchObservedRunningTime="2025-12-06 07:27:14.024823176 +0000 UTC m=+7144.558593939" Dec 06 07:27:18 crc kubenswrapper[4958]: I1206 07:27:18.834200 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-marketplace-9t9lp" Dec 06 07:27:18 crc kubenswrapper[4958]: I1206 07:27:18.834679 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9t9lp" Dec 06 07:27:18 crc kubenswrapper[4958]: I1206 07:27:18.891308 4958 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9t9lp" Dec 06 07:27:19 crc kubenswrapper[4958]: I1206 07:27:19.088684 4958 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9t9lp" Dec 06 07:27:19 crc kubenswrapper[4958]: I1206 07:27:19.149596 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9t9lp"] Dec 06 07:27:21 crc kubenswrapper[4958]: I1206 07:27:21.057789 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9t9lp" podUID="6a29060d-5a10-4e69-afcd-4b3f279ad2d9" containerName="registry-server" containerID="cri-o://3c622bdcb9691dd85bb2ea0f25c0443e4296a4ccb7a3911ff34023b72184a767" gracePeriod=2 Dec 06 07:27:21 crc kubenswrapper[4958]: I1206 07:27:21.639573 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9t9lp" Dec 06 07:27:21 crc kubenswrapper[4958]: I1206 07:27:21.802345 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgxwj\" (UniqueName: \"kubernetes.io/projected/6a29060d-5a10-4e69-afcd-4b3f279ad2d9-kube-api-access-pgxwj\") pod \"6a29060d-5a10-4e69-afcd-4b3f279ad2d9\" (UID: \"6a29060d-5a10-4e69-afcd-4b3f279ad2d9\") " Dec 06 07:27:21 crc kubenswrapper[4958]: I1206 07:27:21.802889 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a29060d-5a10-4e69-afcd-4b3f279ad2d9-utilities\") pod \"6a29060d-5a10-4e69-afcd-4b3f279ad2d9\" (UID: \"6a29060d-5a10-4e69-afcd-4b3f279ad2d9\") " Dec 06 07:27:21 crc kubenswrapper[4958]: I1206 07:27:21.802931 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a29060d-5a10-4e69-afcd-4b3f279ad2d9-catalog-content\") pod \"6a29060d-5a10-4e69-afcd-4b3f279ad2d9\" (UID: \"6a29060d-5a10-4e69-afcd-4b3f279ad2d9\") " Dec 06 07:27:21 crc kubenswrapper[4958]: I1206 07:27:21.804062 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a29060d-5a10-4e69-afcd-4b3f279ad2d9-utilities" (OuterVolumeSpecName: "utilities") pod "6a29060d-5a10-4e69-afcd-4b3f279ad2d9" (UID: "6a29060d-5a10-4e69-afcd-4b3f279ad2d9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:27:21 crc kubenswrapper[4958]: I1206 07:27:21.816132 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a29060d-5a10-4e69-afcd-4b3f279ad2d9-kube-api-access-pgxwj" (OuterVolumeSpecName: "kube-api-access-pgxwj") pod "6a29060d-5a10-4e69-afcd-4b3f279ad2d9" (UID: "6a29060d-5a10-4e69-afcd-4b3f279ad2d9"). InnerVolumeSpecName "kube-api-access-pgxwj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 06 07:27:21 crc kubenswrapper[4958]: I1206 07:27:21.834193 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a29060d-5a10-4e69-afcd-4b3f279ad2d9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6a29060d-5a10-4e69-afcd-4b3f279ad2d9" (UID: "6a29060d-5a10-4e69-afcd-4b3f279ad2d9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 06 07:27:21 crc kubenswrapper[4958]: I1206 07:27:21.907189 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pgxwj\" (UniqueName: \"kubernetes.io/projected/6a29060d-5a10-4e69-afcd-4b3f279ad2d9-kube-api-access-pgxwj\") on node \"crc\" DevicePath \"\"" Dec 06 07:27:21 crc kubenswrapper[4958]: I1206 07:27:21.907274 4958 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a29060d-5a10-4e69-afcd-4b3f279ad2d9-utilities\") on node \"crc\" DevicePath \"\"" Dec 06 07:27:21 crc kubenswrapper[4958]: I1206 07:27:21.907296 4958 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a29060d-5a10-4e69-afcd-4b3f279ad2d9-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 06 07:27:22 crc kubenswrapper[4958]: I1206 07:27:22.088504 4958 generic.go:334] "Generic (PLEG): container finished" podID="6a29060d-5a10-4e69-afcd-4b3f279ad2d9" containerID="3c622bdcb9691dd85bb2ea0f25c0443e4296a4ccb7a3911ff34023b72184a767" exitCode=0 Dec 06 07:27:22 crc kubenswrapper[4958]: I1206 07:27:22.088578 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9t9lp" event={"ID":"6a29060d-5a10-4e69-afcd-4b3f279ad2d9","Type":"ContainerDied","Data":"3c622bdcb9691dd85bb2ea0f25c0443e4296a4ccb7a3911ff34023b72184a767"} Dec 06 07:27:22 crc kubenswrapper[4958]: I1206 07:27:22.088618 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9t9lp" event={"ID":"6a29060d-5a10-4e69-afcd-4b3f279ad2d9","Type":"ContainerDied","Data":"ffab5e0e55e5a532845e00db752ab8884cc7acf9004b1f8d46518d689b1f1625"} Dec 06 07:27:22 crc kubenswrapper[4958]: I1206 07:27:22.088659 4958 scope.go:117] "RemoveContainer" containerID="3c622bdcb9691dd85bb2ea0f25c0443e4296a4ccb7a3911ff34023b72184a767" Dec 06 07:27:22 crc kubenswrapper[4958]: I1206 07:27:22.088971 4958 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9t9lp" Dec 06 07:27:22 crc kubenswrapper[4958]: I1206 07:27:22.118199 4958 scope.go:117] "RemoveContainer" containerID="e45292eeeb0e8f60faaabc98ad14031305129174aaff107a8313f279eda63fda" Dec 06 07:27:22 crc kubenswrapper[4958]: I1206 07:27:22.135106 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9t9lp"] Dec 06 07:27:22 crc kubenswrapper[4958]: I1206 07:27:22.151484 4958 scope.go:117] "RemoveContainer" containerID="39168494e4ee96c7d05eff0698092c37f86c71d41be8e4edfee7cbe1ab8ecc5c" Dec 06 07:27:22 crc kubenswrapper[4958]: I1206 07:27:22.154609 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9t9lp"] Dec 06 07:27:22 crc kubenswrapper[4958]: I1206 07:27:22.201120 4958 scope.go:117] "RemoveContainer" containerID="3c622bdcb9691dd85bb2ea0f25c0443e4296a4ccb7a3911ff34023b72184a767" Dec 06 07:27:22 crc kubenswrapper[4958]: E1206 07:27:22.203460 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c622bdcb9691dd85bb2ea0f25c0443e4296a4ccb7a3911ff34023b72184a767\": container with ID starting with 3c622bdcb9691dd85bb2ea0f25c0443e4296a4ccb7a3911ff34023b72184a767 not found: ID does not exist" containerID="3c622bdcb9691dd85bb2ea0f25c0443e4296a4ccb7a3911ff34023b72184a767" Dec 06 07:27:22 crc kubenswrapper[4958]: I1206 07:27:22.203550 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c622bdcb9691dd85bb2ea0f25c0443e4296a4ccb7a3911ff34023b72184a767"} err="failed to get container status \"3c622bdcb9691dd85bb2ea0f25c0443e4296a4ccb7a3911ff34023b72184a767\": rpc error: code = NotFound desc = could not find container \"3c622bdcb9691dd85bb2ea0f25c0443e4296a4ccb7a3911ff34023b72184a767\": container with ID starting with 3c622bdcb9691dd85bb2ea0f25c0443e4296a4ccb7a3911ff34023b72184a767 not found: ID does not exist" Dec 06 07:27:22 crc kubenswrapper[4958]: I1206 07:27:22.203586 4958 scope.go:117] "RemoveContainer" containerID="e45292eeeb0e8f60faaabc98ad14031305129174aaff107a8313f279eda63fda" Dec 06 07:27:22 crc kubenswrapper[4958]: E1206 07:27:22.204007 4958 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e45292eeeb0e8f60faaabc98ad14031305129174aaff107a8313f279eda63fda\": container with ID starting with e45292eeeb0e8f60faaabc98ad14031305129174aaff107a8313f279eda63fda not found: ID does not exist" containerID="e45292eeeb0e8f60faaabc98ad14031305129174aaff107a8313f279eda63fda" Dec 06 07:27:22 crc kubenswrapper[4958]: I1206 07:27:22.204070 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e45292eeeb0e8f60faaabc98ad14031305129174aaff107a8313f279eda63fda"} err="failed to get container status \"e45292eeeb0e8f60faaabc98ad14031305129174aaff107a8313f279eda63fda\": rpc error: code = NotFound desc = could not find container \"e45292eeeb0e8f60faaabc98ad14031305129174aaff107a8313f279eda63fda\": container with ID starting with e45292eeeb0e8f60faaabc98ad14031305129174aaff107a8313f279eda63fda not found: ID does not exist" Dec 06 07:27:22 crc kubenswrapper[4958]: I1206 07:27:22.204105 4958 scope.go:117] "RemoveContainer" containerID="39168494e4ee96c7d05eff0698092c37f86c71d41be8e4edfee7cbe1ab8ecc5c" Dec 06 07:27:22 crc kubenswrapper[4958]: E1206 07:27:22.204638 4958 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"39168494e4ee96c7d05eff0698092c37f86c71d41be8e4edfee7cbe1ab8ecc5c\": container with ID starting with 39168494e4ee96c7d05eff0698092c37f86c71d41be8e4edfee7cbe1ab8ecc5c not found: ID does not exist" containerID="39168494e4ee96c7d05eff0698092c37f86c71d41be8e4edfee7cbe1ab8ecc5c" Dec 06 07:27:22 crc kubenswrapper[4958]: I1206 07:27:22.204671 4958 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39168494e4ee96c7d05eff0698092c37f86c71d41be8e4edfee7cbe1ab8ecc5c"} err="failed to get container status \"39168494e4ee96c7d05eff0698092c37f86c71d41be8e4edfee7cbe1ab8ecc5c\": rpc error: code = NotFound desc = could not find container \"39168494e4ee96c7d05eff0698092c37f86c71d41be8e4edfee7cbe1ab8ecc5c\": container with ID starting with 39168494e4ee96c7d05eff0698092c37f86c71d41be8e4edfee7cbe1ab8ecc5c not found: ID does not exist" Dec 06 07:27:23 crc kubenswrapper[4958]: I1206 07:27:23.774705 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a29060d-5a10-4e69-afcd-4b3f279ad2d9" path="/var/lib/kubelet/pods/6a29060d-5a10-4e69-afcd-4b3f279ad2d9/volumes" Dec 06 07:28:09 crc kubenswrapper[4958]: I1206 07:28:09.865759 4958 patch_prober.go:28] interesting pod/machine-config-daemon-5ktnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 07:28:09 crc kubenswrapper[4958]: I1206 07:28:09.866300 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 07:28:39 crc kubenswrapper[4958]: I1206 07:28:39.866775 4958 patch_prober.go:28] interesting pod/machine-config-daemon-5ktnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 07:28:39 crc kubenswrapper[4958]: I1206 07:28:39.867802 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 07:29:09 crc kubenswrapper[4958]: I1206 07:29:09.865773 4958 patch_prober.go:28] interesting pod/machine-config-daemon-5ktnh container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 06 07:29:09 crc kubenswrapper[4958]: I1206 07:29:09.866413 4958 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 06 07:29:09 crc kubenswrapper[4958]: I1206 07:29:09.866492 4958 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" Dec 06 07:29:09 crc kubenswrapper[4958]: I1206 07:29:09.867455 4958 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9cad79e2fcfd7dbf53fe489096eac99632c390a557753e6791ad296f07e57569"} pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 06 07:29:09 crc kubenswrapper[4958]: I1206 07:29:09.867641 4958 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" podUID="c13528c0-da5d-4d55-9155-2c29c33edfc4" containerName="machine-config-daemon" containerID="cri-o://9cad79e2fcfd7dbf53fe489096eac99632c390a557753e6791ad296f07e57569" gracePeriod=600 Dec 06 07:29:10 crc kubenswrapper[4958]: I1206 07:29:10.741835 4958 generic.go:334] "Generic (PLEG): container finished" podID="c13528c0-da5d-4d55-9155-2c29c33edfc4" containerID="9cad79e2fcfd7dbf53fe489096eac99632c390a557753e6791ad296f07e57569" exitCode=0 Dec 06 07:29:10 crc kubenswrapper[4958]: I1206 07:29:10.741921 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" event={"ID":"c13528c0-da5d-4d55-9155-2c29c33edfc4","Type":"ContainerDied","Data":"9cad79e2fcfd7dbf53fe489096eac99632c390a557753e6791ad296f07e57569"} Dec 06 07:29:10 crc kubenswrapper[4958]: I1206 07:29:10.741978 4958 scope.go:117] "RemoveContainer" containerID="ee2d7350ad52a52898da3b0522d607a28c91abbc378871215df544c4e674181f" Dec 06 07:29:15 crc kubenswrapper[4958]: I1206 07:29:15.799208 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5ktnh" event={"ID":"c13528c0-da5d-4d55-9155-2c29c33edfc4","Type":"ContainerStarted","Data":"e1827a501ce2c01181dcece15474b3e303d845fd31e34f23320c4cff905d0c0c"} Dec 06 07:30:00 crc kubenswrapper[4958]: I1206 07:30:00.167809 4958 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416770-lc2zj"] Dec 06 07:30:00 crc kubenswrapper[4958]: E1206 07:30:00.169053 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a29060d-5a10-4e69-afcd-4b3f279ad2d9" containerName="extract-utilities" Dec 06 07:30:00 crc kubenswrapper[4958]: I1206 07:30:00.169072 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a29060d-5a10-4e69-afcd-4b3f279ad2d9" containerName="extract-utilities" Dec 06 07:30:00 crc kubenswrapper[4958]: E1206 07:30:00.169119 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a29060d-5a10-4e69-afcd-4b3f279ad2d9" containerName="registry-server" Dec 06 07:30:00 crc kubenswrapper[4958]: I1206 07:30:00.169128 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a29060d-5a10-4e69-afcd-4b3f279ad2d9" containerName="registry-server" Dec 06 07:30:00 crc kubenswrapper[4958]: E1206 07:30:00.169144 4958 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a29060d-5a10-4e69-afcd-4b3f279ad2d9" containerName="extract-content" Dec 06 07:30:00 crc kubenswrapper[4958]: I1206 07:30:00.169152 4958 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a29060d-5a10-4e69-afcd-4b3f279ad2d9" containerName="extract-content" Dec 06 07:30:00 crc kubenswrapper[4958]: I1206 07:30:00.169396 4958 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="6a29060d-5a10-4e69-afcd-4b3f279ad2d9" containerName="registry-server" Dec 06 07:30:00 crc kubenswrapper[4958]: I1206 07:30:00.170550 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416770-lc2zj" Dec 06 07:30:00 crc kubenswrapper[4958]: I1206 07:30:00.174073 4958 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 06 07:30:00 crc kubenswrapper[4958]: I1206 07:30:00.174208 4958 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 06 07:30:00 crc kubenswrapper[4958]: I1206 07:30:00.194261 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416770-lc2zj"] Dec 06 07:30:00 crc kubenswrapper[4958]: I1206 07:30:00.302740 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9ce7dd01-21f0-4a71-85ad-ba57dd87cac9-config-volume\") pod \"collect-profiles-29416770-lc2zj\" (UID: \"9ce7dd01-21f0-4a71-85ad-ba57dd87cac9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416770-lc2zj" Dec 06 07:30:00 crc kubenswrapper[4958]: I1206 07:30:00.302880 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9ce7dd01-21f0-4a71-85ad-ba57dd87cac9-secret-volume\") pod \"collect-profiles-29416770-lc2zj\" (UID: \"9ce7dd01-21f0-4a71-85ad-ba57dd87cac9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416770-lc2zj" Dec 06 07:30:00 crc kubenswrapper[4958]: I1206 07:30:00.302905 4958 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgtxh\" (UniqueName: \"kubernetes.io/projected/9ce7dd01-21f0-4a71-85ad-ba57dd87cac9-kube-api-access-zgtxh\") pod \"collect-profiles-29416770-lc2zj\" (UID: \"9ce7dd01-21f0-4a71-85ad-ba57dd87cac9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416770-lc2zj" Dec 06 07:30:00 crc kubenswrapper[4958]: I1206 07:30:00.404636 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9ce7dd01-21f0-4a71-85ad-ba57dd87cac9-config-volume\") pod \"collect-profiles-29416770-lc2zj\" (UID: \"9ce7dd01-21f0-4a71-85ad-ba57dd87cac9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416770-lc2zj" Dec 06 07:30:00 crc kubenswrapper[4958]: I1206 07:30:00.404707 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9ce7dd01-21f0-4a71-85ad-ba57dd87cac9-secret-volume\") pod \"collect-profiles-29416770-lc2zj\" (UID: \"9ce7dd01-21f0-4a71-85ad-ba57dd87cac9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416770-lc2zj" Dec 06 07:30:00 crc kubenswrapper[4958]: I1206 07:30:00.404727 4958 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgtxh\" (UniqueName: \"kubernetes.io/projected/9ce7dd01-21f0-4a71-85ad-ba57dd87cac9-kube-api-access-zgtxh\") pod \"collect-profiles-29416770-lc2zj\" (UID: \"9ce7dd01-21f0-4a71-85ad-ba57dd87cac9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416770-lc2zj" Dec 06 07:30:00 crc kubenswrapper[4958]: I1206 07:30:00.405737 
Dec 06 07:30:00 crc kubenswrapper[4958]: I1206 07:30:00.416783 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9ce7dd01-21f0-4a71-85ad-ba57dd87cac9-secret-volume\") pod \"collect-profiles-29416770-lc2zj\" (UID: \"9ce7dd01-21f0-4a71-85ad-ba57dd87cac9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416770-lc2zj"
Dec 06 07:30:00 crc kubenswrapper[4958]: I1206 07:30:00.428461 4958 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgtxh\" (UniqueName: \"kubernetes.io/projected/9ce7dd01-21f0-4a71-85ad-ba57dd87cac9-kube-api-access-zgtxh\") pod \"collect-profiles-29416770-lc2zj\" (UID: \"9ce7dd01-21f0-4a71-85ad-ba57dd87cac9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29416770-lc2zj"
Dec 06 07:30:00 crc kubenswrapper[4958]: I1206 07:30:00.505818 4958 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416770-lc2zj"
Dec 06 07:30:01 crc kubenswrapper[4958]: I1206 07:30:01.002093 4958 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416770-lc2zj"]
Dec 06 07:30:01 crc kubenswrapper[4958]: I1206 07:30:01.233103 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416770-lc2zj" event={"ID":"9ce7dd01-21f0-4a71-85ad-ba57dd87cac9","Type":"ContainerStarted","Data":"86a729c4ec2780cdd32c66df2506141f06bb7f771937429a47aafd3598184282"}
Dec 06 07:30:01 crc kubenswrapper[4958]: I1206 07:30:01.233162 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416770-lc2zj" event={"ID":"9ce7dd01-21f0-4a71-85ad-ba57dd87cac9","Type":"ContainerStarted","Data":"b4a0fa78e8f318bcfd079c2155c51f4e662b33869fa633b68ec9e554439b05ad"}
Dec 06 07:30:02 crc kubenswrapper[4958]: I1206 07:30:02.245131 4958 generic.go:334] "Generic (PLEG): container finished" podID="9ce7dd01-21f0-4a71-85ad-ba57dd87cac9" containerID="86a729c4ec2780cdd32c66df2506141f06bb7f771937429a47aafd3598184282" exitCode=0
Dec 06 07:30:02 crc kubenswrapper[4958]: I1206 07:30:02.245206 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416770-lc2zj" event={"ID":"9ce7dd01-21f0-4a71-85ad-ba57dd87cac9","Type":"ContainerDied","Data":"86a729c4ec2780cdd32c66df2506141f06bb7f771937429a47aafd3598184282"}
Dec 06 07:30:03 crc kubenswrapper[4958]: I1206 07:30:03.586213 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416770-lc2zj"
Dec 06 07:30:03 crc kubenswrapper[4958]: I1206 07:30:03.687666 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9ce7dd01-21f0-4a71-85ad-ba57dd87cac9-secret-volume\") pod \"9ce7dd01-21f0-4a71-85ad-ba57dd87cac9\" (UID: \"9ce7dd01-21f0-4a71-85ad-ba57dd87cac9\") "
Dec 06 07:30:03 crc kubenswrapper[4958]: I1206 07:30:03.687775 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9ce7dd01-21f0-4a71-85ad-ba57dd87cac9-config-volume\") pod \"9ce7dd01-21f0-4a71-85ad-ba57dd87cac9\" (UID: \"9ce7dd01-21f0-4a71-85ad-ba57dd87cac9\") "
Dec 06 07:30:03 crc kubenswrapper[4958]: I1206 07:30:03.687857 4958 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgtxh\" (UniqueName: \"kubernetes.io/projected/9ce7dd01-21f0-4a71-85ad-ba57dd87cac9-kube-api-access-zgtxh\") pod \"9ce7dd01-21f0-4a71-85ad-ba57dd87cac9\" (UID: \"9ce7dd01-21f0-4a71-85ad-ba57dd87cac9\") "
Dec 06 07:30:03 crc kubenswrapper[4958]: I1206 07:30:03.688694 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9ce7dd01-21f0-4a71-85ad-ba57dd87cac9-config-volume" (OuterVolumeSpecName: "config-volume") pod "9ce7dd01-21f0-4a71-85ad-ba57dd87cac9" (UID: "9ce7dd01-21f0-4a71-85ad-ba57dd87cac9"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 06 07:30:03 crc kubenswrapper[4958]: I1206 07:30:03.695419 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ce7dd01-21f0-4a71-85ad-ba57dd87cac9-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9ce7dd01-21f0-4a71-85ad-ba57dd87cac9" (UID: "9ce7dd01-21f0-4a71-85ad-ba57dd87cac9"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 06 07:30:03 crc kubenswrapper[4958]: I1206 07:30:03.695517 4958 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ce7dd01-21f0-4a71-85ad-ba57dd87cac9-kube-api-access-zgtxh" (OuterVolumeSpecName: "kube-api-access-zgtxh") pod "9ce7dd01-21f0-4a71-85ad-ba57dd87cac9" (UID: "9ce7dd01-21f0-4a71-85ad-ba57dd87cac9"). InnerVolumeSpecName "kube-api-access-zgtxh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 06 07:30:03 crc kubenswrapper[4958]: I1206 07:30:03.790953 4958 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9ce7dd01-21f0-4a71-85ad-ba57dd87cac9-secret-volume\") on node \"crc\" DevicePath \"\""
Dec 06 07:30:03 crc kubenswrapper[4958]: I1206 07:30:03.790993 4958 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9ce7dd01-21f0-4a71-85ad-ba57dd87cac9-config-volume\") on node \"crc\" DevicePath \"\""
Dec 06 07:30:03 crc kubenswrapper[4958]: I1206 07:30:03.791008 4958 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgtxh\" (UniqueName: \"kubernetes.io/projected/9ce7dd01-21f0-4a71-85ad-ba57dd87cac9-kube-api-access-zgtxh\") on node \"crc\" DevicePath \"\""
Dec 06 07:30:04 crc kubenswrapper[4958]: I1206 07:30:04.269295 4958 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29416770-lc2zj" event={"ID":"9ce7dd01-21f0-4a71-85ad-ba57dd87cac9","Type":"ContainerDied","Data":"b4a0fa78e8f318bcfd079c2155c51f4e662b33869fa633b68ec9e554439b05ad"}
Dec 06 07:30:04 crc kubenswrapper[4958]: I1206 07:30:04.269654 4958 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b4a0fa78e8f318bcfd079c2155c51f4e662b33869fa633b68ec9e554439b05ad"
Dec 06 07:30:04 crc kubenswrapper[4958]: I1206 07:30:04.269426 4958 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29416770-lc2zj"
Dec 06 07:30:04 crc kubenswrapper[4958]: I1206 07:30:04.673664 4958 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416725-mxfd7"]
Dec 06 07:30:04 crc kubenswrapper[4958]: I1206 07:30:04.683281 4958 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29416725-mxfd7"]
Dec 06 07:30:05 crc kubenswrapper[4958]: I1206 07:30:05.776677 4958 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="254444a5-8159-441c-922e-7a6751e0f1d1" path="/var/lib/kubelet/pods/254444a5-8159-441c-922e-7a6751e0f1d1/volumes"